Dataset fields: id, title, abstract, authors, published_date, link, markdown
2303.08857
Phase transition kinetics revealed by in situ X-ray diffraction in laser-heated dynamic diamond anvil cells
We report on a novel approach to dynamic compression of materials that bridges the gap between previous static- and dynamic-compression techniques, allowing us to explore a wide range of pathways in the pressure-temperature space. By combining a dynamic-diamond anvil cell setup with double-sided laser-heating and in situ X-ray diffraction, we are able to perform dynamic compression at high temperature and characterize structural transitions with unprecedented time resolution. Using this method, we investigate the $\gamma-\epsilon$ phase transition of iron under dynamic compression for the first time, reaching compression rates of hundreds of GPa/s and temperatures of 2000 K. Our results demonstrate a distinct response of the $\gamma-\epsilon$ and $\alpha-\epsilon$ transitions to the high compression rates achieved. These findings open up new avenues to study tailored dynamic compression pathways in the pressure-temperature space and highlight the potential of this platform to capture kinetic effects in a diamond anvil cell.
Matthew Ricks, Arianna E. Gleason, Francesca Miozzi, Hong Yang, Stella Chariton, Vitali B. Prakapenka, Stanislav V. Sinogeikin, Richard L. Sandberg, Wendy L. Mao, Silvia Pandolfi
2023-03-15T18:23:46Z
http://arxiv.org/abs/2303.08857v4
_In situ_ study of iron phase transitions at high pressure and temperature over millisecond timescales via time-resolved X-ray diffraction ###### Abstract We investigate the phase transitions of iron at high pressure and high temperature conditions using a fast-loading dynamic-diamond anvil cell (dDAC) setup. Using the dDAC apparatus coupled with _in situ_ X-ray diffraction at the 13-IDD beamline of the Advanced Photon Source at Argonne National Laboratory, we demonstrate compression rates of hundreds of GPa/s and monitor the structural evolution with millisecond time resolution. This technique allows us to cover an intermediate compression rate between conventional static- and dynamic-compression experiments, providing new insight into the kinetic effects influencing iron phase transitions. Crucially, the dDAC setup is compatible with double-sided laser heating, enabling a detailed investigation of the pressure-temperature phase diagram under dynamic compression, as opposed to shock-compression techniques, which are constrained along the Hugoniot curve. We thus provide the first insight into the \(\gamma-\epsilon\) phase transition (_i.e.,_ \(\mathit{fcc}\) to \(\mathit{hcp}\)) of iron under dynamic loading and compare the results with the trends observed for the \(\alpha-\epsilon\) (_i.e.,_ \(\mathit{bcc}\) to \(\mathit{hcp}\)) phase transition. Our results demonstrate that the specific deformation mechanism strongly influences the response under dynamic loading. ## I Introduction Iron (Fe) is the main constituent of the Earth's core, and its behavior at extreme conditions has been extensively studied, both experimentally and theoretically. Fe is also one of the most commonly used materials in the industrial sector. Its versatility, strength, and durability make it an essential component for various applications. The stable structure of Fe at ambient conditions, the so-called \(\alpha\) phase, with body-centered cubic (_bcc_) structure, transforms into a hexagonal close-packed (_hcp_) structure, the \(\epsilon\) phase, under high pressure (_HP_) [1], while at high temperature (_HT_), a face-centered cubic (_fcc_) structure is stabilized, _i.e.,_ the \(\gamma\) phase. Upon compression at high pressure and high temperature (_HP-HT_), the \(\gamma\) phase also transforms into the \(\epsilon\) phase. This \(\epsilon\) phase remains stable up to multi-Mbar pressures and is believed to be the phase present in the Earth's solid core [2]. Numerous static experiments have been conducted to explore the Fe _HP-HT_ equilibrium phase diagram [3; 4; 5; 6; 7; 8; 9]. In these experiments, efforts were made to ensure hydrostatic conditions in order to mimic the environment of the Earth's interior as closely as possible, and to avoid uniaxial strain and temperature gradients that make it difficult to accurately estimate pressure (_P_) and temperature (_T_). Although understanding Fe behaviour under hydrostatic conditions has important implications for our understanding of the Earth's core, not all geological environments on Earth or extraterrestrial planets are static: shearing in subduction zones and planetary impacts are examples of dynamic geophysical processes [10]. Dynamic compression techniques, such as gas gun and laser ablation compression, have been extensively used to characterize Fe deformation and melting at ultrafast timescales, from \(\mu\)s down to ps [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21].
These approaches can attain pressures up to several TPa [22] (_i.e.,_ tens of millions of atmospheres); however, single-shock experiments are constrained to probe states along the Hugoniot curve in the _P-T_ space [23]. For Fe, the Hugoniot curve crosses the \(\alpha-\epsilon\) phase boundary, and the transition has been observed at ca. 13 GPa, a slightly higher pressure compared to the static value [24; 25; 26]. Despite recent developments that allow us to deploy more complex, off-Hugoniot compression profiles [27; 28; 29], the \(\gamma-\epsilon\) transition remains inaccessible using conventional dynamic compression techniques. Between the timescales characteristic of static and shock compression experiments lies a vast region of unexplored compression rates. Recent developments in dynamic diamond-anvil cell (dDAC) technology have started to fill in this gap. Using either a gas-supplied membrane or electromechanical piezoelectric actuators to control pressure, dDACs can access compression rates on the second to millisecond timescales [30]. For example, Konopkova _et al._ investigated the \(\alpha-\epsilon\) transition at compression rates up to 4.1 GPa/s at ambient temperature using a dDAC equipped with a membrane enclosure [10]. They found this phase boundary to be close to the values found in static compression experiments if the transition happens under quasi-hydrostatic conditions, when a pressure transmitting medium is used in the sample chamber. Here, we provide the first insight into the \(\gamma-\epsilon\) phase transition of Fe under dynamic compression, using a dDAC setup to attain millisecond (ms) compression timescales at _HT_. The phase transition onset at these compression rates was measured for two different temperatures, and the values agree with previous static compression experiments from Shen _et al._ [1], while also showing a lowering of the transition onset with respect to results from more recent studies [4; 5; 7]. We also investigated the \(\alpha-\epsilon\) phase transition at ambient temperature; in this case, a noticeable increase of the transition onset is observed at the increased compression rate. Our results demonstrate that the extent and nature of kinetic effects on phase transition boundaries are strongly influenced by the specifics of the transition mechanism, which should be taken into account when using static- or dynamic-compression data to inform geological models. ## II Methods Samples consisted of reagent-grade Fe powder with micrometer-sized grains, commercially available from Alfa Aesar. The samples were loaded in pre-indented stainless steel gaskets (80 \(\mu\)m-thick SS 304, initial thickness 250 \(\mu\)m). The diamonds had flat culets of 250 and 300 \(\mu\)m. The sample chamber (100 \(\mu\)m diameter) was drilled using an electrical discharge drilling system. Cold-pressed flakes of KCl were placed on either side of the Fe grains, providing thermal insulation from the diamond culets as well as acting as a pressure transmitting medium and a pressure calibrant with a well-known equation of state [31]. The sample assemblages were loaded in a mini-BX80 DAC developed by DAC Tools, which is a modified version of the mini-BX90 apparatus [32], and was equipped with tungsten carbide seats. The mini-BX80 allows for pressure control either by screws or remotely, _e.g._, using a membrane enclosure and an online remote control system [30]. To enable dynamic loading of the DAC using the membrane gas supply, our experimental setup included an intermediate buffer.
The buffer allows us to pre-load the gas supply to a desired pressure, and it is located near the DAC. An electric solenoid valve allows release of the gas in the membrane on a short (\(\sim\)ms) timescale for high compression rate experiments (FIG.1).

Figure 1: Schematic view of the experimental setup used on the 13-IDD beamline at the APS synchrotron. Loading in the DAC was performed using a membrane and an enclosure compatible with the mini-BX80 cells; the intermediate buffer allowed us to perform fast (ms) compression runs. The structure of the sample was monitored in real time using XRD, and the X-ray beam was spatially overlapped with the laser-heating spot.

Analogous setups for fast compression and _in situ_ characterization using synchrotron radiation have already been demonstrated at other facilities [33; 34]. Experiments were conducted at the 13-IDD beamline of the GSECARS sector of the Advanced Photon Source [35]. The _P-T_ conditions probed during our experiments, as well as the compression rates attained during dynamic loading, are shown in FIG.2.

Figure 2: Temporal evolution of pressure during dynamic loading for the four experimental runs; pressure was measured using the known EOS of KCl. For each run, temperature values as well as compression rates are reported; in Run 4, no intermediate buffer was used, resulting in a \(\sim\)100 times lower compression rate.

In Run 1, compression was performed at 2000 K, with an average compression rate of 400 GPa/s; Run 2 reached 530 GPa/s at 1400 K. Run 3 and Run 4 were performed at 300 K, and reached 360 GPa/s and 2.5 GPa/s compression rates, respectively. It should be noted that in Run 4 the compression was performed without using the intermediate buffer. X-ray diffraction data (XRD) were collected _in situ_ at _HP-HT_ using a monochromatic X-ray beam with energy of either 37 keV or 42 keV and a Pilatus3 X 1M CdTe detector. LaB\({}_{6}\) was used as a reference to calibrate the detector distance and geometry. 1D integration from 2D detector images was done using the Fit2D and Dioptas software packages [36; 37]. Peak identification and fitting were conducted in the PDindexer software package [38]. During Runs 1-3, the acquisition time was 2 ms/pattern; this ensured good temporal resolution for the characterization of the high-pressure phase transitions and an accurate determination of the transition onset. During Run 4, given the lower compression rate, the acquisition time was increased to 20 ms/pattern. Changes in pressure were monitored using the known EOS of the B2 phase of KCl [31] and fitting of the KCl (110) reflection; this choice was dictated by the visibility of the KCl (110) reflection throughout the whole experiment and the lack of superposition with any Fe peaks. Thanks to the short (less than 20 mm) working distance on both sides, the mini-BX80 cell equipped with the membrane enclosure is compatible with the double-sided laser heating setup of the 13-IDD beamline [39]. Temperature was measured on both sides of the DAC with 300 ms temporal resolution by fitting a Planck equation to the thermal radiation spectrum [39]. The size of the X-ray beam on the sample was 2 \(\mu\)m (V) x 3.5 \(\mu\)m (H) and the size of the flat-top laser-heating spot was 12 \(\mu\)m in diameter; the beams were coaxially aligned to spatially overlap. In all runs, the sample was initially compressed to 7 GPa, well past the phase boundary in KCl from the B1 to the B2 phase.
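As an illustration of how the quoted pressures follow from the KCl (110) reflection and the B2-KCl equation of state described above, the sketch below converts a peak position into pressure with a third-order Birch-Murnaghan form. The EOS parameters and the example 2\(\theta\) value are illustrative assumptions, not the calibration of Ref. [31], and thermal pressure at _HT_ is neglected.

```python
# Minimal sketch: pressure from the B2-KCl (110) reflection via Bragg's law and a
# third-order Birch-Murnaghan EOS. The EOS parameters below are illustrative
# placeholders (not the calibration of Ref. [31]); thermal pressure is ignored.
import numpy as np

def d_spacing(two_theta_deg, energy_keV):
    """Bragg's law: lattice-plane spacing (Angstrom) from a 2-theta peak position."""
    wavelength = 12.398 / energy_keV                          # keV -> Angstrom
    return wavelength / (2.0 * np.sin(np.radians(two_theta_deg) / 2.0))

def kcl_b2_pressure(two_theta_deg, energy_keV=37.0, v0=54.5, k0=17.2, k0_prime=5.9):
    """Convert the (110) d-spacing of cubic B2 KCl into pressure (GPa)."""
    a = d_spacing(two_theta_deg, energy_keV) * np.sqrt(2.0)   # cubic B2: d(110) = a / sqrt(2)
    v = a ** 3                                                # one formula unit per cell
    x = (v0 / v) ** (1.0 / 3.0)
    return 1.5 * k0 * (x**7 - x**5) * (1.0 + 0.75 * (k0_prime - 4.0) * (x**2 - 1.0))

print(f"P ~ {kcl_b2_pressure(7.8):.1f} GPa")                  # hypothetical peak position
```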
Pre-compressing past the B1-B2 boundary ensured more accurate pressure measurements, as the KCl does not undergo any structural transition over the explored pressure range. In Run 1 and Run 2, the sample was first heated, and then pressure was increased using the buffer at _HT_. With this experimental approach, we demonstrate a peak compression rate of over 500 GPa/s over 30 ms. ## III Results We performed time-resolved XRD to characterize both the \(\alpha\)-\(\epsilon\) and the \(\gamma\)-\(\epsilon\) phase transitions of Fe. Experiments were performed at 300 K and _HT_, respectively; the _in situ_ data is reported in FIG.3. For each experiment, the sample was compressed statically up to a pressure value close to the phase boundary before launching the dynamic compression run, so as to ensure completion of the transformation during loading. In Run 1 (FIG.3(a)), measurements were performed keeping the temperature at 2000 K, and the sample was pre-compressed up to 37 GPa. At these conditions only the \(\gamma\) phase is present. During loading, we observe the appearance of the \(\epsilon\)-(010) reflection (\(\sim\)9.2\({}^{\circ}\)) after 12 ms, at 40 GPa. Coexistence of the \(\gamma\) and \(\epsilon\) phases is observed up to 46 GPa, and progression of the transformation is confirmed by the changes in relative intensity of the corresponding peaks. At 46 GPa, the peak observed at \(\sim\)9.7\({}^{\circ}\) can be indexed as either the \(\gamma\)-(111) or the \(\epsilon\)-(002) reflection. The crystalline texture, which can be inferred from the 2D-XRD pattern by examining the intensity distribution along the Debye-Scherrer ring (see also FIG.4 and the corresponding discussion), is more consistent with the \(\gamma\) phase. Moreover, as observed in previous experiments, the \(\epsilon\) phase is expected to grow along a preferred orientation in a DAC, which results in a decrease of the \(\epsilon\)-(002) reflection's intensity [40]; thus, we do not expect to observe signal from the \(\epsilon\)-(002) peak over the 2 ms integration time. In Run 2 (FIG.3(b)), the sample was pre-compressed up to 18 GPa and the temperature was maintained at 1400 K during dynamic loading. At \(t=0\) ms, all the Fe peaks can be indexed as reflections from the \(\gamma\) phase. The \(\epsilon\)-(010) peak becomes visible after 10 ms, at 23 GPa. The transition takes place over a few GPa, and at 28 GPa all of the \(\gamma\) phase reflections, with the exception of the \(\gamma\)-(111) (plus \(\epsilon\)-(002)), have disappeared. As mentioned for Run 1, also in this case the peak at \(\sim\)9.5\({}^{\circ}\) is more likely to indicate the persistence of small amounts of the \(\gamma\) phase rather than corresponding to the \(\epsilon\)-(002) reflection. In Run 3 (FIG.3(c)), the sample was initially compressed to about 11 GPa, a pressure at which only the \(\alpha\) phase is present, and dynamic loading was performed at 300 K. At 13.6 GPa, multiple reflections from the \(\epsilon\) phase appear, namely, \(\epsilon\)-(010), \(\epsilon\)-(011) and \(\epsilon\)-(012). It is worth noting that the value of the transition onset for this run (with a compression rate of 360 GPa/s) is higher than the pressures reported in previous static compression experiments [1]. At 14.6 GPa, the \(\alpha\)-(110) peak is most likely still present, and superimposed with the \(\epsilon\)-(002).
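As a rough cross-check on these peak assignments, the expected 2\(\theta\) positions of the \(\epsilon\)-Fe reflections follow from the hexagonal d-spacing relation and Bragg's law. The sketch below uses assumed lattice parameters (roughly appropriate for \(\epsilon\)-Fe near 15 GPa, not values fitted to these runs) and the 37 keV beam energy.

```python
# Rough check of where the eps-Fe (hcp) reflections should fall in 2-theta.
# The lattice parameters below are illustrative, not fitted values from these runs.
import numpy as np

def hcp_two_theta(h, k, l, a, c, energy_keV=37.0):
    """2-theta (deg) of an hcp reflection from Bragg's law and the hexagonal d-spacing."""
    wavelength = 12.398 / energy_keV                           # keV -> Angstrom
    inv_d2 = 4.0 / 3.0 * (h*h + h*k + k*k) / a**2 + l*l / c**2
    d = 1.0 / np.sqrt(inv_d2)
    return 2.0 * np.degrees(np.arcsin(wavelength / (2.0 * d)))

a, c = 2.45, 3.93                                              # assumed eps-Fe cell, c/a ~ 1.60
for hkl in [(0, 1, 0), (0, 0, 2), (0, 1, 1), (0, 1, 2)]:
    print(hkl, f"{hcp_two_theta(*hkl, a=a, c=c):.2f} deg")     # (010) lands near ~9 deg
```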
In Run 4 (FIG.3(d)), compression was performed at 300 K without the use of the intermediate buffer, resulting in a compression rate reduction by a factor \(\sim\)100 (2.5 GPa/s as compared to 360 GPa/s in Run 3). The sample was compressed statically up to 7 GPa, and at this compression rate the emergence of the \(\epsilon\)-(010) and \(\epsilon\)-(011) reflections was observed at pressures as low as 10.8 GPa, a value that is in agreement with the \(\alpha\)-\(\epsilon\) phase boundary identified in static compression experiments [1]. The quality of the XRD data collected _in situ_ with ms acquisition time allows identification of the phases present at each investigated pressure. Additionally, longer-acquisition data with higher quality was also collected before and after dynamic compression for more detailed analysis. Representative data from two distinct runs is shown in FIG.4, in which 2D XRD data projected onto the 2\(\theta\)-\(\phi\) (azimuthal angle) space is overlaid with the 1D azimuthally integrated data to highlight the texture corresponding to each Fe phase. FIG.4 (a, d) shows the structure of the sample at 7 GPa and 300 K, prior to being heated or compressed dynamically; at these conditions, no phase transition is observed and the sample maintains the ambient Fe structure (\(\alpha\) phase). FIG.4 (b, e) shows the sample transformation upon annealing at different temperatures; in particular, we notice that a higher temperature is required to ensure completion of the \(\alpha\)-\(\gamma\) transformation at _HT_ (1450 K rather than 1360 K). The reflections corresponding to the \(\gamma\) phase show a non-uniform intensity distribution over the Debye-Scherrer rings, along the \(\phi\) direction. This indicates that this phase crystallizes in large grains that do not cover the whole range of orientations with respect to the X-ray beam. FIG.4 (c, f) shows the final state reached by the sample, after dynamic loading and quenching. In both cases, complete transition to the \(\epsilon\) phase is observed; this phase appears to have wider XRD peaks with a more uniform azimuthal intensity distribution compared with its precursor, which is indicative of a finely-grained powder. However, a strongly preferred orientation can be inferred from the low intensity of the \(\epsilon\)-(002) reflection; this is expected, as the \(\epsilon\)[002] direction is the most compressible one in this phase.

Figure 3: Azimuthally-integrated XRD patterns as a function of time show the structural changes in the sample upon compression. The XRD data is shown in the same colors used in FIG.2 to represent pressure evolution. (a) and (b): _HT_ experiments, showing the \(\gamma\)-\(\epsilon\) transition at 2000 K and 1400 K, respectively. (c) and (d): experiments at 300 K, showing the \(\alpha\)-\(\epsilon\) transition for compression at 360 GPa/s and 2.5 GPa/s, respectively. For all patterns, the peaks of the observed Fe phases, as well as those of KCl, are indexed; the time values are measured with respect to the beginning of the dDAC compression run.

It is also interesting to note that, in the case of a pure \(\gamma\) precursor (FIG.4(c,f)), the \(\epsilon\) appears more textured than in the case of a mixed \(\alpha\) and \(\gamma\) precursor (FIG.4(b,c)).
It is thus possible that the lower intensity of the peaks corresponding to the \(\epsilon\) phase observed in the \(\gamma\)-\(\epsilon\) transition (FIG.3(a,b)) with respect to the \(\alpha\)-\(\gamma\) transition (FIG.3(c)) may also be due to the difference in grain size imparted by the starting phase. Indeed, bigger grains may result in fewer crystallites in the Bragg condition contributing to the peaks' intensity; in particular, in Run 1 only the \(\epsilon\)-(010) reflection is visible _in situ_ (FIG.3(a)). ## IV Discussion The experiments presented here use a dDAC apparatus to perform _HP-HT_ experiments and reach compression rates up to \(\sim\)500 GPa/s to investigate the influence of the loading timescale on Fe behaviour at extreme conditions. The transition onsets measured at different temperatures for both the _fcc-hcp_ (\(\gamma\)-\(\epsilon\)) and the _bcc-hcp_ (\(\alpha\)-\(\epsilon\)) phase transitions of Fe are shown in FIG.5 and overlaid with previous results from static compression experiments [1; 4; 5; 7]. For the \(\gamma\)-\(\epsilon\) transition, results from dDAC are in very good agreement with the phase boundaries proposed by Shen _et al._ [1], while they exhibit a lower transition onset compared with other static compression experiments [4; 5; 7]. It is worth noting that, independently of the considered reference for the equilibrium phase boundary, no increase in the phase transition onset is observed under dynamic loading. On the contrary, for a compression rate of 360 GPa/s at 300 K, the \(\alpha\)-\(\epsilon\) transition is observed at higher P, _i.e._, 13.6 GPa, as compared with the static phase boundary. To confirm that the shift observed in the \(\alpha\)-\(\epsilon\) transition is due to the compression timescale, we have performed an additional experiment (Run 4, not shown in FIG.5) at a \(\sim\)100 times lower compression rate; the XRD data confirm that, at 2.5 GPa/s, the transition onset is 10.8 GPa, in much closer agreement with the equilibrium value. Our results thus demonstrate that the dDAC apparatus allows us to perform compression in a wide range of timescales (s down to ms) over which noticeable kinetic effects can be studied. In previous _HP_ studies, deviations of the transition pressure from the equilibrium phase diagram have been attributed to the presence of non-hydrostatic stresses in the DAC; in particular, sluggishness in the \(\alpha\)-\(\epsilon\) transition has been observed [41]. To ensure that the results obtained in our _HP-HT_ experiments are not influenced by non-hydrostatic stresses, we have investigated the hydrostaticity. The hexagonal unit cell is described by two lattice parameters, \(a\) in the hexagonal plane, and \(c\) in the stacking direction; as mentioned earlier, the \(c\) axis is the most compressible one, and it tends to align with the compression axis in DAC experiments (see also FIG.4). Thus, a deviation from hydrostaticity could be detected by analyzing the _c/a_ ratio, as a lowering of its value would indicate the presence of a stress gradient and a higher pressure along the compression axis [40; 42].

Figure 4: 2D XRD data projected onto the 2\(\theta\)-\(\phi\) (azimuthal angle) space is overlaid with the 1D azimuthally integrated data. Data was acquired using 1 s integration time. (a) and (d): structure and texture of the sample after static compression to 7 GPa at 300 K. (b) and (e): structural and textural changes occurring after heating at _HP_ up to 1360 K and 1450 K, respectively. (c) and (f): samples' structure after dynamic loading and quench down to 300 K; in both cases, the final Fe structure corresponds to the \(\epsilon\) phase.

FIG.6 shows the _c/a_ ratio of the \(\epsilon\) phase calculated from our experimental data and compared with previous results from the literature [10; 43]. In particular, Konopkova _et al._ measured the _c/a_ ratio both in hydrostatic and non-hydrostatic conditions, _i.e._, with and without the use of a pressure transmitting medium, giving a reliable reference for the _c/a_ values measured in either condition. Our results are consistent with a hydrostatic compression state in the DAC sample chamber; thus, any deviation from the equilibrium phase diagram observed here can be ascribed to the effects of the strain rate. The influence of strain rate on _HP_ phase transitions has already been analyzed in several systems; however, the effects of fast compression on the phase boundaries are not univocal, and they strongly depend on the system and on the specifics of the deformation mechanism. For example, in certain systems higher compression rates can cause an increase of the transition pressure, as the fast loading hinders the rearrangement of the atoms (so-called _kinetic hindrance_) [44]. In contrast, several systems have been observed to exhibit a phase transition lowering under dynamic compression. Silicon exhibits a lowering of the Si-I to Si-II transition under laser-driven shock compression [45], which our recent work has demonstrated to be due to a defect-free _inelastic_ deformation mechanism activated at ultrafast (ns) timescales [46]. Characterization of bismuth and antimony under dynamic compression has shown that certain transitions take place at pressures lower than the static phase boundary [47; 48]. Interestingly, the lowering in pressure is observed for _displacive_ transitions, _i.e._, there is no change in unit cell volume through the transformation, which requires only small displacements of the atoms. Our results evidence two distinct trends in the \(\alpha\)-\(\epsilon\) and \(\gamma\)-\(\epsilon\) phase transitions of Fe under dynamic loading: compared to static compression experiments, the onset of \(\epsilon\) formation is increased and unchanged (or lowered), respectively. Based on previous results from dynamic compression experiments, this could be due to differences in the transition mechanisms that govern the transformations at the atomic level. Indeed, the _bcc-hcp_ (\(\alpha\)-\(\epsilon\)) transformation happens via a combination of compression along one axis and shuffle of the planes [24; 25], and recent experiments have confirmed that the completion of the transformation requires two steps: a displacive seeding followed by a _reconstructive_ (_i.e._, involving bond breaking) deformation [49]. Under dynamic compression, a reconstructive transformation is expected to exhibit kinetic hindrance, as also suggested by molecular dynamics simulations of the Fe \(\alpha\)-\(\epsilon\) transition [50]. On the other hand, the _fcc_ and _hcp_ structures are more closely related, as they differ only in the stacking of the planes along one direction; the transformation is thus expected to happen via a purely displacive deformation [51].
An increase of the compression rate can cause a high density of stacking faults, which could result in a high number of nucleation sites and ultimately favor the phase transition [52; 53; 47; 54; 55]. It is also worth noting that, at the strain rates characteristic of our experiments, Fe plastic deformation is predominantly driven by thermally-activated dislocation flow [56], so the generation and diffusion of crystalline defects could strongly influence structural transformations in our _HP_-_HT_ dDAC experiments.

Figure 5: Experimental results from dDAC experiments compared with the state-of-the-art equilibrium phase diagram of Fe. Data are represented using markers of different shapes for each run, while the colours correspond to different Fe phases. The solid line is the equilibrium phase diagram by Shen _et al._ [1]; the dotted lines represent the \(\gamma-\epsilon\) equilibrium boundary from other experimental studies [4; 5; 7].

## V Conclusion In this study, we demonstrate for the first time dynamic compression of a material in a dDAC setup coupled with stable laser-heating. Compression rates of hundreds of GPa/s were attained while simultaneously maintaining high temperatures up to 2000 K. Collection of time-resolved XRD data with millisecond time resolution enabled characterization of the phase transitions of Fe _in situ_. Interestingly, the dDAC-laser-heating setup allows us to explore (quasi)isothermal compression of a material, a pathway not attainable using conventional shock-compression techniques. We provide the first insight into the \(\gamma\)-\(\epsilon\) phase transition of Fe, and compare our results with those obtained for the \(\alpha\)-\(\epsilon\) transition, as well as the equilibrium phase diagram. We observe that the increase in strain rate affects the phase transitions of Fe differently, and we attribute the differences to the specific deformation mechanisms. Indeed, no discernible change with respect to static compression experiments is observed for the displacive \(\gamma\)-\(\epsilon\) phase transition up to 500 GPa/s. In contrast, the reconstructive \(\alpha-\epsilon\) transition exhibits a marked shift of the transition onset with the compression timescale. This study demonstrates a new approach for exploration of _HP-HT_ phase transitions under dynamic loading, covering an intermediate timescale between the well-established static- and shock-compression methods. Insight at these intermediate (ms) timescales will provide a more complete understanding of matter deformation at extreme conditions and dynamic geophysical processes. Our results also demonstrate that the strain rate affects phase transformations differently depending on the deformation mechanism; thus, particular care should be taken when using experimental data to model geological processes at different timescales. ###### Acknowledgements. This work was carried out at the GeoSoilEnviroCARS (The University of Chicago, Sector 13), Advanced Photon Source (Argonne National Laboratory). GeoSoilEnviroCARS is supported by the National Science Foundation--Earth Sciences (No. EAR-1634415). The Advanced Photon Source is a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357. A.E.G., R.L.S., and S.P. acknowledge support from 2019 DOE FES ECA. A.E.G. and W.L.M. acknowledge support from the Geophysics Program at NSF (EAR2049620). M.R.
acknowledges support from the College of Physical and Mathematical Sciences at Brigham Young University and DOE SULI 2021 at SLAC National Lab.
2304.12670
Patch-based 3D Natural Scene Generation from a Single Example
We target a 3D generative model for general natural scenes that are typically unique and intricate. Lacking the necessary volumes of training data, along with the difficulties of having ad hoc designs in presence of varying scene characteristics, renders existing setups intractable. Inspired by classical patch-based image models, we advocate for synthesizing 3D scenes at the patch level, given a single example. At the core of this work lies important algorithmic designs w.r.t the scene representation and generative patch nearest-neighbor module, that address unique challenges arising from lifting classical 2D patch-based framework to 3D generation. These design choices, on a collective level, contribute to a robust, effective, and efficient model that can generate high-quality general natural scenes with both realistic geometric structure and visual appearance, in large quantities and varieties, as demonstrated upon a variety of exemplar scenes.
Weiyu Li, Xuelin Chen, Jue Wang, Baoquan Chen
2023-04-25T09:19:11Z
http://arxiv.org/abs/2304.12670v2
# Patch-based 3D Natural Scene Generation from a Single Example ###### Abstract We target a 3D generative model for general natural scenes that are typically unique and intricate. Lacking the necessary volumes of training data, along with the difficulties of having ad hoc designs in presence of varying scene characteristics, renders existing setups intractable. Inspired by classical patch-based image models, we advocate for synthesizing 3D scenes at the patch level, given a single example. At the core of this work lies important algorithmic designs w.r.t the scene representation and generative patch nearest-neighbor module, that address unique challenges arising from lifting classical 2D patch-based framework to 3D generation. These design choices, on a collective level, contribute to a robust, effective, and efficient model that can generate high-quality general natural scenes with both realistic geometric structure and visual appearance, in large quantities and varieties, as demonstrated upon a variety of exemplar scenes. Data and code can be found at [http://wyysf-98.github.io/Sin3DGen](http://wyysf-98.github.io/Sin3DGen). ## 1 Introduction 3D scene generation generally carries the generation of both realistic geometric structure and visual appearance. A wide assortment of scenes on earth, or digital ones across the internet, exhibiting artistic characteristics and ample variations over geometry and appearance, can be easily listed. Being able to populate these intriguing scenes in the virtual universe has been a long pursuit in the community. Research has taken several routes, among which a prevalent one is learning to extract common patterns of the geometry _or_ appearance from homogeneous scene samples, such as indoor scenes [27, 40, 50, 54, 89, 94, 102, 104, 107], terrains [28, 32, 34, 41], urban scenes [45, 67, 25], etc. Another line learns to generate single objects [16, 17, 29, 40, 49, 51, 70, 106]. A dominant trend in recent has emerged that learns 3D generative models to jointly synthesize 3D structures and appearances via differentiable rendering [14, 15, 18, 31, 66, 79]. Nevertheless, all these learning setups are limited in their ability to generalize in terms of varied scene types. While a more promising direction is the exemplar-based one, where one or a few exemplars featuring the scene of interest are provided, algorithm designs tailored for certain scene types in existing methods [59, 60, 105] again draw clear boundaries of scene characteristics they can handle. This work seeks to generate _general natural_ scenes, wherein the geometry and appearance of constituents are often tightly entangled and contribute jointly to unique features. This uniqueness hinders one from collecting sufficient homogeneous samples for learning common features, directing us to the exemplar-based paradigm. On the other hand, varying characteristics across different exemplar scenes restrain us from having ad hoc designs for a certain scene type (e.g., terrains). Hence, we resort to classical patch-based algorithms, which date long before the deep learning era and prevail in several image generation tasks even today [26, 30, 33]. Specifically, given an input 3D scene, we synthesize novel scenes at the patch level and particularly adopt the multi-scale generative patch-based framework introduced in [30], where the core is a _Generative Patch Nearest-Neighbor_ module that maximizes the bidirectional visual summary [84] between the input and output. 
Nevertheless, key design questions remain in _3D_ generation: What representation to work with? And how to synthesize _effectively_ and _efficiently_? In this work, we exploit a grid-based radiance field - Plenoxels [100], which boasts great visual effects, for representing the input scene. While its simplicity and regular structure benefit patch-based algorithms, important designs must be adopted. Specifically, we construct the exemplar pyramid via coarse-to-fine training Plenoxels on images of the input scene, instead of trivially downsampling a pretrained high-resolution one. Furthermore, we transform the high-dimensional, unbounded, and noisy features of the Plenoxels-based exemplar at each scale into more well-defined and compact geometric and appearance features, improving the robustness and efficiency in the subsequent patch matching. On the other end, we employ heterogeneous representations for the synthesis inside the generative nearest neighbor module. Specifically, the patch matching and blending operate in tandem at each scale to gradually synthesize an intermediate _value_-based scene, which will be eventually converted to a _coordinate_-based counterpart at the end. The benefits are several-fold: a) the transition between consecutive generation scales, where the value range of exemplar features may fluctuate, is more stable; b) the transformed features in the synthesized output are inverted to the original, renderable Plenoxels features; so that c) the visual realism in the Plenoxels-based exemplar is preserved intact. Last, working on voxels with patch-based algorithms necessarily leads to high computational cost. So we use an _exact-to-approximate_ patch nearest-neighbor module in the pyramid, which keeps the search space under a manageable range while introducing negligible compromise on the visual summary optimality. These designs, on a collective level, essentially lay a solid foundation for an effective and efficient 3D generative model. To our knowledge, our method is the _first_ 3D generative model that can generate 3D general natural scenes from a _single_ example, with _both_ realistic geometry and visual appearance, in large quantities and varieties. We validate the efficacy of our method on random scene generation with an array of exemplars featuring a variety of general natural scenes, and show its superiority by comparing to baseline methods. The importance of each design choice is also validated. Extensive experiments also demonstrate the versatility of our method in several 3D modeling applications. ## 2 Related Work **3D Generative Models.** The goal of 3D generative models is to synthesize 3D contents with realistic geometric structures and visual appearances. While procedural models are capable of mass-producing particular 3D models, they take expertise and time to obtain rules and elementary assets. Hence, automating this process has been an active area of research, resulting in a vast body of work. A prevalent route is the learning-based one, assuming access to sufficient homogeneous samples for training. Some learn to generate realistic 3D geometric structures, such as indoor scenes [27, 50, 54, 89, 94, 102, 104], terrains [28, 32], urban scenes [25, 67], etc. Others focus on the visual appearance, attempting to automatically texturize or assign materials for geometric scaffolds [34, 40, 41, 45, 107].
Another line has been directed at generating single objects with realistic structures or/and textures [16, 17, 29, 49, 51, 106], showing the potential in enriching the elementary asset library. A dominant trend in recent has also emerged [14, 15, 18, 66, 79], where deep generative models are trained on large volumes of images collected from scenes of a specific category, to allow joint synthesis of realistic 3D structure and appearance with neural radiance fields. Nevertheless, all these learning setups require large volumes of training data, and are limited in their ability to generalize, especially in terms of varied scene types. A more relevant direction is the exemplar-based one, where one or a few exemplars featuring the scene of interest are provided. However, existing methods with algorithm designs tailored for certain scene types again draw clear boundaries of scene characteristics they can handle. [105] extract height field patches from exemplars to synthesize terrains, but the synthesis is guided with particular emphasis on dominant visual elements in terrains. [59, 60, 61] use structured units specified in the input exemplar to facilitate architecture model synthesis. Extending texture image synthesis, [48] synthesizes signed distance fields from an input geometry, the method can not generalize to complex general natural scenes, and the result is inadequate for displaying due to the lack of appearance properties. In this paper, we aim for 3D general natural scenes, with an emphasis on generating both realistic geometry and appearance. Lacking the necessary volume of data characterizing the target scene, along with the difficulties of having ad hoc designs in presence of varying scene characteristics, we advocate for synthesizing novel 3D scenes at the patch level, given a single exemplar scene. **Generative Image Models.** Generative image models have made great strides in the past years. State-of-the-art methods can now learn generative models from large volumes of homogeneous image samples, achieving unprecedented success in producing realistic images [19, 43, 44, 46, 87, 88]. On the other end, there has also been a surge of developments to learn a generative model from a single training image [38, 80, 97]. But, these learning-based single image models typically require a long training time. Differing from these learning-based paradigms, a classical patch-based approach, that dates back long before the deep learning era, is revived in [23, 30, 33], showing amazing performance. The core of these models is to maximize the bidirectional patch similarity between the input and synthesized output in a coarse-to-fine manner, and have demonstrated their capability to generate diverse outputs from a single exemplar image, with orders of magnitude faster than learning-based ones. Our work is particularly inspired by this line of work but must address challenges arising from lifting the multi-scale generative patch-based framework to _effective_ and _efficient_ 3D scene generation. **3D Scene Representations.** While it is common to represent an image as a distributed amplitude of colors over a 2D grid, more often than not, the 3D representation varies. Polygon meshes and points offer a compact representation, with precedents in patch-based synthesis [35, 82], but the irregularity makes them intractable for high-quality 3D generation. The same holds for point clouds. Recently, the community has indeed witnessed a revolution started by an emerging representation, i.e. 
neural radiance field [62], which approximates the 5D plenoptic function [11] of the underlying scene with neural networks and shows unprecedented photo-realistic visual results. An explosion of techniques has occurred since then that improve the representation in various aspects [73, 74, 74, 85, 98, 99, 101, 63]. We refer readers to [95, 86] for more in-depth summaries. Among these variants, we opt for a simple yet expressive voxel-based representation - Plenoxels [100], which has shown great competence on novel view synthesis. Its simplicity and regular structure benefit patch-based algorithms; however, important designs must be taken to fit it into our framework for high-quality generation of general natural scenes. **Concurrent Work.** Concurrent works [90, 42] propose to learn a 3D generative model from images of an input scene, producing variations that can be rendered with realistic imagery. [93] focus on generating diverse geometric structures from an input shape. Their core idea is to extend 2D SinGAN [80] for learning the internal distribution in the 3D exemplar, differing significantly from our technical route. While these methods require a long training time (typically days), our method can generate high-quality samples in minutes, without offline training. Last, [71] can generate arbitrary 3D models represented by NeRF, with pretrained powerful image diffusion models as priors. ## 3 Method The input 3D scene to our method can be a real-world or digital scene, as we first train Plenoxels on the images of the input scene to obtain a Plenoxels-parameterized exemplar. Then, our method synthesizes novel variations at the patch level, with a multi-scale generative patch-based framework. In the following, we describe important designs w.r.t. the scene representation (Section 3.1 & 3.2) and the generative patch nearest-neighbor field module (Section 3.3), that, integrated into the multi-scale patch-based framework (Section 3.2), contribute collectively to our success. ### Scene Representations **Exemplar Scene Representation.** We assume the exemplar scene \(E\) lies within an axis-aligned box \(\mathbb{B}\) centered at the origin, around which we can distribute cameras to capture images for training Plenoxels. As per Plenoxels, \(E\) is represented by a sparse voxel grid, where each occupied voxel center stores features including a scalar opacity \(\rho\) and a vector of spherical harmonic (SH) coefficients **h** for each color channel: \(E:\mathbf{x}\rightarrow(\rho,\textbf{h})\), where **x** indicates a voxel center within \(\mathbb{B}\). These features can be further trilinearly interpolated to model the full plenoptic function continuously in space. Notably, the appearance feature uses 2-degree harmonics, which requires 9 coefficients per color channel for a total of 27 harmonic coefficients per voxel.

Figure 2: a) The synthesized scene \(S\) is represented as a field mapping a coordinate in \(S\) to one in \(E\). b) The Plenoxels-based exemplar \(E\) uses a sparse grid, where each occupied voxel stores a scalar opacity \(\rho\) and spherical harmonic coefficients **h**. c) Appealing imagery of \(S\) can be produced via the volumetric rendering function. Empty voxels are omitted for simplicity.

**Exemplar Transformation.** While Plenoxels features can be used to render pleasing imagery, naively using them for the patch distance is unsuitable. Density values are not well-bounded, contain outliers, and can not accurately describe the geometric structure within a patch.
On the other hand, high-dimensional SH coefficients are excessively consumptive for patch-based frameworks. Hence, we transform the exemplar features for the input to the generative patch nearest neighbor module. First, the density field is converted to a signed distance field (SDF). Specifically, the signed distance at each voxel is computed against the surface mesh extracted from the density field by Marching Cubes [52]. Note that Plenoxels prunes unnecessary voxels during training, which creates holes and irregular structures in invisible regions. So we flood-fill these regions with high-density values, prior to the mesh extraction. Last, we rescale and truncate the signed distance to ignore distance values far away from the surface. Formally, the geometry transformation is as follow: \(G(\textbf{x})=\max\bigl{(}-1,\min\bigl{(}1,SDF(\textbf{x})/t\bigr{)}\bigr{)}\), where the truncated scale \(t\) is set to 3 times of the voxel size at each generation scale. Moreover, we normalize SH coefficient vectors and use the principal component analysis (PCA) to reduce the dimensionality (from 27 to 3 by default), significantly reducing the computation overhead. Finally, the transformed exemplar \(\hat{E}\) is now given as: \[\hat{E}:\textbf{x}\rightarrow\bigl{(}G(\textbf{x}),P(\textbf{h})\bigr{)}, \tag{1}\] where \(G(\cdot)\) denotes transforming of the geometric feature, and \(P(\cdot)\) transforming the appearance feature. **Synthesized Scene Representation.** In the multi-scale generation, the output scene \(S\) at each scale is represented by a coordinate-based mapping field, instead of a value-based one that stores features. Specifically, \(S\) is represented as a field that maps a 3D voxel center in the synthesis grid to one in the exemplar \(E\), \(S:\textbf{x}_{s}\rightarrow\textbf{x}_{e}\), with which the original Plenoxels features \(E\bigl{(}S(\textbf{x}_{s})\bigr{)}\) can be queried for \(S\). Note, in addition to discrete grid samplings, dense samplings \(\textbf{x}_{s}\) in \(S\) can also be mapped to the continuous exemplar space, by simply considering the local offset \(\delta\) to the nearest voxel center, i.e., \(S(\textbf{x})=S(\textbf{N}(\textbf{x}))+\delta\), where \(\textbf{N}(\cdot)\) returns the nearest voxel center of **x**. This is particularly useful, as it enables upscaling \(S\) to finer grids in the multi-scale framework, and sufficient sampling for rendering the final generation result with high-quality imagery. **Viewing Synthesized Results.** The synthesized scene can be projected onto 2D through the volume rendering equation as in NeRF [62], yielding highly photo-realistic imagery under varying views. We refer readers to [100] for more details. Figure 2 illustrates how a synthesized result, paired with the exemplar, can display appealing imagery. ### Multi-scale Generation We use the same multi-scale framework as in previous works [22, 30, 80], which generally employs a coarse-to-fine process, so we have the opportunity to synthesize a more detailed scene based on an initial guess upscaled from the previous scale. In this pyramidal pipeline, different information is captured and reproduced at varying scales, spanning from global layouts at coarser scales to fine geometric and appearance details at finer scales (See Figure 3). **Exemplar Pyramid Construction.** Given the input scene, we build a pyramid \((E_{0},...,E_{N})\), where \(E_{n-1}\) is a downscaled version of \(E_{n}\) by a factor \(r^{-1}\) (\(r=4/3\)). 
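As a brief aside before the pyramid details: the exemplar transformation above (TSDF truncation for geometry plus PCA-compressed, normalized SH coefficients for appearance) reduces to a few array operations. The sketch below is a minimal illustration under assumed array shapes and with scikit-learn's PCA; it is not the authors' actual implementation.

```python
# Sketch of the exemplar feature transformation: truncate the SDF for geometry and
# PCA-compress normalized SH coefficients for appearance. Array layout and the use
# of scikit-learn are illustrative assumptions, not the authors' code.
import numpy as np
from sklearn.decomposition import PCA

def transform_exemplar(sdf, sh_coeffs, voxel_size, n_appearance=3):
    """sdf: (N,) signed distances at occupied voxels; sh_coeffs: (N, 27) SH coefficients."""
    # Geometry: G(x) = max(-1, min(1, SDF(x) / t)), with t = 3 voxel sizes at this scale.
    t = 3.0 * voxel_size
    g = np.clip(sdf / t, -1.0, 1.0)

    # Appearance: normalize the SH vectors, then reduce 27 -> n_appearance dims with PCA.
    norms = np.linalg.norm(sh_coeffs, axis=1, keepdims=True)
    sh_unit = sh_coeffs / np.maximum(norms, 1e-8)
    appearance = PCA(n_components=n_appearance).fit_transform(sh_unit)

    # Per-voxel transformed feature (G(x), P(h)), as in Eq. (1).
    return np.concatenate([g[:, None], appearance], axis=1)   # (N, 1 + n_appearance)
```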
By default, we use \(N=7\) (8 scales in total) for balancing quality and efficiency. Specific resolutions in the pyramid are listed in the supplementary. When working with an exemplar pyramid obtained by recursively downsampling a pretrained high-resolution exemplar, we observed lots of artifacts due to missing features at coarser exemplars, and severe feature inconsistency between exemplars at consecutive scales. Hence, we build the exemplar pyramid by coarse-to-fine training Plenoxels, at increasing resolutions synchronized with the multi-scale framework. Such an exemplar pyramid prevents losing thin structures at coarser scales, and offers a smoother transition and consistent features between consecutive exemplars, leading to stable transitions in the multi-scale generation (See Figure 4).

Figure 3: Multi-scale generation. At each scale \(n\), the NNF module updates the generation based on the transformed exemplar \(\hat{E}_{n}\) and an initial guess \(\tilde{S}_{n}\) upscaled from the previous. The coarsest scale takes a shuffled identity mapping field as input. Note that our coordinate-based representation \(S_{N}\) can map to patches in a higher-resolution exemplar for higher quality (top row).

**Coarse-to-fine Generation.** At each scale \(n\), an initial guess \(\tilde{S}_{n}\) is produced by upsampling the output in the previous scale: \(\tilde{S}_{n}=S_{n-1}\uparrow^{r}\), with the same factor \(r\) to match with the exemplar. Then, the mapping field in \(\tilde{S}_{n}\) is updated by the generative nearest neighbor module with matched coordinates in the exemplar. The patch size at all scales is \(p^{3}\) (\(p=5\) by default), which captures around 1/3 of the content in the coarsest exemplar. Unlike adding noise to raw exemplar values as in image synthesis, our initial guess \(\tilde{S}_{0}\) at the coarsest scale is an identity mapping field shuffled with Gaussian noise \(z_{0}=\mathcal{N}(0,\sigma^{2})\), \(\sigma=0.5\) by default, scaled by the extents of the bounding box \(\mathbb{B}\), which is natural for our coordinate-based synthesis. ### Generative Patch Nearest-Neighbor Field Usually, two stages, namely the patch _matching_ and _blending_, operate in tandem in the nearest neighbor field (NNF) search of patch-based algorithms. Specifically, the matching finds the most suitable patch from the exemplar for each patch in \(S\), and then the latter blends the _values_ of overlapping patches together. This is vital to a robust EM-like optimization in patch-based image synthesis [8, 9], leading to converging synthesis results in several iterations. **Value-/Coordinate-based Synthesis.** In this work, we use heterogeneous representations for the synthesis in NNF. Specifically, at each scale \(n\), the patch matching and blending first operate in tandem for \(T-1\) iterations, to gradually synthesize an intermediate _value_-based scene with averaged values over overlapping patches. Then, when the synthesis is stable at the last iteration, the final output of NNF uses a coordinate-based representation, which stores only the center location of the nearest patch in \(\hat{E}_{n}\). As mentioned earlier, this design offers stable transitions between consecutive generation scales, where the value range of exemplar features may fluctuate, and, importantly, helps us trace back to the original Plenoxels features that can be rendered into photo-realistic imagery, via simply mapping to the original exemplar, even to a higher-resolution version for the final generated scene (See top of Figure 3).
Specifically, each iteration in NNF at each scale proceeds as follows: (1) _Extract Patches_: Patches in \(\hat{E}_{n}(\tilde{S}_{n})\) are extracted to form a query patch set \(Q\), and ones in \(\hat{E}_{n}\) form a key set \(K\). (2) _Match Nearest Neighbors_: We first compute distance between each query patch \(Q_{i}\) and each key patch \(K_{j}\) as the weighted sum of the appearance and geometric features using \(L2\) distance: \[D_{i,j}=w_{a}||Q_{i,j}^{a}-K_{i,j}^{a}||^{2}+(1-w_{a})||Q_{i,j}^{g}-K_{i,j}^{g }||^{2}, \tag{2}\] where \(w_{a}\) (0.5 by default) is the trade-off parameter. To control the visual completeness in the synthesis by the bidirectional similarity [84], the final patch similarity scores normalize the distance with a per-key factor: \[C_{i,j}=\frac{D_{i,j}}{(\alpha+\min_{l}(D_{l,j}))}, \tag{3}\] where \(\alpha\) (0.01 by default) controls the degree of completeness, and smaller \(\alpha\) encourages completeness. (3) _Update \(\tilde{S}_{n}\)_: For each query patch \(Q_{i}\) in \(\hat{E}_{n}(\tilde{S}_{n})\), we find its nearest patch in \(K_{l}\), then update \(\tilde{S}_{n}\) with averaged values over overlapping patches for the first \(T-1\) iterations, and with the nearest patch center for the last iteration. Exact-to-Approximate NNF.Although the computation above can be in parallel performed on GPUs, brutally enumerating all pairs of patches would apparently lead to surprisingly huge distance matrices as the resolution increases, preventing us from obtaining high-resolution synthesis even with modern powerful GPUs. Hence, to avoid searching in tremendous space, we propose to perform the NNF in an _exact-to-approximate_ manner. Specifically, at first 5 coarser scales, _exact nearest-neighbor field_ (E-NNF) search is performed with \(T_{e}=10\) times to stabilize global layout synthesis when the memory consumption is low. At rest 3 finer scales, an _approximate nearest-neighbor field_ (A-NNF) search - PatchMatch [8] - with jump flood [75] is used for \(T_{a}=2\) times to reduce memory footprint from \(O(M^{2})\) to \(O(M)\) (\(M\) is the number patches), which is equivalent to only considering visual coherence. Figure 4: Exemplar pyramid. Coarse-to-fine training (top) shows more consistency between consecutive exemplars, whereas common downsampling algorithms (mid and bottom) result in missing geometry (e.g., the ground) and blurry appearance. ## 4 Experiments We collected a rich variety of 3D scene models to examine the performance of our method on random scene generation, ranging from rocks to plants, sculptures, landscapes, terrains, artistic scenes, etc. Some are digitalized _real-world_ scenes, e.g., the _Devil's Tower_. These scenes possess varying degrees of complexity in terms of geometry and appearance. In the following, we present experiments conducted to evaluate various aspects of the proposed solution. Unless specified, we use the default parameters described above, 512 for the resolution along the max dimension of the \(E^{high}\), and \(512\times 512\) image resolution for rendering. Full visualization of all exemplars, more technical details and experimental results can be found in the supplementary. **Random Generation.** Figure 5 presents results obtained by our method on exemplar-based random scene generation. These results show our method can generalize to scenes of highly varied features, yielding high-quality and diverse scenes similar to the exemplar. 
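Stepping briefly back to the NNF module of Section 3.3: the weighted patch distance of Eq. (2) and the completeness-normalized score of Eq. (3) reduce to a few tensor operations. The sketch below is a minimal illustration that assumes patches have already been unfolded into flat feature vectors; names and shapes are ours, not the released code's.

```python
# Sketch of the patch scoring in the NNF module (Eqs. 2-3): a weighted L2 distance
# over transformed appearance/geometry features, normalized per key patch to
# encourage completeness. Shapes and variable names are illustrative assumptions.
import numpy as np

def patch_scores(q_app, q_geo, k_app, k_geo, w_a=0.5, alpha=0.01):
    """q_*: (Nq, P) query patch features; k_*: (Nk, P) key patch features -> (Nq, Nk) scores."""
    def sq_dists(q, k):                                    # pairwise squared L2 distances
        return ((q[:, None, :] - k[None, :, :]) ** 2).sum(-1)
    d = w_a * sq_dists(q_app, k_app) + (1.0 - w_a) * sq_dists(q_geo, k_geo)   # Eq. (2)
    return d / (alpha + d.min(axis=0, keepdims=True))                          # Eq. (3)

# Each query patch is matched to the key patch with the lowest normalized score:
# nearest = patch_scores(q_app, q_geo, k_app, k_geo).argmin(axis=1)
```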
A particular feature of our method is that the photo-realism and view-dependent effects of the exemplar are inherited in the results, as evidenced by Figures 5 and 7. Each sample is generated in 1\(\sim\)3 minutes on a V100 GPU depending on the scene, and viewing the results can be executed at an interactive rate (15 fps). **Comparisons.** We particularly compare to GRAF and StyleNeRF, which are representative GAN-based 3D generative models. We cast them into exemplar-based models via training separately on images of each exemplar. In addition, we also compare to GPNN-3D, which trivially extends [30] for our task. We investigate the advantages of exemplar-based scene generation using our method against these alternatives, on various exemplars listed in Table 1. Figure 8 presents part of their visual results. Generally, GAN-based baselines suffer from notorious mode collapse, producing almost identical results due to lacking diverse training scenes. The visuals also tend to be more blurry and noisy, compared to our sharp imagery. GPNN-3D can not synthesize high-resolution results due to computational efficiency issues, and quickly fails at coarse scales, producing meaningless content. For quantitative comparisons, we produce 50 generated scenes from each exemplar with each method, render multi-view images and extract 3D surface points of the exemplar and of each generated scene, and then rate the _Visual Quality_ (V-Qua.), _Visual Diversity_ (V-Div.), _Geometry Quality_ (G-Qua.), and _Geometry Diversity_ (G-Div.) using common metrics employed in both 2D [80] and 3D [92] generation. The supplementary contains more details. Table 1 presents quantitative results, where, by rating with the combination of these established metrics, ours outperforms baselines by large margins, suggesting high quality and diversity from both 2D and 3D perspectives.

Figure 5: Random generation. Our method generalizes to all these scenes with highly varied structures and appearances, producing highly diverse and realistic scenes. The supplementary presents exemplars and more artistic imagery rendered with these 3D scenes.

**Ablation.** We compare to several variants derived from our full method: 1) _Ours (w/o TSDF)_ uses an occupancy field, instead of TSDF, converted from the exemplar density field for geometric features; 2) _Ours (w/o c2f)_ drops the deep coarse-to-fine exemplar training, and instead recursively trilinearly interpolates a high-resolution exemplar; 3) _Ours (value-only)_ uses only value-based synthesis in NNF, and does not use TSDF and PCA as we can not trace back to original Plenoxels features, and the maximum resolution is limited to 68; 4) _Ours (coord.-only)_ uses only coordinate-based synthesis in NNF. Figure 9 and Table 2 present the qualitative and quantitative comparison results, respectively, showing the importance of each algorithmic design. **Higher-resolution Generation.** 1) In Figure 6, we show that our method supports generating a result scene of a different size than the exemplar, and particularly of a much higher resolution and different aspect ratio. See specifications in the caption. 2) In addition, we also stress test with a very high-resolution setting, where \(E_{N}\) has 288 voxels along the max dimension, and our method can still synthesize a highly plausible sample in \(\sim\)10 minutes. We observed slightly improved visual quality over the default setting, as the default is sufficient for most complicated scenes. Results and details can be found in the supplementary.
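The higher-resolution generation described above follows directly from the coordinate-based representation of Section 3.1: the synthesized mapping field is simply evaluated against a finer exemplar grid. Below is a minimal sketch of such a lookup, assuming dense arrays and a grid whose voxel centers sit at integer multiples of the voxel size; the actual pipeline operates on sparse Plenoxels structures and interpolates trilinearly.

```python
# Sketch of evaluating a coordinate-based synthesis S against an exemplar grid.
# S stores, per synthesis voxel center, a coordinate in the exemplar; off-grid points
# use the nearest synthesis voxel plus the local offset: S(x) = S(N(x)) + delta.
# Dense arrays and a zero-origin grid are simplifying assumptions for illustration.
import numpy as np

def map_to_exemplar(mapping, x, voxel_size):
    """mapping: (D, H, W, 3) exemplar coordinates; x: (3,) query point in the synthesis volume."""
    idx = np.clip(np.round(x / voxel_size).astype(int),
                  0, np.array(mapping.shape[:3]) - 1)       # nearest synthesis voxel N(x)
    delta = x - idx * voxel_size                             # local offset within that voxel
    return mapping[tuple(idx)] + delta                       # S(x) = S(N(x)) + delta

def fetch_features(exemplar, x_e, voxel_size):
    """Nearest-voxel lookup of exemplar features; Plenoxels itself interpolates trilinearly."""
    idx = np.clip(np.round(x_e / voxel_size).astype(int),
                  0, np.array(exemplar.shape[:3]) - 1)
    return exemplar[tuple(idx)]                              # e.g., (opacity, SH coefficients)
```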
**Applications.** In Figure 10, we demonstrate the versatility of our method in several 3D modeling applications with our unified generation framework (more details in the supplementary): 1) _Retargeting:_ The goal is to resize a 3D scene \begin{table} \begin{tabular}{c|c c c|c c c|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{GRAF} & \multicolumn{3}{c|}{StyleNeRF} & \multicolumn{3}{c|}{GPNN-3D} & \multicolumn{3}{c}{Ours} \\ \hline & V-Qu\_1 & V-Div\_1 & G-Qu\_1 & G-Div\_1 & V-Qu\_1 & V-Div\_1 & G-Qu\_1 & G-Div\_1 & V-Qu\_1 & G-Div\_1 & V-Qu\_1 & V-Div\_1 & G-Qu\_1 & G-Div\_1 \\ St Alphage & **0.078** & **0.046** & **0.473** & 0.040 & 0.206 & 0.032 & 0.769 & 0.012 & 3.929 & 0.041 & 134.997 & 0.059 & **0.022** & **0.312** & **0.612** & **0.473** \\ Devit’s Tower & **0.233** & 0.084 & **0.480** & 0.021 & 0.470 & 0.021 & 1.000 & 0.011 & 2.495 & **0.694** & 6.545 & **1.974** & **0.032** & **0.203** & **0.304** & **0.207** \\ Desert Lopoly & **0.057** & 0.048 & 0.721 & 0.043 & 0.255 & 0.026 & 0.813 & 0.018 & 1.312 & **0.405** & **0.344** & **1.048** & **0.020** & **0.312** & **0.568** & **0.454** \\ Green Island & **0.294** & 0.047 & **0.277** & 0.015 & 0.606 & 0.015 & 0.669 & 0.014 & 0.254 & **1.136** & 18.228 & **17.673** & **0.044** & **0.172** & **0.097** & **0.081** \\ Stone Arch & 0.101 & 0.055 & 0.063 & 0.029 & **0.060** & 0.011 & **0.339** & 0.005 & 1.068 & **0.504** & 53.943 & **29.448** & **0.003** & **0.146** & **0.126** & **0.100** \\ Mountain & 0.010 & 0.060 & **0.498** & 0.022 & **0.222** & 0.037 & **0.757** & 0.010 & 2.602 & **0.787** & 5.947 & 1.674 & **0.105** & **0.391** & 0.935 & **0.467** \\ Vast Land & **0.072** & 0.058 & 0.994 & 0.229 & 0.219 & 0.023 & 1.047 & 0.017 & 0.907 & **0.348** & **0.690** & **1.840** & **0.014** & **0.124** & **0.177** & **0.105** \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparisons. Ours outperforms baselines by large margins, with high quality and diversity scores in terms of both visual and geometric content. We highlight top two in bold and underline the top one. Note GPNN-3D’s high diversity scores can be explained by noisy contents shown in the visual results. Figure 8: Visual comparisons. GAN-based baselines suffer from severe mode collapse, producing samples (two shown) almost identical to the input. GPNN-3D fails on the task. Figure 7: View-dependent effects in our synthesized results. See the reflection on the river changing under spinning cameras. Figure 6: A novel _”A Thousand Li of Rivers and Mountains”_[96] is rendered from a generated 3D sample, that is of a different size, resolution and aspect ratio to the _Vast Land_ exemplar (inset). Specification: \(E_{N}\) - \(288\times 288\times 112\), \(E^{high}\) - \(512\times 512\times 200\), \(S_{N}\) - \(747\times 288\times 112\), \(E^{high}(S_{N})\) - \(1328\times 512\times 200\), final rendering resolution - \(4096\times 1024\). Figure 9: Ablation study. Ours (w/o TSDF) and (w/o c2f) can not well preserve the geometric structures. Ours (value-only) fails and produces with noisy content, while Ours (coord.-only) is unstable, easily leading to bulky structures or holes. to a target size (typically of a different aspect ratio), while maintaining the local patches in the exemplar. We simply change the size of the identity mapping field and use it as the initial guess \(\tilde{S}_{0}\) without shuffling. 
2) _Editing:_ Users can manipulate a 3D proxy, which can be the underlying mapping field or mesh, to edit an exemplar or generated scene, e.g., by removal, duplication, and modification. The manually manipulated proxy is then converted and fed as the initial guess at the coarsest scale for synthesizing the final scene. 3) _Structural analogies:_ Given two scenes A and B, we create a scene with the patch distribution of A, but which is structurally aligned with B. This is realized by using the exemplar pyramid of A, and an identity mapping as the initial guess, but by replacing \(\hat{E}_{0}(\tilde{S}_{0})\) with the transformed features in B, and vice versa. 4) _Re-decoration:_ With the coordinate-based representation, we can re-decorate the generated scenes with ease, via simply remapping to exemplars of different appearance. ## 5 Discussion, Limitations and Future Work This work makes a first attempt towards a generic generative model for synthesizing highly realistic general natural scenes from only one exemplar. Building upon Plenoxels, our method can efficiently synthesize diverse and high-quality scenes. The generated samples particularly inherit photo-realism and view-dependent effects from the exemplar. Despite the demonstrated success, we note a few shortcomings. We cannot handle scenes that elude Plenoxels (e.g., transparent fluids, strong reflections), since the Plenoxels reconstruction is the actual input to our framework. In particular, the Plenoxels-based representation is not suitable for large and unbounded scenes, leading to artifacts in the results (more discussion in the supplementary). With voxelized volumetric representations, we cannot perfectly synthesize scenes with tiny thin structures, or ones with highly semantic or structural content, e.g., human bodies and modern buildings. Moreover, in contrast to the _continuous_ distributions learned in neural-based methods, we work on _discrete_ patch distributions and thus lack the capability of generating novel patches/pixels. A future direction is to learn a continuous distribution from a large number of homogeneous samples produced by our method, with GANs, VQ-VAEs, or diffusion models. Last, although the SH features already account _implicitly_ for view-dependent lighting, the view-dependent effects of the results are inherited from the input Plenoxels, so consistent global illumination cannot be guaranteed in our results, leading to another future direction. **Acknowledgements.** This work was supported in part by National Key R&D Program of China 2022ZD0160801. 
\begin{table} \begin{tabular}{c|c|c c c c} \hline \hline & & V-Qua.\(\downarrow\) & V-Div.\(\uparrow\) & G-Qua.\(\downarrow\) & G-Div.\(\uparrow\) \\ \hline \multirow{4}{*}{St Alphage} & Ours & **0.022** & 0.312 & **0.612** & 0.473 \\ & w/o TSDF & 0.114 & **0.568** & 1.176 & **1.105** \\ & w/o c ZF & **0.024** & **0.353** & **0.847** & 0.639 \\ & value-only & 3.779 & 0.054 & 56.304 & **3.823** \\ & coord.-only & 0.044 & 0.336 & 1.003 & 0.719 \\ \hline \multirow{4}{*}{Devil’s Tower} & Ours & **0.032** & 0.203 & **0.304** & 0.207 \\ & w/o TSDF & 0.047 & 0.263 & 0.422 & 0.350 \\ & w/o c ZF & 0.082 & **0.547** & 2.101 & **3.500** \\ & value-only & 1.795 & **0.344** & 14.122 & **7.650** \\ & coord.-only & **0.041** & 0.201 & **0.256** & 0.492 \\ \hline \multirow{4}{*}{Desert Lowopoly} & Ours & **0.020** & 0.312 & **0.568** & 0.454 \\ & w/o TSDF & **0.041** & **0.462** & **1.347** & 1.007 \\ \cline{1-1} & w/o c ZF & 0.049 & 0.457 & 1.745 & 1.097 \\ \cline{1-1} & value-only & 0.763 & 0.419 & 29.047 & **6.674** \\ \cline{1-1} & coord.-only & 0.100 & **0.487** & 2.754 & **1.526** \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative ablation results. While some variants produce higher diversity scores with meaningless noisy contents, ours consistently produce _diverse_ results with _highest_ quality scores. Figure 10: Applications. 1\({}^{\text{st}}\): Retargeting 3D scenes (marked in boxes). 2\({}^{\text{nd}}\): Editing a 3D scene (removal, duplication and modification). 3\({}^{\text{rd}}\): Structural analogies. A \(\rightarrow\) B = Visual content of A + Structure of B, and _vice versa_. 4\({}^{\text{th}}\): Re-decoration is realized by simply re-mapping to exemplars of different appearance.
2306.12009
Some numbers and polynomials related to degenerate harmonic and degenerate hyperharmonic numbers
Recently, the degenerate harmonic and the degenerate hyperharmonic numbers are introduced respectively as degenerate versions of the harmonic and the hyperharmonic numbers. The aim of this paper is to introduce the degenerate harmonic-Fubini polynomials and numbers related to the degenerate harmonic numbers and to study their properties, explicit expressions and some identities. In addition, as generalizations of those polynomials and numbers, we also introduce the degenerate hyperharmonic-Fubini polynomials and numbers related to the degenerate hyperharmonic numbers and derive similar results to the degenerate harmonic-Fubini polynomials and numbers.
Dae San Kim, Taekyun Kim
2023-06-21T04:19:50Z
http://arxiv.org/abs/2306.12009v1
# Some numbers and polynomials related to degenerate harmonic and degenerate hyperharmonic numbers ###### Abstract. Recently, the degenerate harmonic and the degenerate hyperharmonic numbers are introduced respectively as degenerate versions of the harmonic and the hyperharmonic numbers. The aim of this paper is to introduce the degenerate harmonic-Fubini polynomials and numbers related to the degenerate harmonic numbers and to study their properties, explicit expressions and some identities. In addition, as generalizations of those polynomials and numbers, we also introduce the degenerate hyperharmonic-Fubini polynomials and numbers related to the degenerate hyperharmonic numbers and derive similar results to the degenerate harmonic-Fubini polynomials and numbers. Key words and phrases:degenerate harmonic-Fubini polynomials; degenerate hyperharmonic-Fubini polynomials 2010 Mathematics Subject Classification: 11B73; 11B83 * is corresponding author ## 1. Introduction It is remarkable that various degenerate versions of many special polynomials and numbers have been studied in recent years with regained interest on them. This exploration for degenerate versions was initiated by Carlitz's work on the degenerate Bernoulli and the degenerate Euler numbers (see [5]). These investigations have been carried out by using such diverse tools as combinatorial methods, generating functions, umbral calculus, \(p\)-adic analysis, differential equations, probability theory, operator theory, analytic number theory, and so on. The aim of this paper is to introduce the degenerate harmonic-Fubini polynomials (see (20)) and numbers related to the degenerate harmonic numbers (see (15)) and to study some properties, explicit expressions and identities for them. As generalizations of those polynomials and numbers, the degenerate hyperharmonic-Fubini polynomials (see (27)) and numbers related to the degenerate hyperharmonic numbers (see (16), (18)) are also investigated and similar results to the degenerate harmonic-Fubini polynomials and numbers are obtained. The outline of this paper is as follows. In Section 1, we recall the degenerate logarithms together with their properties and the degenerate exponentials. We remind the reader of the degenerate Stirling numbers of the first kind and those of the second kind. We recall the degenerate Fubini polynomials and the generalized degenerate Fubini polynomials. We remind the reader of the degenerate harmonic numbers and the degenerate hyperharmonic numbers. Then we state a useful lemma giving functional equations for two power series. Section 2 is the main result of this paper. We introduce the degenerate harmonic-Fubini polynomials and numbers related to the degenerate harmonic numbers. In Theorem 1, we express the degenerate harmonic-Fubini polynomial a finite sum involving the degenerate Stirling numbers of the second kind and the degenerate harmonic numbers. In Theorem 2, we find an expression of the degenerate harmonic number as a finite sum involving the degenerate harmonic-Fubini numbers and the degenerate Stirling numbers of the first kind. Some generalized degenerate Fubini polynomial is represented in terms of the degenerate Stirling numbers of the second kind in Theorem 3. The degenerate harmonic-Fubini polynomial is expressed as an infinite sum involving the degenerare harmonic numbers in Theorem 4. 
In Theorem 5, the degenerate hyperharmonic-Fubini polynomial is expressed in terms of the degenerate hyperharmonic numbers and the degenerate Stirling numbers of the second kind, and also in terms of the degenerate harmonic numbers and the degenerate Stirling numbers of the second kind. In Theorem 6, we express the degenerate hyperharmonic-Fubini polynomial as an infinite sum involving the degenerate hyperharmonic numbers. Explicit expressions for the degenerate harmonic numbers are obtained in Theorem 7. In Theorem 8, a functional equation is obtained for any power series \(f(t)\) by applying Lemma 1 with \(g(t)=-\frac{1}{1-t}\log_{\lambda}(1-t)\). By applying this functional equation to \(f(x)=x^{k},\,(k\geq 1)\), we get an identity involving the degenerate harmonic numbers, the degenerate harmonic-Fubini polynomials and some generalized degenerate Fubini polynomials in Theorem 9. From Theorem 9, an identity of similar nature is derived in Theorem 10. An expression involving the degenerate harmonic-Fubini polynomials and some degenerate Fubini polynomials is shown to be equal to a differential operator applied to \(g(x)\), for the aforementioned \(g(t)\). Finally, explicit expressions for the degenerate hyperharmonic numbers are founded in Theorem 12. For the rest of this section, we recall the facts that are needed throughout this paper. For any nonzero \(\lambda\in\mathbb{R}\), the degenerate logarithms are defined by \[\log_{\lambda}(1+t)=\sum_{k=1}^{\infty}\frac{(1)_{k,1/\lambda}\lambda^{k-1}}{k!}t^{k}=\frac{1}{\lambda}\big{(}(1+t)^{\lambda}-1\big{)},\quad(\text{see }[4]), \tag{1}\] where \[(x)_{0,\lambda}=1,\quad(x)_{n,\lambda}=x(x-\lambda)(x-2\lambda)\cdots(x-(n-1) \lambda),\quad(n\geq 1). \tag{2}\] From (1), we note that \[\log_{\lambda}(AB)=A^{\lambda}\log_{\lambda}B+\log_{\lambda}A=B^{\lambda}\log _{\lambda}A+\log_{\lambda}B, \tag{3}\] and \[\log_{\lambda}\left(\frac{B}{A}\right)=\frac{1}{A^{\lambda}}\Big{(}\log_{ \lambda}B-\log_{\lambda}A\Big{)},\quad(\text{see }[9-13,15]).\] For any nonzero \(\lambda\in\mathbb{R}\), the degenerate exponentials \(e_{\lambda}^{x}(t)\) are defined by \[e_{\lambda}^{x}(t)=(1+\lambda t)^{\frac{x}{\lambda}}=\sum_{n=0}^{\infty}(x)_{n,\lambda}\frac{t^{n}}{n!},\quad e_{\lambda}(t)=e_{\lambda}^{1}(t),\quad(\text{ see }[10,14]). \tag{4}\] Note that \(\lim\limits_{\lambda\to 0}\log_{\lambda}(1+t)=\log(1+t),\,\,\lim\limits_{ \lambda\to 0}e_{\lambda}^{x}(t)=e^{\alpha t}\), and \(e_{\lambda}(\log_{\lambda}(t))=\log_{\lambda}(e_{\lambda}(t))=t\). Thus the inverse of the degenerate logarithm \(\log_{\lambda}(t)\) is the degenerate exponential \(e_{\lambda}(t)\). The degenerate Stirling numbers of the first kind are defined by \[(x)_{n}=\sum_{k=0}^{n}S_{1,\lambda}(n,k)(x)_{k,\lambda},\quad(n\geq 0),\quad( \text{see }[9]), \tag{5}\] where \((x)_{0}=1,\,\,(x)_{n}=x(x-1)\cdots(x-n+1),\,\,(n\geq 1)\). In addition, the degenerate unsigned Stirling numbers of the first kind are given by \[\begin{bmatrix}n\\ k\end{bmatrix}=(-1)^{n-k}S_{1,\lambda}(n,k),\quad(n,k\geq 0),\quad(\text{see }[11]). \tag{6}\] As the inversion formula of (5), the degenerate Stirling numbers of the second kind are given by \[(x)_{n,\lambda}=\sum_{k=0}^{n}\begin{Bmatrix}n\\ k\end{Bmatrix}_{\lambda}(x)_{k},\quad(n\geq 0),\quad(\text{see }[9]). \tag{7}\] In [15], the degenerate Fubini polynomials are given by \[\frac{1}{1-x(e_{\lambda}(t)-1)}=\sum_{n=0}^{\infty}F_{n,\lambda}(x)\frac{t^{n} }{n!}. 
\tag{8}\] In particular, for \(x=1\), \(F_{n,\lambda}=F_{n,\lambda}(1)\) are called the degenerate Fubini numbers. From (8), we have \[F_{n,\lambda}(x)=\sum_{k=0}^{n}\left\{\begin{matrix}n\\ k\end{matrix}\right\}_{\lambda}k!x^{k},\quad(n\geq 0),\quad(\text{see }[15]). \tag{9}\] Note that \(F_{n}(x)=\lim_{\lambda\to 0}F_{n,\lambda}(x)\) are the ordinary Fubini polynomials given by \[\frac{1}{1-x(e^{t}-1)}=\sum_{n=0}^{\infty}F_{n}(x)\frac{t^{n}}{n!},\quad(\text{see }[1-8,16-20]). \tag{10}\] For any \(\alpha\in\mathbb{R}\), the generalized degenerate Fubini polynomials (called the degenerate Fubini polynomials of order \(\alpha\)) are given by \[\left(\frac{1}{1-x(e_{\lambda}(t)-1)}\right)^{\alpha}=\sum_{k=0}^{\infty}F_{k,\lambda}^{(\alpha)}(x)\frac{t^{k}}{k!},\quad(\text{see }[15]). \tag{11}\] Thus, by (11), we get \[F_{n,\lambda}^{(\alpha)}(x)=\sum_{k=0}^{n}\langle\alpha\rangle_{k}\left\{\begin{matrix}n\\ k\end{matrix}\right\}_{\lambda}x^{k},\quad(\text{see }[15]), \tag{12}\] where \(\langle\alpha\rangle_{0}=1,\ \langle\alpha\rangle_{k}=\alpha(\alpha+1)\cdots(\alpha+k-1),\ (k\geq 1)\). We recall that the Stirling numbers of the first kind \(S_{1}(n,k)\) and those of the second kind \(\left\{\begin{matrix}n\\ k\end{matrix}\right\}\) are defined by \[(x)_{n}=\sum_{k=0}^{n}S_{1}(n,k)x^{k},\quad x^{n}=\sum_{k=0}^{n}\left\{\begin{matrix}n\\ k\end{matrix}\right\}(x)_{k},\quad(n\geq 0),\quad(\text{see }[3,4,6,8,12,17]).\] Note that \(\lim_{\lambda\to 0}S_{1,\lambda}(n,k)=S_{1}(n,k),\ \lim_{\lambda\to 0}\left\{\begin{matrix}n\\ k\end{matrix}\right\}_{\lambda}=\left\{\begin{matrix}n\\ k\end{matrix}\right\}\). It is well known that the harmonic numbers are defined by \[H_{0}=0,\quad H_{n}=1+\frac{1}{2}+\cdots+\frac{1}{n},\quad(n\geq 1),\quad(\text{see }[6,7,17]). \tag{13}\] From (13), we note that the generating function of the harmonic numbers is given by \[-\frac{\log(1-t)}{1-t}=\sum_{k=1}^{\infty}H_{k}t^{k},\quad(\text{see }[6,7,17]). \tag{14}\] Recently, the degenerate harmonic numbers are defined by \[-\frac{\log_{\lambda}(1-t)}{1-t}=\sum_{n=1}^{\infty}H_{n,\lambda}t^{n},\quad(\text{see }[12,13]). \tag{15}\] For \(n\geq 0\), \(r\geq 1\), the degenerate hyperharmonic numbers are defined by \[H_{0,\lambda}^{(r)}=0\ (r\geq 1),\quad H_{n,\lambda}^{(1)}=H_{n,\lambda},\quad H_{n,\lambda}^{(r)}=\sum_{k=1}^{n}H_{k,\lambda}^{(r-1)},\ (r\geq 2,\ n\geq 1),\ (\text{see }[11,13]). \tag{16}\] From (16), we note that \[H_{n,\lambda}^{(r+1)}=\frac{\binom{n+r}{r}}{\binom{r-\lambda}{r}}(H_{n+r,\lambda}-H_{r,\lambda}),\quad(n,r\in\mathbb{N}),\quad(\text{see }[11,13]). \tag{17}\] The generating function of the degenerate hyperharmonic numbers is given by \[-\frac{\log_{\lambda}(1-t)}{(1-t)^{r}}=\sum_{n=1}^{\infty}H_{n,\lambda}^{(r)}t^{n},\quad(r\in\mathbb{N}),\quad(\text{see }[11,13]). \tag{18}\] For \(f(x)=\sum_{n=0}^{\infty}a_{n}x^{n}\in\mathbb{C}[\![x]\!]\), we define \[f_{\lambda}(x)=\sum_{n=0}^{\infty}a_{n}(x)_{n,\lambda}\in\mathbb{C}[\![x]\!],\] where \(\lambda\) is any fixed real number. Now, we introduce the next lemma which contains functional equations useful for deriving identities in this paper.

**Lemma 1** ([11], Theorem 2).: _Let \(f(x)=\sum_{n=0}^{\infty}a_{n}x^{n},\ g(x)=\sum_{k=0}^{\infty}b_{k}x^{k}\in\mathbb{C}[\![x]\!]\), and let \(r\) be a nonnegative integer. Then we have_ \[\sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}\sum_{k=0}^{n}\left\{\begin{matrix}n\\ k\end{matrix}\right\}_{r,\lambda}x^{k}g^{(k)}(x)=\sum_{n=r}^{\infty}\binom{n}{r}r!\,b_{n}\bigg(\sum_{m=r}^{\infty}\frac{f^{(m)}(0)}{m!}(n)_{m-r,\lambda}\bigg)x^{n}. \tag{19}\]

## 2. Degenerate harmonic-Fubini numbers and polynomials

In view of (9) and (15), we define the _degenerate harmonic-Fubini polynomials_ by \[-\frac{\log_{\lambda}\big(1-x(e_{\lambda}(t)-1)\big)}{1-x(e_{\lambda}(t)-1)}=\sum_{n=1}^{\infty}HF_{n,\lambda}(x)\frac{t^{n}}{n!}. \tag{20}\] When \(x=1\), \(HF_{n,\lambda}=HF_{n,\lambda}(1)\) are called the _degenerate harmonic-Fubini numbers_. From (15) and (20), we note that \[\sum_{n=1}^{\infty}HF_{n,\lambda}(x)\frac{t^{n}}{n!}=\sum_{k=1}^{\infty}H_{k,\lambda}x^{k}\big(e_{\lambda}(t)-1\big)^{k}=\sum_{k=1}^{\infty}H_{k,\lambda}x^{k}k!\sum_{n=k}^{\infty}\left\{\begin{matrix}n\\ k\end{matrix}\right\}_{\lambda}\frac{t^{n}}{n!}=\sum_{n=1}^{\infty}\bigg(\sum_{k=1}^{n}\left\{\begin{matrix}n\\ k\end{matrix}\right\}_{\lambda}H_{k,\lambda}k!x^{k}\bigg)\frac{t^{n}}{n!}.\] Therefore, by comparing the coefficients on both sides, we obtain the following theorem.

**Theorem 1**.: _For \(n\in\mathbb{N}\), we have_ \[HF_{n,\lambda}(x)=\sum_{k=1}^{n}\left\{\begin{matrix}n\\ k\end{matrix}\right\}_{\lambda}H_{k,\lambda}k!x^{k}.\]

We also recall that the degenerate Stirling numbers of the first kind in (5) are given by the generating function \[\frac{1}{k!}\big(\log_{\lambda}(1+t)\big)^{k}=\sum_{n=k}^{\infty}S_{1,\lambda}(n,k)\frac{t^{n}}{n!},\quad(k\geq 0),\quad(\text{see }[9]). \tag{21}\]

Replacing \(t\) by \(\log_{\lambda}(1-t)\) in (20) and (21), we get \[\sum_{n=1}^{\infty}x^{n}H_{n,\lambda}(-1)^{n}t^{n} =\sum_{k=1}^{\infty}HF_{k,\lambda}(x)\frac{1}{k!}\Big(\log_{\lambda}(1-t)\Big)^{k}\] \[=\sum_{k=1}^{\infty}HF_{k,\lambda}(x)\sum_{n=k}^{\infty}S_{1,\lambda}(n,k)(-1)^{n}\frac{t^{n}}{n!}\] \[=\sum_{n=1}^{\infty}(-1)^{n}\sum_{k=1}^{n}HF_{k,\lambda}(x)S_{1,\lambda}(n,k)\frac{t^{n}}{n!}. \tag{22}\] Therefore, by comparing the coefficients on both sides of (22), we obtain the following theorem. **Theorem 2**.: _For \(n\in\mathbb{N}\), we have_ \[H_{n,\lambda}x^{n}=\sum_{k=1}^{n}S_{1,\lambda}(n,k)HF_{k,\lambda}(x).\] _In particular, for \(x=1\), we have_ \[H_{n,\lambda}=\sum_{k=1}^{n}S_{1,\lambda}(n,k)HF_{k,\lambda}.\] From (11), we note that \[\sum_{n=0}^{\infty}F_{n,\lambda}^{(1-\lambda)}(x)\frac{t^{n}}{n!} =\left(\frac{1}{1-x(e_{\lambda}(t)-1)}\right)^{1-\lambda}=\sum_{k=0}^{\infty}\binom{k-\lambda}{k}x^{k}(e_{\lambda}(t)-1)^{k}\] \[=\sum_{k=0}^{\infty}\langle 1-\lambda\rangle_{k}x^{k}\frac{1}{k!}(e_{\lambda}(t)-1)^{k}\] \[=\sum_{k=0}^{\infty}\langle 1-\lambda\rangle_{k}x^{k}\sum_{n=k}^{\infty}\left\{\begin{matrix}n\\ k\end{matrix}\right\}_{\lambda}\frac{t^{n}}{n!}\] \[=\sum_{n=0}^{\infty}\sum_{k=0}^{n}\langle 1-\lambda\rangle_{k}x^{k}\left\{\begin{matrix}n\\ k\end{matrix}\right\}_{\lambda}\frac{t^{n}}{n!}. \tag{23}\] Therefore, by comparing the coefficients on both sides of (23), we obtain the following theorem. **Theorem 3**.: _Let \(n\) be a nonnegative integer. Then we have_ \[F_{n,\lambda}^{(1-\lambda)}(x)=\sum_{k=0}^{n}\langle 1-\lambda\rangle_{k}\left\{\begin{matrix}n\\ k\end{matrix}\right\}_{\lambda}x^{k}. \tag{24}\] Now, by using (3) we observe that \[-\Big(\frac{1}{1-y}\Big)\frac{\log_{\lambda}\Big(1-\frac{y}{1-y}(e_{\lambda}(t)-1)\Big)}{1-\frac{y}{1-y}(e_{\lambda}(t)-1)}=-\frac{\log_{\lambda}\Big(\frac{1-ye_{\lambda}(t)}{1-y}\Big)}{1-ye_{\lambda}(t)}\] \[=\frac{-\Big(\frac{1}{1-y}\Big)^{\lambda}\Big(\log_{\lambda}\big(1-ye_{\lambda}(t)\big)-\log_{\lambda}(1-y)\Big)}{1-ye_{\lambda}(t)}\] \[=-\Big(\frac{1}{1-y}\Big)^{\lambda}\frac{\log_{\lambda}(1-ye_{\lambda}(t))}{1-ye_{\lambda}(t)}+\frac{\log_{\lambda}(1-y)}{(1-y)^{\lambda}}\frac{1}{1-ye_{\lambda}(t)}\] \[=\frac{1}{(1-y)^{\lambda}}\sum_{k=0}^{\infty}H_{k,\lambda}y^{k}e_{\lambda}^{k}(t)+\frac{\log_{\lambda}(1-y)}{(1-y)^{\lambda}}\sum_{k=0}^{\infty}y^{k}e_{\lambda}^{k}(t)\] \[=\frac{1}{(1-y)^{\lambda}}\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}\Big(H_{k,\lambda}+\log_{\lambda}(1-y)\Big)y^{k}(k)_{n,\lambda}\frac{t^{n}}{n!}. 
\tag{25}\] On the other hand, by (20), we get \[-\Big{(}\frac{1}{1-y}\Big{)}\frac{\log_{\lambda}\Big{(}1-\frac{y}{1-y}(e_{ \lambda}(t)-1)\Big{)}}{1-\frac{y}{1-y}(e_{\lambda}(t)-1)}=\frac{1}{1-y}\sum_{n =0}^{\infty}HF_{n,\lambda}\Big{(}\frac{y}{1-y}\Big{)}\frac{t^{n}}{n!}. \tag{26}\] Therefore, by (25) and (26), we obtain the following theorem. **Theorem 4**.: _For \(n\geq 0\), we have_ \[\frac{1}{1-y}HF_{n,\lambda}\Big{(}\frac{y}{1-y}\Big{)}=\frac{1}{(1-y)^{ \lambda}}\sum_{k=0}^{\infty}y^{k}(k)_{n,\lambda}\big{(}H_{k,\lambda}+\log_{ \lambda}(1-y)\big{)}.\] _Equivalently, we also have_ \[HF_{n,\lambda}(y)=(1+y)^{\lambda-1}\sum_{k=0}^{\infty}\Big{(}\frac{y}{1+y} \Big{)}^{k}(k)_{n,\lambda}\Big{(}H_{k,\lambda}+\log_{\lambda}\Big{(}\frac{1}{ 1+y}\Big{)}\Big{)}.\] _Note from Theorem 4 and (15) that_ \[\frac{1}{(1-y)^{\lambda}}\bigg{\{}\bigg{(}y\frac{d}{dy}\bigg{)}_{n,\lambda} \Big{(}-\frac{\log_{\lambda}(1-y)}{1-y}\Big{)}+\log_{\lambda}(1-y)\bigg{(}y \frac{d}{dy}\bigg{)}_{n,\lambda}\frac{1}{1-y}\bigg{\}}=\frac{1}{1-y}HF_{n, \lambda}\Big{(}\frac{y}{1-y}\Big{)}.\] For \(r\in\mathbb{N}\), we define the _degenerate hyperharmonic-Fubini polynomials_ given by \[-\frac{\log_{\lambda}\big{(}1-y(e_{\lambda}(t)-1)}{\big{(}1-y(e_{\lambda}(t)- 1)\big{)}^{r}}=\sum_{n=1}^{\infty}HF_{n,\lambda}^{(r)}(y)\frac{t^{n}}{n!}. \tag{27}\] When \(y=1\), \(HF_{n,\lambda}^{(r)}=HF_{n,\lambda}^{(r)}(1)\) are called the _degenerate hyperharmonic-Fubini numbers_. From (27) and (17), we have \[\sum_{n=1}^{\infty}HF_{n,\lambda}^{(r)}(y)\frac{t^{n}}{n!} =-\frac{\log_{\lambda}(1-y\big{(}e_{\lambda}(t)-1\big{)}}{\big{(}1- y(e_{\lambda}(t)-1)\big{)}^{r}}=\sum_{k=1}^{\infty}H_{k,\lambda}^{(r)}\big{(}e_{ \lambda}(t)-1\big{)}^{k}y^{k}\] \[=\sum_{k=1}^{\infty}H_{k,\lambda}^{(r)}y^{k}k!\frac{1}{k!}\big{(}e _{\lambda}(t)-1\big{)}^{k}=\sum_{k=1}^{\infty}H_{k,\lambda}^{(r)}y^{k}k!\sum_{ n=k}^{\infty}\bigg{\{}\genfrac{}{}{0.0pt}{}{n}{k}\bigg{\}}_{\lambda}\frac{t^{n}}{n!}\] \[=\sum_{n=1}^{\infty}\sum_{k=1}^{n}H_{k,\lambda}^{(r)}y^{k}k! \genfrac{\{}{\}}{0.0pt}{}{\genfrac{}{}{0.0pt}{}{n}{k}\bigg{\}}_{\lambda}\frac{ t^{n}}{n!}\] \[=\sum_{n=1}^{\infty}\Bigg{(}\sum_{k=1}^{n}\frac{\binom{k+r-1}{r-1 }}{\binom{r-1-\lambda}{r-1}}(H_{k+r-1,\lambda}-H_{r-1,\lambda})y^{k}k!\genfrac{ }{}{0.0pt}{}{n}{k}\bigg{\}}_{\lambda}\bigg{)}\frac{t^{n}}{n!}. \tag{28}\] Therefore, by comparing the coefficients on both sides of (28), we obtain the following theorem. **Theorem 5**.: _Let \(n,r\) be positive integers. Then we have_ \[HF_{n,\lambda}^{(r)}(y)=\sum_{k=1}^{n}H_{k,\lambda}^{(r)}y^{k}k! 
\genfrac{\{}{\}}{0.0pt}{}{\genfrac{}{}{0.0pt}{}{n}{k}\bigg{\}}_{\lambda}= \sum_{k=1}^{n}\frac{\binom{k+r-1}{r-1}}{\binom{r-1-\lambda}{r-1}}\big{(}H_{k+r -1,\lambda}-H_{r-1,\lambda}\big{)}k!\genfrac{\{}{\}}{0.0pt}{}{n}{k}_{\lambda}y ^{k}.\] _In particular, for \(y=1\), we get_ \[HF_{n,\lambda}^{(r)}=\sum_{k=1}^{n}H_{k,\lambda}^{(r)}k!\genfrac{\{}{\}}{0.0 pt}{}{\genfrac{}{}{0.0pt}{}{n}{k}\bigg{\}}_{\lambda}=\sum_{k=1}^{n}\frac{\binom{k+r-1 }{r-1}}{\binom{r-1-\lambda}{r-1}}\big{(}H_{k+r-1,\lambda}-H_{r-1,\lambda}\big{)} k!\genfrac{\{}{\}}{0.0pt}{}{n}{k}_{\lambda}.\] By using (3) and (18), we observe that \[-\frac{1}{(1-y)^{r}}\frac{\log_{\lambda}\big{(}1-\frac{y}{1-y}(e_ {\lambda}(t)-1)\big{)}}{\big{(}1-\frac{y}{1-y}(e_{\lambda}(t)-1)\big{)}^{r}}=- \frac{\log_{\lambda}\big{(}\frac{1-y\varphi_{\lambda}(t)}{1-y}\big{)}}{\big{(}1 -ye_{\lambda}(t)\big{)}^{r}}\] \[=-\Big{(}\frac{1}{1-ye_{\lambda}(t)}\Big{)}^{r}\frac{1}{(1-y)^{ \lambda}}\Big{(}\log_{\lambda}\big{(}1-ye_{\lambda}(t)\big{)}-\log_{\lambda}(1 -y)\Big{)}\] \[=\frac{1}{(1-y)^{\lambda}}\bigg{(}-\Big{(}\frac{1}{1-ye_{\lambda} (t)}\Big{)}^{r}\log_{\lambda}\big{(}1-ye_{\lambda}(t)\big{)}+\frac{\log_{ \lambda}(1-y)}{(1-ye_{\lambda}(t))^{r}}\bigg{)}\] \[=\frac{1}{(1-y)^{\lambda}}\bigg{(}\sum_{k=1}^{\infty}H_{k,\lambda }^{(r)}y^{k}e_{\lambda}^{k}(t)+\log_{\lambda}(1-y)\sum_{k=0}^{\infty}\binom{r +k-1}{k}y^{k}e_{\lambda}^{k}(t)\bigg{)}\] \[=\sum_{n=0}^{\infty}\frac{1}{(1-y)^{\lambda}}\sum_{k=0}^{\infty}y^ {k}(k)_{n,\lambda}\bigg{(}H_{k,\lambda}^{(r)}+\binom{r+k-1}{k}\log_{\lambda}(1 -y)\bigg{)}\frac{t^{n}}{n!}. \tag{29}\] Therefore, by (27) and (29), we obtain the following theorem. **Theorem 6**.: _For \(n\geq 0\) and \(r\in\mathbb{N}\), we have_ \[\frac{1}{(1-y)^{r}}HF_{n,\lambda}^{(r)}\Big{(}\frac{y}{1-y}\Big{)}=\frac{1}{(1 -y)^{\lambda}}\sum_{k=0}^{\infty}(k)_{n,\lambda}\bigg{(}H_{k,\lambda}^{(r)}+ \binom{r+k-1}{k}\log_{\lambda}(1-y)\bigg{)}y^{k}.\] _Equivalently, we also have_ \[HF_{n,\lambda}^{(r)}(y)=(1+y)^{\lambda-r}\sum_{k=0}^{\infty}(k)_{n,\lambda} \bigg{(}H_{k,\lambda}^{(r)}+\binom{r+k-1}{k}\log_{\lambda}\Big{(}\frac{1}{1+y }\Big{)}\bigg{)}\Big{(}\frac{y}{1+y}\Big{)}^{k}.\] Let \[g(t)=-\frac{\log_{\lambda}(1-t)}{1-t}=-\frac{1}{\lambda}\big{(}(1-t)^{\lambda- 1}-(1-t)^{-1}\big{)},\quad(\text{see \eqref{eq:1}}).\] Then, from (1) we have \[g^{(k)}(t) =\left(\frac{d}{dt}\right)^{k}g(t)=-\frac{1}{\lambda}\frac{\langle 1 -\lambda\rangle_{k}}{(1-t)^{k+1}}(1-t)^{\lambda}+\frac{1}{\lambda}\frac{k!}{(1-t )^{k+1}}\] \[=-\frac{\langle 1-\lambda\rangle_{k}}{(1-t)^{k+1}}\log_{\lambda}(1-t) +\frac{1}{(1-t)^{k+1}}\left(\frac{k!-\langle 1-\lambda\rangle_{k}}{\lambda} \right). \tag{30}\] From (30), we have \[g^{(k)}(0)=\left(\frac{d}{dt}\right)^{k}g(t)\bigg{|}_{t=0}=\frac{k!-\langle 1 -\lambda\rangle_{k}}{\lambda},\quad(k\in\mathbb{N}). \tag{31}\] By (15), we get \[g^{(k)}(0)=\left(\frac{d}{dt}\right)^{k}\bigg{(}-\frac{\log_{\lambda}(1-t)}{1 -t}\bigg{)}\bigg{|}_{t=0}=\left(\frac{d}{dt}\right)^{k}\sum_{k=1}^{\infty}H_{k,\lambda}t^{k}\bigg{|}_{k=0}=k!H_{k,\lambda}. \tag{32}\] Therefore, by (31) and (32), we obtain the following theorem. **Theorem 7**.: _For \(k\in\mathbb{N}\), we have_ \[H_{k,\lambda}=\frac{1}{k!}\bigg{(}\frac{k!-\langle 1-\lambda\rangle_{k}}{ \lambda}\bigg{)}=\frac{1}{\lambda}\bigg{(}1-\binom{k-\lambda}{k}\bigg{)}.\] Let \(r\) be a nonnegative integer, \(f(x)\in\mathbb{C}[\![x]\!]\), and let \(g(t)=-\frac{1}{1-t}\log_{\lambda}(1-t)\). 
By (19), (30) and (32), we get \[\sum_{n=r}^{\infty}\binom{n}{r}H_{n,\lambda}\,r!\bigg(\sum_{m=r}^{\infty}\frac{f^{(m)}(0)}{m!}(n)_{m-r,\lambda}\bigg)x^{n}\] \[=\frac{1}{1-x}\sum_{m=r}^{\infty}\frac{f^{(m)}(0)}{m!}\sum_{k=r}^{m}\left\{\begin{matrix}m\\ k\end{matrix}\right\}_{r,\lambda}\bigg(\frac{k!-\langle 1-\lambda\rangle_{k}}{\lambda}\bigg)\Big(\frac{x}{1-x}\Big)^{k}\] \[\quad-\frac{1}{1-x}\log_{\lambda}(1-x)\sum_{m=r}^{\infty}\frac{f^{(m)}(0)}{m!}\sum_{k=r}^{m}\left\{\begin{matrix}m\\ k\end{matrix}\right\}_{r,\lambda}\langle 1-\lambda\rangle_{k}\Big(\frac{x}{1-x}\Big)^{k}. \tag{33}\] Let \(r=0\) in (33). Then, by Theorem 1, Theorem 7 and (24), we have \[\sum_{n=0}^{\infty}H_{n,\lambda}\bigg(\sum_{m=0}^{\infty}\frac{f^{(m)}(0)}{m!}(n)_{m,\lambda}\bigg)x^{n}\] \[=\frac{1}{1-x}\sum_{m=0}^{\infty}\frac{f^{(m)}(0)}{m!}\sum_{k=0}^{m}\left\{\begin{matrix}m\\ k\end{matrix}\right\}_{\lambda}\bigg(\frac{k!-\langle 1-\lambda\rangle_{k}}{\lambda}\bigg)\Big(\frac{x}{1-x}\Big)^{k}\] \[\quad-\frac{1}{1-x}\log_{\lambda}(1-x)\sum_{m=0}^{\infty}\frac{f^{(m)}(0)}{m!}\sum_{k=0}^{m}\left\{\begin{matrix}m\\ k\end{matrix}\right\}_{\lambda}\langle 1-\lambda\rangle_{k}\Big(\frac{x}{1-x}\Big)^{k}\] \[=\frac{1}{1-x}\sum_{m=0}^{\infty}\frac{f^{(m)}(0)}{m!}HF_{m,\lambda}\Big(\frac{x}{1-x}\Big)-\frac{\log_{\lambda}(1-x)}{1-x}\sum_{m=0}^{\infty}\frac{f^{(m)}(0)}{m!}F_{m,\lambda}^{(1-\lambda)}\Big(\frac{x}{1-x}\Big). \tag{34}\] Therefore, by (34), we obtain the following theorem. **Theorem 8**.: _Let \(f(x)\in\mathbb{C}[\![x]\!]\). Then we have_ \[\begin{split}&\sum_{n=0}^{\infty}H_{n,\lambda}\bigg(\sum_{m=0}^{\infty}\frac{f^{(m)}(0)}{m!}(n)_{m,\lambda}\bigg)x^{n}\\ &=\frac{1}{1-x}\sum_{m=0}^{\infty}\frac{f^{(m)}(0)}{m!}HF_{m,\lambda}\Big(\frac{x}{1-x}\Big)-\frac{\log_{\lambda}(1-x)}{1-x}\sum_{m=0}^{\infty}\frac{f^{(m)}(0)}{m!}F_{m,\lambda}^{(1-\lambda)}\Big(\frac{x}{1-x}\Big).\end{split} \tag{35}\] Let \(f(x)=x^{k},\ (k\geq 1)\), in (35). Then we have \[\sum_{n=0}^{\infty}H_{n,\lambda}(n)_{k,\lambda}x^{n}=\frac{1}{1-x}HF_{k,\lambda}\Big(\frac{x}{1-x}\Big)-\frac{\log_{\lambda}(1-x)}{1-x}F_{k,\lambda}^{(1-\lambda)}\Big(\frac{x}{1-x}\Big). \tag{36}\] Therefore, by (35), we obtain the following theorem. **Theorem 9**.: _For \(k\geq 1\), we have_ \[\sum_{n=1}^{\infty}H_{n,\lambda}(n)_{k,\lambda}x^{n}=\frac{1}{1-x}HF_{k,\lambda}\Big(\frac{x}{1-x}\Big)-\frac{\log_{\lambda}(1-x)}{1-x}F_{k,\lambda}^{(1-\lambda)}\Big(\frac{x}{1-x}\Big).\] From (36), we note that \[\frac{1}{(1-x)^{2}}\bigg(HF_{k,\lambda}\Big(\frac{x}{1-x}\Big)-\log_{\lambda}(1-x)F_{k,\lambda}^{(1-\lambda)}\Big(\frac{x}{1-x}\Big)\bigg)=\frac{1}{1-x}\sum_{l=1}^{\infty}H_{l,\lambda}(l)_{k,\lambda}x^{l}\] \[=\sum_{j=0}^{\infty}x^{j}\sum_{l=1}^{\infty}H_{l,\lambda}(l)_{k,\lambda}x^{l}=\sum_{n=1}^{\infty}\bigg(\sum_{l=1}^{n}H_{l,\lambda}(l)_{k,\lambda}\bigg)x^{n}. \tag{37}\] Therefore, by (37), we obtain the following theorem. 
**Theorem 10**.: _For \(k\in\mathbb{N}\), we have_ \[\begin{split}&\sum_{n=1}^{\infty}\bigg{(}(1)_{k,\lambda}H_{1, \lambda}+(2)_{k,\lambda}H_{2,\lambda}+\cdots+(n)_{k,\lambda}H_{n,\lambda} \bigg{)}x^{n}\\ &\quad=\frac{1}{(1-x)^{2}}\bigg{(}HF_{k,\lambda}\Big{(}\frac{x}{ 1-x}\Big{)}-\log_{\lambda}(1-x)F_{k,\lambda}^{(1-\lambda)}\Big{(}\frac{x}{1-x }\Big{)}\bigg{)}.\end{split}\] By (36), we get \[\begin{split}&\sum_{n=1}^{\infty}\big{(}(n)_{1,\lambda}+(n)_{2, \lambda}+\cdots+(n)_{k,\lambda}\big{)}H_{n,\lambda}x^{n}\\ &=\frac{1}{1-x}\sum_{l=1}^{k}\bigg{(}HF_{l,\lambda}\Big{(}\frac{ x}{1-x}\Big{)}-\log_{\lambda}(1-x)F_{l,\lambda}^{(1-\lambda)}\Big{(}\frac{x}{1-x }\Big{)}\bigg{)}.\end{split} \tag{38}\] From (15) and (36), we note that \[\begin{split}\Big{(}x\frac{d}{dx}\Big{)}_{k,\lambda}\bigg{(}- \frac{\log_{\lambda}(1-x)}{1-x}\bigg{)}&=\Big{(}x\frac{d}{dx} \Big{)}_{k,\lambda}\bigg{(}\sum_{n=1}^{\infty}H_{n,\lambda}x^{n}\bigg{)}=\sum _{n=1}^{\infty}(n)_{k,\lambda}H_{n,\lambda}x^{n}\\ &=\frac{1}{1-x}\bigg{(}HF_{k,\lambda}\Big{(}\frac{x}{1-x}\Big{)}- \log_{\lambda}(1-x)F_{k,\lambda}^{(1-\lambda)}\Big{(}\frac{x}{1-x}\Big{)} \bigg{)}.\end{split} \tag{39}\] Therefore, by (39), we obtain the following differential equation. **Theorem 11**.: _Let \(k\) be a positive integer. Then we have_ \[\Big{(}x\frac{d}{dx}\Big{)}_{k,\lambda}\bigg{(}-\frac{\log_{\lambda}(1-x)}{1-x }\bigg{)}=\frac{1}{1-x}\bigg{(}HF_{k,\lambda}\Big{(}\frac{x}{1-x}\Big{)}-\log _{\lambda}(1-x)F_{k,\lambda}^{(1-\lambda)}\Big{(}\frac{x}{1-x}\Big{)}\bigg{)}.\] For an integer \(r\) with \(r>1\), we let \[g(t)=-\frac{1}{(1-t)^{r}}\log_{\lambda}(1-t)=-\frac{1}{\lambda}\big{(}(1-t)^{ \lambda-r}-(1-t)^{-r}\big{)},\quad(\text{see (\ref{eq:1})}). \tag{40}\] Then, for \(k\in\mathbb{N}\), we have \[g^{(k)}(t) =\bigg{(}\frac{d}{dt}\bigg{)}^{k}g(t)=-\frac{\langle r-\lambda \rangle_{k}}{\lambda}(1-t)^{\lambda-r-k}+\frac{1}{\lambda}\langle r\rangle_{k} (1-t)^{-r-k}\] \[=-\frac{\langle r-\lambda\rangle_{k}}{(1-t)^{r+k}}\log_{\lambda} (1-t)+\frac{1}{(1-t)^{r+k}}\bigg{(}\frac{\langle r\rangle_{k}-\langle r- \lambda\rangle_{k}}{\lambda}\bigg{)}. \tag{41}\] From (18), we note that \[g^{(k)}(0)=k!H^{(r)}_{k,\lambda},\quad(k\geq 1). \tag{42}\] Therefore, by (41) and (42), we obtain the following theorem. **Theorem 12**.: _For \(k\geq 1\), we have_ \[H^{(r)}_{k,\lambda}=\frac{1}{k!}\bigg{(}\frac{\langle r\rangle_{k}-\langle r- \lambda\rangle_{k}}{\lambda}\bigg{)}=\frac{1}{\lambda}\bigg{(}\binom{r+k-1}{ k}-\binom{r+k-\lambda-1}{k}\bigg{)}.\] Let \(f(x)=\sum_{n=0}^{\infty}a_{n}x^{n}\in\mathbb{C}[\![x]\!]\), and let \(g(t)=-\frac{\log_{\lambda}(1-t)}{(1-t)^{r}}\). Then, by Theorem 12, (19), (40) and (41), we get \[\sum_{n=r}^{\infty}H^{(r)}_{n,\lambda}\binom{n}{r}r!\bigg{(}\sum_ {m=r}^{\infty}\frac{f^{(m)}(0)}{m!}(n)_{m-r,\lambda}\bigg{)}x^{n}\] \[=-\frac{\log_{\lambda}(1-x)}{(1-x)^{r}}\sum_{m=r}^{\infty}\frac{f^ {(m)}(0)}{m!}\sum_{k=r}^{m}\binom{m}{k}_{r,\lambda}\langle r-\lambda\rangle_{ k}\Big{(}\frac{x}{1-x}\Big{)}^{k}\] \[\qquad+\frac{1}{(1-x)^{r}}\sum_{m=r}^{\infty}\frac{f^{(m)}(0)}{m! }\sum_{k=r}^{m}\binom{m}{k}_{r,\lambda}\left(\frac{x}{1-x}\right)^{k}k!H^{(r)} _{k,\lambda} \tag{43}\] Let \(f(x)=x^{k},\ (k\geq 1)\) in (43). 
Then we have \[\sum_{n=r}^{\infty}H^{(r)}_{n,\lambda}\binom{n}{r}r!(n)_{k-r, \lambda}x^{n}\] \[=-\frac{\log_{\lambda}(1-x)}{(1-x)^{r}}\sum_{l=r}^{k}\left\{l \right\}_{r,\lambda}\langle r-\lambda\rangle_{l}\Big{(}\frac{x}{1-x}\Big{)}^{ l}+\frac{1}{(1-x)^{r}}\sum_{l=r}^{k}\left\{l\right\}_{r,\lambda}\Big{(}\frac{x}{1-x} \Big{)}^{l}!H^{(r)}_{l,\lambda}. \tag{44}\] ## 3. Conclusion The degenerate harmonic-Fubini polynomials are given by \(HF_{n,\lambda}(x)=\sum_{k=1}^{n}\left\{{}_{k}^{n}\right\}_{\lambda}H_{k, \lambda}k!x^{k}\), with \(H_{k,\lambda}\) the degenerate harmonic numbers, while the degenerate Fubini polynomials are given by \(F_{n,\lambda}(x)=\sum_{k=0}^{n}\left\{{}_{k}^{n}\right\}_{\lambda}k!x^{k}\). The degenerate harmonic-Fubini polynomials are so named for this reason. The degenerate hyperharmonic-Fubini polynomials are also so named, as it is given by \(HF^{(r)}_{n,\lambda}(x)=\sum_{k=1}^{n}\left\{{}_{k}^{n}\right\}_{\lambda}H^{(r )}_{k,\lambda}k!x^{k}\), with \(H^{(r)}_{k,\lambda}\) the degenerate hyperharmonic numbers. We introduced the degenerate harmonic-Fubini polynomials and numbers and studied their properties, explicit expressions and some identities by using generating functions. In addition, as generalizations of those polynomials and numbers, we also introduced the degenerate hyperharmonic-Fubini polynomials and derived similar results to the degenerate harmonic-Fubini polynomials and numbers. It is one of our future projects to continue to study various degenerate versions of some special polynomials and numbers and to find their applications to physics, science and engineering as well as to mathematics.
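As an independent sanity check on the closed forms above, the explicit formula of Theorem 7 can be verified symbolically by expanding the generating function (15) and comparing coefficients. The short SymPy sketch below is purely illustrative and is not part of the derivations in this paper:

```python
import sympy as sp

t, lam = sp.symbols('t lambda')

# Degenerate logarithm, Eq. (1): log_lambda(1+x) = ((1+x)**lambda - 1)/lambda.
gen = -(((1 - t)**lam - 1) / lam) / (1 - t)   # generating function (15) of H_{n,lambda}

K = 6
ser = sp.series(gen, t, 0, K + 1).removeO()

for k in range(1, K + 1):
    from_series = ser.coeff(t, k)
    closed_form = (1 - sp.binomial(k - lam, k)) / lam   # Theorem 7
    assert sp.simplify(sp.expand_func(from_series - closed_form)) == 0

print("Theorem 7 verified symbolically for k = 1, ...,", K)
```

The same pattern, applied to the generating function (18), can be used to check the formula for the degenerate hyperharmonic numbers in Theorem 12.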
2310.17697
Mitigating Temporal Fragility in the XY Surface Code
An important outstanding challenge that must be overcome in order to fully utilize the XY surface code for correcting biased Pauli noise is the phenomena of fragile temporal boundaries that arise during the standard logical state preparation and measurement protocols. To address this challenge we propose a new logical state preparation protocol based on locally entangling qubits into small Greenberger-Horne-Zeilinger-like states prior to making the stabilizer measurements that place them in the XY-code state. We prove that in this new procedure $O(\sqrt{n})$ high-rate errors along a single lattice boundary can cause a logical failure, leading to an almost quadratic reduction in the number of fault-configurations compared to the standard state-preparation approach. Moreover, the code becomes equivalent to a repetition code for high-rate errors, guaranteeing a 50% code-capacity threshold during state preparation for infinitely biased noise. With a simple matching decoder we confirm that our preparation protocol outperforms the standard one in terms of both threshold and logical error rate in the fault-tolerant regime where measurements are unreliable and at experimentally realistic biases. We also discuss how our state-preparation protocol can be inverted for similar fragile-boundary-mitigated logical-state measurement.
Pei-Kai Tsai, Yue Wu, Shruti Puri
2023-10-26T18:00:02Z
http://arxiv.org/abs/2310.17697v2
# Mitigating Temporal Fragility in the XY Surface Code ###### Abstract An important outstanding challenge that must be overcome in order to fully utilize the XY surface code for correcting biased Pauli noise is the phenomenon of _fragile temporal boundaries_ that arises during the standard logical state preparation and measurement protocols. To address this challenge we propose a new logical state preparation protocol based on locally entangling qubits into small Greenberger-Horne-Zeilinger-like states prior to making the stabilizer measurements that place them in the XY-code state. We prove that in this new procedure \(O(\sqrt{n})\) high-rate errors along a single lattice boundary can cause a logical failure, leading to an almost quadratic reduction in the number of fault-configurations compared to the standard state-preparation approach. Moreover, the code becomes equivalent to a repetition code for high-rate errors, guaranteeing a 50% code-capacity threshold during state preparation for infinitely biased noise. With a simple matching decoder we confirm that our preparation protocol outperforms the standard one in terms of both threshold and logical error rate in the fault-tolerant regime where measurements are unreliable and at experimentally realistic biases. We also discuss how our state-preparation protocol can be inverted for similar fragile-boundary-mitigated logical-state measurement. ## I Introduction Fault-tolerant, scalable quantum computation with noisy physical hardware relies on encoding quantum information in a large number of physical qubits making up an error correcting code. Two important performance metrics for an error correcting code are its threshold, which is the physical error rate below which error correction becomes successful, and the amount of error suppression possible for a given number of qubits. These depend on its _distance_ which sets the minimum weight of an uncorrectable error, the number of fault-configurations leading to uncorrectable errors, and their likelihood which heavily depends on the details of underlying noise model [1; 2]. _Biased-Pauli noise_ is a common noise model describing many practical qubit architectures in which errors that cause bit-flips are far less likely than those that only lead to phase-flips [3; 4; 5; 6; 7; 8; 9; 10]. When only phase-flip noise is present, we say that the noise is infinitely biased. The discovery of native bias-preserving controlled-not gates [9; 10; 11; 12; 13] has driven research towards tailoring codes to be highly effective at correcting biased-Pauli noise [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. Two leading candidates for such codes are the XY surface code [19] and XZZX surface code [23]. These are obtained from the standard CSS surface code by local Clifford-deformation of its stabilizers [30]. Their favorable properties arise from the underlying symmetry of their stabilizers due to which these codes reduce to repetition codes when noise is infinitely biased (see [31] for further discussion on the role of symmetries in error correction). Thus, these codes have a 50% threshold at infinite bias. Moreover, compared to the planar XZZX surface code, the XY code can also tolerate quadratically higher weight phase errors, making it more desirable for correcting strongly biased noise. It is natural to ask if the favorable properties of the XY code persist during logical state preparation, which is the very first step in any quantum algorithm. 
In the standard approach, a logical \(X\) state \(|+_{\rm L}\rangle\) is prepared by initializing each physical qubit in the \(|+\rangle\) state, followed by a measurement of all the code stabilizers. However, in this process half of the stabilizers cannot be used for detecting phase errors, destroying the symmetry of the XY code, a phenomenon that has been referred to as _temporal fragile boundaries_[32]. As a consequence, the code does not reduce to a repetition code under pure phase noise and, as we will show, the threshold at infinite bias degrades to \(\sim 11\%\). Moreover, during state preparation the code distance to phase errors scales as \(\sqrt{n}\) where \(n\) is the number of physical qubits of the code. In contrast, if all the stabilizers could be used for error correction then the code could tolerate more phase-flip errors and its distance to these high-rate errors would be \(n\). Thus, fragile boundaries during state preparation cause an overall degradation of the XY code. Similar fragile-boundary-induced degradation also arises during standard approach for logical measurements. In this work, we propose a new logical state preparation protocol to mitigate the effect of fragile boundaries. In our protocol, physical qubits are entangled into local two- and four-body Greenberger-Horne-Zeilinger(GHZ) states before being entangled into the surface code state by stabilizer measurements. This initialization pattern allows us to use three-fourths of the stabilizers for correcting phase noise and, as we prove, the code reduces to a single repetition code for phase errors with distance \(\sqrt{n}\). Thus a 50% threshold against phase-noise is guaranteed in the state preparation process, a substantial improvement over the standard preparation scheme. Moreover, the fact that there is a single repetition code implies that there are \(O(2^{\sqrt{n}})\) fault-configurations. This is almost a quadratic reduction compared to \(O(\alpha^{\sqrt{n}})\) fault-configurations in the standard preparation protocol where \(3.41\leq\alpha\leq 3.67\)[2]. Thus, our scheme is able to mitigate the degrading effect of fragile-boundaries. While our analytical results are limited to the case of pure phase noise, we numerically examine the performance of our scheme under fault-tolerant setting where some bit-flip errors and measurement errors are present. For decoding we use a modified minimum-weight matching (MWPM) algorithm as proposed in [22] and implemented using an open-source library [33]. We find that our code is successful at reducing the effects of fragile temporal boundaries at experimentally realistic noise biases. We also discuss how our state-preparation scheme can be inverted for fragile-boundary-mitigated logical state measurement. Finally, we present short-depth circuits for Bell-state preparation that are compatible with the conventional layout of the surface code and that introduce minimum additional noise into the code. This paper is organized as follows. Section II starts with a brief outline of the XY surface code. The standard logical state preparation scheme is described in section II.1 and our new scheme along with the main theorems are described in section II.2. Finally, section III presents the results of numerical simulations and we conclude with discussion on Bell state preparation and further opportunities in section IV. 
## II XY code We focus on the rotated XY code, also referred to as the tailored surface code [18; 19], which is defined on a \(d\times d\) square lattice for odd \(d\) with data qubits on the vertices. The \(X\)- and \(Y\)-type stabilizer generators are defined on the faces of the lattice in an alternating checkerboard pattern as shown in Fig. 1. The \(X\) (\(Y\))-type stabilizers are product of Pauli \(X\) (\(Y\)) operators on data qubits around each face. This \(d\times d\) rotated XY code encodes a single logical qubit in \(n=d^{2}\) physical qubits. The \(X\) (\(Y\))-type logical operator is a product of Pauli \(X\) (\(Y\)) acting on qubits on a string connecting the left (top) and right (bottom) boundary. The distance to \(X\) and \(Y\) errors is \(d\). The only non-trivial \(Z\)-type logical operator is a Pauli \(Z\) acting on every qubit [19]. \(X\) (\(Y\)) errors anticommute with the \(Y\) (\(X\))-stabilizers creating pairs of syndrome defects oriented diagonally as shown by green (blue) stars in Fig. 1. A \(Z\) error anticommutes with both \(X\)- and \(Y\)-type stabilizers leading to four syndrome defects, with pairs on neighboring rows (or columns) oriented vertically (or horizontally), as shown by red stars in Fig. 1. The underlying structure of syndrome defects generated due to pure \(Z\) errors leads to enhanced performance against pure phase noise. More precisely, it has been shown that a \(d\times d\) rotated XY code, reduces to a length-\(d^{2}\) or equivalently a length-\(n\) repetition code under pure \(Z\) noise [19]. Consequently, its threshold to pure \(Z\) noise is 50% and the distance to \(Z\) errors is exactly the number of qubits \(n\). Thus, the code also leads to lower logical error rates under pure \(Z\) noise compared to the standard surface code with \(O(\sqrt{n})\)\(Z\)-distance. However, the high \(Z\)-distance in the XY-code is fragile when \(X\) or \(Y\) are present [32]. At any one of the four spatial boundaries, \(O(\sqrt{n})\)\(Z\) errors can combine with a single \(X\) or \(Y\) error to cause a undetectable logical error. In addition to the spatial fragile boundaries, there are temporal fragile boundaries that occur during state preparation and measurement. While a strategy to mitigate spatial fragile boundary by modifying the stabilizers has been proposed previously [32], we present the first approach for mitigating temporal fragile boundaries. ### Standard Logical State Preparation We first describe the phenomena of temporal fragile boundaries in the standard state preparation approach. For concreteness we consider the preparation of the \(\left|+_{\mathrm{L}}\right\rangle\) state but the analysis can be extended to preparation of the \(\left|-_{\mathrm{L}}\right\rangle,\;\left|+i_{\mathrm{L}}\right\rangle,\; \left|-i_{\mathrm{L}}\right\rangle\) states as well. The protocol for the preparation of \(\left|+_{\mathrm{L}}\right\rangle\) begins by the initialization of each physical qubit in the \(\left|+\right\rangle\) state in step 1, followed by the measurement of all the code stabilizers in step 2 [1]. The initial unentangled product state \(\left|+\right\rangle^{\otimes n}\) is the \(+1\) eigenstate of \(X_{\mathrm{L}}\) and all the \(X\)-stabilizers. In the absence of errors, the outcomes of all the \(X\)-stabilizer measurements are guaranteed to be \(+1\), while the \(Y\)-stabilizer measurements result in outcomes randomly chosen from \((+1,-1)\). 
Since the stabilizers commute with the logical operators, the qubits after measurements are projected onto the \(\left|+_{\mathrm{L}}\right\rangle\) state up to a local gauge determined by the outcomes of the \(Y\)-stabilizer measurements. \(Z\) or \(Y\) errors can be detected as they anticommute with the \(X\)-stabilizers and can be corrected. However, because the outcomes of the \(Y\)-stabilizer measurements are completely random, they cannot be used to detect \(Z\) errors. As a result, a \(Z\) error only produces two syndrome defects and cannot be differentiated from \(Y\) errors. In fact, at this stage the code appears to be identical to the standard surface code with a \(\sim 11\%\) threshold and \(d=\sqrt{n}\) distance to \(Z\)-noise on data qubits [1]. Moreover, the number of ways to get a minimum-weight \(Z\) error scales as \(O(\alpha^{\sqrt{n}})\) where \(3.41\leq\alpha\leq 3.67\)[2]. Figure 1: Layout of a rotated XY code, its stabilizers, and error syndromes for high-rate (\(Z\)) and low-rate (\(X,Y\)) errors. ### New Protocol We first describe the new protocol to prepare the \(\ket{+_{\mathrm{L}}}\) state, which proceeds in two steps. The protocol begins, in step 1, by initializing the qubits in Bell states as indicated in Fig. 2. The four physical qubits around the \(Y\)-stabilizer plaquettes, highlighted by dark blue squares, are entangled into the GHZ state \(\ket{\phi_{4}}=\frac{1}{\sqrt{2}}\left(\ket{+}^{\otimes 4}+\ket{-}^{\otimes 4}\right)\). Each pair of qubits involved in the two-body \(Y\)-stabilizers along the top \(X\)-logical boundary, marked by a dark blue line, is entangled into the Bell state \(\ket{\phi_{2}}=\frac{1}{\sqrt{2}}\left(\ket{++}-\ket{--}\right)\), and the remaining qubits along a \(Y\)-logical boundary, marked as blue dots, are prepared in \(\ket{+}\). We will refer to all the qubits except the ones prepared in single-qubit \(\ket{+}\) states as _bulk_ qubits. Subsequently, in step 2, all the stabilizers of the code are measured. Note that \(\ket{\phi_{4}}\) and \(\ket{\phi_{2}}\) are respectively the \(+1\) eigenstates of \(X^{\otimes 4}\) and \(X^{\otimes 2}\). Thus the initial state is a \(+1\) eigenstate of all the \(X\)-stabilizers and \(X\)-type logical operators. Consequently, in the absence of errors, the post-stabilizer-measurement state is the \(+1\) eigenstate of the \(X\)-type logical operators and all the \(X\)-stabilizer measurement outcomes must be \(+1\). Importantly, \(\ket{\phi_{4}}\) and \(\ket{\phi_{2}}\) are \(+1\) eigenstates of \(Y^{\otimes 4}\) and \(Y^{\otimes 2}\), respectively. Consequently, in the absence of errors, the measurement outcomes of the marked \(Y\)-stabilizers must be \(+1\). The measurement outcomes of the unmarked \(Y\) stabilizers are random (\(\pm 1\)). Thus, unlike in the standard protocol, half of the \(Y\)-stabilizers can be used to detect errors in the new scheme. Note that although only \(\ket{+_{\mathrm{L}}}\) is prepared in Theorem 1, \(\ket{+i_{\mathrm{L}}}\) can be prepared in a similar manner due to the symmetry of the XY code, as shown in Appendix A. We now state the main theorem. **Theorem 1**.: In the new state-preparation protocol for the square \(d\times d\) XY code, \(Z\) errors on all the bulk qubits are correctable (part I) and \(Z\) errors on data qubits at the temporal boundary can be decoded as a single repetition code of length \(d\) (part II). 
**Corollary 1** (Fault-configurations).: There are \(O(2^{\sqrt{n}})\) least-weight fault-configurations due to pure \(Z\) errors, where \(n=d^{2}\) is the total number of qubits. This is nearly quadratic improvement over the least-weight fault-configurations in the standard state-preparation approach. **Corollary 2** (Threshold).: The threshold to pure \(Z\) noise is \(50\%\). **Proof.** Consider the square lattice of the \(XY\) code in Fig. 2 where the qubits are placed on the vertices. We will use indices \((i,j)\in\{1,2,...,d\}^{2}\) to denote the location of data qubits. \(Z\) errors on the data qubits can be expressed as \(Z(\mathbf{z})=\bigotimes_{i,j}(Z_{i,j})^{z_{i,j}}\) with a corresponding binary vector \(\mathbf{z}=(z_{1,1},z_{1,2},...,z_{d,d})\in\{0,1\}^{d^{2}}\). \(z_{ij}=0\) (1) implies no (a \(Z\)) error on the data qubit located at \((i,j)\). Thus the probability for \(z_{i,j}=1\) is equal to the probability of \(Z\) errors and the problem of decoding \(Z\) errors reduces to correctly determining \(\mathbf{z}\). In the following, we will refer to the \(X\) and \(Y\) stabilizers with fixed measurement outcome of \(+1\) as _fixed stabilizers_. Under pure \(Z\) errors, the syndrome measurement of any fixed stabilizer \(S\) is \(\prod_{(i,j)\in\mathrm{supp}\,S}(-1)^{z_{i,j}}\), where \(\mathrm{supp}\,S\) denotes the set of qubits on which \(S\) is supported. Thus, it is possible to interpret \(\bigoplus_{(i,j)\in\mathrm{supp}\,S}z_{i,j}=0\) as the parity checks of a classical code where \(\bigoplus\) denotes summation modulo two. A \(-1\) outcome of measuring a fixed stabilizer results in the violation of this parity check. The parity checks can be decoded to determine \(\mathbf{z}\) and the location of \(Z\) errors. Next we show that these parity checks reduce to a number of independent classical repetition codes. First, consider pairs of qubits on the top row for which we have two-bit parity checks \(z_{1,2j-1}\oplus z_{1,2j}\), \(j=1,2,...,(d-1)/2\) due to the fixed \(Y\) stabilizers. Each of the two-bit checks forms a classical 2-bit repetition code REP(2). By adding the check \(z_{1,2j-1}\oplus z_{1,2j}\) to the four-bit parity check \(z_{1,2j-1}\oplus z_{1,2j}\oplus z_{2,2j-1}\oplus z_{2,2j}\), corresponding to the fixed \(X\) stabilizers directly below the top \(Y\) stabilizers, we reduce that four-bit parity check to a two-bit parity check \(z_{2,2j-1}\oplus z_{2,2j}\). Adding this new two-bit parity check to the next four-bit parity arising due to the fixed \(Y\) stabilizer in the next row again reduces the latter to a two-bit parity check. In this recursive manner, all four-bit parity checks reduce to two-bit parity checks \(z_{i,2j-1}\oplus z_{i,2j}\) for \(i=1,2,...,d\) and \(j=1,2,...,(d-1)/2\) with support on pairs of adjacent qubits in every row. We can apply this same procedure but this time starting with pairs of qubits on the leftmost column for which we have two-bit parity checks \(z_{2i,1}\oplus z_{2i+1,1}\), \(i=1,2,...,(d-1)/2\) due to the fixed \(X\) stabilizers. By adding the check \(z_{2i,1}\oplus z_{2i+1,1}\) to the four-bit parity check \(z_{2i,1}\oplus z_{2i,2}\oplus z_{2i+1,1}\oplus z_{2i+1,2}\), corresponding to the fixed \(Y\) stabilizers directly to the right of the \(X\) stabilizers, we reduce that four-bit parity check to a two-bit parity check \(z_{2i,2j}\oplus z_{2i+1,2}\). 
Continuing the recursion, all four-bit parity checks this time reduce to two-bit parity checks \(z_{2i,j}\oplus z_{2i+1,j}\) with support on pairs of adjacent qubits in every column for \(i=1,2,...,(d-1)/2\) and \(j=1,2,...,d\). Now consider \(z_{2i,2j-1}\), \(z_{2i,2j}\), \(z_{2i+1,2j-1}\), \(z_{2i+1,2j}\), for \(i,j=1,2,...,(d-1)/2\), which are supported on qubits around the fixed \(Y\) stabilizers. These form a classical 4-bit repetition code REP(4) with parity checks \(z_{2i,2j-1}\oplus z_{2i,2j}\), \(z_{2i,2j}\oplus z_{2i+1,2j}\), \(z_{2i+1,2j}\oplus z_{2i+1,2j-1}\), and \(z_{2i+1,2j-1}\oplus z_{2i,2j-1}\). Thus we see that for every fixed \(Y\) stabilizer there corresponds a classical REP(2) or REP(4) code. A simple counting shows that there are \((d-1)/2\) REP(2) and \((d-1)^{2}/4\) REP(4) codes. At the outset it seems that the probability of successful decoding will be severely limited by the REP(2) and REP(4) codes. However, incorrect decoding of a REP(2) or REP(4) results in a flip applied to every bit in its support. Equivalently, this results in a \(Z\) error applied to every qubit in the support of the corresponding fixed \(Y\) stabilizer. However, these qubits are prepared in the entangled states \(\ket{\phi_{2}}\) or \(\ket{\phi_{4}}\), which are eigenstates of \(Z^{\otimes 2}\) and \(Z^{\otimes 4}\) and are thus invariant under these operators. Hence \(Z\) errors on all qubits other than the ones on the last column are correctable. This proves part I of Theorem 1. Finally, we consider the last column of qubits. For this column, each fixed \(X\) stabilizer is supported on two qubits. The classical parity checks \(z_{i,d}\oplus z_{i+1,d}\) (\(i=1,2,...,d-1\)) form a classical \(d\)-bit repetition code REP(\(d\)). Decoding the classical repetition code results in decoding \(Z\) errors on these qubits, which is part II of Theorem 1. Thus, up to \((d-1)/2\) flips on these bits can be corrected. The number of least-weight fault configurations is \(\binom{d}{(d+1)/2}=O(2^{d})=O(2^{\sqrt{n}})\), which is Corollary 1. Since the threshold for the classical repetition code is \(50\%\), the threshold for \(Z\) errors in the quantum code is also \(50\%\), from which Corollary 2 follows. ## III Results ### Noise model and decoder The analysis in the previous section demonstrates the advantage of our scheme over the standard state-preparation scheme under pure \(Z\)-noise. In practice there will be some bit-flip noise affecting the qubits. To compare the performance of the two approaches in this experimentally relevant situation, we resort to numerical simulation of the state preparation protocol with practical decoding algorithms. We will use a phenomenological model where (a) \(X\), \(Y\), and \(Z\) errors are applied, with probabilities \(p_{x}\), \(p_{y}\), and \(p_{z}\) respectively, on qubits after they are initialized in the product state \(\ket{+}^{\otimes d^{2}}\) in the case of the standard scheme, or in Bell states in the case of our proposed scheme, and (b) measurement errors are applied with probability \(p_{\rm m}\). For fault tolerance against measurement errors, we perform \(d\) rounds of stabilizer measurements after the measurements in step 2. Our aim is to estimate the logical error rate as a function of the total probability of errors on the data qubits \(p=p_{x}+p_{y}+p_{z}\) for a given bias \(\eta=p_{z}/(p_{x}+p_{y})\). We also assume \(p_{x}=p_{y}\) for simplicity.
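As an illustration of this noise model, biased single-qubit Pauli errors with total rate \(p\) and bias \(\eta\) can be sampled as in the short sketch below; this is an explanatory snippet with our own function names, not the simulation code used to produce the results that follow:

```python
import numpy as np

def sample_biased_paulis(n_qubits, p, eta, rng):
    """Draw i.i.d. single-qubit Pauli errors with total rate p = px + py + pz,
    bias eta = pz / (px + py), and px = py (phenomenological data-qubit noise)."""
    pz = p * eta / (1.0 + eta)
    px = py = p / (2.0 * (1.0 + eta))
    return rng.choice(np.array(["I", "X", "Y", "Z"]),
                      size=n_qubits, p=[1.0 - p, px, py, pz])

rng = np.random.default_rng(seed=1)
errors = sample_biased_paulis(n_qubits=7 * 7, p=0.05, eta=1.0e4, rng=rng)
print((errors == "Z").sum(), "Z errors out of", (errors != "I").sum(), "total errors")
```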
In the standard state-preparation approach, a \(Z\) error on the data qubits flips two stabilizers in the measurement round in step 2. In contrast, in our scheme, a \(Z\) error on a qubit on the top or right boundary flips two stabilizers while a \(Z\) error on any other qubit flips three stabilizers in step 2. A \(Y\) error in either scheme flips two stabilizer measurement outcomes. In both schemes, after the first measurement round a \(Z\) error always flips all neighboring stabilizers since all stabilizers can be used for error correction. This implies that the standard minimum-weight perfect matching (MWPM) algorithm cannot be used to optimally correct for \(Z\)-biased noise, and instead we use a modified MWPM algorithm introduced in [22]. We refer to this as the Tuckett decoder, which we further adapt to account for the fact that only certain stabilizers can be used for error correction in the first measurement round (see Appendix D for details).

Figure 2: The initialization pattern for \(\ket{+_{\rm L}}\) in our new state preparation protocol. The fixed \(Y\) stabilizers are marked by dark blue outline. All the \(X\) stabilizers are fixed.

Figure 3: Scaling of logical error rate below threshold for various \(p\) with \(p_{\rm m}=0\). Filled circles show the logical error rates at different \(p\) with the new protocol. For comparison, the sub-threshold scaling curve for \(p=0.05\) with the standard protocol is also plotted with square markers. The solid and dotted lines are obtained from linear fits. Filled triangles are the logical error rates of repetition codes with distance \(d\) and bit-flip rate \(p\). The dashed lines through the triangles are just drawn for easy visualization.

### No measurement error, \(p_{\rm m}=0\)

We first benchmark the adapted Tuckett decoder under ideal measurements \(p_{\rm m}=0\) and \(\eta=\infty\), so that only pure \(Z\) noise is present. In this limit we find that the adapted Tuckett decoder results in a \(49.1(2)\%\) threshold for our state-preparation scheme and a \(10.1(1)\%\) threshold for the standard scheme, which are in agreement with our analytical predictions. Next, we analyze the sub-threshold scaling of the logical error rate. The filled circles in Fig. 3 show the logical error rate as a function of \(d\) for different values of \(p\) with \(\eta=\infty\) and \(p_{\rm m}=0\) for the new preparation scheme. For comparison, the filled triangles are the logical error rates for repetition codes of length \(d\), as a lower bound on the performance of our protocol. We observe that the numerically obtained logical error rate of our scheme is systematically larger than that for the repetition code, indicating that there is scope to further improve the decoder. We fit the data to the ansatz \(\log p_{\rm L}=(\alpha d+\beta)\log p+(\gamma d+\delta)\). For our preparation scheme we find the fit parameters to be \((\alpha,\beta,\gamma,\delta)=(0.46(1),0.04(1),0.243(9),-0.7(1))\). Recall from Theorem 1 that \((d+1)/2\) phase-flip errors occurring on the last column of qubits are uncorrectable, so that the logical error rate scales as \(p^{(d+1)/2}\) (at low \(p\)) and \(\alpha_{\rm ideal}=0.5\). The value of the slope with the adapted Tuckett decoder, \(\alpha\sim 0.46\), is close to \(\alpha_{\rm ideal}\). Despite good agreement with the ideal slope, the adapted Tuckett decoder does not reduce to an ideal decoder, as shown with examples in Appendix D.
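The repetition-code lower bound shown as filled triangles in Fig. 3 is simple to estimate independently; the sketch below evaluates the logical error rate of a length-\(d\) repetition code under independent bit flips of rate \(p\) using majority-vote decoding, which fails exactly when \((d+1)/2\) or more bits are flipped. This is a self-contained illustration of that bound only, not the decoder used for the XY-code data.

```python
import numpy as np

def rep_code_logical_error_rate(d, p, shots, rng):
    """Monte Carlo estimate of the majority-vote failure rate of a d-bit repetition
    code: a logical error occurs when more than (d-1)/2 of the d bits are flipped."""
    flips = rng.random((shots, d)) < p
    return np.mean(flips.sum(axis=1) > (d - 1) // 2)

rng = np.random.default_rng(1)
for d in (3, 5, 7, 9, 11):
    p_L = rep_code_logical_error_rate(d, p=0.05, shots=200_000, rng=rng)
    print(f"d = {d:2d}  p_L ~ {p_L:.2e}")
```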
Nonetheless, at fixed \(p\), \(p_{\rm L}\) for our scheme is several orders of magnitude smaller than with the standard preparation scheme, as seen, for example, by comparing the logical error rate for \(p=0.05\) with the standard preparation scheme (filled red squares) and the new scheme (filled red circles). For a more precise comparison, we examine the error suppression factor \(\Lambda\), which determines how fast the logical error rate decreases as the distance increases at a given \(p\) (\(p_{\rm L}=O(\Lambda^{-d})\)) [34]. We obtain \(\Lambda\) for \(p=0.05\) from the slopes of the fitted straight lines through the red circles and squares respectively for the two protocols. For our protocol we get \(\Lambda=2.3(1)\), which is nearly twice the \(\Lambda=1.2(1)\) of the standard protocol, indicating a much faster suppression of logical error rates as larger codes are used with our protocol. The large difference in the values of \(\Lambda\) indicates that the number of fault configurations with the adapted Tuckett decoder for our scheme is indeed much smaller compared to that for the standard approach. Despite the sub-optimality of the Tuckett decoder, it reproduces the analytically predicted thresholds and overall sub-threshold scaling behavior for \(\eta=\infty\) fairly well. Thus, we also use it for the finite-\(\eta\) case. For an experimentally realistic high bias \(\eta=10^{4}\) [12; 28], the threshold for the total error rate with our scheme is \(15.1(2)\%\) in comparison to \(10.2(1)\%\) with the standard approach. A plot of the threshold as a function of the bias for \(p_{\rm m}=0\) is shown in Appendix B.

### With measurement error, \(p_{\rm m}=p\)

We now consider non-zero measurement errors and assume \(p_{\rm m}=p\) [22]. The plot of the threshold as a function of \(\eta\) for the two schemes is shown in Fig. 4. In this case, the threshold difference between the two schemes is less dramatic. At infinite bias the threshold increases from \(5.66(1)\%\) for the standard scheme to \(7.47(5)\%\) for our scheme, while for \(\eta=10^{4}\) the threshold increases from \(5.62(2)\%\) to \(6.03(4)\%\). Figure 5 compares the logical error rates for the two preparation approaches with \(d=7\). For \(\eta=\infty\) we find that the logical error rate with our scheme is about an order of magnitude smaller compared to the standard approach. However, the difference between the two approaches becomes smaller as \(p\) decreases. We attribute this effect to the decreasing contribution from temporal boundaries at smaller \(p\), where the gain from our protocol is reduced. For example, at \(\eta=10^{4}\) and \(p=4\%\), the state preparation error rate with our approach almost reaches the floor set by the logical memory error rate \((5.3(1)\times 10^{-3})\) for the same parameters, shown as a black triangle in Fig. 5. The standard preparation scheme clearly cannot reach this floor due to the large contribution from temporal boundary errors. On the other hand, at \(\eta=10^{4}\) and \(p=0.6\%\), the logical memory error rate is \(1.5(2)\times 10^{-5}\). Our state preparation reaches this value but the logical error rate with the standard preparation scheme is slightly higher, at \(2.8(2)\times 10^{-5}\). These results confirm that our scheme can indeed reduce the amount of additional state-preparation errors due to temporal fragile boundaries.
For low physical error rates, where the improvement looks less significant, the dominant contribution to the logical error rate is mainly from measurement errors and not from temporal boundaries.

Figure 4: Noise thresholds as a function of bias \(\eta\) for the standard state-preparation (green) and the new protocol (red) for \(p_{\rm m}=p\).

## IV Discussion

In this work we proposed a new state-preparation protocol for the XY code which mitigates the effect of fragile temporal boundaries based on using local GHZ states. We also studied the performance of our approach with a practical decoder. In Appendix C we discuss how this protocol can be inverted to realize fragile-boundary-mitigated logical measurements. Practically, it is necessary to be able to prepare the Bell states with a short-depth circuit in a bias-preserving way. One possible circuit is shown in Fig. 6(a), in which the Bell states are prepared using CX gates between the data qubits. One drawback of this circuit is that a single high-rate \(Z\) error on the data qubit which is the common target for all the CX gates can spread to multiple data qubits, causing a correlated error. Moreover, this circuit is also not compatible with the standard connectivity of the surface code where the data qubits don't interact with each other directly, but only interact with an ancilla. To overcome these shortcomings, we also give an alternative circuit in Fig. 6(b). This circuit can be effectively understood as first creating a four- (two-)body Bell state on three (one) data qubits and one ancilla and then swapping the ancilla with the remaining data qubit. Crucially, the ancilla is left in the \(|+\rangle\) state at the end in the absence of noise. A single \(Z\)-type error on the ancilla causes it to end up in the \(|-\rangle\) state. Thus an \(X\)-measurement performed on the ancilla at the very end reveals the presence of \(Z\) errors on it. The Bell state is used in the code only after the ancilla is measured in \(|+\rangle\). Moreover, this heralding also eliminates error correlations caused by \(Z\) errors on the ancilla to first order. The ancilla-noise robustness only comes at the cost of one extra CX gate compared to the circuit in Fig. 6(a) and compared to the standard stabilizer measurement circuit. Ultimately, future work should consider a full circuit-level simulation of our scheme with additional modifications to mitigate the fragile spatial boundaries [32]. Moreover, there is considerable room for improving the performance of our scheme by improving the decoder. One possible path would be to combine the Tuckett decoder with belief-propagation [35; 36; 37; 33]. Ultimately, in order to fully understand the advantages and limits of our scheme, a hypergraph decoder will be necessary. While such a decoder may be inefficient, approximate solutions may be sufficient to reach reasonably low error rates with reasonable latency [38].

###### Acknowledgements.

This material is based upon work supported by the National Science Foundation (CAREER grant no. 2145223). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We also thank Shilin Huang and Shraddha Singh for useful discussions.

## Appendix A State preparation of \(Y_{\rm L}\) eigenstate \(|+i_{\rm L}\rangle\)

A similar construction in Fig. 7 shows the local initialization pattern which is an eigenstate of \(Y_{\rm L}\).
Due to the symmetry of \(X\) and \(Y\) in the XY code, the same argument as in Theorem 1 applies to prove the \(\text{REP}(d)\) structure and \(50\%\) threshold for state preparation.

Figure 5: Logical error rates for the standard (filled squares) and new preparation (filled circles) schemes with \(p_{\rm m}=p\) at \(\eta=\infty\) (red) and \(\eta=10^{4}\) (green) for \(d=7\). The solid and dashed lines are shown as guides for the eye and are not obtained from any fits. The black triangles show the memory logical error rate at \(p=0.6\%,4\%\) for \(\eta=10^{4}\).

Figure 6: (a) Simple circuits to prepare \(|\phi_{4}\rangle\) and \(|\phi_{2}\rangle\). (b) An alternative circuit for \(|\phi_{4}\rangle\) which is compatible with the standard surface code qubit layout where data qubits only connect to an ancilla. (c) A similar circuit to prepare \(|\phi_{2}\rangle\) on data qubits.

## Appendix B Threshold when \(p_{\text{m}}=0\)

Figure 8 shows the threshold as a function of \(\eta\) when \(p_{\text{m}}=0\) and Pauli errors are applied on the data qubits.

## Appendix C Logical Measurement of \(X_{\text{L}}\)

The standard protocol for \(X_{\text{L}}\) measurement proceeds by measuring each data qubit in the \(X\) basis [1]. The measurement outcomes \(x_{i}\in\{0,1\}\) can be added to obtain the logical measurement result \(x_{\text{L}}=\bigoplus_{i}x_{i}\). In the absence of errors, the measurement outcomes of qubits supported by an \(X\)-type stabilizer \(S\) must sum to zero: \(\bigoplus_{v\in\text{supp}\,S}x_{v}=0\), since all measurements commute with \(S\). The \(X\)-type stabilizers can thus be effectively used to detect and correct data qubit \(Z\) errors (caused by a physical \(Z\) or \(Y\) error on the data qubit or by a measurement error). The \(Y\)-type stabilizers do not provide any information about errors since they don't commute with the data qubit \(X\) measurements. Thus, there is no way to detect \(X\) errors, which is not a problem as these errors don't affect the \(X_{\text{L}}\) measurement anyway. Nonetheless, only half the stabilizers can be used for correcting errors and there is no way to distinguish a \(Z\) error from a \(Y\) error. This results in fragile temporal boundaries similar to the case of state preparation. We overcome this challenge by "inverting" the new state preparation protocol. We measure the local operators that stabilize the Bell states \(\ket{\phi_{4}}\) and \(\ket{\phi_{2}}\), which are \(\{YYYY,XXII,IXXI,IIXX\}\) and \(\{YY,XX\}\) respectively. The qubits in the last column are measured in the \(X\) basis. The result of the \(X_{\text{L}}\) measurement can be inferred by summing over all disjoint \(XX\) and \(X\) measurements. Moreover, the measurement outcomes obey the set of \(\text{REP}(4)\), \(\text{REP}(2)\), and \(\text{REP}(d)\) parity checks as described under Theorem 1. We know from the discussion under Theorem 1 that incorrect decoding of a \(\text{REP}(2)\) or \(\text{REP}(4)\) results in \(Z^{\otimes 2}\) or \(Z^{\otimes 4}\) applied to the qubits supporting \(\ket{\phi_{2}}\) or \(\ket{\phi_{4}}\). However, \(Z^{\otimes 2}\) on \(\ket{\phi_{2}}\) or \(Z^{\otimes 4}\) on \(\ket{\phi_{4}}\) commutes with the operators being measured. Thus we conclude that in this new measurement protocol for the square \(d\times d\) \(XY\) code, \(Z\) errors on data qubits at the temporal boundary can be decoded as a single repetition code \(\text{REP}(d)\) on the last column of qubits.
It also follows that there are \(O(2^{\sqrt{n}})\) least-weight fault-configurations, where \(n=d^{2}\) is the total number of qubits, and that the threshold to pure \(Z\) noise is \(50\%\).

## Appendix D Adapted Tuckett Decoder

In this work, we apply the XY code decoder exploiting the symmetries of the code and noise bias [22]. We refer the reader to [22] for details. In this section we only highlight the modifications made to the original decoder for state-preparation. The only difference between the original decoder and the decoder we use is how the matching graph is weighted in the first time-step to account for the temporal boundaries. We need to add virtual vertices at the temporal boundaries. The vertex for an unfixed stabilizer can either be matched to its virtual temporal vertex with zero weight, or matched to any other vertices with normal weights corresponding to qubit \(X\), \(Y\), \(Z\) errors. The vertex for a fixed stabilizer can be matched to its virtual temporal vertex with a weight corresponding to measurement errors \(p_{\text{m}}\). For the standard preparation approach, however, the syndromes due to \(Z\) errors are identical to the syndromes due to \(Y\) errors in the first time-step. Thus, we modify the Tuckett decoder in this case so that only diagonal edges corresponding to \(Y\) errors are allowed in the first time-step, with a weight corresponding to the probability of \(Z\) and \(Y\) errors. If we don't do this and instead use parallel edges like in the original Tuckett decoder, then the performance of the standard approach degrades substantially. Recall that in the case of state preparation with the new protocol, an optimal decoder should correct up to \((d-1)/2\) \(Z\) errors on the last column of qubits; however, with an example shown in Fig. 9 we find that the Tuckett decoder is unable to achieve this. The unfixed stabilizers, i.e., stabilizers whose measurement outcome is unknown and which cannot be used for error correction, are marked with a thick black outline in the figure for clarity. Figure 9(a) shows the syndromes (filled stars) due to two \(Z\) errors on the qubits marked with red circles. The solid lines show the possible edges from matching. In this case the decoder assigns \(Z\) errors to qubits correctly. However, there is an alternate edge-matching of the same weight, shown in Fig. 9(b). In this case the decoder assigns \(Z\) errors to the qubits marked in solid blue, which differs from the actual \(Z\) errors in red by a logical operator.

Figure 8: Noise thresholds as a function of bias \(\eta\) for the standard state-preparation (green) and the new protocol (red) for \(p_{\text{m}}=0\).
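As a small numerical consistency check of the invariance argument used above (and in the proof of Theorem 1), the sketch below constructs the four-qubit state stabilized by \(\{YYYY,XXII,IXXI,IIXX\}\) by projection and verifies that \(Z^{\otimes 4}\) acts on it only as a global \(\pm 1\) phase. The construction by projection is an illustrative choice made here for the check; the circuits of Fig. 6 prepare the same states directly.

```python
import numpy as np
from functools import reduce

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = {"I": I, "X": X, "Y": Y, "Z": Z}

def op(label):
    """Tensor product of single-qubit Paulis, e.g. op('YYYY')."""
    return reduce(np.kron, (pauli[c] for c in label))

# Stabilizer generators of |phi_4> as listed in Appendix C.
gens = ["YYYY", "XXII", "IXXI", "IIXX"]

# Project a generic state (here |++++>) onto the simultaneous +1 eigenspace.
psi = np.ones(16, dtype=complex) / 4.0
for g in gens:
    psi = 0.5 * (psi + op(g) @ psi)
psi /= np.linalg.norm(psi)

for g in gens:                      # sanity check: +1 eigenvalue for every generator
    assert np.allclose(op(g) @ psi, psi)

# Z applied to every qubit changes |phi_4> only by a global phase (+-1).
overlap = np.vdot(psi, op("ZZZZ") @ psi)
print("<phi_4| Z^x4 |phi_4> =", np.round(overlap.real, 6))
```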
2301.10861
Hubble Tension and Gravitational Self-Interaction
One of the most important problems vexing the $\Lambda$CDM cosmological model is the Hubble tension. It arises from the fact that measurements of the present value of the Hubble parameter performed with low-redshift quantities, e.g., the Type IA supernova, tend to yield larger values than measurements from quantities originating at high-redshift, e.g., fits of cosmic microwave background radiation. It is becoming likely that the discrepancy, currently standing at $5\sigma$, is not due to systematic errors in the measurements. Here we explore whether the self-interaction of gravitational fields in General Relativity, which are traditionally neglected when studying the evolution of the universe, can explain the tension. We find that with field self-interaction accounted for, both low- and high-redshift data are simultaneously well-fitted, thereby showing that gravitational self-interaction could explain the Hubble tension. Crucially, this is achieved without introducing additional parameters.
Corey Sargent, Alexandre Deur, Balsa Terzic
2023-01-25T23:04:25Z
http://arxiv.org/abs/2301.10861v1
# Hubble Tension and Gravitational Self-Interaction

###### Abstract

One of the most important problems vexing the \(\Lambda\)CDM cosmological model is the Hubble tension. It arises from the fact that measurements of the present value of the Hubble parameter performed with low-redshift quantities, e.g., the Type IA supernova, tend to yield larger values than measurements from quantities originating at high-redshift, e.g., fits of cosmic microwave background radiation. It is becoming likely that the discrepancy, currently standing at \(5\sigma\), is not due to systematic errors in the measurements. Here we explore whether the self-interaction of gravitational fields in General Relativity, which are traditionally neglected when studying the evolution of the universe, can explain the tension. We find that with field self-interaction accounted for, both low- and high-redshift data are _simultaneously_ well-fitted, thereby showing that gravitational self-interaction could explain the Hubble tension. Crucially, this is achieved without introducing additional parameters.

## I The Hubble Tension

Modern cosmology began with the discovery of Hubble's law. Its central element, the present value of the Hubble parameter, \(H_{0}\), has a troubled history of measurements, and it is only in the last two decades that precise determinations have become available. However, two types of precision measurements of \(H_{0}\) are in conflict. The first type comprises observations of phenomena originating at high redshift \(z\), principally the power spectrum of the cosmic microwave background (CMB) [1] and the baryon acoustic oscillations (BAO) [2]. The second type consists of determinations of \(H_{0}\) from low-\(z\) phenomena, notably using standard candles [3] and time-delay cosmography [4] methods. See [5] for the low- and high-\(z\) methods providing \(H_{0}\). The high-\(z\) phenomena yield \(H_{0}\) values significantly lower than those from low-\(z\). This is known as the "Hubble tension" [5; 6; 7]. The discrepancy presently reaches a \(5\sigma\) significance: the combined high-\(z\) measurements yield \(67.28\pm 0.60\) km/s/Mpc while the combined low-\(z\) measurements yield \(H_{0}=73.04\pm 1.04\) km/s/Mpc [8]. Yet, individual low-\(z\) measurements can be as much as \(6\sigma\) away [5] from the most precise high-\(z\) datum, the Planck satellite result [1]. Although the Hubble tension may originate from unaccounted systematic effects [9], the consistency of the high-\(z\) results on the one hand, and that of the low-\(z\) results on the other, suggests that it could instead reveal a limitation of the current standard model of cosmology, the dark energy-cold dark matter model (\(\Lambda\)CDM). This would be just one of several malaises of \(\Lambda\)CDM. A first worry is that detection of dark matter particles by direct [10] or indirect [11] measurements is still wanting, with searches having almost exhausted the allowed parameter spaces of likely candidates. Furthermore, the most natural extensions of the standard model of particle physics which offer convincing dark matter candidates are mostly ruled out, e.g., minimal SUSY [12]. Other worries with \(\Lambda\)CDM include overestimating the number of globular clusters and dwarf galaxies [13] or the lack of an uncontrived explanation for tight correlations between the supposedly sub-dominant baryonic matter and quantities characterizing galaxy dynamics, e.g., the Tully-Fisher relation [14], radial acceleration relation (RAR) [15], or Renzo's rule [16].
These issues motivate developing alternatives to \(\Lambda\)CDM that could naturally resolve these problems. Here we follow this direction and investigate whether the Hubble tension can be understood with a model that incorporates the fact that in General Relativity (GR), gravitational fields interact with each other (field self-interaction, SI). That central feature of GR is the basis for the GR-SI model. This model already explains the chief observations involving dark matter/energy without recourse to dark components: the flat rotation curves of galaxies [17; 18]; the high-\(z\) supernova luminosities [19]; the CMB anisotropies [20]; the formation of large structures [21]; the matter power spectrum [20]; the internal dynamics of galaxy clusters, including that of the Bullet Cluster [17]; and the RAR [22] and Tully-Fisher [17] relations. In the next section, we recall the physical basis of the GR-SI framework and its predictions. We then discuss how, from the perspective of the GR-SI model, a Hubble tension should arise if low- and high-\(z\) data are analyzed with \(\Lambda\)CDM, and why the tension is not present in GR-SI. After summarizing how the evolution of the universe affects the CMB anisotropy observations in both the GR-SI and \(\Lambda\)CDM frameworks, we use GR-SI to fit luminosity distance data. This constrains the GR-SI parameters describing the effects of large-scale structure formation on the long-distance propagation of gravity, effects that are encapsulated in a so-called _depletion function_ \(D_{M}(z)\). Finally, we verify that with the constrained parameters, the GR-SI fit reproduces the CMB power spectrum better with the low-\(z\) value of \(H_{0}\) than with the high-\(z\) \(H_{0}\) determination. We also find that if \(H_{0}\) is left as a free parameter, its best-fit value agrees with the low-\(z\) determination rather than the high-\(z\) one. This indicates an absence of Hubble tension in the GR-SI model. We will consider only the scalar multipole coefficient \(C_{TT,l}^{s}\) since it is sufficient to investigate whether a Hubble tension is present in the GR-SI model. In particular, it is not necessary for the goal of this article to investigate the polarized CMB data.

## II Field self-interaction and its consequences

A defining feature of GR is that it is a non-linear theory: gravity fields interact with each other, in contrast to Newtonian gravity. The linear character of the latter allows for the field superposition principle, while in GR, the combination of fields differs from their sum since the fields interact. In fact, the GR Lagrangian \(\mathcal{L}_{\text{GR}}=\sqrt{\det(g_{\mu\nu})}\,g_{\mu\nu}R^{\mu\nu}/(16\pi G)\) (here \(g_{\mu\nu}\) is the metric, \(G\) is Newton's constant and \(R_{\mu\nu}\) is the Ricci tensor) expressed in a polynomial form [23]: \[\mathcal{L}_{\text{GR}}\!=\!\sum_{n=0}^{\infty}(16\pi MG)^{n/2}\left[\phi^{n}\partial\phi\partial\phi\right], \tag{1}\] explicitly shows that a gravitational field self-interacts. Here, \(\phi_{\mu\nu}\) is the gravitational field due to a unit mass and is defined as the deviation of \(g_{\mu\nu}\) from a reference constant metric \(\eta_{\mu\nu}\), \(\phi_{\mu\nu}\equiv(g_{\mu\nu}-\eta_{\mu\nu})/\sqrt{M}\), where \(M\) is the mass of the system. For simplicity, we ignored the matter term of \(\mathcal{L}_{\text{GR}}\): discussing the pure-field case is sufficient.
The bracket in \([\phi^{n}\partial\phi\partial\phi]\) signifies a sum of Lorentz-invariant terms whose forms are \(\phi^{n}\partial\phi\partial\phi\), e.g., \([\partial\phi\partial\phi]\) is the Fierz-Pauli Lagrangian of linearized GR [24]. Newtonian gravity is recovered if \(\eta_{\mu\nu}\) is the Minkowski metric and if one keeps only the time-time component of the \(n=0\) term of Eq. (1), \([\partial\phi\partial\phi]\to\partial^{\mu}\phi_{00}\partial_{\mu}\phi^{00}\), with \(\partial^{0}\phi_{00}=0\). The term \([\partial\phi\partial\phi]\) formalizes the free motion of the field, _viz_, it generates the two-point correlation function that gives the probability for the field to freely propagate from one spacetime point to another. The \(n>0\) terms are interaction terms and therefore cause the field SI. An analogous phenomenon occurs for the nuclear Strong Force, whose theory is Quantum Chromodynamics (QCD). Actually, the reason why GR and QCD are non-linear theories is the same: they possess several types of distinct "charges". For GR, they are the mass/energy, momentum and stress. For QCD, they are the three color charges. This causes the fields of GR and QCD to be rank-2 tensors, i.e., non-commuting objects. The non-zero commutators in turn give rise to SI terms. This results in GR and QCD having the same Lagrangian structure. Field SI is a central and conspicuous feature of QCD due to its large coupling \(\alpha_{s}\) [25]. In contrast, field SI in GR is controlled by \(\sim\!\!\sqrt{GM/L}\) (with \(L\) a characteristic length of the system), whose value is typically small. This makes the linear approximations of GR, e.g., the Newtonian or the Fierz-Pauli theories, adequate for most applications. However, if \(\sqrt{GM/L}\) is large enough, SI _must_ be accounted for: it is an unavoidable consequence of GR. The calculations in [17; 18; 28] indicate that for galaxies or galaxy clusters, \(\sqrt{GM/L}\) is large enough to enable SI. One consequence of SI in QCD is to enhance the binding of quarks, resulting in their confinement. Likewise in GR, if a galactic mass is large enough to enable SI, it would enhance the binding of galactic components in a manner that directly leads to flat galactic rotation curves [17] without requiring dark matter. The increased binding also dispenses with the need for dark matter to account for the growth of large-scale structures [21]. On the other hand, using Newtonian gravity to analyze systems in which SI is important overlooks the binding enhancement and produces an apparent mass discrepancy interpreted as dark matter. Importantly, SI effects cancel out in isotropic and homogeneous systems. For example, a nearly spherical galaxy shows much less evidence of dark matter than a flatter galaxy [26; 27]. Another direct and crucial consequence of the binding enhancement comes from energy conservation: the increase of binding energy inside a system must be balanced by a reduction of the gravitational energy outside of the system. In QCD, the larger binding confines quarks into hadrons, while outside hadrons, the Strong Force declines into the much weaker residual Yukawa interaction. Likewise, if SI binds massive systems more tightly, gravitation must be reduced outside these systems. Overlooking that large-distance reduction of gravity would require a compensating global repulsion, in much the same way as overlooking the binding enhancement requires a compensating dark mass. The purported repulsion would then be interpreted as dark energy.
The enhanced binding of structures, _viz_, the _local_ effect of SI, is computed starting from GR's Lagrangian, Eq. (1) [17; 28]. The large-distance suppression of gravity, _viz_, the _global_ effect, is evaluated effectively using a _depletion function_ \(D_{M}(z)\) that originates from lifting the traditional assumptions that the universe is isotropic and homogeneous [19]. If \(D_{M}=0\), gravity is fully quenched at large distance, while for \(D_{M}=1\) there is no net SI effect. Thus, \(D_{M}(z)\approx 1\) for the early universe since it was nearly isotropic and homogeneous. In contrast, the large-scale structures of the present universe entail \(D_{M}(z\approx 0)<1\). The form of \(D_{M}(z)\) first proposed in [19] can be approximated by: \[D_{M}(z)=1-(1+e^{(z-z_{0})/\tau})^{-1}+Ae^{-z/b}. \tag{2}\] Here, \(z_{0}\) is the redshift characterizing the large-scale structure formation epoch and \(\tau\) its duration. \(A\) is the mass fraction of structures whose shapes have evolved into more symmetric ones (e.g., disk galaxies merging to form elliptical galaxies) and \(b\) is the duration of that evolution process. Fig. 1 displays \(D_{M}(z)\).

## III The Hubble tension from the GR-SI perspective

A Hubble tension arising within \(\Lambda\)CDM is expected from the perspective of GR-SI: \(H_{0}\) affects the observation of the CMB anisotropies essentially _via_ the angular diameter distance of last scattering, \(d_{A}\). This quantity depends upon the evolution of the universe similarly to the luminosity distance \(D_{\mathcal{L}}\) that enters the lower-\(z\) determination of \(H_{0}\), e.g., _via_ supernova observations. Specifically, \(d_{A}(z)=D_{\mathcal{L}}(z)/(1+z)^{2}\). For example, in the \(\Lambda\)CDM model, \[d_{A}(z) = \frac{1}{H_{0}(1+z)\sqrt{\Omega_{K}}}\sinh\left(\sqrt{\Omega_{K}}\int_{(1+z)^{-1}}^{1}\frac{dx}{\sqrt{\Omega_{\Lambda}x^{4}+\Omega_{K}x^{2}+\Omega_{M}x+\Omega_{\gamma}}}\right), \tag{3}\] \[D_{\mathcal{L}}(z) = \frac{(1+z)}{H_{0}\sqrt{\Omega_{K}}}\sinh\left(\sqrt{\Omega_{K}}\int_{(1+z)^{-1}}^{1}\frac{dx}{\sqrt{\Omega_{\Lambda}x^{4}+\Omega_{K}x^{2}+\Omega_{M}x+\Omega_{\gamma}}}\right), \tag{4}\] with \(\Omega_{\Lambda}\), \(\Omega_{M}\) and \(\Omega_{\gamma}\) the dark energy, total matter and radiation densities relative to the critical density, respectively, and \(\Omega_{K}\equiv K/(a_{0}^{2}H_{0}^{2})\) with \(K\) the curvature and \(a_{0}\) the Friedmann-Lemaitre-Robertson-Walker scale factor at present time. Therefore, the determination of \(H_{0}\) from CMB observations is analogous to a highly accurate \(D_{\mathcal{L}}(z_{L})\) observation, where \(z_{L}\) is the redshift at the time of last scattering. Figure 2 depicts two luminosity distances \(D_{\mathcal{L}}(z)\) calculated within \(\Lambda\)CDM with \(\Omega_{\Lambda}=0.69\), \(\Omega_{M}=0.31\) and \(K=0\), but different \(H_{0}\) values: one with \(73.06\) km/s/Mpc, which matches the supernova and \(\gamma\)-ray data at low-\(z\) (dashed blue line in the left panel and blue dots in the right), and the other with \(67.28\) km/s/Mpc to match the CMB \(D_{\mathcal{L}}(z_{L})\) (dotted green line and green points). The uncertainty of the CMB datum is adjusted to equalize the \(\chi^{2}/ndf\) values of the fits for the comparison of the data with the two \(\Lambda\)CDM cosmologies. The Hubble tension is evident in the two \(\Lambda\)CDM curves, which match well either the low-\(z\) data or the high-\(z\) data, but not both.
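The two \(\Lambda\)CDM curves of Fig. 2 can be reproduced schematically from Eq. (4); the sketch below evaluates the flat-universe (\(K=0\)) limit, in which \(\sinh(\sqrt{\Omega_{K}}\,I)/\sqrt{\Omega_{K}}\to I\), using the quoted \(\Omega_{\Lambda}=0.69\), \(\Omega_{M}=0.31\) and the two \(H_{0}\) values. The factor of \(c\) (to express \(D_{\mathcal{L}}\) in Mpc with \(H_{0}\) in km/s/Mpc) and the neglect of \(\Omega_{\gamma}\) at supernova redshifts are assumptions of this illustrative sketch, not statements from the text.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s, so that D_L comes out in Mpc

def D_L_lcdm(z, H0, Om_L=0.69, Om_M=0.31, Om_g=0.0):
    """Flat-LCDM luminosity distance, Eq. (4) in the Omega_K -> 0 limit, in Mpc."""
    integrand = lambda x: 1.0 / np.sqrt(Om_L * x**4 + Om_M * x + Om_g)
    integral, _ = quad(integrand, 1.0 / (1.0 + z), 1.0)
    return (1.0 + z) * (C_KM_S / H0) * integral

for H0 in (73.06, 67.28):   # low-z and CMB-preferred values quoted in the text
    print(f"H0 = {H0}: D_L(z=0.1) = {D_L_lcdm(0.1, H0):7.1f} Mpc,"
          f"  D_L(z=1) = {D_L_lcdm(1.0, H0):8.1f} Mpc")
```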
However, the GR-SI model for \(D_{\mathcal{L}}(z)\) [19], \[D_{\mathcal{L}}(z)=\frac{(1+z)}{\sqrt{\Omega_{K}}H_{0}}\sinh\left(\sqrt{\Omega_{K}}\int_{1/(1+z)}^{1}\frac{dx}{\sqrt{\Omega_{K}x^{2}+D_{M}(1/x-1)x}}\right), \tag{5}\] fits both data sets well, as quantified by a significantly smaller \(\chi^{2}/ndf\) value, therefore exhibiting no signs of Hubble tension. Here, we elected to let the parameters of \(D_{M}(z)\) be determined from the best fit to the \(D_{\mathcal{L}}(z)\) data. This yields \(z_{0}=2.20\pm 0.18\), \(\tau=0.84^{+0.15}_{-0.19}\), \(A=0.33\pm 0.09\) and \(b=0.24^{+0.10}_{-0.16}\). Originally, the values of the parameters were obtained from the knowledge of the evolution of large-scale structures. The value \(z_{0}=2.20\pm 0.18\) is smaller than the estimate from large structure formation, \(z_{0}=6.3^{+1.6}_{-2.0}\) [21], but the ratio \(z_{0}/\tau=2.62\) happens to be the same for the fit and the estimates from large structure formation. The fit values for the \(A\) and \(b\) parameters agree with the earlier values, \(A=0.25^{+0.20}_{-0.17}\) and \(b=0.20^{+0.15}_{-0.05}\). The \(D_{\mathcal{L}}(z)\) calculated within \(\Lambda\)CDM and GR-SI differ chiefly at intermediate values of \(z\) because SI induces a large-distance suppression of gravity which curves \(D_{\mathcal{L}}(z)\) in the \(1\lesssim z\lesssim 10\) domain, when large-scale structures start forming [19; 20].

Figure 1: Depletion function \(D_{M}(z)\) determined from optimizing the fit to the low- and high-\(z\) \(D_{\mathcal{L}}\) data in Fig. 2.

The specific timing and amount of matter involved in the formation of large-scale structures result in the particular \(z\)-dependence of \(D_{M}(z)\), which differs from the \(\propto z^{4}\) effect of dark energy in \(\Lambda\)CDM. Thus, if SI noticeably influences the evolution of the universe, a discrepancy will arise with \(D_{\mathcal{L}}(z)\) determinations using smaller-\(z\) phenomena, for which the evolution spans a much smaller range. Since the determination of \(H_{0}\) from the CMB is analogous to a determination using \(D_{\mathcal{L}}(z_{L})\), extracting \(H_{0}\) from the CMB using the \(\Lambda\)CDM framework will cause a tension with \(H_{0}\) measurements at lower \(z\). The same applies to the baryonic acoustic oscillations (BAO) observation from the CMB. It is characterized by the acoustic horizon angular size, \(\theta=d_{H}/d_{A}(z_{L})\), where \(d_{H}\) is the acoustic horizon. Since \(d_{H}\) is the comoving distance travelled by a sound wave until recombination, _viz_, it occurs at \(z>z_{L}\), when the universe was homogeneous and dark energy negligible, \(d_{H}\) is essentially the same for \(\Lambda\)CDM and GR-SI. It is the distinct evolution of \(d_{A}(z)\) in \(\Lambda\)CDM and GR-SI that makes their \(\theta\) predictions different. Like \(D_{\mathcal{L}}\), \(d_{A}\) is predicted by \(\Lambda\)CDM to be larger at \(z=0\), yielding smaller \(\theta\) and \(H_{0}\) values compared to local measurements and the expectation from GR-SI.

## IV Dependence of the CMB observations on the expansion of the universe

The GR-SI fit of \(D_{\mathcal{L}}(z)\) just discussed indicates that there is no Hubble tension in the GR-SI model. An independent test that would support this conclusion is to fit the CMB within the GR-SI framework, and check that the low-\(z\) \(H_{0}\) determination provides a better fit to the CMB than the high-\(z\) \(H_{0}\) one.
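With the fitted parameters quoted above, Eq. (2) and the flat-geometry limit of Eq. (5) can be evaluated numerically as in the sketch below; as in the previous sketch, the conversion factor \(c/H_{0}\) and the \(\Omega_{K}\to 0\) limit are assumptions made here for illustration only.

```python
import numpy as np
from scipy.integrate import quad

def D_M(z, z0=2.20, tau=0.84, A=0.33, b=0.24):
    """Depletion function, Eq. (2), with the best-fit parameters quoted in the text."""
    return 1.0 - 1.0 / (1.0 + np.exp((z - z0) / tau)) + A * np.exp(-z / b)

def D_L_grsi(z, H0=73.06):
    """GR-SI luminosity distance, Eq. (5), in the flat limit Omega_K -> 0, in Mpc."""
    c_km_s = 299792.458
    integrand = lambda x: 1.0 / np.sqrt(D_M(1.0 / x - 1.0) * x)
    integral, _ = quad(integrand, 1.0 / (1.0 + z), 1.0)
    return (1.0 + z) * (c_km_s / H0) * integral

for z in (0.1, 1.0, 3.0):
    print(f"z = {z}: D_M = {D_M(z):.3f},  D_L(GR-SI) = {D_L_grsi(z):9.1f} Mpc")
```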
We use an analytical expression of the CMB anisotropies to show how the expansion of the universe affects their present-day observation. Such an analytical expression is provided by the hydrodynamic approximation [29]. Despite not being as accurate as state-of-the-art numerical treatments of the CMB, this treatment is sufficient for the goal of this article, namely to investigate the Hubble tension within the GR-SI model. This is verified _a posteriori_ by the small \(\chi^{2}/ndf\) characterizing the GR-SI fits to the CMB. At \(z_{L}\), the universe is very homogeneous, making SI effects negligible. Thus, the phenomena that created the CMB anisotropies are unaffected and so are the mathematical expressions formalizing them. However, some of the parameters entering the CMB anisotropy expression use their present-time values. They are thus affected by the expansion of the universe and therefore contribute to the Hubble tension. In what follows, values of parameters at the present time, matter-radiation equilibrium time, and last scattering time are indicated by the subscripts \(0,\ EQ\) and \(L\), respectively. Baryon relative density is denoted by \(\Omega_{B}\), and, for \(\Lambda\)CDM, the dark matter relative density is \(\Omega_{DM}\).

Figure 2: Left: Luminosity distance \(D_{\mathcal{L}}\) as a function of redshift \(z\) for: \(\Lambda\)CDM using \(h\equiv H_{0}/(100\ {\rm km/s/Mpc})=0.67\) (dashed green line) or \(h=0.73\) (dotted blue line); and GR-SI with \(h=0.73\) (solid red line). The embedded figure is the same but in linear rather than log scales. The low-\(z\) observational data, shown by the square, triangle, circle and star symbols, are normalized using the \(h=0.73\) average low-\(z\) determination. The pentagon symbol shows \(D_{\mathcal{L}}(z_{L})\) as it would be obtained using the values of \(z_{L}\) and \(H_{0}\) from the \(\Lambda\)CDM fit of the CMB. Right: Same as the left panel but for the normalized residual \(r=(D_{\mathcal{L}}-d_{\rm obs})^{2}/e_{\rm obs}^{2}\), where \(d_{\rm obs}\) is the observed data, \(e_{\rm obs}\) their uncertainty, and the colors match those of the three different models used to compute \(D_{\mathcal{L}}\) in the left panel. The Hubble tension appears as the offset between the \(\Lambda\)CDM curve which fits the low-\(z\) data (dotted blue line in the left panel and blue dots in the right panel) and the blue dot at \(z_{L}\). The green dot at \(z_{L}\) is near \(r=0\) and hence not visible with the log scale.

We consider \(C^{s}_{TT,l}\), the scalar multipole coefficient for the temperature-temperature angular correlation (here \(l\) is the multipole moment). Its expression within the _hydrodynamic approximation_ is provided in [29]: \[\frac{l(l+1)C_{TT,l}^{s}}{2\pi} = \frac{4\pi T_{0}^{2}N^{2}e^{-2\tau_{reion}}}{25}\int_{1}^{\infty}d\beta\bigg{(}\frac{\beta l}{l_{\mathcal{R}}}\bigg{)}^{n_{s}-1}\bigg{\{}\frac{3\sqrt{\beta^{2}-1}}{\beta^{4}(1+R_{L})^{3/2}}\mathcal{S}^{2}(\beta l/l_{T})e^{-2\beta^{\prime}l_{\mathcal{D}}^{s}}\sin^{2}\big{(}\beta l/l_{H}+\Delta(\beta l/l_{T})\big{)}+ \tag{6}\] \[\frac{1}{\beta^{2}\sqrt{\beta^{2}-1}}\bigg{[}3\mathcal{T}(\beta l/l_{T})R_{L}-(1+R_{L})^{-\nicefrac{{1}}{{4}}}\mathcal{S}(\beta l/l_{T})e^{-\beta^{\prime}l_{\mathcal{D}}^{s}}\cos\big{(}\beta l/l_{H}+\Delta(\beta l/l_{T})\big{)}\bigg{]}^{2}\bigg{\}}+\mathcal{C}(l).\] The first term in the curly bracket formalizes the Doppler effect. The second term provides the Sachs-Wolfe and intrinsic temperature anisotropy effects.
Both terms also contain the large-\(l\) damping. \(N\) is the normalization of the primordial perturbations, \(\tau_{reion}\) is the reionized plasma optical depth, \(\beta\) is an integration variable akin to a wave number, \(n_{s}\) is the scalar spectral index, and \(l_{\mathcal{R}}\equiv(1+z_{L})k_{\mathcal{R}}d_{A}\) is a characteristic multipole value, with \(k_{\mathcal{R}}\equiv 0.05\) Mpc\({}^{-1}\) a conventional scale. Other characteristic multipole values are \(l_{T}=d_{A}/d_{T}\) (\(d_{T}\) is a length scale whose form differs in \(\Lambda\)CDM and GR-SI; see below), \(l_{D}=d_{A}/d_{D}\) (\(d_{D}\) is the damping length) and \(l_{H}=d_{A}/d_{H}\). \(R_{L}=3\Omega_{B}/[4\Omega_{\gamma}(1+z_{L})]\) is a ratio of relative densities and \(\mathcal{S}\), \(\mathcal{T}\) and \(\Delta\) are transfer functions. Finally, \(\mathcal{C}(l)\) is a second-order term correcting the approximations of the hydrodynamic model [20]. Hereafter, since \(\mathcal{C}(l)\) is small, we will ignore its possible dependence on the difference between the universe evolutions according to \(\Lambda\)CDM and GR-SI. The integrated Sachs-Wolfe, Sunyaev-Zel'dovich and cosmic variance effects, which produce anisotropies that are extrinsic to the CMB origin, are not included in the hydrodynamic model. This does not affect our study of the Hubble tension since we will focus on the multipole range \(48<l<1800\), a domain where these effects are unimportant. In Eq. (6), the quantities that depend on the expansion of the universe are integrals over \(z\). There are only two such parameters: \(d_{A}\) and \(t_{L}\). Their expressions in \(\Lambda\)CDM and GR-SI are given in Table 1. The expressions of the quantities not explicitly affected by the expansion of the universe are tabulated in the Appendix for convenience. Some of these quantities depend indirectly on the expansion of the universe as they contain \(t_{L}\), \(z_{L}\) or \(d_{A}\), namely \(d_{T}\), \(R_{L}\), \(d_{\rm Landau}\), \(d_{\rm Silk}\), \(d_{H}\) and \(d_{D}\) (the latter through \(d_{\rm Landau}\) and \(d_{\rm Silk}\)), \(l_{\mathcal{R}}\), \(l_{T}\), \(l_{D}\) and \(l_{H}\). In all, this shows that the Hubble tension may be cast as the problem of properly modeling the distances \(d_{A}\) and \(D_{\mathcal{L}}\). In fact, once SI is accounted for in the CMB anisotropy expression, we can fit the \(C_{TT,l}^{s}\) data while keeping \(H_{0}\) fixed to its low-\(z\) determination of \(73.06\) km/s/Mpc and the \(D_{M}(z)\) parameters fixed to the values obtained from the best fit of \(D_{\mathcal{L}}(z)\) (red line of Fig. 1). The parameters allowed to vary are \(z_{L}\), \(N\), \(n_{s}\), \(\sigma\) and \(\Omega_{B}\), with the \(C_{TT,l}^{s}\) spectrum reproduced for \(z_{L}=1728\pm 1,\ N=(1.1995\pm 0.0019)\times 10^{-5},\ n_{s}=0.9759\pm 0.0028,\ \sigma=1.751\pm 0.002\) and \(\Omega_{B}h^{2}=0.370\pm 0.002\), with \(\chi^{2}/ndf=0.59\), see Fig. 3. We remark that the quoted uncertainties are only fit uncertainties and do not include other systematic effects, e.g., coming from approximations in the CMB hydrodynamics model or from the choice of functional form for \(D_{M}(z)\) and its parameters. This fit must use the \(H_{0}\) value determined by low-\(z\) observations since there is no Hubble tension in the GR-SI model, due to the universe expanding differently than in the \(\Lambda\)CDM model.
This is verified by performing a CMB fit with \(H_{0}=67.28\) km/s/Mpc and observing that the \(\chi^{2}/ndf\) of that fit is larger (by about \(20\%\)) than that of the nominal fit. It is also interesting to perform the fit with \(H_{0}\) kept as a free parameter, despite the fact that it introduces a slight inconsistency since the determination of the \(D_{M}(z)\) parameters is obtained with the \(H_{0}\) value fixed by \(z\simeq 0\) observations. Such a fit yields \(H_{0}=72.99\pm 0.06\) km/s/Mpc, \(z_{L}=1728\pm 1,\ N=(1.2014\pm 0.0015)\times 10^{-5},\ n_{s}=0.9738\pm 0.0027,\ \sigma=1.751\pm 0.002\) and \(\Omega_{B}h^{2}=0.368\pm 0.002\), with \(\chi^{2}/ndf=0.58\).

\begin{table} \begin{tabular}{|c|c|c|} \hline & \(\Lambda\)CDM & GR-SI \\ \hline \hline \(d_{A}\) & \(\frac{1}{\sqrt{\Omega_{K}}\,h_{0}(1+z_{L})}\sinh\bigg{(}\sqrt{\Omega_{K}}\,\int_{1/(1+z_{L})}^{1}\frac{dx}{\sqrt{\Omega_{\Lambda}x^{4}+\Omega_{K}x^{2}+\Omega_{M}x}}\bigg{)}\) & \(\frac{1}{\sqrt{\Omega_{K}}\,h_{0}(1+z_{L})}\sinh\bigg{(}\sqrt{\Omega_{K}}\,\int_{1/(1+z_{L})}^{1}\frac{dx}{\sqrt{\Omega_{K}x^{2}+D_{M}(1/x-1)x}}\bigg{)}\) \\ \hline \(t_{L}\) & \(\frac{1}{H_{0}}\int_{0}^{1/(1+z_{L})}\frac{\sigma dx}{\sqrt{\Omega_{\Lambda}x^{4}+\Omega_{K}x^{2}+\Omega_{M}x+\Omega_{R}}}\) & \(\frac{1}{H_{0}}\int_{0}^{1/(1+z_{L})}\frac{\sigma dx}{\sqrt{\Omega_{K}x^{2}+D_{M}(1/x-1)+\Omega_{R}}}\) \\ \hline \end{tabular} \end{table} Table 1: CMB quantities depending explicitly on the expansion of the universe. Column 1: quantity. Column 2: \(\Lambda\)CDM expression. Column 3: GR-SI expression.

## V Conclusion

Our results show that the Hubble tension may be resolved if one accounts, when quantifying the evolution of the universe, for the self-interaction of gravitational fields, a feature of General Relativity ordinarily neglected. In the cosmological model used in this article, as in the previous studies using that model, the effects of self-interaction are contained within a depletion function which effectively relaxes the traditional assumptions of the Cosmological Principle--isotropy and homogeneity of the evolving universe. Here, the parameters of the depletion function are determined from the best fit to the luminosity distance data, a procedure that appears more accurate than the method used in [19], _viz_, determining the parameters from our knowledge of the timescale at which large-scale structures form, and of the amount of baryonic matter present in these structures. We show that the resulting luminosity distance simultaneously fits both the low-redshift supernova data and the high-redshift CMB data. Furthermore, the model, with the depletion function thus determined, fits the CMB power spectrum data better with the \(H_{0}\) value determined by the low-\(z\) observations, supporting the finding that there is no Hubble tension in the GR-SI model. Crucially, this possible solution to the problem of the Hubble tension does not require adding parameters beyond those already present in the model. This is important because, to be a compelling alternative to \(\Lambda\)CDM, a model should display a consistency and simplicity on par with \(\Lambda\)CDM, i.e., it should avoid introducing too many new and ad-hoc parameters, particles or fields. This is the case for the model used here, which requires no new physics beyond the standard model of particle physics and General Relativity. Explaining the Hubble tension did not compromise this attractive feature of the model.
## Acknowledgements This work is done in part with the support of the U. S. National Science Foundation award No. 1847771. The authors are grateful to A. Mand and T. Moller for their useful comments on the manuscript. \begin{table} \begin{tabular}{|c|c|c|} \hline & FLRW universe & Universe with GR’s SI accounted for \\ \hline \hline \(d_{T}\) & \(\sqrt{\Omega_{R}/[(1+z_{L})H_{0}\Omega_{M}]}\) & \(\sqrt{\Omega_{R}/[(1+z_{L})H_{0}D_{M}(0)]}\) \\ \hline \(R_{L}\) & \([3\Omega_{B}]/[4\Omega_{\gamma}(1+z_{L})]\) & Same as for \(\Lambda\)CDM \\ \hline \(R_{EQ}\) & \([3\Omega_{R}\Omega_{B}]/[4\Omega_{\gamma}\Omega_{\gamma}]\) & \([3\Omega_{R}\Omega_{B}]/[4\Omega_{\gamma}D(0)\Omega_{\gamma}]\) \\ \hline \(d_{H}\) & \(\frac{2}{H_{0}(3R_{L}\Omega_{M})^{\gamma/(1+z_{L})\gamma}}\ln[(\sqrt{1+R_{L}}+ \sqrt{R_{EQ}+R_{L}})/[1+\sqrt{R_{EQ}}])\) & \(\frac{2}{H_{0}(3R_{L}D_{M}(0)^{\gamma/2}(1+z_{L})^{\gamma/2}}\ln[(\sqrt{1+R_{L} }+\sqrt{R_{EQ}+R_{L}})/[1+\sqrt{R_{EQ}}])\) \\ \hline \(d_{D}\) & \(\sqrt{d_{Landau}^{2}+d_{\rm{Silk}}^{2}}\) & Same as for \(\Lambda\)CDM \\ \hline \(d_{\rm{Landau}}^{2}\) & \(\frac{3d_{T}^{2}}{8T^{2}(1+R_{L})}\), & Same as for \(\Lambda\)CDM \\ \hline \(d_{\rm{Silk}}^{2}\) & \(\frac{R_{L}^{2}}{6(1-Y)(n_{B}0)\sigma r\theta_{0}\sqrt{\Omega_{M}R_{0}^{2}} \int_{0}^{R}\frac{R^{2}dR}{N((1+R)\sqrt{R}R_{EQ}+R}\left[\frac{16}{15}+\frac {R^{2}}{1+R}\right]}\) & \(\frac{R_{L}^{2}}{6(1-Y)(n_{B}0)\sigma r\theta_{0}\sqrt{\Omega_{M}(0)R_{0}^{2}} \int_{0}^{R_{L}}\frac{R^{2}dR}{N((R)+R)\sqrt{R}R_{EQ}+R}\left[\frac{16}{15}+ \frac{R^{2}}{1+R}\right]}\) \\ \hline \(R(t)\) & \(\frac{3\rho_{0}(t)\mu_{\alpha}(t)}{3\rho_{0}(t)\mu_{\alpha}(t)}\) & Same as for \(\Lambda\)CDM \\ \hline \(X(T)\) & \(1/\left[X^{-1}(3400)+\frac{(\Omega_{B}h^{2}}{(\Omega_{M}h^{2})^{1/2}}\int_{T}^ {3400}g(T^{\prime})dT^{\prime}\right]\) & \(1/\left[X^{-1}(3400)+\frac{\Omega_{B}h^{2}}{(D_{M}(0)^{2})^{1/2}}\int_{T}^{3400 }g(T^{\prime})dT^{\prime}\right]\) \\ \hline \end{tabular} \end{table} Table 2: Expressions of the quantities that are not explicitly dependent on the expansion of the universe. Column 1: Quantity. Column 2: \(\Lambda\)CDM expression. Column 3: GR-SI expression. In these expressions, \(\sigma\) is the standard deviation for the temperature \(T_{L}\), \(Y\simeq 0.24\) is the density ratio of nucleons to neutral \({}^{4}\)He, \(\sigma_{\mathcal{T}}\) is the Thompson cross-section, \(\rho_{B}\) and \(\rho_{\gamma}\) are average absolute densities of baryon and radiation, respectively, and \(n_{B0}\) is the baryon number density at present time. Figure 3: Power spectrum of the CMB temperature anisotropy. The continuous line is \(l(l+1)C_{TT,l}^{s}/(2\pi)\) computed using GR-SI with the low-\(z\) average for the Hubble parameter, \(H_{0}=73.06\) km/s/Mpc. The squares are the Planck measurement ([30], 2018 release).
2307.14989
Decoding algorithms for surface codes
Quantum technologies have the potential to solve certain computationally hard problems with polynomial or super-polynomial speedups when compared to classical methods. Unfortunately, the unstable nature of quantum information makes it prone to errors. For this reason, quantum error correction is an invaluable tool to make quantum information reliable and enable the ultimate goal of fault-tolerant quantum computing. Surface codes currently stand as the most promising candidates to build near term error corrected qubits given their two-dimensional architecture, the requirement of only local operations, and high tolerance to quantum noise. Decoding algorithms are an integral component of any error correction scheme, as they are tasked with producing accurate estimates of the errors that affect quantum information, so that they can subsequently be corrected. A critical aspect of decoding algorithms is their speed, since the quantum state will suffer additional errors with the passage of time. This poses a connundrum, where decoding performance is improved at the expense of complexity and viceversa. In this review, a thorough discussion of state-of-the-art decoding algorithms for surface codes is provided. The target audience of this work are both readers with an introductory understanding of the field as well as those seeking to further their knowledge of the decoding paradigm of surface codes. We describe the core principles of these decoding methods as well as existing variants that show promise for improved results. In addition, both the decoding performance, in terms of error correction capability, and decoding complexity, are compared. A review of the existing software tools regarding surface codes decoding is also provided.
Antonio deMarti iOlius, Patricio Fuentes, RomΓ‘n OrΓΊs, Pedro M. Crespo, Josu Etxezarreta Martinez
2023-07-27T16:34:52Z
http://arxiv.org/abs/2307.14989v6
# Review on the decoding algorithms for surface codes

###### Abstract

Quantum technologies have the potential to solve computationally hard problems that are intractable via classical means. Unfortunately, the unstable nature of quantum information makes it prone to errors. For this reason, quantum error correction is an invaluable tool to make quantum information reliable and enable the ultimate goal of fault-tolerant quantum computing. Surface codes currently stand as the most promising candidates to build near term error corrected qubits given their two-dimensional architecture, a requirement of only local operations, and high tolerance to quantum noise. Decoding algorithms are an integral component of any error correction scheme, as they are tasked with producing accurate estimates of the errors that affect quantum information, so that it can subsequently be corrected. A critical aspect of decoding algorithms is their speed, since the quantum state will suffer additional errors with the passage of time. This poses a conundrum-like tradeoff, where decoding performance is improved at the expense of complexity and vice versa. In this review, a thorough discussion of state-of-the-art surface code decoding algorithms is provided. This work is oriented toward readers who are new to the field, as well as those seeking to further their understanding of the decoders relevant to the surface code and of the state of the art of the field. The core operation of these methods is described along with existing variants that show promise for improved results. In addition, both the decoding performance, in terms of error correction capability, and the decoding complexity are compared. A review of the existing software tools regarding surface code decoding is also provided.
Corresponding author: Corresponding author: Corresponding: author: Corresponding author: Corresponding author: Corresponding author: Corresponding author: author: Corresponding: author: Corresponding author: Corresponding author: author: Corresponding author: Corresponding author: Corresponding author: Corresponding: author: Corresponding author: Corresponding author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding Corresponding: author: Corresponding author: Corresponding: author: Corresponding author: Corresponding author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding author: Corresponding author: Corresponding: author: Corresponding author: Corresponding: Corresponding author: Corresponding: author: Corresponding: author: Corresponding author: Corresponding author: Corresponding: Corresponding author: Corresponding: author: Corresponding author: Corresponding: Corresponding author: Corresponding: author: Corresponding author: Corresponding author: Corresponding: author: Corresponding: author: Corresponding author: Corresponding author: Corresponding author: Corresponding: author: Corresponding author: Corresponding author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding: author: Corresponding author: Corresponding author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding Corresponding: author: Corresponding author: Corresponding: author: Corresponding: Corresponding author: Corresponding author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding author: Corresponding: author: Corresponding: Corresponding author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding Corresponding: author: Corresponding: Corresponding author: Corresponding: author: Corresponding: author: Corresponding Corresponding: author: Corresponding: author: Corresponding: Corresponding author: Corresponding: author: Corresponding: Corresponding author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding: Corresponding author: Corresponding: author: Corresponding: Corresponding author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding: Corresponding author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding: Corresponding author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: 
Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding:: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding: author: Corresponding: Corresponding: author: Corresponding: Corresponding: author: Corresponding:: author: Corresponding:: author: Corresponding: author: Corresponding:: author: Corresponding: author: Corresponding:: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding: author: Corresponding: author:: Corresponding: author: Corresponding:: author: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author: Corresponding: author: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author: Corresponding: author:: Corresponding:: author: Corresponding:: author: Corresponding:: author: Corresponding:: author: Corresponding: author: Corresponding:: author: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author: Corresponding:: author: Corresponding:: author:: Corresponding: author:: Corresponding: author: Corresponding:: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding:: author: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: author:: Corresponding: author:: author: Corresponding: author:: Corresponding: author::: Corresponding: author:: author: Corresponding:: author:: Corresponding: author:: 
author: Corresponding:: author:: Corresponding: author:: Corresponding: author::: Corresponding: author:: author: Corresponding: author:: author:: Corresponding: author:: author: Corresponding:: author:: Corresponding: author:: author:: Corresponding: author:: author:: Corresponding: author:: author: Corresponding:: author:: Corresponding:: author: author: Corresponding:: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: Corresponding: author:: author: Corresponding:: author:: Corresponding: author:: Corresponding: author:: author: Corresponding: author:: Corresponding: author::: author: Corresponding:: author:: Corresponding: author::: Corresponding: author:: author: Corresponding: author:: author: Corresponding: author::: author: Corresponding: author:: Corresponding: author::: author: Corresponding: author:: Corresponding: author:: author:: Corresponding: author:: author: Corresponding: author:: Corresponding: author: author: Corresponding: author: Corresponding: author: author: Corresponding: author:: author: Corresponding: author: author: Corresponding: author:: author: Corresponding: author:: author: Corresponding: author: author: Corresponding: author:: Corresponding: author: author: Corresponding: author:: author: Corresponding: author: author: Corresponding: author:: author: Corresponding: author:: Corresponding: author: author: Corresponding: author: author: Corresponding: author:: author: Corresponding: author:: author: Corresponding: author: author: Corresponding: author:: Corresponding: author: Corresponding: author:: author: Corresponding: author: Corresponding: author:: Corresponding: author: author: Corresponding: author:: Corresponding: author:: author: Corresponding: author:: Corresponding: author: author: Corresponding: author: author: Corresponding: author:: author: Corresponding: author:: Corresponding: author ###### Abstract This paper presents a novel approach to quantum computing systems that are used to simulate quantum computing. 
The quantum computing algorithm is based on the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of 
the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical 
computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of 
the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical 
computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of 
the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical computation of the classical 
In order to estimate the errors suffered by the physical qubits of a quantum error correction code (QECC), a non-destructive measurement named syndrome measurement is performed (non-destructive in the sense that the qubits constituting the code are not measured directly), retrieving useful information about the error [30]. The obtained syndrome is then used to estimate an error candidate that returns the code to its previous, undamaged state. This process, referred to as syndrome decoding, depends on the specific code construction [32, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53] and is a critical task for QECCs to work correctly. Decoding quantum error correction codes is a subtle task due to a uniquely quantum effect named code degeneracy [46, 54], which refers to the existence of errors that share the same syndrome but transform the quantum state in an indistinguishable manner.
As a consequence of degeneracy, the optimal decoding of stabilizer codes has been proven to belong to the #P-complete complexity class, which is computationally much harder than decoding classical linear codes [54, 55]. This high complexity imposes a trade-off between decoding performance and decoding time, since decoders of quantum error correction codes must be fast enough to correct the noisy quantum state before it accumulates further errors or decoheres completely [56, 57]. Constructing decoders for stabilizer codes that are more efficient, both in speed and in correction performance, is therefore a very active and relevant line of research in quantum error correction.

Surface codes are one of the most promising families of codes for constructing primitive fault-tolerant quantum computers in the near term [32, 40, 58, 59, 60, 61, 62]. This family of topological codes has the benefit of being implementable on two-dimensional grids of qubits with local syndrome measurements, or check operators in the surface code terminology, while exhibiting a high tolerance to quantum noise [59]. Considering that many of the physical platforms being considered for quantum processors, such as quantum dots, cold atoms or the mainstream superconducting qubits, have architectural restrictions that limit qubit connectivity, surface codes are an excellent candidate for implementing QEC on those technologies. Recently, major breakthroughs in the field of quantum error correction have been achieved with the first successful experimental implementations of surface codes on superconducting qubit processors [63, 64]. The result by Google Quantum AI is especially relevant because it experimentally shows that the performance of the code improves when the distance of the surface code is increased [64]. In this sense, designing decoders for surface codes is an important task for near-term quantum computers.

At the time of writing, there are many methods to infer the error from the syndrome data, each with its own strengths and weaknesses. The performance versus decoding-complexity trade-off holds for all of these methods and, therefore, each of them is a potential candidate for experimental quantum error correction, depending on the specific needs of the system to be error corrected. Because surface code decoding is a relevant and timely topic, the aim of this tutorial is twofold. The first goal is to provide a compilation and comprehensive description of the main decoders for the surface code, while the second is to compare those methods in terms of decoding complexity and performance. With this scope in mind, we first provide an introductory section describing the basic notions of stabilizer code theory in Section 2, so that surface codes can be introduced in Section 3. We follow by discussing the noise sources in quantum computers and the way they are modelled in Section 4. These sections serve as preliminary background for the decoding methods discussed in the core of the review, Section 5, where we describe the most popular surface code decoding algorithms: the Minimum-Weight Perfect Matching (MWPM) decoder, the Union-Find (UF) decoder, the Belief Propagation (BP) decoder and the Tensor Network (TN) decoder.
We present their functioning in a comprehensive manner and later discuss not only their performance under depolarizing and biased noise, obtained via simulations, but also their computational complexity. In addition, we discuss other decoding methods proposed in the literature for general surface (topological) codes, and we review existing, publicly available software implementations of the discussed decoding methods. We then provide a discussion section, Section 6, where we compare the decoders for rotated planar codes and give an overview of the challenges in decoding surface codes.

## 2 Background

Quantum computers leverage the principles of quantum mechanics to achieve computational capabilities beyond those of conventional machines, enabling them to process computations that would be infeasible for traditional computers. The basic unit of classical computation is the so-called _bit_, which represents a logical state taking one of two possible values, i.e.

\[x\in\{0,1\}. \tag{1}\]

In stark contrast, the building blocks of quantum computers are the elements referred to as _qubits_. These quantum mechanical systems, named by Benjamin Schumacher in 1995 [65], are two-level quantum systems that admit coherent superpositions of their states. In this sense, a qubit can be described as a vector in a two-dimensional complex Hilbert space, \(\mathcal{H}_{2}\) [20]:

\[\ket{\psi}=\alpha\ket{0}+\beta\ket{1}, \tag{2}\]

where \(\alpha,\beta\in\mathbb{C}\) and \(\{\ket{0},\ket{1}\}\subset\mathcal{H}_{2}\), usually referred to as the computational basis [20], form an orthonormal basis of this Hilbert space. In this sense, qubits admit linear combinations of the two orthonormal basis states. This and other properties of quantum mechanics, such as entanglement [66], allow for a series of advantages in computation (speedups in algorithm complexity [3, 4]) or in communications (superadditivity of the quantum channel capacity [67, 68, 69, 70]), among others. Nevertheless, such promises are put in question by the noise inherent to these quantum mechanical systems. As useful as quantum properties such as superposition or entanglement are, quantum noise also follows different laws and is, thus, somewhat different from the noise encountered in classical computers and communications. Classical bit noise reduces to flips, a logical operation that turns a \(0\) into a \(1\) and vice versa. We refer the reader to Section 4 for a general treatment of quantum noise but, as will be seen there, for the case of a single qubit the noise can be described by means of Pauli channels. The Pauli channel is a stochastic model in which a qubit can suffer bit-flips, phase-flips or combined bit-and-phase-flips, each with some probability. These noise effects are described by the elements of the Pauli group [30, 54], \(\mathcal{G}_{1}\), whose generators are the Pauli matrices \(\langle X,Y,Z\rangle\). These operators transform the state of an arbitrary qubit as in eq. (2) in the following way:

\[\begin{split}X\ket{\psi}&=\alpha\ket{1}+\beta\ket{0},\\ Z\ket{\psi}&=\alpha\ket{0}-\beta\ket{1},\\ Y\ket{\psi}&=iXZ\ket{\psi}=i(\alpha\ket{1}-\beta\ket{0}).\end{split} \tag{3}\]

Note that the \(Y\) operator not only performs a bit-and-phase-flip operation on the arbitrary quantum state but also changes its overall phase.
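As a quick numerical sanity check of eq. (3), the following Python/NumPy sketch (our own illustration, not code from any referenced implementation) applies the Pauli matrices to an arbitrary single-qubit state and confirms the identity \(Y=iXZ\).

```python
import numpy as np

# Pauli matrices.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# An arbitrary single-qubit state |psi> = alpha|0> + beta|1> (amplitudes chosen for illustration).
alpha, beta = 0.6, 0.8j
psi = np.array([alpha, beta])

print(X @ psi)   # bit-flip:           [beta, alpha]
print(Z @ psi)   # phase-flip:         [alpha, -beta]
print(Y @ psi)   # bit-and-phase-flip: [-1j*beta, 1j*alpha], i.e. i(alpha|1> - beta|0>)

# Y equals iXZ, so the combined flip carries an extra factor of i.
assert np.allclose(Y, 1j * X @ Z)
```

The factor of \(i\) appearing in the action of \(Y\) is precisely the overall phase mentioned above.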
However, neglecting the global phase has no observable physical consequences and, thus, it is often ignored from the point of view of quantum error correction [30, 54, 71].

Dealing with these noise processes is of vital importance if complex quantum algorithms are to be executed reliably. In this context, there are two main approaches to dealing with noisy quantum computations: quantum error mitigation (QEM) and quantum error correction (QEC). The two approaches have been shown to be complementary, with recent papers proposing schemes such as distance-scaled zero-noise extrapolation (DSZNE) [72]. QEM attempts to evaluate accurate expectation values of physical observables of interest using noisy qubits and quantum circuits [72, 73, 74, 75, 76], while the main objective of QEC is to obtain qubits and computations that are "noiseless". There are many approaches to quantum error correction, but the general idea behind QECCs is to protect the information of a number of qubits \(k\), named _logical qubits_, within a larger number of qubits \(n\), named _physical qubits_, in a way that makes the whole system tolerant to a certain number of errors. Many of these QECCs lie within the framework of quantum stabilizer coding [39]. Stabilizer codes allow for a direct mapping of many classical coding methods into the quantum setting, which makes them very useful [30, 54, 39]. Since the surface code belongs to the family of QSCs, it is pertinent to cover the basics of such QEC constructions.

### Stabilizer Codes

Quantum error correction is based on protecting the state of \(k\) logical qubits by means of \(n\) physical qubits so that the protected qubits operate as if they were noiseless. Note that the set of \(n\)-fold Pauli operators, \(\mathcal{P}_{n}=\{\mathrm{I},\mathrm{X},\mathrm{Y},\mathrm{Z}\}^{\otimes n}\), together with the overall factors \(\{\pm 1,\pm i\}\), forms a group under multiplication, usually named the \(n\)-fold Pauli group, \(\mathcal{G}_{n}\) [39, 30]. Generally, an unassisted1 \([[n,k,d]]\) stabilizer code is constructed using an abelian subgroup \(\mathcal{S}\subset\mathcal{G}_{n}\) defined by \(n-k\) independent generators2, so that \(k\) logical qubits are encoded into \(n\) physical qubits with distance \(d\) [39, 30, 54, 77]. The distance of a stabilizer code is defined as the minimum of the weights3 of the Pauli operators that belong to the normalizer of the stabilizer, i.e. the elements that commute with the generators but do not belong to the stabilizer [78]. Thus, it is related to the maximum weight of the errors that can be corrected by the code. The codespace \(\mathcal{T}(\mathcal{S})\) associated with the stabilizer set is defined as

\[\mathcal{T}(\mathcal{S})=\{\ket{\bar{\psi}}\in\mathcal{H}_{2}^{\otimes n}:M\ket{\bar{\psi}}=\ket{\bar{\psi}},\;\forall M\in\mathcal{S}\}, \tag{4}\]

i.e. the simultaneous \(+1\)-eigenspace defined by the elements of \(\mathcal{S}\). We use the notation \(\ket{\bar{\psi}}\) to indicate that a state lies within the codespace.

Footnote 1: The so-called entanglement-assisted QECCs make use of Bell states as ancilla qubits for constructing the codes [52, 53, 77]. Here we restrict our discussion to regular stabilizer codes, where the ancilla qubits are usually initialized in \(\ket{0}\) states [39].

Footnote 2: The stabilizer set will have \(2^{n-k}\) elements up to an overall phase.

Footnote 3: The weight of an error is defined as the number of non-trivial elements of a Pauli operator in \(\mathcal{G}_{n}\).
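As a concrete illustration of eq. (4), the following NumPy sketch (a toy example of our own, not a code treated in this tutorial) builds the codespace projector of the three-qubit bit-flip repetition code, whose stabilizer is generated by \(ZZI\) and \(IZZ\); its simultaneous \(+1\)-eigenspace is spanned by \(\ket{000}\) and \(\ket{111}\).

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def kron_all(*ops):
    """Tensor product of a list of single-qubit operators."""
    return reduce(np.kron, ops)

# Stabilizer generators of the 3-qubit bit-flip repetition code: ZZI and IZZ.
M1 = kron_all(Z, Z, I2)
M2 = kron_all(I2, Z, Z)

# Projector onto the simultaneous +1 eigenspace T(S), built as the product of
# the individual projectors (I + M_i)/2.
P = (np.eye(8) + M1) @ (np.eye(8) + M2) / 4

# Only the computational-basis states |000> (index 0) and |111> (index 7) survive.
print(np.round(np.diag(P), 3))   # -> [1. 0. 0. 0. 0. 0. 0. 1.]
```

The same construction applies to any stabilizer set, although the \(2^{n}\)-dimensional matrices quickly become unwieldy; the binary symplectic representation used further below scales far better.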
Within such a code, the physical qubits will experience errors that belong to the Pauli group5 \(\mathcal{G}_{n}\). In stabilizer codes, the stabilizer generators of \(\mathcal{S}\) are named checks and, thus, there are \(n-k\) checks. In order to perform quantum error correction, one must measure the checks to obtain information about the error that has occurred. The classical information obtained by measuring the checks of a stabilizer code is named the _syndrome_ of the error, \(\bar{s}\). Since quantum measurements destroy superposition, these measurements must be done indirectly so that the codestate is not lost; this is usually achieved by means of a Hadamard test, which requires ancilla qubits usually referred to as measurement qubits6 [30]. The error syndrome, \(\bar{s}\), is therefore a binary vector of length \(n-k\), \(\bar{s}\in\mathbb{F}_{2}^{n-k}\). Given a set of checks, \(\{M_{1},M_{2},\dots,M_{n-k}\}\subset\mathcal{S}\), and a Pauli error, \(E\in\mathcal{G}_{n}\), the \(i^{th}\) element of the syndrome captures the commutation relation between the error and the \(i^{th}\) check. This follows from the fact that any two elements of \(\mathcal{G}_{n}\) either commute or anticommute. The commutation relation is captured by the syndrome as

\[EM_{i}=(-1)^{s_{i}}M_{i}E, \tag{5}\]

where \(s_{i}\) represents the \(i^{th}\) element of the syndrome vector.

Footnote 5: This comes from the so-called _error discretization_ that arises from the Knill-Laflamme theorem [30, 20, 79].

Footnote 6: Note that, for stabilizer codes, the measurement of the checks is responsible for the error discretization [20].

One interesting consequence of this construction is that, since the codespace is not altered by the application of stabilizers (recall eq. (4)), a channel error that coincides with one of those operators has a trivial action on the codestate, i.e.

\[E\ket{\bar{\psi}}=\ket{\bar{\psi}}, \tag{6}\]

if \(E\in\mathcal{S}\subset\mathcal{G}_{n}\). In this sense, there will be different error operators that share the same error syndrome and affect the encoded quantum state in the same manner. This phenomenon is usually termed _degeneracy_. Error degeneracy has the consequence that the Pauli space representing all possible error operators is not just partitioned into syndrome cosets, but also into degenerate error cosets7 [54]. Specifically, the Pauli group is partitioned into \(2^{n-k}\) cosets that share an error syndrome, and each of those cosets is further partitioned into \(2^{2k}\) cosets containing \(2^{n-k}\) errors that are degenerate among themselves [67, 54]. How degenerate a quantum code is depends on the difference between the weight of its stabilizer generators and its distance. If \(w(M_{j})\ll d\), \(\forall j\in\{1,\dots,n-k\}\), where \(M_{j}\in\mathcal{S}\) denotes a stabilizer generator and \(w\) denotes the weight, then each logical coset (equivalence class) will contain many operators of the same weight and the code will be highly degenerate. In cases where \(w(M_{j})=d\), the code will be non-degenerate.

Footnote 7: Note that degeneracy is somewhat different in the entanglement-assisted paradigm [77, 52].

In summary, the checks give us partial information about the error operator that corrupted the encoded information. Since the aim of quantum error correction is to recover the noiseless quantum state, an estimate of the channel error must be obtained so that the noisy state can be corrected.
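Eq. (5) can be evaluated entirely classically by writing Pauli operators in their binary symplectic form. The following NumPy sketch (an illustration of ours; the five-qubit code generators are a standard textbook example and are not otherwise used in this tutorial) computes the syndrome of an arbitrary Pauli error as the vector of its commutation relations with the checks.

```python
import numpy as np

def pauli_to_xz(pauli):
    """Binary symplectic representation: 'XIZZY' -> (x | z) vectors over F_2."""
    x = np.array([c in 'XY' for c in pauli], dtype=int)
    z = np.array([c in 'ZY' for c in pauli], dtype=int)
    return x, z

def syndrome(error, checks):
    """s_i = 0 if the error commutes with check M_i, 1 if it anticommutes (eq. (5))."""
    ex, ez = pauli_to_xz(error)
    bits = []
    for check in checks:
        cx, cz = pauli_to_xz(check)
        bits.append((ex @ cz + ez @ cx) % 2)   # binary symplectic product
    return np.array(bits)

# Stabilizer generators of the [[5,1,3]] code, used here purely as a small example.
checks = ['XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ']

print(syndrome('IIXII', checks))   # single X error on the middle qubit -> [1 1 0 0]
print(syndrome('IIIII', checks))   # no error -> trivial syndrome [0 0 0 0]
```

Because any two Pauli operators either commute or anticommute, this binary arithmetic captures eq. (5) exactly, and it is the representation on which most practical decoders operate.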
The process of estimating the quantum error from the measured syndrome is named decoding. Once a guess of the error, \(\hat{E}\in\mathcal{G}_{n}\), is obtained by the decoder, its conjugate \(\hat{E}^{\dagger}\) is applied to the encoded quantum state. If the estimation turns out to be correct, the noisy quantum state is successfully corrected, since the elements of the Pauli group are unitary. Moreover, if the estimated error is not exactly the error that occurred but belongs to the same degenerate coset, the correction is also successful [54]. Finally, whenever the estimated error fulfills neither of those two cases, the correction operation results in a non-trivial action on the logical qubits encoded in the state, implying that the error correction method has failed.

### The decoding problem

The decoding problem in QEC differs from the decoding problem in classical error correction due to the existence of degeneracy. In this sense, the following classification can be made as a function of the decoding problem being solved [54, 55]:

* **Quantum maximum likelihood decoding (QMLD):** an extrapolation of classical decoding methods, where the estimation problem consists in finding the most likely error pattern associated with the measured syndrome [54, 55]. Mathematically,

\[\hat{E}=\operatorname*{arg\,max}_{E\in\mathcal{G}_{n}}P(E|\bar{s}), \tag{7}\]

where \(P\) refers to the probability distribution function of the errors. Since degeneracy is ignored by this type of decoding, it is also referred to as non-degenerate decoding.

* **Degenerate quantum maximum likelihood decoding (DQMLD):** due to the existence of degenerate errors, which form cosets of errors that affect the coded state in the same manner, it is possible that the probability of occurrence of the coset containing the most probable error sequence (in the sense of QMLD) is smaller than that of another coset allowed by the measured syndrome. Thus, the QMLD decoder is suboptimal, as it ignores the degenerate structure of stabilizer codes. DQMLD can therefore be described mathematically as [54, 55]

\[\hat{L}=\operatorname*{arg\,max}_{L\in\mathcal{L}}P(L|\bar{s}), \tag{8}\]

where \(\mathcal{L}\) denotes the coset partition of \(\mathcal{G}_{n}\) and \(L\) a coset belonging to such partition. Note that once the coset is estimated, the decoding operation is the application of any of its elements, since the action on the logical state is the same for all elements of the coset.

Therefore, the optimal decoding rule for stabilizer codes is DQMLD. However, it has been proven that QMLD falls into the NP-complete complexity class (similar to the classical decoding problem), while DQMLD belongs to the #P-complete class [55, 80, 81, 82]. The latter is computationally much harder, implying that the optimal decoding rule may pose serious issues for the fast decoding needed in quantum error correction [55]. Therefore, even if the optimal decoding rule and, thus, the best code performance is obtained by using DQMLD, non-degenerate decoding is important and widely used, as it is less expensive in terms of computational complexity.
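As a toy illustration of the QMLD rule in eq. (7), the sketch below (continuing the five-qubit example and reusing `syndrome` and `checks` from the previous listing) enumerates every Pauli error on \(n=5\) qubits, keeps those matching the measured syndrome, and returns the one with the largest prior probability under i.i.d. depolarizing noise; maximizing the prior over this restricted set is equivalent to maximizing \(P(E\mid\bar{s})\). The exponential enumeration is only meant to make the decoding rule explicit; it is nothing like the scalable decoders reviewed later.

```python
import numpy as np
from itertools import product

def qmld_decode(s, checks, p=0.05):
    """Brute-force (non-degenerate) QMLD: the most probable single error consistent with
    syndrome s, assuming each qubit independently suffers X, Y or Z with probability p/3."""
    n = len(checks[0])
    best, best_logp = None, -np.inf
    for letters in product('IXYZ', repeat=n):
        error = ''.join(letters)
        if not np.array_equal(syndrome(error, checks), s):
            continue
        w = n - error.count('I')                          # weight of the candidate error
        logp = w * np.log(p / 3) + (n - w) * np.log(1 - p)
        if logp > best_logp:
            best, best_logp = error, logp
    return best

s = syndrome('IIXII', checks)   # pretend the channel applied an X on the middle qubit
print(qmld_decode(s, checks))   # -> 'IIXII', the unique weight-1 error with this syndrome
```

A DQMLD version of this sketch would instead sum the probabilities of all errors within each logical coset before comparing, which is precisely where the #P-hardness of eq. (8) enters.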
## 3 The surface code

Surface codes are a family of quantum error correcting codes in which the information of the logical qubits is mapped to a set of physical qubits, commonly named data qubits, which are arranged in a lattice. Moreover, the measurement qubits that are used to measure the checks are also placed within the lattice. Alexei Kitaev first proposed the concept of surface codes in his prominent work [40], where the qubits were displayed on a torus-shaped lattice. This toric code has periodic boundary conditions and is able to protect two logical qubits. Nevertheless, such a toric layout of the qubits complicates the hardware implementation and the logical qubit connectivity [83], since many experimental implementations, such as superconducting hardware, may require the system to be placed on a two-dimensional lattice. Thus, the so-called planar code, encoding a single logical qubit, is obtained by stripping the periodic boundary conditions from the toric code [32, 84]. Specifically, it is a \([[d^{2}+(d-1)^{2},1,d]]\) QECC. Furthermore, the number of data and measurement qubits used for protecting the single logical qubit can be reduced, i.e. the rate of the code can be increased8, by considering a specific subset of data qubits and checks within the planar code. The resulting code is usually known as the rotated planar code, which is a \([[d^{2},1,d]]\) code [62]. FIG. 1 shows a distance-7 planar code, where the dashed lines indicate the subset of qubits that form the rotated planar code with the same distance.

Figure 1: Distance-7 planar code. Data qubits are represented as grey circles, \(X\)- and \(Z\)-checks are represented as green and orange circles, respectively.

In this tutorial we will consider the rotated planar code on a square lattice with Calderbank-Shor-Steane (CSS) structure9, due to its practicality and relevance at the time of writing. Note that this code is the one that has recently been implemented experimentally by Wallraff's group at ETH Zurich [63] and by the Google Quantum AI team [64]. Nevertheless, the decoding methods discussed in this tutorial apply to all surface codes, including the tailored versions proposed over the past years [41, 42, 87, 88] and the different lattices considered [62, 43, 89].

Footnote 8: The rate of a quantum error correction code is defined as the ratio between the number of logical qubits and the number of physical qubits, i.e. \(R_{Q}=k/n\).

Footnote 9: CSS codes are stabilizer codes admitting a set of generators that are either \(X\)- or \(Z\)-check operators. This means that the check operators consist of tensor products of identities with either \(X\) or \(Z\) operators exclusively [85, 86].

#### 3.0.1 Stabilizer and check structure

Due to the structure of their stabilizers, CSS quantum surface codes have two types of checks: \(X\)-checks and \(Z\)-checks. The former detect \(X\)-errors, while the latter detect \(Z\)-errors. FIG. 2 portrays the structure of the check operators in a distance-5 CSS rotated planar code. As can be seen in the figure, this structure fulfills the condition that the stabilizer generators form an abelian group [39]. The locality of the check operations ensures that checks that are far apart commute with each other, while adjacent checks commute either because they are of the same type and thus apply the same operators to their adjacent data qubits, or because they anticommute on two data qubits at the same time, making the overall operators commute. For example,
\[M_{\mathrm{x}}M_{\mathrm{z}}\ket{\bar{\psi}} =\mathrm{X}_{1}\mathrm{X}_{2}\mathrm{X}_{3}\mathrm{X}_{4}\mathrm{Z }_{3}\mathrm{Z}_{4}\mathrm{Z}_{5}\mathrm{Z}_{6}\ket{\bar{\psi}}\] \[=\mathrm{X}_{1}\mathrm{X}_{2}\mathrm{X}_{3}\mathrm{Z}_{3}\mathrm{X }_{4}\mathrm{Z}_{4}\mathrm{Z}_{5}\mathrm{Z}_{6}\ket{\bar{\psi}} \tag{9}\] \[=\mathrm{X}_{1}\mathrm{X}_{2}(-\mathrm{Z}_{3}\mathrm{X}_{3})(- \mathrm{Z}_{4}\mathrm{X}_{4})\mathrm{Z}_{5}\mathrm{Z}_{6}\ket{\bar{\psi}}\] \[=\mathrm{Z}_{3}\mathrm{Z}_{4}\mathrm{Z}_{5}\mathrm{Z}_{6}\mathrm{ X}_{1}\mathrm{X}_{2}\mathrm{X}_{3}\mathrm{X}_{4}\ket{\bar{\psi}}\] \[=M_{\mathrm{z}}M_{\mathrm{x}}\ket{\bar{\psi}}=\ket{\bar{\psi}},\] where \(M_{z}\) and \(M_{x}\) are two arbitrary adjacent \(X\) and \(Z\)-checks in the bulk of the code, respectively, and the data qubits labeled with 3 and 4 are located in between both checks. The surface code is usually initialized by taking all the data qubits in the \(\ket{0}\) state [63, 64]. Note that this corresponds to the logical \(\ket{\bar{0}}\) state of the surface code and, thus, one can perform the desired computations over such logical state. As explained before, the data qubits of the surface code may undergo a Pauli error. The syndrome of the associated error is measuring the check operators, which correspond to the quantum circuits shown in FIG.3. The top circuit represents an \(X\)-check and the bottom circuit a \(Z\)-check. As seen in such figure, if an odd number of adjacent data qubits suffer from an \(X\) or \(Z\)-error, the measurement of the respective \(X\) or \(Z\)-checks will be triggered. However, as seen in the top image from FIG.3, in the event of an even number of errors, those will cancel, due to their unitary nature, and no error will be detected by the check operator measurement. To sum up, whenever an error consisted of an odd number of \(X\) or \(Z\)-errors affects the data qubits surrounding a check operator, the circuit from the picture will make said errors propagate to the measurement qubit associated to such check, changing the measurement and thus enunciating that an error has occurred in its vicinity [32]. #### 3.0.2 Types of errors and code threshold In FIG.4 we show some examples of errors that may arise in the rotated planar code. In the upper right section of the code, there are three isolated Pauli errors, namely an \(X\), a \(Y\) and a \(Z\)-error, which cause the adjacent susceptible checks to exhibit non-trivial syndrome elements upon measurement. Note that the \(Y\)-error triggers both the \(X\) and \(Z\)-checks that are adjacent to such data qubit. This happens because \(Y\)-errors are a combination of \(X\) and \(Z\)-errors, neglecting the global phase, as seen in eq.(3). These isolated Pauli errors will be detected by the code, and using the syndrome information, the decoder will try to estimate which are those errors. Note that these errors have weight one, and since this specific rotated planar code has distance-5, those errors can be corrected. In addition, two other errors forming a vertical and horizontal chain along the left and bottom boundaries are represented in FIG. 4. As seen in the figure, those error chains are not detected by the code since they do not trigger any of the surrounding measurement qubits. This is because each susceptible check is connected to two of the Pauli operators, i.e. it refers to the previously described case where there is an even number of operators acting on each of the checks. 
These error chains act non-trivially on the codestate without being part of the stabilizer group while presenting a trivial syndrome, i.e. they belong to the normalizer of the code. These types of errors are known as logical errors [32]. Specifically, the left error chain is a logical \(Z\)-error, \(\mathrm{Z}_{L}\), the bottom error chain is a logical \(X\)-error, \(\mathrm{X}_{L}\), and the combination of the two is a logical \(Y\)-error, \(\mathrm{Y}_{L}\). Notice that the anti-commutation relation \(\mathrm{X}_{L}\mathrm{Z}_{L}=-\mathrm{Z}_{L}\mathrm{X}_{L}\) is preserved through the bottom left data qubit. Note that these are just two examples of logical errors; there are other possible logical operators.
Figure 3: Stabilizing circuits of the \(X\)-checks (top) and the \(Z\)-checks (bottom). The green and yellow circles indicate the check qubits, and the grey qubits their 4 adjacent data qubits. In both cases an example of an error in the data qubits is introduced together with the path it follows within the circuit.
Figure 2: Distance-5 rotated planar code. The stabilizing operators that the checks yield over their adjacent data qubits are denoted through light green and orange circles.
Moreover, if a logical error is applied, the resulting state will still be within the codespace, so eq.(4) will still be preserved for the new state and it will remain invariant upon the application of stabilizer operators. This is illustrated in the bottom right of FIG.4, where the application of the bottom left \(Z\)-check modifies the shape of the \(\mathrm{X}_{L}\) operator while still commuting with all the checks. When protecting against quantum noise, experiencing a logical error10 is fatal, since such errors alter the information stored in the code without any indication from the checks and, thus, can neither be detected nor corrected. One way of mitigating the impact of logical errors is to increase the size of the surface code. By doing so, the code distance increases and, therefore, the minimum number of Pauli operators needed to form a logical error will also be higher, making such an event less probable. This comes from the fact that logical errors belong to the normalizer of the code and the distance is defined as the minimum weight of those Pauli operators. Although it may seem intuitive that a larger surface code would perform better, this is not always the case. The momentous result in quantum computing known as the threshold theorem states that the performance of a QECC improves as its distance increases, provided that the physical error rate of the data qubits is below a certain value [29, 40, 91, 90]. This value is known as the probability threshold (\(p_{th}\)). As long as the physical error rate is below \(p_{th}\), increasing the size of the surface code will lead to a better code performance. Footnote 10: Note that here we refer to the unwanted event that the codestate is altered by a logical error due to the noise. However, whenever the logical state is to be manipulated for the purpose of computation, the way of applying logical Pauli gates to the logical qubit is precisely by means of these logical operators acting on the physical qubits. The probability threshold is a useful metric for benchmarking the performance of the surface code under a particular decoder. However, the value of the threshold does not only depend on the code and decoder in consideration but also on the structure of the underlying noise that affects the physical qubits of the code.
In the following section, we will discuss the origin of quantum noise and the most relevant noise models considered in QEC. ## 4 Noise models The main obstacle to construct quantum computers is the proneness of quantum information to suffer from errors. There are many error sources that corrupt quantum information while being processed such as state preparation and measurement (SPAM) errors or errors introduced by imperfect implementations of quantum gates, for example. Many of the errors occur due to the fact that the technology used for manipulating qubits is imperfect [32, 33]. However, qubits do also suffer from errors due to their undesired and unavoidable, in principle, interaction with their surrounding environment. This last source of errors is named as environmental decoherence, and corrupts quantum information even if the quantum system is left to evolve freely [34, 30, 35, 20]. Thus, decoherence poses a fundamental problem to the field of quantum information processing since its existence does not depend on imperfect implementations of qubit manipulations or measurements, which we may deem as engineering problems11. Therefore, quantum error correction will be necessary if we want to run arbitrarily large quantum algorithms with enough precision so that the obtained results are reliable, even if perfect quantum gates, measurements and state preparations are available. Footnote 11: It may seem unfair to state that decoherence is not related to engineering since the way qubits are constructed fundamentally determines how fast a qubit will decohere. However, even in the case that very long decoherence times are obtained, it would not be possible to apply an arbitrarily large amount of perfect quantum gates, since at some point the quantum information would be corrupted. Figure 4: The figure shows a CSS rotated planar code, with physical Pauli operators represented by red circles. Check measurements resulting in non-trivial syndrome elements are marked with an exclamation mark. The thick red line represent the shape of two logical operators, while the pink dashed line shows the changed path of one of them after interacting with a stabilizer. ### Decoherence In general, decoherence comprises several physical processes that describe the qubit environment interaction, and the nature of those processes depends on the qubit technology, i.e. superconducting qubits, ion traps or NV centers, for example. However, most of those physical interactions can be grouped into three main decoherence mechanisms12 since their operational effect on the two-level coherent system that is the qubit is the same [30, 92]: Footnote 12: Here we are considering noise sources that operate on the computational subspace of the qubit, i.e. transitions to other possible levels are neglected by now. This will be described later on. * **Energy relaxation or dissipation:** this mechanism includes the physical processes in which a quantum mechanical system suffers from spontaneous energy losses. For example, atoms in excited states tend to return to the ground state by spontaneous photon emission. The amalgamation of relaxation processes is described by the so called relaxation time, \(T_{1}\), which is the characteristic timescale of the decay process [30, 35, 92]. * **Pure dephasing:** these physical processes involve the corruption of quantum information without energy loss. For example, this occurs a photon scatters in a random manner when going through a waveguide. 
Pure dephasing is also quantified by the characteristic timescale of the decay process [35, 92], which in this case is named as pure dephasing time, \(T_{\phi}\). * **Thermal excitation:** this refers to the undesired excitation of the qubit from the ground state to the excited state and the assisted relaxation from the excited state to the ground state caused by the finite temperature of the system [20, 92, 93]. Every qubit platform will be at a finite temperature in the real world, but the contribution of thermal excitation can be usually neglected when qubits are cooled down significantly13. Generally, the temperature of the system, \(T\), and the energy levels of the ground state and the excited state quantify this effect on the quantum system. Footnote 13: Note that superconducting qubits are cooled down to \(T\approx 20~{}mK\), for example [63, 64, 34, 92]. There are many ways in which this set of physical interactions can be mathematically described such as the Gorini-Kossakowski-Lindblad-Sudarshan (GKLS) master equation [94, 95, 96] or the quantum channel formulation [30, 97]. In the context of QECC, quantum channels are used to describe noisy evolution. In general, quantum channels are linear, completely-positive, trace-preserving (CPTP) maps between spaces of operators. As a consequence of those properties, quantum channels fulfill the Choi-Kraus theorem, implying that the application of such maps on a density operator \(\rho\) can be written as the following decomposition [20, 97, 30]: \[\mathcal{N}(\rho)=\sum_{k}E_{k}\rho E_{k}^{\dagger}, \tag{10}\] where the \(E_{k}\) matrices are named Kraus or error operators, and should fulfill \(\sum_{k}E_{k}^{\dagger}E_{k}=\mathrm{I}\) since the quantum channels must be trace-preserving. Thus, quantum channels are characterized by sets of Kraus operators that are associated to some physical interaction of the qubit with its surrounding environment. The generalized amplitude and phase damping channel (GAPD), \(\mathcal{N}_{\mathrm{GAPD}}\), describes the evolution of a quantum state when decoherence arises from the three qubit-to-environment interactions presented before [30, 35, 93, 98, 99]. Such channel consists of a generalized amplitude damping channel (GAD) describing the thermal and relaxation interaction [93, 98, 99] and of a pure dephasing channel (PD) [30, 35]. The action of the GAD is described by the damping parameter, \(\gamma\), and the probability that the ground state is excited by finite temperature, \(N\). The damping parameter relates to the relaxation time of the qubit as, \(\gamma(t)=1-e^{-(2n_{\mathrm{th}}+1)t/T_{1}}\), with \(t\) being the evolution time, while \(n_{\mathrm{th}}\) and \(N(n_{\mathrm{th}})\) depend on the temperature and the energy gap of the system [93, 98, 99]. Whenever the thermal excitation is considered negligible, \(n_{\mathrm{th}}\approx 0\) and \(N(n_{\mathrm{th}})=0\); and such channel reduces to an amplitude damping channel (AD). The pure dephasing channel (PD) is described by the so called scattering probability, \(\lambda\), which relates to the pure dephasing time as, \(\lambda(t)=1-e^{-2t/T_{\phi}}\), with \(t\) the evolution time again [35, 78]. In this sense, the GAPDH channel is defined by those parameters. ### Twirled quantum channels Therefore, the GAPD channel is a complete mathematical description of the evolution that a qubit undergoes when decoherence is considered. 
The problem with this quantum channel is the fact that it is not possible to simulate it efficiently by means of a classical computer, as the dimension of the Hilbert space increases exponentially with the number of qubits considered [30]. This makes it impossible to construct and simulate efficient error correction codes that will be used for protecting quantum information by using conventional methods. That is why a technique named twirling is usually employed in order to obtain channels that can be managed by classical computers and that capture the essence of the GAPD channel [30, 35, 100, 101]. The significance of the twirling method comes from the fact that a correctable code for the twirled channel will also be a correctable code for the original channel [30, 102] and, thus, we can consider the simplified channels for designing codes that will eventually be successful for the actual noise. Following this logic, the most common twirling operations are the so-called Pauli [30, 100, 78] and Clifford twirl approximations [30, 35, 101, 78] (PTA and CTA), where the quantum channel in consideration is averaged uniformly with the elements of the Pauli group, \(\mathcal{P}\), and the elements of the Clifford group, \(\mathcal{C}\), respectively. Twirling the GAPD channel with these two groups results in Pauli channels, i.e. those with Kraus operators \(\{\sqrt{(1-p_{x}-p_{y}-p_{z})}\mathrm{I},\sqrt{p_{x}}\mathrm{X},\sqrt{p_{y}}\mathrm{Y},\sqrt{p_{z}}\mathrm{Z}\}\), with probabilities:
* **Pauli twirl:** \(p_{x}=p_{y}=\frac{\gamma}{4}\) and \(p_{z}=\frac{2-\gamma-2\sqrt{1-\gamma-(1-\gamma)\lambda}}{4}\).
* **Clifford twirl:** depolarizing channel, \(p_{x}=p_{y}=p_{z}=\frac{2+\gamma-2\sqrt{1-\gamma-(1-\gamma)\lambda}}{12}\).
A brief numerical sketch of these expressions is included at the end of this section. The usefulness of Pauli channels resides in the fact that they can be efficiently simulated on classical computers since they fulfill the Gottesman-Knill theorem [20, 103]. Thus, we can use them in order to construct and simulate quantum error correction codes that will then be useful to protect qubits from the more general GAPD (or APD) noise [30]. Note that the CTA results in a depolarizing or symmetric Pauli channel where all the errors are equiprobable, while the PTA presents a probability bias towards \(Z\)-type errors. Therefore, the PTA is usually referred to as the biased noise model, where the bias14 is defined as \(\eta=p_{z}/(p_{x}+p_{y})\approx T_{1}/T_{2}-1/2\) [104]. The bias of the channel varies significantly as a function of the technology or even as a function of the qubit of a processor being considered [30]. Footnote 14: The bias is usually defined using the so-called Ramsey dephasing time, \(T_{2}\), that includes the dephasing induced by relaxation [35]. In this sense, the parameters relate as \(1/T_{2}=1/(2T_{1})+1/T_{\phi}\) [35, 34]. ### Noise models for multiple qubits There are several ways to construct the \(n\)-qubit quantum channel that is required to study the action of the quantum error correction code being designed [30, 104, 105]. The literature on QEC usually assumes that each of the qubits of the system experiences noise independently15. In this sense, the following \(n\)-qubit twirl approximation channels will be considered: Footnote 15: Note that the independence assumption is not generally true since correlated noise has been considered in the literature.
For surface codes, correlation between the nearest qubits of the code is considered whenever this scenario is studied, assuming that the other ones are far enough so that the correlations are negligible [105, 74]. Nevertheless, treating the channel as memoryless is generally regarded as a reasonable assumption.
* **Independent and identically distributed (i.i.d.):** in this model each of the qubits has the same probability of suffering a particular Pauli error [30]. Combining this with the fact that the noise is considered to be independent, the probability that a particular \(n\)-qubit Pauli error, \(\mathrm{A}=\mathrm{A}_{1}\otimes\mathrm{A}_{2}\otimes\cdots\otimes\mathrm{A}_{n}\) with \(\mathrm{A}_{j}\in\{\mathrm{I},\mathrm{X},\mathrm{Y},\mathrm{Z}\}\), occurs is given by \[p_{\mathrm{A}}(\mu_{T_{1}},\mu_{T_{2}})=\prod_{j=1}^{n}p_{\mathrm{A}_{j}}(\mu_{T_{1}},\mu_{T_{2}}),\] (11) where \(p_{\mathrm{A}_{j}}\) is described by the PTA (biased) or CTA (depolarizing) approximations given before, and where \(\mu_{T_{1}}\) and \(\mu_{T_{2}}\) refer to the mean values of the relaxation and dephasing times. Taking the mean value is the usual approach.
* **Independent and non-identically distributed (i.ni.d.):** in this model every qubit experiences a different probability of suffering a Pauli error [104, 106]. The motivation for this error model is the fact that state-of-the-art quantum processors consist of qubits whose relaxation and dephasing times differ significantly. Considering that the environment-to-qubit interaction is still independent from qubit to qubit, the probability of occurrence of an \(n\)-qubit Pauli error is given by \[p_{\mathrm{A}}(\{T_{1}^{j}\}_{j=1}^{n},\{T_{2}^{j}\}_{j=1}^{n})=\prod_{j=1}^{n}p_{\mathrm{A}_{j}}(T_{1}^{j},T_{2}^{j}),\] (12) where \(p_{\mathrm{A}_{j}}(T_{1}^{j},T_{2}^{j})\) is again given by the PTA (biased) and CTA (depolarizing) approximations, but now each of the terms has particular values of the relaxation and dephasing times.
### SPAM and gate errors As stated before, physical qubits experience errors from sources other than the unavoidable decoherence. Those errors refer to the imperfect implementation of the operations that are done whenever the physical qubits are prepared, measured or manipulated by means of quantum gates, and are usually referred to as circuit-level noise [20, 43, 107]. These errors are usually classified and modelled in the following way:
* **SPAM errors:** these refer to the errors that occur due to the imperfect preparation of the states that are needed to initialize the surface code and the fact that the measurement operations done to detect the syndrome are not always successful. Since it is usually considered that a surface code is initialized with all the physical qubits in the \(|0\rangle\) state, state preparation errors are usually modelled so that a \(|1\rangle\) state is prepared instead of the \(|0\rangle\) with a probability of error \(2p_{\mathrm{prep}}/3\) [32, 43, 107]. This is the same as considering a depolarizing channel after state preparation. Imperfect measurements are usually modelled by considering that the single-qubit measurement outcome is flipped with probability \(2p_{\rm meas}/3\) [32, 43, 107].
* **Noisy single-qubit gates:** due to their imperfect implementation, single-qubit quantum gates, \(\mathcal{U}\), do not perform the desired operation in a perfect way and, thus, introduce noise to the qubit.
In this sense, the noisy quantum gate, \(\hat{\mathcal{U}}\), can be seen as the operation of the quantum gate followed by a quantum channel, \(\Lambda\), that describes the noise introduced by the gate, i.e. \(\hat{\mathcal{U}}=\Lambda\circ\mathcal{U}\)[20, 74, 108]. In this sense, single-qubit gate errors are usually modelled by considering that they are followed by a depolarizing channel with probability of error \(p_{\rm 1Q}\)[32, 43, 107]. This implies that an X, Y or Z-error will be applied to the physical qubit with probability \(p_{\rm 1Q}/3\). * **Noisy two-qubit gates:** similar to the single-qubit gate, two-qubit gates are also modelled by a noisy channel being applied after the perfect operation. However, the usually considered error map is the two-qubit depolarizing channel with probability of error \(p_{\rm 2Q}\)[32, 43, 107]. Therefore, a Pauli error of the set \(\{\rm I,X,Y,Z\}^{\otimes 2}\backslash\mathbb{I}^{\otimes 2}\) will be randomly applied after the perfect two-qubit gate with probability \(p_{\rm 2Q}/15\). It is important to state that a biased circuit-level noise model can also be considered if the depolarizing channels are changed by Pauli channels with a bias towards Z-errors equal to \(\eta\)[107]. ### Erasure errors To finish with this section, we will discuss another error type that can corrupt the qubits of a quantum computer, named erasure error, and that will be considered for the Union Find decoder [109, 110]. Erasure errors come from two types of physical mechanisms that qubits may experience: * **Leakage:** qubits are defined as two-level coherent systems. However, when physically implemented, there exist other levels that can be populated. This would imply that the qubit has left the computational subspace and, thus, it is not useful anymore [109, 111]. Leakage may arise due to decoherence processes that make the qubit to leave the computational space or due to leaky quantum gates. * **Loss:** this refers to the scenario where the qubit is physically missing [109, 112]. For example, in a photonic system the qubit encoded in a photon may be lost. In this context, an erasure channel describes the fact that a qubit at a known location has been lost with probability \(p_{e}\)[110]. The fact that it is known which of the qubits is lost is important since it provides with useful information for treating those errors. The detection of leakage events in physical qubits can be done by means of the quantum jump technique or by means of ancillary qubits, for example [110, 111, 113, 114, 115]. Errors of this type with unknown locations are named deletion errors in the literature [116]. The significant difference between deletion and erasure errors lies in the fact that a deletion error leads to a decrease in the number of qubits of the system, i.e. the qubit is effectively lost, while an erasure error occurrence does not decrease the number of qubits. In this sense, the erasure channel on a qubit, \(\rho\), may be described as \[\mathcal{N}_{\rm er}(\rho)=(1-p_{e})\rho+p_{e}|e\rangle\langle e|, \tag{13}\] where \(|e\rangle\) refers to an erasure flag giving the information that such qubit has been erased. Since erasure errors are detected and their location is known, qubits subjected to such errors can be reinitialized, which results in those being subjected to a random Pauli error after the measurement of the stabilizers is performed [109]. 
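As a small numerical illustration of the twirled channels described in this section, the sketch below evaluates the PTA and CTA probabilities from the damping and scattering parameters, assuming negligible thermal excitation (\(n_{\text{th}}\approx 0\)) so that \(\gamma(t)=1-e^{-t/T_{1}}\) and \(\lambda(t)=1-e^{-2t/T_{\phi}}\). The function name and the numerical values of \(T_{1}\), \(T_{\phi}\) and \(t\) are purely illustrative and not taken from any particular device or library.

```python
import numpy as np

def twirled_pauli_probabilities(T1, Tphi, t):
    """PTA (biased) and CTA (depolarizing) probabilities of the amplitude-plus-
    phase damping channel, neglecting thermal excitation (n_th ~ 0)."""
    gamma = 1.0 - np.exp(-t / T1)        # damping parameter
    lam = 1.0 - np.exp(-2.0 * t / Tphi)  # scattering probability
    root = np.sqrt(1.0 - gamma - (1.0 - gamma) * lam)
    # Pauli twirl approximation: biased Pauli channel.
    px = py = gamma / 4.0
    pz = (2.0 - gamma - 2.0 * root) / 4.0
    # Clifford twirl approximation: depolarizing channel.
    p_dep = (2.0 + gamma - 2.0 * root) / 12.0
    return {"PTA": (px, py, pz), "CTA": (p_dep, p_dep, p_dep)}

# Illustrative numbers: T1 = 100 us, Tphi = 80 us, evolution time t = 1 us.
probs = twirled_pauli_probabilities(T1=100e-6, Tphi=80e-6, t=1e-6)
px, py, pz = probs["PTA"]
print("PTA :", probs["PTA"], " bias eta =", pz / (px + py))
print("CTA :", probs["CTA"])
```

For these illustrative numbers the resulting bias is close to \(T_{1}/T_{2}-1/2\), as expected from the approximate expression given above.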
## 5 Decoders for the surface code As described in the previous sections, surface codes have the ability of detecting errors experienced by the data qubits, which can be accurately modelled by elements of the Pauli group \(\mathcal{G}_{n}\). However, once the error syndrome is measured, an estimation of the channel error must be done using such information, \(\hat{E}(\bar{s})\), so that active error correction can be performed on the noisy qubits. The methods used for performing this inference of the error are named decoders. Once the decoder makes a guess of the channel error, the recovery operation is performed by applying \(\hat{E}^{\dagger}(\bar{s})\) since the elements of the Pauli group are unitary matrices. Therefore, decoding methods for error correction codes are a critical element of the code itself since their efficiency on making correct guesses of channel error will be what will determine if the method is successful or not. In this sense, the threshold of a code is a function of the decoder in question, i.e. the code can perform better or worse as a function of the method used. Making the decoder to be more accurate usually comes with the drawback of increasing its computational complexity, which ultimately makes it to be slower in making guesses. Decoders must be fast enough since the action of decoherence will not stop while estimating the error after measurement, implying that the qubit may suffer from additional errors to which the decoder will be oblivious. Thus, a slow decoder will ultimately have a bad performance. To sum up, the trade-off between the accuracy and complexity of the methods is vital for the field of quantum error correction [56, 57]. Surface codes can be decoded by using many methods [117, 118, 119, 120, 109, 121, 46, 122, 123, 124, 125]. In this section we will explain the operation and performance, in terms of correction ability and complexity, of the main decoders for surface codes: the minimum-weight perfect matching [117, 118, 119, 120], the Union-Find decoder [109], the Belief Propagation decoder [54, 46, 122] and the Tensor Network or Matrix Product State decoder [121]. In addition, we will also discuss variants of those decoding methods that have proven to be more efficient in terms of error correction ability or complexity as the Belif-Propagation Ordered Statistics Decoder (BPOSD) [46, 122], for example. Furthermore, we discuss other decoding algorithms in the literature that can be used for decoding the surface code. Many of those were proposed for decoding other topological codes such as the toric or color codes, but could, in principle, be applied for the rotated planar code. Specifically, we discuss Cellular-automaton [124], renormalization group [123], neural network or machine learning based [125] and MaxSAT [126] decoders. The end of the section includes an overview of the available software implementations available to the general public of all those decoding methods. ### The Minimum Weight Perfect Matching Decoder Before the operation of the MWPM decoder is described, some definitions of graph theory must be provided [127]. Consider a weighted graph \(G\) composed by (\(V\),\(E\),\(W\)), where \(V=\{v_{i}\}\) are the vertices, \(E=\{e_{ij}\}\) is the set of edges which satisfy \(i\neq j\) and \(e_{ij}=\{v_{i},v_{j}\}\), meaning that the edge \(e_{ij}\) connects the nodes \(v_{i}\) and \(v_{j}\), and \(W=\{w_{e}\}\), \(e\in E\), which is the set of weights attributed to each edge. 
A matching of a graph \(G\) is a subset of edges, denoted as \(M\subseteq E\), such that for any two edges \(e\) and \(f\) in \(M\), \(e\) and \(f\) do not share any common vertices. In other words, \(M\) is a set of edges without common endpoints. A perfect matching is a matching that additionally satisfies the condition that every vertex in \(V\) is incident to exactly one edge in \(M\). A minimum weight perfect matching is the perfect matching with the smallest possible weight among all possible perfect matchings [117, 127], where the weight of a matching is defined as the sum of the weights of its edges: \(\sum_{e\in M}w_{e}\). Additionally, a complete graph is a graph with the property that \(\forall v_{i},v_{j}\in V,i\neq j,\exists e_{ij}\in E\). As explained before, when the data qubits of the surface code experience a Pauli error, the checks adjacent to an odd number of errors turn into non-trivial syndrome elements. On the other hand, checks adjacent to an even number of errors will not be triggered by them, since the product of two errors of the same type cancels. Therefore, combinations of those two types of events can be viewed as error chains over the lattice of data qubits that terminate in non-trivial checks. An example of these events is shown in FIG. 5.
Figure 5: Graphical representation of the effect of a Pauli error in a 5x5 rotated surface code. The red lines connecting non-trivial checks represent chains.
In this sense, due to the CSS structure of the surface codes considered, two separate subgraphs whose nodes represent the non-trivial syndrome elements can be constructed: one for the \(X\)-checks and one for the \(Z\)-checks. The aforementioned nodes must be connected with all the other nodes, resulting in complete subgraphs, and each edge connecting two nodes must have a weight equal to the minimum distance between the respective checks in terms of data qubits. Once the two complete subgraphs are obtained, a perfect matching with minimum weight is sought on those subgraphs so that the error chains corrupting the data qubits may be identified [127, 32]. One condition required to construct a graph in which a minimum weight perfect matching can be found is that all correctable error chains within the code have two non-trivial endpoint checks [127]. Nevertheless, the data qubits on the boundary of the code are not covered by four checks, and so error chains can terminate on them with only one non-trivial endpoint. Thus, it is necessary to consider virtual checks adjacent to the aforementioned boundary data qubits. Specifically, each non-trivial syndrome element will have an associated virtual check outside the boundary of the code. Those virtual check nodes will also be connected among themselves, but the weight of those edges will be considered to be zero [127]. In FIG. 6, we provide an example of the operation of the MWPM decoder for a specific detected error syndrome. From top to bottom and then from the left column to the right column, the first row shows the considered syndrome, where the exclamation marks correspond to the checks that have measured a non-trivial syndrome element. The second row depicts two separate graphs, one consisting of all \(X\)-checks (green nodes) and the other of all \(Z\)-checks (orange nodes). Note that in both graphs, the previously discussed virtual checks are located at the boundaries: on the left and right boundaries for the \(X\)-checks and on the top and bottom for the \(Z\)-checks.
The data qubits of the code are then considered to be the edges of such graphs, connecting the check nodes with their nearest neighbours. Over these graphs, all the shortest paths connecting all non-trivial checks with each other and with their nearest virtual qubit are considered for the matching problem. These paths are represented with cyan, blue and violet colors for paths of weights 1, 2 or 3, respectively. By means of those paths, the subgraphs shown in the third row are constructed, where the weights of the edges are given by the number of data qubits that are crossed when following the path from non-trivial check to non-trivial check, i.e. each of the edges of the graphs of the second row of the figure has weight 1. Then, in the right column, the minimum weight perfect matching of those subgraphs is computed. The result of said process can be seen in the first row; notice how virtual qubits can be left unmatched. The matchings of each of the subgraphs correspond to \(X\) or \(Z\)-operators applied to the data qubits that the matching crosses. Thus, for the example in consideration, the second row of the right column presents the operators that will be applied on the data qubits of the surface code. The \(X\)-checks recover \(X\)-errors and the \(Z\)-checks recover \(Z\)-errors. In the last row of the figure, we present the final recovery operator, where it can be seen that whenever an \(X\) and a \(Z\)-error are estimated from each of the graphs for the same data qubit, the resulting recovery operation is a \(Y\)-operator.
Figure 6: Graphical representation of a MWPM process for a specific syndrome in a 5x5 surface code.
The MWPM decoder estimates the Pauli error with minimum \(X\) and \(Z\) weight (since it considers said errors independently) that corresponds to a given syndrome. In this sense, this decoder always outputs an error whose syndrome is the same as the one measured16. In addition to always matching the syndrome, if it succeeds in matching the endpoints of an error chain to the observed syndrome it will always yield the correct outcome, even if the recovered error differs from the input one. The reason for this is that the elements of the stabilizer in a 2D surface code correspond to Pauli sequences that form a closed loop in the surface code [129, 32, 104]. Thus, if the estimated error forms a closed chain with the true error that occurred on the surface code, the resulting Pauli element will belong to the stabilizer set of the code and, thus, the correction will have been successful (recall eq.(6) and the degeneracy of errors). In FIG. 7, we pictorially present this scenario, where two error chains with the same endpoints are separated by a stabilizer element. Note also that whenever an error forms a chain that is a closed loop on the data qubits of the code, all the checks will have trivial values, but the codestate will not be affected by it, as such a loop is a stabilizer element. Thus, those types of chains are non-detectable but harmless for the code. #### 5.1.1 Complexity As described before, the most critical part of the MWPM decoder consists of finding the perfect matching with minimum weight once the subgraphs are constructed from the syndrome information. An algorithm to efficiently solve such a computational problem was proposed by Jack Edmonds back in the 1960s, the so-called blossom algorithm [117]. In general, the MWPM decoder is dominated by the blossom step of the algorithm, whose worst-case complexity in the number of nodes \(N\) is \(O(N^{3}\log{(N)})\) [57, 120].
However, the expected runtime of the decoder is roughly \(O(N^{2})\) whenever the decoder is implemented such that all Dijktra searches are needed for computing the subgraphs where the matching of interest is needed [120, 57]. Therefore, and due to the importance speed for real-time decoding, several implementations of the MWPM decoder have been proposed such as Fowler's implementation with \(O(1)\) parallel expected runtime [127] or the more recent Sparse Blossom by Higgot and Gidney with an observed complexity of \(O(N^{1.32})\)[57] and Fusion Blossom by Wu with \(O(N)\), i.e. linear complexity [118, 119, 130]. Each of the implementations have their own advantages and disadvantages, as for example, Sparse Blossom has a faster single-thread performance than Fusion Blossom, but the latter supports multi-thread execution, implying that it can be faster than the former if enough cores are available [57]. Proposing faster MWPM implementations is an arduous but significant task, since large distances are precised in order to have a fault-tolerant quantum computer, and the decoding schemes also need to be scalable in the sense that they are fast enough when the distance of the code increases. Following the previous discussions, it can be seen that the MWPM decoder follows the QMLD decoding rule as it aims to estimate the most probable error for the given syndrome. Note that, here, finding a perfect matching with minimum weight in the subgraphs formed with the measured syndrome implies that the Pauli element estimated will be the most probable to occur considering pure \(X\) and \(Z\) noise 17. Applying the suboptimal decoding rule has been observed to be an efficient for low physical error probabilities [120]. Footnote 17: This occurs because usually a i.i.d. model is assumed, implying that higher weights imply less probability. #### 5.1.2 Performance and threshold In FIG. 8 we plot the performance of the rotated planar code in terms of the logical error probability (\(P_{L}\)), that is, the probability of the decoding process failing in predicting an error given its syndrome, as a function of the physical error probability (\(p=p_{x}+p_{y}+p_{z}\)) whenever it is decoded using a MWPM decoder. Two noise models are considered: the top figure considers an i.i.d. depolarizing error model (\(p_{x}=p_{y}=p_{z}\)), while in the bottom figure considers an i.i.d. biased Pauli channel with bias \(\eta=100\). The results show how the MWPM decoder performs better when considering noise channels closer to the depolarizing channel. Specifically, not only the logical error probabilities are significantly higher for the biased case when a physical error probability is fixed, but also the probability threshold \(p_{th}\) is lower. This performance decrease when the channel is biased can be explained by the fact that both subgraphs are considered independently. Considering a bias towards \(Z\)-noise results in the \(Z\)-subgraph correspondent to the \(Z\)-checks being more dense, i.e. more non-trivial syndromes are triggered, as opposed to the \(X\)-checks one. This results in the \(Z\)-subgraph reaching the probability threshold before the total physical error probability reaches the threshold of the depolarizing channel. Further increasing the bias of the channel will produce a decrease of the \(p_{th}\) until the extreme value of \(\eta\rightarrow\infty\), that is, a pure dephasing error model. At such point, all triggered syndromes will correspond to the same subgraph, i.e., the right column in FIG. 6. 
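To make the matching step described above more tangible, the following toy sketch builds a complete graph over a few hypothetical non-trivial detection events of one subgraph, adds one virtual boundary node per event with zero-weight edges among the virtual nodes, and finds the minimum-weight perfect matching by brute-force enumeration. It is only meant to illustrate the concept on a handful of nodes; the coordinates and helper names are made up for the example, and practical decoders rely on the blossom-based implementations cited above.

```python
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def min_weight_perfect_matching(nodes, weight):
    """Brute-force MWPM over an even-sized list of nodes (didactic only)."""
    if not nodes:
        return 0, []
    first, rest = nodes[0], nodes[1:]
    best_w, best_m = float("inf"), None
    for i, partner in enumerate(rest):
        w, m = min_weight_perfect_matching(rest[:i] + rest[i + 1:], weight)
        total = weight(first, partner) + w
        if total < best_w:
            best_w, best_m = total, [(first, partner)] + m
    return best_w, best_m

# Hypothetical non-trivial checks (name, (row, col)) of one subgraph, plus one
# virtual boundary check per detection event.
detections = [("d0", (1, 1)), ("d1", (1, 2)), ("d2", (3, 4))]
virtuals = [("v0", (1, -1)), ("v1", (1, 5)), ("v2", (3, 5))]

def edge_weight(a, b):
    # Edges between virtual checks are free, as described above.
    if a[0].startswith("v") and b[0].startswith("v"):
        return 0
    return manhattan(a[1], b[1])

weight, matching = min_weight_perfect_matching(detections + virtuals, edge_weight)
print("total weight:", weight)  # pairs d0-d1 and matches d2 to the boundary
for a, b in matching:
    print(a[0], "--", b[0])
```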
Table 1 shows some \(p_{th}\) values for different biases when the rotated planar code is decoded using the standard MWPM decoder. #### 5.1.3 Modifications and re-weighting for specific noise models Biased channels are important since experimentally implemented qubits have shown lower dephasing times, \(T_{2}\), than relaxation times, \(T_{1}\), which implies that those qubits are more prone to suffer from dephasing errors than from bit-flip errors (recall the GAPD channel) [30, 35, 104]. Bias values in the range of \(\eta\in[1,10^{6}]\) are typical depending on the technology used for constructing the qubits [30, 78]. Therefore, ways to deal with biased noise have been studied, either by modifying the surface code structure so that the \(Z\)-subgraph contains more information than the \(X\)-subgraph, i.e. rectangular surface codes [41, 87, 131], or by modifying the MWPM decoder by making it aware of the symmetries of the code [132].
Figure 7: Two \(Z\)-error chains that share the same endpoints and the stabilizing operator checks which separate them.
In addition, the noise in experimentally constructed hardware is not identically distributed (recall the i.ni.d. error model) [104, 106, 88]. The performance of surface codes is significantly affected by such non-uniformity of the noise over the data qubits of the lattice, as some of the data qubits will have a higher tendency to suffer from errors than others, while the standard MWPM decoder calculates the perfect matching considering that all the qubits are equally likely to fail. Thus, methods based on reweighting the syndrome subgraphs as a function of the probability of error of each of the qubits have been proposed, so that the MWPM problem is solved over a weighted graph that takes such effects into consideration [104, 88]. Another limitation of the standard MWPM decoder is the fact that, since the \(Z\)- and \(X\)-subgraphs are decoded independently, \(Y\)-errors are underestimated. In fact, since \(Y\)-errors are combinations of bit- and phase-flips, there exists a correlation between those two events. Therefore, information can be passed from the \(Z\)-subgraph to the \(X\)-subgraph, and vice versa, so that this correlation is taken into account by reweighting one subgraph as a function of what has been estimated in the other [134, 88, 135]. #### 5.1.4 Measurement errors Another important thing to consider when decoding surface codes is the fact that the measured syndromes might be erroneous [87, 107, 32]. This means that the measured syndrome does not correspond to the Pauli operator affecting the data qubits after measurement. This may occur because the measurement operations are not perfect (recall SPAM errors) or due to so-called error propagation. Error propagation refers to the fact that Pauli errors on one qubit can propagate to another qubit when performing a two-qubit gate. For example, if there is an \(X\)-operator on the control qubit of a CNOT gate, such an operator propagates to the target qubit, i.e. \(\text{CNOT}(\text{X}\otimes\text{I})\,\text{CNOT}^{\dagger}=\text{X}\otimes\text{X}\). As a consequence of this, the circuit-level noise coming from the measurement qubits can propagate to the data qubits, changing the Pauli error due to propagation and, thus, making the measured syndrome unreliable even with perfect SPAM. Those erroneous measurements have a big impact on code performance, lowering the code threshold in a significant manner when circuit-level noise is considered [136, 87, 32, 107].
In order to deal with this problem, several measurement rounds are done before decoding so that a space-time like graph is obtained in which the MWPM problem is performed to estimate the error. Usually, \(d\) measurements are recorded for a distance-\(d\) surface code [136, 32]. It is noteworthy to say that once the measurements are done, a non-trivial syndrome element for a measurement that follows another one will be a measurement that has flipped from such last round, refer to the Appendix A for a more detailed description. Then, the complete graph is constructed by connecting all those non-trivial elements both spatially and temporally. This consideration enlarges the size of the graph where the perfect matching with minimum weight must be computed and, therefore, the complexity of the algorithm increases considerably. By considering this space-time decoding, the performance of the code will improve when compared to single-round decoding, but the code threshold significantly decreases compared to the perfect measurement scenario [107, 87, 32]. \begin{table} \begin{tabular}{|c|c|} \hline \(\eta\) & \(p_{th}\) \\ \hline \hline \(1/2\) & \(0.140\) \\ \hline \(1\) & \(0.138\) \\ \hline \(10\) & \(0.098\) \\ \hline \(100\) & \(0.095\) \\ \hline \(1000\) & \(0.088\) \\ \hline \end{tabular} \end{table} Table 1: Probability threshold values for different biases in the rotated planar code under the MWPM decoding scheme. Figure 8: Logical error probability with dependence on the physical error probability under depolarizing (top) \(\eta=100\) (bottom) noise. The dashed line represents the \(p_{th}\) case and the subplots are close ups to the points near the \(p_{th}\). ### Union-Find decoder The Union-Find decoder (UF) is a decoding scheme proposed by Nicolas Delfosse and Naomi Nickerson in 2017 which also consists in mapping the syndrome into a graph problem [109]. However, this decoder is based on clustering the non-trivial syndrome elements of the subgraphs by considering that Pauli errors at a known location can be treated as erasure errors. As mentioned in the error model section, an erasure error within a physical qubit can be treated as the qubit itself being in a mixed state (subjected to a random Pauli error). Therefore, having a uniform probability distribution for \(I\)-, \(X\)-, \(Y\)- or \(Z\)-operators [53] and, most importantly, a known location. Thus, for a pure erasure error model, all the qubits undergoing said erasure errors are localized. The surface code under the erasure channel can be efficiently decoded in linear time, through a method named peeling decoding scheme [137]. In light of this, the UF decoder is based on the idea of transforming the decoding problem of a surface code experiencing Pauli noise into an erasure error decoding problem. By doing this, the UF decoder achieves a decoding complexity of almost linear time \(O(n\alpha(n))\)[109], where \(\alpha\) is the inverse of the Ackerman function and for all practical purposes \(\alpha(n)\leq 3\)[138]. In order to do so, the UF decoding process consists of two different steps: _syndrome validation_ and _erasure decoder_. The syndrome validation step consists on mapping the set of Pauli errors or a mixture of Pauli and erasure errors into clusters of overall erasure errors [109]. Note that mixtures of Pauli and erasure errors can be considered by the UF decoder since the idea is to only have erasure errors. Once the step is completed, the erasure decoder is based on the peeling decoder [137]. 
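Since the decoder takes its name from the union–find (disjoint-set) data structure used to merge clusters and query their roots efficiently, the following minimal sketch shows a weighted union–find with path compression together with the kind of parity bookkeeping used when clusters merge. The class and the toy usage below are illustrative assumptions of ours, not code from any reference implementation of the decoder.

```python
class UnionFind:
    """Minimal disjoint-set structure with union by size and path compression."""

    def __init__(self, n_checks):
        self.parent = list(range(n_checks))
        self.size = [1] * n_checks
        # Cluster parity: 1 if the cluster contains an odd number of
        # non-trivial checks (and hence keeps growing), 0 otherwise.
        self.parity = [0] * n_checks

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path compression
            v = self.parent[v]
        return v

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return ru
        if self.size[ru] < self.size[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru                 # attach the smaller cluster
        self.size[ru] += self.size[rv]
        self.parity[ru] ^= self.parity[rv]   # parity of the merged cluster
        return ru

# Toy usage: six checks, with checks 1 and 4 measured as non-trivial.
uf = UnionFind(6)
uf.parity[1] = uf.parity[4] = 1
uf.union(1, 2)                 # cluster {1, 2} grows over a shared data qubit
uf.union(2, 4)                 # it merges with the cluster containing check 4
print("cluster parity:", uf.parity[uf.find(1)])  # 0 -> the merged cluster freezes
```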
#### 5.2.1 Syndrome validation Similar to the MWPM decoder, the UF decoder works on two separate graphs, one for the \(X\)-checks and one for the \(Z\)-checks, and they include the boundary virtual checks discussed before. In the syndrome validation step, the checks within each of the graphs are considered to be even parity nodes if they correspond to a trivial measured syndrome element and odd parity nodes otherwise [109]. All odd parity nodes are considered clusters (at the beginning every cluster will have just one element). Then, every cluster will grow, encompassing its nearest check neighbours. When a cluster grows, its parity is updated to the one of the combined constituent checks. Checks with zero parity will contribute trivially to the overall parity of the cluster. When two different clusters come into contact, they merge into a single cluster the parity of which is the resulting one of combining the two previous ones. A cluster is frozen, i.e. it stops to grow if: * The updated parity of the cluster results in an even parity. * The cluster reaches a virtual qubit. * The growing cluster merges with another cluster that is frozen as a result of reaching a virtual qubit. FIG. 9 presents an example of syndrome validation for the \(X\)-check graph (note that the execution of the Z-check graph will be performed in the same manner). The top figure represents the error considered and the triggered checks. Note that we omit the \(Z\)-checks which we represent with orange circles for simplicity. On the second picture from the top, the clusters increase reaching the adjacent checks from the adjacent data qubits from the initial non-trivial checks. The leftmost and rightmost non-trivial checks reach the boundary and, thus, freeze, as shown in the third row. We depict frozen clusters with the cyan color. Moreover, the two triggered checks on the bottom right of the surface code also freeze as a result of the even parity of the cluster. Therefore, by the third figure, only one cluster will continue to grow. For that reason, as it can be seen in the fourth figure, the cluster grows again making contact with one of the leftmost frozen clusters. Consequently, they merge into a single cluster which is frozen because the new cluster has reached a virtual qubit. This makes the syndrome validation step to conclude, as it can be seen in the last row of FIG. 9. #### 5.2.2 Erasure decoder Once the syndrome validation has been computed, the Pauli error within a surface code can be treated as an erasure error (due to its known location) and, thus, can be decoded through the peeling decoder [137]. First of all, the structure of the clusters must be that of a spanning tree in order to execute such method. In graph theory, a tree is an undirected graph in which two vertices are connected by exactly one path, and so there are no cycles. A spanning tree is a tree which contains all vertices within a graph [117]. Consequently, since the clusters after syndrome validation may have cycles, one of the associated spanning trees must be chosen. If a cluster spans from one open boundary, i.e. a boundary with virtual checks, of the surface code to the other one, this is also considered a cycle, and so it must be split in two spanning trees, one adjacent to each side of the open boundary. The vertices of degree 1 within the spanning tree, that is, the ones that are adjacent to a single edge, are named leaves [117]. For the peeling decoder, one of the leaves of each spanning tree is selected as the root of the tree. 
If a spanning tree contains a number of virtual qubit leaves, one of them will be considered the root. The decoding process commences by selecting a non-root leaf for each cluster and applying the following rules [109]:
* **If the leaf vertex is a non-trivial check:** the edge adjacent to it is stored as a matching (a decoded non-trivial Pauli error), the vertex adjacent to it is flipped (if it is trivial it becomes non-trivial and vice versa), and both the leaf vertex and the edge connecting it to the rest of the spanning tree are erased from the spanning tree.
* **If the leaf is a trivial check:** the leaf and the edge adjacent to the leaf are removed from the spanning tree.
The peeling is directed from the leaves all the way to the root of the spanning trees. When a spanning tree is composed of only a single vertex, the decoding process has been completed. It is worth mentioning that removing leaves changes the structure of the tree and may produce more leaves, which must be peeled later. Moreover, virtual qubits play a somewhat ambiguous role in this decoding scheme because when they are considered as leaves they act as trivial checks, but when they are roots they are the last vertex to remain, implying that it does not really matter how they are considered [109]. In FIG. 10, a possible peeling process for the error after the syndrome validation in FIG. 9 is presented. Given the even-parity clusters from FIG. 9, a set of four spanning trees is chosen (top left figure). In the top right figure, these spanning trees are shown with identifying colors: green edges are edges incident to leaves and brown edges represent the trunks of the trees. Moreover, the arrows indicate the growth direction from the tree root to the leaves; the peeling should be done in the opposite direction. In the second-top left image, the result of peeling all the leaves from the previous image is shown. Since most of the leaves were trivial checks, no matchings arise except for one in the bottom right spanning tree, which is denoted with a blue line. The remaining figures show the progress of the peeling decoder until reaching the bottom figure, which shows the recovered error. Edges which are part of the error recovered by the peeling decoder are represented with blue lines. Notice that the recovered Pauli error in FIG. 10 is the same as the one in FIG. 9 up to a stabilizer element arising from the top-right \(X\)-Pauli error, implying a successful decoding round.
Figure 9: Graphical representation of the cluster growth in a distance-9 rotated planar code under a specific error. Violet lines indicate growing clusters and cyan lines indicate parity even and frozen clusters.
#### 5.2.3 Performance and threshold In FIG. 11, the performance of the rotated planar code over depolarizing noise when decoded with the UF decoder is presented. Inspecting the figure, and TABLE 2, it can be seen that the code thresholds achieved by this decoding method are smaller than the ones obtained using the MWPM decoder for all biases considered (recall TABLE 1). Interestingly, the UF decoder always returns an error compatible with the provided syndrome; nevertheless, this error does not always correspond to the error of minimum weight. Thus, the decrease in performance when compared to the MWPM decoder can be explained by the instances in which the peeling decoder fails to match the closest non-trivial syndrome elements. Several attempts have been made by the community in order to diminish the non-optimal choices made by the UF decoder while keeping its low complexity.
At the time of writing, a popular approach towards this goal consists in reweighting the edges of the graph [139, 87]. The reweighting is usually done by previously running some method in order to estimate which data qubits are more prone to have suffered from errors for such syndrome measurement. With such information, the edges representing data qubits that are more prone to errors will have lower weights, which implies that the vertices they connect are closer. Thus, when growing clusters in the syndrome validation phase, the radial growth is fixed and clusters are more prone to grow towards likely to fail data qubits. For example, the so-called belief-find decoder uses a belief propagation method to estimate such information and then continues to decode the error by the UF method [87]. \begin{table} \begin{tabular}{|c|c|} \hline \(\eta\) & \(p_{th}\) \\ \hline \hline \(1/2\) & \(0.116\) \\ \hline \(1\) & \(0.114\) \\ \hline \(10\) & \(0.080\) \\ \hline \(100\) & \(0.078\) \\ \hline \(1000\) & \(0.077\) \\ \hline \end{tabular} \end{table} Table 2: Probability threshold values for different biases in the rotated planar code under the UF decoding scheme. Figure 11: Logical error probability with dependence on the physical error probability from the UF decoding method under depolarizing noise. Figure 10: Graphical representation of the forest peeling from a spanning forest chosen from the set of erasure errors from FIG.9. #### 5.2.4 Measurement errors As explained before, the so-called measurement errors due to imperfect measurement operations and propagation of errors in stabilizer measurement stage have been considered for the MWPM decoder. In the case of the UF decoder, this type of effects have also been taken into account by considering multiple syndrome measurements for a single decoding round, refer to Appendix A for an extended description. In this sense, syndrome validation and peeling are realized over the space-time graph that is obtained. Reweighting methods for a better performance of the UF decoder over those space-time graphs have been discussed too [87, 139]. In addition, and due to the fact that the complexity of the algorithm increases when the space-time graph is considered, truncated UF methods have also been proposed to maintain the fast decoding while not losing too much in terms of decoding success [139]. Ultimately, UF proves to be a very efficient method for decoding the surface code and stands as a fair counterpart to the conventional MWPM decoding process. So much so, that the cluster growth process in the syndrome validation has inspired new methods for optimizing the computational complexity of the minimum-weight perfect matching decoder [57, 118]. Moreover, the UF-decoding method also yields the great advantage of successfully taking into account erasure errors. Were a surface code to undergo a mixture of Pauli and erasure errors, the only difference with the process explained before would be that on the syndrome validation step, there would also be erasure errors, frozen from the beginning, in the form of edges that would join to whichever cluster gets in contact with them. Lastly, there have also been studies trying to strictly relate the conditions under which UF will return the same error as the MWPM method [130] and even more, it has been studied as a possible decoding method for QLDPCs [140], although the complexity problem in this specific case has yet to be addressed. 
Due to its low complexity and high threshold, as of the moment of writing, the UF method seems to be a promising candidate for early experimental real-time surface code decoding. ### Belief Propagation Belief Propagation (BP) is a message-passing algorithm that can be used to solve inference problems on probabilistic graphical models [141]. It is also sometimes referred to as the Sum-Product Algorithm (SPA) [142, 143], a more general-purpose algorithm that computes marginal functions associated with a global function. Although the terms BP and SPA are essentially interchangeable, throughout this paper we will use BP to refer to the algorithm employed to decode error correction codes. Error correction codes, irrespective of being applied in a quantum or classical paradigm, can be represented by bipartite graphs known as factor graphs. A factor graph \(G=(N,E)\) is defined by a set of nodes \(N=V\cup C\), where \(V,C\) represent two distinct types of nodes known as variable and check nodes18, respectively, and a set of edges \(E\). Against this back-drop, BP can be used to approximate the problem of Maximum Likelihood Decoding (MLD) by exchanging messages over the factor graph representation of an error correction code. If BP runs over a graph that is a tree, it will converge to the exact MLD solution in a time bounded by the tree's depth. In scenarios where this does not hold, i.e., when the algorithm is run over a loopy factor graph and convergence cannot be achieved, BP has proven to be a good heuristic decoding method, especially when the typical size of the loops in the graph is large. Footnote 18: In the context of classical codes, variable nodes represent bits (the columns of the Parity Check Matrix (PCM)) and check nodes represent parity check operations (the rows of the PCM). The same holds true for quantum codes, but instead of representing bits and parity checks, variable nodes represent qubits and check nodes represent the action of stabilizer generators. In both paradigms, edges between variable and check nodes exist if the associated entry in the PCM is non-zero. #### 5.3.1 BP: Specifics Earlier we introduced QMLD and DQMLD as the two possible domains of QECC decoding problems. We also mentioned how QMLD and DQMLD are both intractable problems (QMLD belongs to the NP-complete complexity class while DQMLD belongs to the #P-complete complexity class). This means that decoding algorithms generally work by finding solutions to good approximations of the decoding problem. This is precisely the operating principle behind BP works. In the classical context, BP decoders work by solving an approximation of the classical MLD problem known as bit-wise MLD [54]. In the quantum context, an analogous principle is employed: instead of tackling QMLD, we use classical BP decoders and the symplectic representation to solve the problem of qubit-wise MLD. Note that, not only are we not solving QMLD exactly, but we are also ignoring the phenomenon of degeneracy (an optimal quantum decoder would address DQMLD, not QLMD). Qubit-wise MLD is different from QMLD in that, instead of looking for the most likely error given the syndrome, i.e., looking for the global optimum, we will look for the qubit-wise most likely error, i.e., looking for the marginal optimum. This entails maximizing the probability of each individual qubit given a particular syndrome bit. 
Qubit-wise MLD can be written as: \[\hat{E}_{i}^{\text{bw}}=\operatorname*{arg\,max}_{E_{i}\in\mathcal{G}_{1}}\sum_{E_{1},\ldots,E_{i-1},E_{i+1},\ldots,E_{n}}P(E_{1}\ldots E_{n}|s), \tag{14}\] where the qubit-wise most likely error is obtained by running through all values of \(i\): \(\hat{E}^{\text{bw}}=[\hat{E}^{\text{bw}}_{1}\dots\hat{E}^{\text{bw}}_{n}]\). In general, \(\hat{E}^{\text{bw}}\) need not coincide with the global optimum obtained by solving (7) [144]. Although computing solutions to (14) is also difficult, this task is amenable to BP. Given a factor graph representation of the parity check matrix of a code, BP can run over the aforementioned graph and obtain the qubit-wise most likely error in polynomial time. Assuming that we are working with CSS codes (note that the rotated planar code belongs to this class; refer to [144, 54] for a more general description), the BP decoding problem can be understood as the execution of two classical BP decoders, one for \(X\)-errors and the other for \(Z\)-errors. Thus, we can describe the iterative schedule of BP decoding as if applied to a classical code19. To begin, the BP algorithm requires an error syndrome \(s\) and the channel error probability distribution. The error syndrome, a binary vector of length \(n-k\), is computed as \(s=H\cdot E\), where \(H\) denotes the parity check matrix of the classical code and \(E\) represents the error. Each of the entries in the syndrome vector is assigned to a check node. Conversely, the channel error probability distribution is made available to the variable nodes. In this sense, the BP decoder operates by executing the following instruction schedule: Footnote 19: Recall that quantum CSS codes can be understood as the amalgamation of two classical codes, where one code corrects bit-flips and the other corrects phase-flips. 1. At a given decoding iteration \(t\), each variable node, \(v_{i}\), sends out a message, \(\mu^{t}_{v_{i}\to c_{j}}\), to the check nodes, \(c_{j}\), it is connected to. In the first decoding iteration, \(t=1\), this message is equal to the a priori bit error probability, given by the noisy quantum channel in consideration. This message is generally expressed in the log-likelihood domain as \[\mu^{1}_{v_{i}\to c_{j}}=l_{\text{ch}}(E_{i})=\log\left(\frac{p(E_{i}=0)}{p(E_{i}=1)}\right),\] where \(i\in[1,\dots,n]\), \(j\in[1,\dots,n-k]\), and the term \(l_{\text{ch}}\) represents the a priori channel _log-likelihood ratio_ (llr). Note that here we use the slight abuse of notation \(E_{i}\), which refers only to an \(X\)-error or a \(Z\)-error due to the CSS structure of the code, i.e., \(E_{i}=E_{i}^{x/z}\). At future iterations, the message sent from a variable node to neighboring check nodes is given by \[\mu^{t}_{v_{i}\to c_{j}}=l_{\text{ch}}(E_{i})+\sum_{k=1}^{\sigma-1}\mu^{t-1}_{c_{k}\to v_{i}},\] where \(\mu^{t-1}_{c_{k}\to v_{i}}\) are messages received from check nodes in the previous iteration and \(\sigma\) is the degree (number of connections) of the variable node. Note that the sum goes to \(\sigma-1\) because the message received from check node \(c_{j}\) in iteration \(t-1\) is not considered. 2.
Once all these initial messages have been received by the check nodes, every check node will reply to all neighbouring variable nodes with the following message: \[\mu^{t}_{c_{j}\to v_{i}}=(-1)^{s_{j}}\,2\,\text{atanh}\left[\prod_{k=1}^{\psi-1}\tanh\left(\frac{\mu^{t}_{v_{k}\to c_{j}}}{2}\right)\right],\] where \(\psi\) represents the degree of the check nodes, \(t\) represents the BP iteration, and \(s_{j}\) denotes the syndrome bit associated to that particular check node. Notice how the product required to compute \(\mu_{c_{j}\to v_{i}}\) only goes to \(\psi-1\). Once again, this is because the message previously received from the variable node to which the current message will be sent is not considered. 3. Once all check node messages have been received, variable nodes can compute their llrs (also referred to as beliefs or marginals), which we use to estimate (14). This is done as \[l^{t}_{\text{ap}}(E_{i})=l_{\text{ch}}(E_{i})+\sum_{k=1}^{\sigma}\mu^{t}_{c_{k}\to v_{i}}.\] Note how the a posteriori llrs are a combination of the initial a priori channel llrs and the information gathered during decoding. 4. At this point we check whether more decoding iterations are needed. First we obtain the estimate \(\hat{E}^{\text{bw}}=[\hat{E}^{\text{bw}}_{1}\dots\hat{E}^{\text{bw}}_{n}]\) by making hard decisions on the beliefs \(l^{t}_{\text{ap}}(E_{i})\): a negative llr indicates that the estimated probability of an error on that qubit is higher than the probability of no error, so the corresponding entry of the estimate is marked as an error. Then, we compute the syndrome associated to the decoding estimate as \[\hat{s}=H\cdot\hat{E}^{\text{bw}}.\] If \(\hat{s}=s\) then decoding has been successful and the BP algorithm is halted. If not, then additional iterations will be run until either \(\hat{s}=s\) or a maximum number of iterations is reached (a minimal code sketch of this schedule is given below).

#### 5.3.2 Aftermath of neglecting DQMLD

The excellent performance of BP as a decoding algorithm for classical random-like codes, such as LDPC codes and turbo codes [145, 38], is well-documented. In fact, classical LDPC codes are essentially capacity-achieving when decoded via BP [145]. For this reason, along with their finite-rate guarantees, the design of so-called 'good'20 quantum LDPC codes has been a long-pursued topic in the field of QEC. Although the existence of sparse codes exhibiting such favorable parameter scaling remained unproven for the past two decades, groundbreaking results by Panteleev and Kalachev [48] as well as those of [146, 147, 148] have finally shown that quantum analogues of robust LDPC codes do actually exist. However, this only addresses half of the problem (the code design aspect), as there are further quantum-specific challenges in terms of decoding. Quantum codes manifest a phenomenon known as degeneracy [54, 128, 67, 149, 144, 80, 46], which has no classical equivalent. This poses a quandary that the classical version of BP cannot resolve, as it is designed for a classical environment in which degeneracy is not present. Footnote 20: By 'good' error correction codes we refer to codes whose number of encoded logical qubits and distance increase linearly with the number of physical qubits. Essentially, \(k,d\approx O(n)\). Recall that degeneracy in the context of QEC refers to the fact that errors \(E\in\vec{\mathcal{G}}_{n}\) that differ by a stabilizer element (errors that belong to the same logical coset) have the same effect on the code and, thus, are correctable by the same recovery operation.
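To make the schedule of Section 5.3.1 concrete, the following is a minimal, self-contained sketch of log-domain BP over a generic binary parity-check matrix. The matrix, channel probability and error used in the example are hypothetical placeholders rather than a code treated in this paper, and the dense loops are written for clarity, not speed.

```python
import numpy as np

def bp_decode(H, syndrome, p, max_iter=50):
    """Log-domain sum-product decoding of s = H @ e (mod 2).

    H: (m, n) binary parity-check matrix; syndrome: length-m binary vector;
    p: a priori error probability of each bit (i.i.d. channel).
    Returns (hard-decision estimate, converged flag).
    """
    m, n = H.shape
    llr_ch = np.log((1 - p) / p)                  # a priori channel llr
    msg_vc = np.where(H == 1, llr_ch, 0.0)        # variable -> check messages
    msg_cv = np.zeros((m, n))                     # check -> variable messages
    for _ in range(max_iter):
        # Check-node update: mu_{c->v} = (-1)^{s_c} * 2 atanh( prod tanh(mu_{v'->c}/2) )
        for c in range(m):
            vs = np.flatnonzero(H[c])
            t = np.tanh(msg_vc[c, vs] / 2.0)
            for idx, v in enumerate(vs):
                prod = np.clip(np.prod(np.delete(t, idx)), -0.999999, 0.999999)
                msg_cv[c, v] = (-1) ** syndrome[c] * 2.0 * np.arctanh(prod)
        # Variable-node update and a posteriori llrs (beliefs).
        beliefs = llr_ch + msg_cv.sum(axis=0)
        for v in range(n):
            for c in np.flatnonzero(H[:, v]):
                msg_vc[c, v] = beliefs[v] - msg_cv[c, v]
        e_hat = (beliefs < 0).astype(int)         # negative llr -> bit flipped
        if np.array_equal(H @ e_hat % 2, syndrome):
            return e_hat, True
    return e_hat, False

# Toy example with a hypothetical 3x5 parity-check matrix and a single error.
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 1]])
e = np.array([0, 1, 0, 0, 0])
print(bp_decode(H, H @ e % 2, p=0.05))
```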
An optimal decoding strategy for degenerate codes should weigh the probability of each logical coset and pick an operator from the most probable one (the operator itself is irrelevant, what matters is picking the right equivalence class). This nuance is critical, as it highlights the differences between optimal decoding for degenerate quantum codes: DQMLD and non-degenerate quantum codes: QMLD. Applying the QMLD rule to a degenerate quantum code is suboptimal, as the operator with highest probability need not belong to the coset with highest probability. This performance difference is further aggravated if a BP decoder is employed to solve QMLD21. This is due to the fact that degenerate quantum codes can simultaneously exhibit an error probability that is sharply peaked over a logical coset (which would lead to great performance under DQMLD) and that has a broad marginal distribution over individual qubits, which given the large number of low and similarly weighted operators of such codes, would severely hinder a BP decoder. The existence of the aforementioned operators and their equivalence under BP decoding lead to the peculiar _symmetric degeneracy error_ phenomenon [144], which is also referred to as _split-belief_[122] or _Quantum Trapping Set_ (QTS) [150, 151]. Quite frankly, if a code is degenerate enough (if the weight of its generators is substantially smaller than its distance), split beliefs can put the proverbial nail-in-the-coffin for a BP decoder in this context. This cannot be better exemplified than by the fact that that the surface/toric code exhibits no threshold under BP decoding [122]. Surface codes are degenerate by nature, becoming even more so as their size is increased (the weight of their stabilizer generators remains the same but the distance grows). This makes the split-belief phenomenon more prevalent for larger surface codes, explaining the absence of a threshold for this family of codes under BP decoding, as can be seen in FIG. 12. Footnote 21: Recall that BP is an approximation of QMLD that relies on marginalization: it optimizes the error probability individually for each qubit rather than jointly as is called for by QMLD. Split-beliefs have been analyzed in the literature [150, 144, 151] and their impact has been successfully alleviated through myriads of post-processing routines added to general BP decoding. The most successful of these modified BP-based decoding techniques is known as BP-OSD [46]. The performance increments this strategy provides have made it the front-runner in the conversation of a general purpose decoder for QLDPC codes. In fact, the toric code actually exhibits a threshold when decoded via this more sophisticated algorithm. In consequence, this begs the question of whether BP-OSD can compete with the other decoding strategies for the planar code that we have seen thus far. #### 5.3.3 Enhanced Belief Propagation: Quantum Ordered Statistics Decoding The post-processing algorithm known as Ordered Statistics Decoding (OSD) [152, 153] was originally designed to improve the performance of small classical codes, as well as to lower the error floors of certain LDPC codes. It was later adapted to the quantum paradigm by Panteleev and Kalachev in [46], where the authors successfully devised the so called qOSD routine via specific modifications to the classical OSD algorithm. 
In a similar fashion to other post-processing routines for sparse quantum codes, qOSD only works in conjunction with a BP decoder; i.e., Figure 12: Logical error probability with dependence on the physical error probability under depolarizing noise with BP decoding. it requires the soft outputs of a BP decoder22 in order to function. For this reason, the decoding routine that combines both BP and qOSD is generally referred to as BP-OSD. In later work, BP-OSD was shown to work well for Toric codes and a novel class of semi-topological codes [122], and recently, it has also been shown to be a valid decoding strategy for bista-tailored QLDPC codes [154]. The authors of [122] also made their implementation of BP-OSD public, which marked the first open-source demonstration of this decoding algorithm. Footnote 22: Or any other decoder capable of yielding soft values as its output. Soft in this context means that the hard decisions on the most probable error have not been taken so essentially it is aprobability distribution. As is done in [122], for the sake of notational simplicity, we describe OSD post-processing as applied to the classical decoding problem \[s=H\cdot e. \tag{15}\] It is obvious that this framework is equally applicable (albeit with minor programming modifications) to decoding the \(X\) and \(Z\) components of a CSS code or directly decoding over the entire parity check matrix of a non-CSS code. The OSD post-processing algorithm is called upon whenever BP fails to produce an estimate \(\hat{e}\) that matches the measured syndrome. Hence, our starting point is the following: after attempting to decode the measured syndrome \(s\) via BP we end up with the incorrect estimate of the error \(\hat{e}\). The estimate is known to be erroneous since its syndrome does not match the true syndrome. However, the fact that \(s\neq H\cdot\hat{e}\) does not imply that all components of \(\hat{e}\) are wrong, a rationale that OSD will exploit to find a valid solution to (15). We begin by introducing a necessary set of concepts. We will call the set of indices \([I]\) for which \(H\cdot e_{I}=H\cdot\hat{e}_{I}\) the most reliable information set. The complement of this set \([\bar{I}]\), all those indices for which \(H\cdot e_{I}\neq H\cdot\hat{e}_{I}\), will thus be referred to as the least reliable information set. As we explain in what follows, OSD post-processing exploits the concept of reliable information sets to solve the linear system described by (15). The parity check matrix \(H\) of a quantum error correction code is a rectangular matrix that does not have full column-rank, making it impossible to solve \(s=H\cdot\hat{e}\) via matrix inversion. However, the system described by an appropriate set of \(n-k\) linearly independent columns \([S]\) of \(H\) is actually solvable: \[s=H_{[S]}e_{[S]}, \tag{16}\] can be solved as \[H_{[S]}^{-1}s=e_{[S]}, \tag{17}\] (note that \(H_{[S]}\) is a full-rank matrix). For every choice of \([S]\), the basis of linearly independent columns, we will obtain a unique solution \(e_{[S]}\), that satisfies (16). Against this backdrop, we can understand what OSD post-processing is about: if a full-column rank subset of the parity check matrix can be found and used to solve (16) via matrix inversion, we will always find a solution \(e_{[S]}\) that produces a matching syndrome \(s\). Additionally, because this solution is unique, any symmetries that might hinder BP decoding (like those that cause split beliefs) are now broken. 
Naturally, the next question becomes how do we pick the basis \([S]\) in a way that guarantees that this solution is actually 'good', i.e., that it is the lowest weight operator associated to the measured syndrome. It is at this point that the previously introduced concept of reliable sets is applied. More precisely, we can use the soft-values (a posteriori llrs) produced in the final BP decoding iteration to rank the bits from most likely to least likely of being flipped (lowest to highest llr values). We can then apply this order to re-arrange the parity check matrix of the code into a new matrix \(\Lambda\). It is clear that the basis \([J]\) defined by the columns of the first full column rank submatrix23 \(\Lambda_{[J]}\) of the rearranged matrix is the least reliable basis, as it is obtained from the indices of the linearly independent columns associated to the least reliable set of bits. By picking the OSD submatrix in this way, we are guaranteed to find a low weight solution to the syndrome equation. At this point, instances of the OSD algorithm with varying degrees of complexity, denoted as order-\(w\) OSD or OSD-\(w\), where \(w\in[0,\ldots,K]\) and \(K\in\mathbb{N}\), can be applied to search for the lowest weight solution to (15). In what follows we detail the functioning of OSD post-processing and the differences between the lowest order version of OSD, OSD-0, and higher order instances, OSD-\(w\). Footnote 23: This matrix is found by taking the first rank\((H)\) linearly independent columns of \(\Lambda\). **OSD-0**: Assume the following: after measuring a given syndrome \(s\) for a specific code with parity check matrix \(H\), we have decoded via BP and obtained an estimate of the error \(\hat{e}\), which unfortunately does not produce a matching syndrome: \(s\neq H\cdot\hat{e}\). In this context, we would execute OSD, which would operate as follows: 1. Take the soft-outputs24 of the BP decoder given by \(l_{i,\text{ap}}=\log\left(\frac{P(e_{i}=0|s)}{P(e_{i}=1|s)}\right),\ \forall i\in[1,\ldots,n]\), and order them from most-likely to least-likely to have been flipped (increasing order of magnitude). Store the list of bit-indices [OG], as this defines the least reliable information set of bits. Footnote 24: The a posteriori llrs estimated in the final decoding iteration. 2. Re-arrange the columns of the parity check matrix of the code according to the ranking defined by [OG]. We will denote this new matrix by \(\Lambda\). 3. Select the first \(n-k=\text{rank}(H)\) linearly independent columns of \(\Lambda\) to obtain the submatrix \(\Lambda_{[J]}\). The list of indices associated to these columns defines the least reliable information set of bits \([J]\). Note that these columns must be linearly independent, else \(\Lambda_{[J]}\) will not have full-column rank. 4. Invert \(\Lambda_{[J]}\) into \(\Lambda_{[J]}^{-1}\). Calculate the solution \(e_{[J]}\) to the OSD syndrome equation as \(\Lambda_{[J]}^{-1}s=e_{[J]}\). 5. The complete solution to the decoding problem is \(\mathbf{e}_{\Lambda_{0}}=[e_{[J]},\mathbf{0}]\), where \(J\) and \(\bar{J}\) denote the least and most reliable information sets of bits, respectively, and the bits indexed by \(\bar{J}\) are set to zero. Knowing that \(e_{[J]}\) satisfies \(\Lambda_{[J]}e_{[J]}=s\), it is easy to see that \(\Lambda\mathbf{e}_{\Lambda_{0}}=\Lambda[e_{[J]},\mathbf{0}]=s\). 6. The last step is to take the solution \(\mathbf{e}_{\Lambda_{0}}=[e_{[J]},\mathbf{0}]\), and map it to the original bit-index order: \(\mathbf{e}_{\Lambda_{0}}\rightarrow\mathbf{e}_{\text{OSD-0}}\). A code sketch of these steps is given below.
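The following is a minimal sketch of the OSD-0 steps listed above, using plain GF(2) Gaussian elimination. The parity-check matrix, syndrome and soft outputs in the example are hypothetical placeholders, and no attempt is made at an efficient implementation.

```python
import numpy as np

def osd0(H, syndrome, llrs):
    """OSD-0: re-order the columns of H by the BP soft outputs (least reliable
    first), keep the first rank(H) linearly independent columns over GF(2),
    solve that full-column-rank system for the syndrome, and set every other
    bit of the estimate to zero."""
    m, n = H.shape
    order = np.argsort(llrs)               # lowest llr = most likely flipped
    A = np.concatenate([H[:, order] % 2, syndrome.reshape(-1, 1) % 2], axis=1).astype(int)
    pivot_cols, r = [], 0
    for c in range(n):                      # GF(2) Gaussian elimination, column by column
        rows = [i for i in range(r, m) if A[i, c]]
        if not rows:
            continue                        # dependent column: stays zero in OSD-0
        A[[r, rows[0]]] = A[[rows[0], r]]
        for i in range(m):
            if i != r and A[i, c]:
                A[i] = (A[i] + A[r]) % 2
        pivot_cols.append(c)
        r += 1
    e_ordered = np.zeros(n, dtype=int)
    for row, c in enumerate(pivot_cols):    # read off the unique solution on the basis
        e_ordered[c] = A[row, -1]
    e = np.zeros(n, dtype=int)
    e[order] = e_ordered                    # map back to the original bit-index order
    return e

# Hypothetical toy inputs: a small parity-check matrix, a syndrome and BP beliefs.
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 1]])
s = np.array([1, 1, 0])
llrs = np.array([2.5, -1.6, 2.5, 2.5, 2.5])   # bit 1 deemed most likely to be in error
e_hat = osd0(H, s, llrs)
assert np.array_equal(H @ e_hat % 2, s)
print(e_hat)
```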
We call the rearranged vector, \(e_{\text{OSD-0}}\), the OSD-0 solution. In FIG. 13, an example is provided where a specific error produces a syndrome which is later decoded finally obtaining a recovered error. In the top row, the error and the syndrome within the surface code are introduced, data qubits, \(X\) and \(Z\)-checks are labelled in black, green and orange colors respectively. In the second row, one can see a graphical representation of the BP method, where messages are sent from data qubits to checks and vice versa. Notice, how there are two grey circles for each data qubit label, that is because the two BP graphs are independent, and so each data qubit will have two resulting marginal probabilities, one for the \(X\)-operators and another one for the \(Z\) ones. In the following row, the resulting marginal probabilities are represented within the data qubits of the surface code with \(X\)-check qubits and \(Z\)-check ones. On the left surface code a data qubit being redder indicate its marginal probability of recovering an \(X\)-operator being larger, and thus its llr being lower, and whiter otherwise. The same rules go by the \(Z\)-check surface code on the right. On the following row, the matrix \(\Lambda\) is represented. The chosen data qubits are the ones which represent the independent columns with the higher marginal probabilities, the four columns on the left are extracted from the \(X\)-check subgraph and the remaining four from the \(Z\)-check subgraph, as indicated by the labels on top of the matrix. This matrix is inverted following the earlier described process, reaching a recovered error which is depicted in the last row. Notice how the error is not the same set of Pauli operators as the inputted one, nevertheless, it is within the same stabilising group, and thus, successfully corrects the error. **OSD-w:** Higher order OSD is similar to OSD-0, the difference being that we now consider solutions \(\mathbf{e}_{\Lambda_{w}}=[e_{[J]},e_{[\bar{J}]}]\) where \(e_{[\bar{J}]}\neq\mathbf{0}\). OSD-\(w\) begins by Figure 13: Graphical description of a BPOSD-0 process for a specific syndrome in a 3x3 surface code. running the first five steps of OSD-0 and computing \(e_{[J]}\), which for the sake of notation we will now denote by \(e_{[\bar{J}]}^{w=0}\). Once this is done, different candidates \(\mathbf{e}_{k}\) can be found by making choices for \(e_{[\bar{J}]}\) and solving \[\mathbf{e}_{\Lambda_{w}}=[e_{[J]},e_{[\bar{J}]}]=[e_{[\bar{J}]}^{w=0}+\Lambda_{[ \bar{J}]}^{-1}\Lambda_{[\bar{J}]}e_{[\bar{J}]},\ e_{[\bar{J}]}], \tag{18}\] where \(\Lambda_{[\bar{J}]}\) is the submatrix obtained by taking the columns of \(\Lambda\) indexed by \(\bar{J}\). The solution \(\mathbf{e}_{w}\) given in (18) satisfies the OSD syndrome equation \[\Lambda\mathbf{e}_{\Lambda_{w}}=s \tag{19}\] for any choice of \(e_{[\bar{J}]}\). The premise behind OSD-\(w\) (considering \(e_{[\bar{J}]}\neq\mathbf{0}\)) is to find solutions \(\mathbf{e}_{\Lambda_{w}}\) of lower Hamming weight than \(\mathbf{e}_{\Lambda_{0}}\). The \(e_{[\bar{J}]}\neq\mathbf{0}\) component has dimension \(n-\mathrm{rank}(H)\), implying that testing all possible configurations for \(e_{[\bar{J}]}\) is intractable beyond a small value of \(n\). For this reason, it is important to design a strategy that makes good choices for the \(e_{[\bar{J}]}\) candidates. This is best approached25 using the so called _combination sweep strategy_. 
This greedy search method works as follows: Footnote 25: A different strategy was employed in the works that first introduced qOSD. However, slight performance improvements were shown in [122] when using the combination sweep strategy. 1. At the start of the search, the OSD-\(w\) solution is equal to the unordered OSD-0 solution: \(\mathbf{e}_{\Lambda_{w}}=\mathbf{e}_{\Lambda_{0}}\). 2. Sort the bits in the \(e_{[\bar{J}]}\) subvector of the solution according to the soft-outputs of the original BP decoding attempt. Note that this step is already built into OSD-0 when re-arranging the parity check matrix according to the BP outputs. 3. Test all possible weight-1 configurations of \(e_{[\bar{J}]}\). There are a total of \(n-\mathrm{rank}(H)\) weight-1 candidates. If the weight of any of the candidates is lower than the weight of \(\mathbf{e}_{\Lambda_{0}}\), update the choice of \(\mathbf{e}_{\Lambda_{w}}\). 4. Try all possible weight-2 configurations in the first \(w\) bits of \(e_{[\bar{J}]}\). Obviously, the order26\(w\) will be upper bounded by \(n-\mathrm{rank}(H)\) the dimension of \(e_{[\bar{J}]}\). The number of candidates is given by the binomial coefficient \(\binom{w}{2}\). If the weight of any of the candidates is lower than the weight of the current choice for the solution, update the choice of \(\mathbf{e}_{\Lambda_{w}}\). Footnote 26: It is worth mentioning that in [122], the order \(w\) of the OSD algorithm is referred to as the search depth. 5. The final step is analogous to that of the order 0 version: map the solution to the original bit-index order accordingly: \(\mathbf{e}_{\Lambda_{w}}\rightarrow\mathbf{e}_{\text{OSD-w}}\). OSD-\(w\) entails testing a total of \(n-\mathrm{rank}(H)+\binom{w}{2}\) candidates for \(\mathbf{e}_{\Lambda_{w}}\), out of which the minimum Hamming weight solution, or at least a better choice than \(\mathbf{e}_{\Lambda_{0}}\), will have been found. As the order \(w\) of the combination sweep strategy is increased, the likelihood of finding the solution of minimum Hamming weight to (19) increases. At the same time, this also implies additional computational demand, negatively impacting the complexity of the algorithm and its runtime performance. #### 5.3.4 OSD Complexity In [46], the authors show that most of the performance improvements provided by qOSD postprocessing are achieved with OSD-0. While increasing the order \(w\) of the algorithm yields benefit over setting \(w=0\), for various QLDPC code families the improvement provided by running the algorithm with \(w>0\) is marginal. It is important to mention that in [46], the authors use a different algorithm to the combination sweep strategy [122]. Instead, they apply an exhaustive approach where they test all possible permutations in the first \(w\) bits of \(e_{[\bar{J}]}\). Given that OSD-0 requires the solving of a linear system, its complexity will be at most \(O(n^{3})\) (although there are matrix inversion algorithms with better complexity [155], they are quite impractical). The exhaustive approach to OSD-\(w\) has a complexity in the general case of \(O(n^{3}+n2^{w})\). Although the combination sweep also has an edge in terms of complexity (\(\approx O(w^{\alpha}n^{3})\)), it is likely that OSD-0 is the only version of the algorithm that can be successfully implemented in a real time system [122, 156]. Recently, an OSD-inspired reduced complexity approach known as _stabilizer inactivation_ has been proposed. 
In [157] the authors showed that this strategy, which has worst-case complexity \(O(n^{2}\log n)\), achieved a higher threshold for the family of generalized bicycle codes.

#### 5.3.5 Performance and threshold

As has been previously seen in FIG. 12, the BP decoding method has no probability threshold by itself and so is not a usable method for decoding the surface code. Nevertheless, considering BPOSD-0 yields an enhancement which produces a probability threshold of 0.139 under depolarizing data qubit noise. This result can be seen in FIG. 14. This threshold can be further increased when considering higher BPOSD orders at the expense of a higher complexity. Moreover, the BPOSD method is also compromised when considering \(Z\)-noise bias, which can be explained by the \(Z\)-check subgraph becoming more dense than the \(X\)-check one and thus being more prone to failure. Table 3 reviews several thresholds for three different ordered statistics decoding processes under different biases.

#### 5.3.6 Measurement errors

There have been many proposals for handling circuit-level noise by using belief propagation. An important proposal for handling this type of error in the surface code is the decoder known as belief-matching [87]. This decoder is based on a combination of the BP and MWPM algorithms, an idea that was previously explored in [158] with perfect syndrome measurements. The original belief-matching algorithm achieved a 17.76% threshold for the rotated planar code when depolarizing noise was considered over the data qubits. The decoding complexity of the algorithm was maintained when compared to MWPM when parallel processing was allowed [158]. Due to the impressive performance of this BP+MWPM approach, the method was generalized to handle noisy gates and SPAM errors in [87], resulting in one of the most prominent algorithms for obtaining logical qubits in such planar architecture codes. The basic idea behind the belief-matching algorithm is similar to that of the BPOSD decoder in the sense that the soft outputs of the sum-product algorithm are passed on for the subsequent decoding algorithm to make use of them. In this sense, the weights of the graph in the MWPM decoder are produced by using such a posteriori information. Regarding the generalization of belief-matching to circuit-level noise, the authors discussed how to construct the circuit-level Tanner graph that takes the measurement circuits of the surface code into account, so that belief propagation can be run over such graph and the soft output can then be used for obtaining the MWPM solution [87]. Note that \(\mathcal{O}(d)\) measurement rounds are required for this. The authors found a threshold of 0.94% for the belief-matching algorithm with circuit-level noise, which is comparable to the belief-find decoder also proposed in that article, which follows the same procedure but uses UF instead of MWPM and achieves a threshold of 0.937% [87]. The authors discuss that their performances are similar due to the fact that most of the information needed for decoding is provided by the BP part of the algorithm. Another approach to deal with noisy syndromes is to consider soft syndrome information for the BP graph used for decoding, as was done for QLDPC codes in [159]. In such work, the authors discuss the fact that when a measurement is noisy, the syndrome information fed to the Tanner graph can be soft, i.e., a probability distribution conditioned on the obtained noisy measurement.
Therefore, the fact that the measurement outcome might not be precise is also fed to the BP algorithm. However, this study only considered the fact that the noisy measurements are a result of SPAM errors, neglecting circuit level noise. The authors discuss that this is considered to be future work in this direction. Interestingly, such paper was based on the previous work by Pattison et al. that discussed the use of soft measurement information in the context of MWPM and UF for surface codes [160]. The authors concluded that their modified decoders using soft information improved the threshold obtained by hard decision decoders for the circuit-level noise considered. In this sense, considering this approaches for BP decoders discussed for the surface code might be an interesting path to follow for dealing with circuit level noise. Finally, single-shot decoding using BPOSD was investigated for the 4D toric code in [161]. Single-shot decoding refers to estimating the error when noisy measurements are present in a single measurement round, i.e. without needing to measure the usual \(\mathcal{O}(d)\) rounds. In such article, the authors propose to decode data qubit errors and noisy syndrome measurements altogether in a single stage via the BPOSD decoder. The authors discuss that they obtain better thresholds than using multiple-measurements and other single-shot decoder such as the cellular automaton decoder. However, they use a phenomenological noise model for faulty syndrome measurements as discussed before, i.e. they do not consider full circuit-level noise, which they consider to be future work [161]. Single-shot decoding is being actively investigated at the time of writing, specially from the point of view of QLDPC codes [162, 163, 164, 165]. This Figure 14: Logical error probability with dependence on the physical error probability under depolarizing noise with BPOSD-0 decoding. \begin{table} \begin{tabular}{|c|c|} \hline \(\eta\) & \(p_{th}\) \\ \hline \hline \(1/2\) & \(0.139\) \\ \hline \(1\) & \(0.138\) \\ \hline \(10\) & \(0.098\) \\ \hline \(100\) & \(0.094\) \\ \hline \(1000\) & \(0.092\) \\ \hline \end{tabular} \end{table} Table 3: Probability threshold values for different biases and orders in the rotated planar code under the BPOSD-0 decoding scheme. approach was originally proposed for some families of topological codes, and due to its advantage in terms of measurement rounds and, presumably, performance, seems to be interesting to study more codes that may admit this kind of decoding to deal with circuit-level noise. ### The Tensor Network decoder Tensor Network (TN) decoders are decoding methods, proposed by Bravyi et al. for the surface code, that aim to resolve the DQMLD problem, i.e. to estimate which is the most probable logical coset based on the syndrome information [121]. Note that, up to this point, all decoders considered have followed a QMLD logic, that is, their aim has been to seek for the most probable Pauli error (QMLD) since, as has been seen earlier, the error recovery problem contemplating error logical classes (DQMLD) belongs to the \(\#P\) complexity class [55]. For this reason, the TN decoder is also usually referred as the maximum likelihood decoder (MLD)27[121]. Fortunately, surface codes admit a natural representation in terms of Tensor Networks (TN) [166], feature that can be used for approximating the DQMLD problem targeting the most probable error coset. 
Decoding quantum error correcting codes with TNs was first considered in [167], where the authors described for the first time the equivalence between decoding a quantum code and contracting a TN for quantum turbo, polar and branching multiscale entanglement renormalization ansatz (MERA) codes [168, 169]. Later, the TN decoding was particularized for the planar code in [121], and, afterwards, in [170], the method was expanded to the rotated planar code yielding significant results. Lastly, in [171], the method was generalized to any quantum 2D code. Footnote 27: Note that this nomenclature might be somewhat confusing as it does not explicitly represent the fact that it is a degenerated decoder. Planar codes are not the only class of topological codes that admit the TN decoding methods as, in general, those admit a natural representation in terms of 2D Projected Entangled Pair States (PEPS) [172]. For instance, the ground states of the Toric Code Hamiltonian, codifying two logical qubits, can be written exactly as PEPS with low bond dimension, and the same is true for all other surface codes such as quantum double models, color codes, and even string-net models [173]. This analytical correspondence implies that decoding such codes can be done using standard TN techniques, in particular those related to the contraction of 2D TNs. Such algorithms have been used widely in the calculation of partition functions of 2D classical lattice models [174, 175], as well as in the calculation of expectation values of 2D quantum states on a lattice [172, 176]. Take for instance the straightforward example of the Toric Code. The expectation value of loop operators around the torus geometry decode the value of the logical qubits, in a completely analogue way to the expectation value of Wilson loop operators in lattice gauge theories. As such, these loop operators are built from tensor products of Pauli matrices along the sites on a loop, and the TN contraction involved to estimate them can be done via usual contraction methods for 2D TNs. In broad terms, the idea behind TN decoding consists in recovering an error \(E_{rec}\) that shares the observed error syndrome in a quick manner so that when combining it with the actual channel error the resulting element lays in the normalizer of the code. Once this is done, the probability of the logical error cosets of the normalizer are computed to select the most probable one. For the specific case of codes encoding a single logical qubit (such as the rotated planar code), there will be four logical cosets: \(I_{L},X_{L},Y_{L}\) and \(Z_{L}\). It is worth mentioning that \(E_{rec}\) does not need to be the most probable error since its unique purpose is to transport the error to the zero syndrome coset [128]. After \(p(E_{rec}I_{L}),p(E_{rec}X_{L}),p(E_{rec}Y_{L})\) and \(p(E_{rec}Z_{L})\) are computed, the most probable error class, \(\mathcal{L}\), determines the recovery operation \(\hat{E}=E_{rec}\mathcal{L}\). #### 5.4.1 Introduction to TNs In order to better understand the functioning of the TN decoder, we will first briefly introduce some generic terms about tensor networks. TNs are a class of variational wave functions used in the study of many-body quantum systems in which the quantum relation from tensor to tensor is known [166]. TNs are used in many research fields for studying complex and correlated systems with large configuration spaces. 
How they work is based on the fact that there are configurations much more significant than others, and so, given a certain threshold, one can omit the least reliable configurations in order to work with a reduced and manageable subspace. TNs accept an intuitive notation. FIG. 15 depicts some examples of tensors represented through various plots, where vectors are represented as circles, and matrices and higher order tensors, as squares. In a TN, some, or all the indices can be contracted according to a specific pattern, as shown in FIG. 15. If all the indices are contracted, the TN results in a scalar, Figure 15: Some examples of tensor operations. **a)** indicates a scalar product of two vectors, **b)** indicates the trace of a matrix, **c)** indicates the product of a vector with a matrix and **d)** indicates a matrix product state. as in the case of the two first examples of FIG. 15. The lines connecting tensors from one to the other represent the indices such that there is a sum over all their possible values, whereas lines that are not connecting to anywhere represent free indices. Those indices that correspond to the original physical Hilbert spaces are called _physical_ indices, and the rest (connecting the different tensors with each other) are called _bond_ indices, and are responsible for the entanglement in the quantum many-body wavefunction. The range of values of a bond index is called _bond dimension_, following common TN terminology [166, 121]. Consider now the following case: we have a 2D tensor network such as the one showed in FIG. 16, which is the one encountered when computing expectation values of a projected entanglement pair state (PEPS). The constituent tensors on the bulk have bond dimension 4, while the ones on the boundary have lower bond dimensions. One can consider the left-most (or right-most) column as a matrix product state (MPS) and columns within the bulk of the PEPS as matrix product operators (MPO). When contracting the left MPS with its nearest MPO, the overall 2D TN will be reduced by one column as shown in FIG. 16, and for the new column which is obtained the vertical bond dimension will have increased in an exponential manner. Contracting the entire 2D TN will most likely be an unfeasible challenge as a result of the exponentially increasing bond dimension. Fortunately, we can still obtain meaningful results for continuous column contraction while avoiding an exponential growth [177, 178]. For the cases we consider, this is made by using the QR matrix decomposition altogether with the singular value decomposition (SVD), which is used to truncate the vertical bond-dimensions [121, 179]. Truncating the bond dimension of the column contractions will yield an approximate result to the overall contraction as opposed to an analytical result which, nevertheless, allows for a defined run time since, when the contraction results in a last column, its vertical contraction can be carried at polynomial time [171]. #### 5.4.2 TN decoding for the planar code A surface code encodes a single logical qubit, therefore the normalizer consists of the following logical cosets: \(I_{L}\), \(X_{L}\), \(Y_{L}\) and \(Z_{L}\). As a result of degeneracy, the most probable Pauli error corresponding to a syndrome does not need to correspond to the most probable error class for such syndrome [54]. 
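As a toy illustration of the column contraction and bond truncation described in Section 5.4.1, the sketch below absorbs one MPO column into a boundary MPS with numpy and then caps every bond dimension at \(\chi\) via an SVD sweep. The tensor shapes, random entries and the single left-to-right truncation sweep are simplifying assumptions and do not reproduce the exact QR+SVD scheme of [121].

```python
import numpy as np

def apply_mpo_column(mps, mpo):
    """Contract one MPO column into the boundary MPS.
    mps[i]: (Dl, d, Dr) tensors; mpo[i]: (Wl, d_out, d_in, Wr) tensors.
    The bond dimension grows from D to D*W, as described in the text."""
    out = []
    for A, M in zip(mps, mpo):
        T = np.einsum('ldr,wsdv->lwsrv', A, M)           # contract the physical index
        Dl, Wl, d, Dr, Wr = T.shape
        out.append(T.reshape(Dl * Wl, d, Dr * Wr))
    return out

def truncate_mps(mps, chi):
    """Left-to-right SVD sweep capping every bond dimension at chi
    (a simplified stand-in for the compression discussed in the text)."""
    mps = [A.copy() for A in mps]
    for i in range(len(mps) - 1):
        Dl, d, Dr = mps[i].shape
        U, S, Vh = np.linalg.svd(mps[i].reshape(Dl * d, Dr), full_matrices=False)
        k = min(chi, len(S))
        mps[i] = U[:, :k].reshape(Dl, d, k)
        mps[i + 1] = np.einsum('ab,bdr->adr', S[:k, None] * Vh[:k, :], mps[i + 1])
    return mps

# Toy example: a random 4-site boundary MPS and MPO column with small dimensions.
rng = np.random.default_rng(0)
d, D, W, chi = 2, 3, 2, 4
mps = [rng.normal(size=(1 if i == 0 else D, d, 1 if i == 3 else D)) for i in range(4)]
mpo = [rng.normal(size=(1 if i == 0 else W, d, d, 1 if i == 3 else W)) for i in range(4)]
grown = apply_mpo_column(mps, mpo)        # bond dimension is now up to D*W = 6
compressed = truncate_mps(grown, chi)     # every bond capped at chi = 4
print([A.shape for A in compressed])
```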
As mentioned earlier, when facing a non-trivial syndrome in a code, a TN decoder will seek to find a Pauli error correspondent to the syndrome, \(E_{rec}\), and the most probable logical coset given the chosen error \(E_{rec}\). In this sense, the planar code can be associated to a PEPS tensor network [121] following: \[p(E_{rec}\mathcal{L})=\sum_{\alpha,\beta}T(\alpha;\beta), \tag{20}\] where \(p(E_{rec}\mathcal{L})\) is the probability of the error being within the error class \(\mathcal{L}\) and \(\alpha_{i},\beta_{j}\in\{0,1\}\) indicate the application of the stabilizer generator operators. An arbitrary application of a stabilizer element operator would be denoted as \(g(\alpha,\beta):=\prod_{i}(A_{i})^{\alpha_{i}}\prod_{j}(B_{j})^{\alpha_{j}}\), where \(\alpha_{i}\) indicate \(X\)-stabilizer operators and \(\beta_{j}\) indicate \(Z\)-stabilizer operators. The resulting tensor network is graphically represented in FIG. 17. Note how now all stabilizer generators are considered as identical while the data qubits are labelled as horizontal (H) or vertical (V). This consideration is equivalent, now, horizontal data qubits are considered to be operated with \(Z\)-stabilizer operators from top and bottom and \(X\)-stabilizer operators from left and right, while the opposite happens for vertical data qubits. Under these considerations we can define the tensor nodes which form the overall planar code PEPS tensor network in FIG. 18. The elements of the data qubit tensors are the probabilities of experiencing Pauli errors given by the indices. Moreover, the indices \(n\), \(e\), \(s\) and \(w\) indicate the tensors to their north, east, south and west; respectively. The indices are binary and indicate if either of the adjacent stabilizer generators operates non-trivially on them as can be seen in Figure 16: A tensor network which contracts the left most column with the second one. Figure 17: On the left, a distance-3 planar code, on the right the tensor network which represents it. Stabilizer checks are denoted through brown squares and labelled as \(s\), and horizontal and vertical data qubits are denoted as \(H\) and \(V\) respectively. the right side of FIG. 18, where one can see that the state of a data qubit tensor is both dependent on the indices and the correspondent Pauli operator on the qubit from the error \(E_{rec}\). On the other side, the stabilizer tensor nodes are defined by Kronecker deltas, i.e. they are either \(1\) when \(n=e=s=w\) and \(0\) otherwise. That is because the stabilizer generators either operate on all their nearest data qubits or do not operate on any of them, as was earlier shown in FIG. 2. Since the probability of error of data qubits is independent from one to another, considering an error \(E_{rec}\) associated to a syndrome and contracting the tensor network from FIG. 17 results in the summation of products of tensor probabilities under all the combinations of the stabilizer generators, which itself is equivalent to finding the probability of the error class \(p(\mathcal{G}E_{rec})\). Unfortunately, and as explained before, contracting columns increases the vertical bond dimension exponentially and, thus, the reasonable approach consists in truncating the vertical bond dimension after each contraction to a truncated value \(\chi\). This is often done through the Schmidt decomposition [121, 179], which allows the resulting tensor after the column contraction to be represented as a sum of products of smaller tensors called "Schmidt tensors". 
The SVD is applied to said Schmidt tensors resulting in the Schmidt values, the truncation follows by only considering the "Schmidt tensors" correspondent to the highest \(\chi\) Schmidt values [179]. This allows for obtaining approximate values of the probabilities of the error classes, which can then be used for decoding at a higher precision than any of the aforementioned algorithms, if the truncation of the bond dimension is high enough. #### 5.4.3 TN decoding for the rotated planar code For the rotated planar code, the tensor network representation of the system is not as straight forward as in the case of the planar code, since the code can no longer be mapped directly to a PEPS tensor network. An elaborate and efficient way of adapting the TN decoder in [121] for rotated planar codes, while greatly improving their performance was elaborated in [170] as illustrated in FIG. 19. As depicted in the figure, the rotated planar code is first adapted to the tensor network model proposed in [121], which does not correspond to a PEPS state. In order to map such TN to the desired representation, the stabilizer nodes are split in 4 smaller tensor nodes which preserve the delta tensor structure themselves [180], i.e. the values are given by the Kronecker delta. Afterwards, the data qubit nodes altogether with their adjacent stabilizer nodes are contracted producing the desired PEPS. Note how the resulting PEPS is no longer isotropic, i.e. if the PEPS is rotated \(45^{0}\) counter clock-wise, most of the vertical bonds will be of dimension 2 while the horizontal ones will be of dimension 1. Given this new structure, the rotated planar code can be decoded through the TN decoder. In FIG. 20, we can see an example of the decoding process of an arbitrary syndrome through the TN decoder for a \(5\times 5\) rotated planar code. In the top image, the error \(E\) is presented altogether with the measured syndrome and in the second image an error \(E_{rec}\) which returns the code to the codespace is presented. This error is found by generating chains from every non-trivial syndrome element to its nearest virtual check. Afterwards, the probability cosets of the four logical operators are computed by contracting the associated tensor network. The combination of each of the logical operators with \(E_{rec}\) is also presented in the third and fourth row. Finally, the coset with the highest probability is \(Y_{L}\) for this example and, thus, the recovered error by the TN decoder is \(E_{rec}Y_{L}\). In the figure at the bottom, the resulting state of the code after recovery defined by \(E_{rec}Y_{L}E\) can be seen, and, as shown by the highlighted stabilizer operators it belongs to the stabilizer set, implying that the correction has been successful. #### 5.4.4 Performance and threshold The tensor network decoder has the highest code threshold out of the ones considered in this work equal to \(0.185\) under depolarizing noise with a bond dimension \(\chi=16\), as can be seen in FIG. 21. It is worth mentioning that this exceptional performance is highly related to the value of \(\chi\), the maximum value of which scales in an exponential manner as larger configurations are considered. Recent work has been done studying \(\chi\) values for achieving convergence of the tensor network decoding method under several noise model and code tailoring conditions [42, 170]. Figure 18: On the left, graphical representation of the tensor nodes which constitute the planar code tensor network. 
On the right, element values of the tensors, the superindex indicates the labelling of the tensor node. Figure 19: In a), a rotated planar code of distance-5. In b) the tensor network one could extract following [121]. In c), the tensor nodes correspondent to stabilizer generators are split in 4 tensor nodes connected with themselves. In d), a new tensor network arrangement is proposed for the rotated planar code. Notice the tensor network can be separated in sets of tensor nodes which are encircled in dashed violet lines. In e), the aforementioned encircled tensors are contracted and a new tensor network is obtained. Figure 20: Graphical representation of a TN decoding process for a specific syndrome in a 5x5 surface code. Moreover, as can be seen in Table 4, tensor network decoding also suffers under biased noise in a significant manner. There have been recent studies which have investigated the effect of biased noise in surface codes and ways to enhance its performance reaching significant results which allowed for lower values of \(\chi\) in order to obtain a convergence within the tensor network problem [170, 41]. At the current time, the exceptional performance of the tensor network decoding scheme motivates researchers to seek for methods to accelerate its poor run time. #### 5.4.5 Complexity The complexity of the tensor network decoding method is mainly given by two important tasks: the contraction of the MPS with its nearest MPO and the truncation of the resulting state. The contraction of the MPS with the MPO has a complexity of \(\mathcal{O}(d\chi^{2})\) while the truncation technique has a complexity \(\mathcal{O}(d\chi^{3})\)[181, 121]. Since for every matrix product operation there is a need for \(d\) truncation techniques, one for each tensor node within the MPS, the resulting truncation complexity is \(\mathcal{O}(d^{2}\chi^{3})=\mathcal{O}(n\chi^{3})\). Thus, the truncation complexity ends up defining the overall complexity of the decoding method. The resulting complexity is a high price to pay for a DQMLD method, as larger code sizes are considered, the required \(\chi\) should be also larger for taking into account the most probable configurations within the stabilizing cosets. The cubic growth with the bond dimension truncation harshly compromises the possible usability of the TN decoder in real-time decoding. Moreover, the Google Quantum AI team observed that this decoder is many orders of magnitude slower than the MWPM implementations used for their experiments [64]. #### 5.4.6 Measurement errors Considering measurement errors in tensor network decoding is a cumbersome task, since it requires a space-time syndrome extraction which yields additional tensor nodes to take into account. The Google team in [64] studied the tensor network decoding process in a distance-5 code for 25 syndrome extraction rounds by considering a Tanner graph between the syndrome elements and the circuit-level error mechanisms in a similar manner than the method used in [87]. Afterwards, this Tanner graph can be rewritten as a tensor network that evaluates the probability of a logical coset outcome and, it can be planarized and contracted considering a specific truncation [182]. As with the other decoding methods, considering measurement errors significantly increases the complexity of the already complex tensor network decoding method. Nevertheless, the high cost of performing TN decoding allowed for the best performance in the experimental surface code from Google [64]. 
\begin{table} \begin{tabular}{|c|c|} \hline \(\eta\) & \(p_{th}\) \\ \hline \hline \(1/2\) & \(0.185\) \\ \hline \(1\) & \(0.175\) \\ \hline \(10\) & \(0.111\) \\ \hline \(100\) & \(0.097\) \\ \hline \(1000\) & \(0.096\) \\ \hline \end{tabular} \end{table} Table 4: Probability threshold values for different biases of the rotated planar code under Pauli noise decoded with the TN decoder with bond dimension \(\chi=16\). Figure 21: Logical error probability with dependence on the physical error probability of the rotated planar code under depolarizing noise decoded with the TN decoder with bond dimension \(\chi=16\).

### Other decoders

Many other decoders have been proposed throughout the literature for decoding topological codes other than the mainstream MWPM, UF, BPOSD and TN decoders. Many of them have been proposed to decode specific families of topological codes other than the rotated planar code, such as the toric or the color code. In this section we briefly introduce those decoders and discuss their capabilities.

#### 5.5.1 Cellular-automaton decoder

Cellular-automaton decoders were first proposed for decoding the toric code [124]. This class of decoders was constructed with the aim of reducing the decoding problem to simple local update rules, in the sense that they only depend on the nearest neighbours. The authors justify that a cellular automaton fits such requirements and, thus, propose the use of an auxiliary cellular automaton coupled to the toric code such that it communicates long-range information between the non-trivial syndromes that have been measured, with the aim that local decisions can be made for correcting the associated error. More specifically, the auxiliary cellular automaton stores real numbers \(\phi_{t}(x)\) at each time step, \(t\), of the decoder for each of its cells. In this sense, the system depends on the error configuration, the activated syndrome elements at a time step and the value of \(\phi\), which includes the values for every cell \(x\). In order to update the system state from a time \(t\) to the next time step \(t+1\), \(c\) elementary updates of \(\phi\) are done based only on the information of the nearest neighbors, followed by another step where the error and the syndrome on the system at such time step are updated too. For doing so, a nearest syndrome is triggered based on an update rule. By repeating this \(\tau\) times, the closest potential partners are found when two syndromes cancel each other, implying that a matching has been found and the corresponding recovery Pauli operation can be applied. The authors in [124] propose that \(\phi\) represents an attractive force between the measured syndromes, emulating an electrostatic or gravitational field, since the long-range attraction of such fields can be discretized into local update rules. Therefore, they emulate the long-range attraction of those fields so that the closest triggered syndromes are attracted to each other, and they use the local dynamics induced by those laws to update the values of \(\phi\) locally. The update rule proposed by the authors is based on selecting which of the neighbour syndromes is activated as a function of the field intensity provided by the values of \(\phi\). In this sense, they look for the adjacent cell with the largest field value and then move there with a probability \(1/2\). Regarding the function \(\phi\), a local discretization of Gauss' law was selected for emulating the long-range field attraction [124].
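To give a flavour of the local update rules just described, here is a toy sketch of an auxiliary field on a small torus whose defects (non-trivial syndromes) hop towards the neighbouring cell with the largest field value and annihilate when they meet. The concrete field update, the constants and the lattice size are illustrative assumptions and do not reproduce the actual rules of [124].

```python
import random

L = 16                                   # toy torus of L x L cells
defects = {(2, 3), (2, 10)}              # two non-trivial syndrome positions
phi = [[0.0] * L for _ in range(L)]

def neighbours(x, y):
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def update_field():
    """One elementary phi update: each cell averages its neighbours and defects
    act as sources (an illustrative discretisation of an attractive field)."""
    global phi
    new = [[0.0] * L for _ in range(L)]
    for x in range(L):
        for y in range(L):
            avg = sum(phi[nx][ny] for nx, ny in neighbours(x, y)) / 4.0
            new[x][y] = avg + (1.0 if (x, y) in defects else 0.0)
    phi = new

def move_defects():
    """Each defect hops to the adjacent cell with the largest field value with
    probability 1/2; two defects meeting annihilate (they are matched)."""
    for d in list(defects):
        if d not in defects:
            continue                      # already annihilated this round
        if random.random() < 0.5:
            continue
        target = max(neighbours(*d), key=lambda c: phi[c[0]][c[1]])
        defects.remove(d)
        if target in defects:
            defects.remove(target)        # matched pair: both disappear
        else:
            defects.add(target)

for step in range(200):                   # tau decoder steps, c field updates each
    for _ in range(5):
        update_field()
    move_defects()
    if not defects:
        print("all defects matched after", step + 1, "steps")
        break
```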
There have been other proposals for the update rule of the cellular automaton decoder such as the so-called Toom's rule [183, 184] or the sweep rule [185, 186]. Most of the cellular automaton decoders have been studied for toric codes with boundaries in \(d\) dimensions [185, 186, 124, 123, 184], but some lattices without boundaries have been examined as well [186]. Additionally, it was shown that the decoders for the toric code can be applied to decode color codes [187] and, thus, cellular automaton decoders can also be applied to this family of QECCs. Unfortunately, its application to a rotated planar code has not been investigated. Regarding the threshold achieved by this class of decoders, it is noteworthy that the noise model considered has been primarily an i.i.d. phase flip model [185, 186, 184, 183, 184]. In this sense, a threshold of 8.2% was claimed in [124] for a 2D toric code, while a MWPM decoder exhibits a 10.31% threshold for such code [188]. However, a threshold of 15.5% was claimed in [184] for the cubic toric code using the sweep rule. In terms of the complexity of this decoder, it can naturally be integrated in a parallel way by communicating the processing cores locally and, therefore, it is usually deemed a fast decoder [185, 124, 186]. In this sense, the 2D cellular automaton of [124] requires a number of updates in the order of \(\log^{5}(d)\), while the 3D implementation requires a number of updates in the order of \(\log^{3}(d)\). As the authors discuss, this comes from the fact that the cellular automata decoder accepts a parallel implementation by construction [124]. The sweep decoder implementation in [185] requires a runtime \(\mathcal{O}(d)\), so linear in \(d\). An important feature of cellular automaton decoders is that they are robust when measurement errors are considered [185, 186]. Specifically, this decoder is deemed to be a single-shot decoder, which implies that measurement errors can be dealt with in a single decoding round (recall Appendix A). This is especially interesting for a decoder since, as discussed before, the way in which decoders usually deal with noisy syndromes is by performing \(d\) measurement rounds and then solving the space-time graph that is obtained [186].

#### 5.5.2 Renormalization group decoder

The renormalization group (RG) decoder was proposed as a fast and efficient decoder for the Kitaev toric code [123]. The main idea behind this decoder is that the toric code can be seen as a concatenated code, so that the decoding problem can be resolved from the smaller codes that form it. The toric code is not exactly a concatenated code, but the authors propose that it can actually be considered to be one when allowing the blocks of the concatenation to share qubits, so they are actually overlapped [123]. Following this rationale, the RG decoder computes the logical error probabilities of the smallest codes in the concatenation, which are then used as the noise model for the second layer of codes in the concatenation. The process is repeated until reaching the top layer of the code, which actually represents the complete toric code and, thus, the estimation of the error is obtained for correction. The authors discuss that since the small codes of the concatenation are small open-boundary topological codes, those can be decoded by brute force [123].
Figure 22: Visual representation of the 2D cellular automaton decoder for the toric code with periodic boundaries. The blue dots, represented by lattice \(V\), are the data qubits while the green boxes, represented by lattice \(\Lambda\), depict the cellular automaton decoder. Figure taken from [124].

As stated, the renormalization decoder was originally proposed for the Kitaev toric code, so with periodic boundaries, but modifications for decoding color codes [189], the 3D cubic code [190, 191, 192], qudit topological codes [191, 193, 194] and the four-dimensional toric code (both for periodic and open boundary conditions) [195] have also been proposed. Interestingly, the RG decoder was the first known algorithm to decode the color code. The RG decoder has not been investigated for the rotated planar code discussed throughout this paper, but it could be valid, in principle, by using the ideas for decoding open-boundary four-dimensional toric codes [195], since the rotated planar code is essentially a rotated toric code with boundaries. Regarding the code threshold achieved by RG decoders, the standard version showed a threshold of 7.8% for the depolarizing noise model, which falls short when compared to the 15.5% threshold obtained by decoding with the MWPM decoder. However, the application of a belief propagation decoder before the RG decoder was proposed in [123]. This was done so that the smallest codes of the concatenation can start the decoding with the probability distribution given by BP, implying that the RG stage is more accurate globally. By doing so, the BP+RG decoder achieved a 16.5% threshold, which exceeds that of the MWPM decoder by itself [123]. This, however, was comparing the performance with MWPM alone, i.e., belief-matching would also increase the threshold [158, 87]. In addition, it does not exceed the threshold achieved by the tensor network decoder, which stands between 17% and 18.5% [121]. Similar to the cellular automata decoder, the RG decoder admits a parallelization of the decoding problem since each of the codes forming a layer of the concatenation can be decoded at the same time [123]. This implies that the complexity of the algorithm is a function of the number of concatenation layers, which scales logarithmically with the distance and, thus, with the number of qubits of the code. Therefore, the decoding complexity of the standard RG decoder is in the order of \(\mathcal{O}(d)\) if complete parallelization is done, while it is in the order of \(\mathcal{O}(d^{2}\log d)\) if the decoding is done serially [123]. Hence, this decoder is considered to be a fast decoder. The BP+RG approach for increasing the threshold will incur a complexity toll coming from having to execute BP too. It is noteworthy to say that whenever faulty syndrome measurements coming from circuit-level noise are considered, the RG decoder needs \(d\) rounds of measurements for dealing with them, implying that the complexity of the decoder will also increase [186].

#### 5.5.3 Neural-network decoders

Neural-network (NN) decoders are a result of applying machine learning (ML) techniques for decoding quantum error correction codes [125].
In order to do so, the authors of the NN decoder proposed reducing the decoding problem to the well-studied ML problem of classification, which generally consist on optimizing the assignment of known labels (generally low-dimensional) to known inputs (generally high-dimensional) with the scope of afterwards using the optimized assignment to label inputs that are unknown. A toy example of a classification problem in ML is to label photos of cats and dogs, i.e. to say which photos depict cats and which dogs [196]. The stage in which the classifier is optimized with known data is named training. Hence, NN decoders need a training stage before being able to decode. In the context of the analogy with machine learning, the authors propose to decompose an error into three multi-qubit Pauli operators as \(E=STL\), where \(S\) is an element of the stabilizer, \(T\) is any fixed Pauli that produces the error syndrome (usually referred as pure error) and \(L\) is a logical Pauli operator [125]. Note that this decomposition is standard in the field of quantum error correction [54]. Due to the degenerate nature of QECCs, any recovery operator \(E^{\prime}=S^{\prime}TL\) will successfully correct the corrupted codeword implying that the stabilizer element of the recovery operator can be assigned arbitrarily with no impact in the logical error rate of the correcting method. In addition, the authors discuss that the pure error can be produced using a parallel table look-up since each element of the error syndrome depends on a unique pure error independently of the others [125]. The algorithms performing such two assignments is named as the simple decoder. Hence, the NN decoder is based on the fact that since the rotated planar code encodes one logical qubit, the logical operators associated are \(\hat{1},\hat{X},\hat{Y},\hat{Z}\) (the hat refers to the fact that those operators are defined over the logical qubit) and, therefore, decoding can be done with as a classification problem with those four labels. The way in which the classification problem was dealt with in [125] was by using feed-forward neural networks. The neural network in question is defined by a graph consisted of many artificial neuron layers which are connected among them. The neurons are defined by a so called-activation function which depends on an array of weights, \(\bar{w}\), of length \(m\) and Figure 23: Example of a code block decomposition used for RG decoding. Each of the edges represent a qubit (12 total). In this example, a subcode of two qubits is used for RG. The dashed lines represent qubits shared by two blocks due to the fact that the toric code is not exactly a concatenated code. Figure taken from [123]. a bias \(b\). Since the NN is of the feed forward type, the neurons calculate the nonlinear activation function (\(\bar{x}\) is the input and \(*\) refers to the inner product), \(\bar{y}=(1+\exp(-(\bar{w}*\bar{x}+b)))^{-1}\), and pass it to the subsequent layer until the output layer is reached, where the result of the activation function is rounded to \(\{0,1\}\) so that the output of the algorithm is obtained [125]. The layers between the input and output layers are named as hidden layers in the terminology of ML. As explained before, a neural network requires a training stage before it can operate over a set of unknown data. Therefore, for the NN decoder, a training stage where the weights and biases of the neurons are trained is needed. 
The authors of [125] select the average cross-entropy as the cost function to optimize the activation function and produce a training set directly sampling errors at an error probability where the MWPM decoder has a 25% logical error rate. The cost-function was minimized by stochastic gradient descent, while the number of elements of the training set was limited to at most \(10^{6}\) samples. The NN decoder in [125] proved to be an efficient method for decoding small distance, \(d\in\{3,5,7\}\), rotated planar codes achieving similar performance as the MWPM decoder while maintaining a similar complexity. As a result of the interesting performance of this decoder, several studies discussing ML decoders for the rotated planar code have been proposed such as the deep neural network based decoder in [197, 198] or the reinforcement learning based decoder in [199]. The deep neural network decoder proved a better threshold than MWPM at the cost of a higher complexity, while the reinforcement learning decoder proved an almost linear complexity at the cost of a worse code threshold [198]. Combinations of NN decoders with RG decoders have also been proposed [200]. Recently, NN decoding methods for surface codes that do not rely on an specific error model, i.e. that are purely based on hardware obtained data, have been proposed [201, 202]. Machine learning based decoding methods have also been proposed for other topological codes such as the toric code [203, 204, 205, 206, 207, 208, 209, 210, 211], the color code [212] or the semionic code [213]. Each of the proposed ML decoders for each of the codes provide either a faster decoding or a better threshold than other methods, showing the natural trade-off between accuracy and speed. Circuit level noise has also been considered for neural network decoders, i.e. faulty syndrome data arising from noisy gates and measurements [125, 197, 212, 214]. In a similar fashion as other decoders, dealing with noisy measurements requires approximately \(d\) syndrome measurement rounds, implying that the runtime of the decoders increases. Importantly, NN decoder should be trained using this error model so that they can cope with it, i.e. training is specific to the noise model in consideration [125]. The performance of the NN decoder of [125] achieves the same performance as MWPM for this scenario. To sum up, NN decoders are promising in the quest of obtaining accurate and fast decoders for decoding not only rotated planar codes but also other families of error correcting codes. Some of the problematic behind this family of decoders is the training cost, which may scale exponentially with the code distance [186]. This is one of the main reasons behind the fact that most of the NN-based decoders have only been tested for small distances \(d<10\)[211]. Interestingly, promising results have been obtained for decoding the toric code with an NN+UF decoder that achieves high threshold with almost linear complexity, while having been tested up to \(d=255\)[211]. However, the need of a training stage before operation implies that NN and ML decoders might not be the best choice when the noise fluctuates over time, as it occurs for superconducting qubit platforms [35, 133]. Additionally, it is important to state that mapping the decoding problem to a ML classification problem works well for the rotated planar code or other topological codes since they encode a low number of logical qubits, implying that the possible logical errors is not too high. 
However, since the number of logical errors scales as \(2^{2k}\) with the number of logical qubits [128], \(k\), these methods may not be implementable for other families of codes with higher coding rates such as QLDPC codes. Figure 24: Graphical depiction of the feed-forward neural network used for the NN decoder. The example consists of an input layer (i), a hidden layer (h) and an output layer (o). The inputs represent the syndrome data, while the outputs refer to the estimated logical error. Each of the neurons in every layer compute the activation function as a function of their received information and pass the information to the next layer. Figure taken from [125]. #### 5.5.4 MaxSAT decoder The MaxSAT decoder has been recently proposed as an efficient algorithm to decode color codes [126]. This decoder is based on the analogy between the decoding problem of the color code and the LightsOut puzzle when the error model in consideration is the bit-flip noise. The LightsOut puzzle refers to a problem where a lattice whose faces are associated with switches and lights can be either on or off. In such lattice, toggling a switch on the lattice implies that all its neighbouring lights change their previous state, i.e. if they were on they are turned of and vice versa. The LightsOut problem then tries to find out a sequence of switch actions so that all lights are turned off. The puzzle has two important properties: toggling a switch twice is the same as not doing anything and the state of a light only depends on how often its neighbour switch have been toggled (independence on the order). The authors argue that this problem is similar to the decoding process of a color code by doing the following analogy: * Data qubits \(\leftrightarrow\) switches. * Checks \(\leftrightarrow\) lights. * Syndrome \(\leftrightarrow\) initial light configuration. * Decoding estimate \(\leftrightarrow\) switch set that is a solution. * Minimum-weight decoding estimate \(\leftrightarrow\) solution with minimum switch operations. Therefore, the authors exploit such an analogy to propose a decoding methods based on solving the LightsOut puzzle. The main idea is to find the minimal solution set for the puzzle, which is equivalent to a decoding estimate of minimum weight matching the syndrome measurement. In order to obtain the minimal solution of the LightsOut puzzle, the authors formulate such problem as a maximum satisfiability (MaxSAT) problem, which generally refers to the problem of determining the maximum number of clauses of a Boolean function that can be made true by assigning true values to the variables [215]. Following this rationale, the authors formulate the decoding problem as the MaxSAT problem \[\begin{split}\forall f\in F:&\bigoplus_{v\in \mathcal{F}_{\text{switch}}(f)}\text{switch}_{v}=\mathcal{F}_{\text{init}}(f),\\ &\forall f\in F:\text{not(switch}_{f}),\end{split} \tag{21}\] where \(f\in F\) refer to the lights or faces, the switch\({}_{v}\) are Boolean variables representing if a switch is toggled, \(\mathcal{F}_{\text{switches}}(f)\) is a discrete function that takes a face \(f\) and returns the set of switches surrounding it; and \(\mathcal{F}_{\text{init}}\) is a Boolean function that describes the initial configuration of the system (associated with the measured syndrome). \(\bigoplus\) denotes the exclusive- or (XOR) Boolean operation. 
In equation (21), the constraints in the first line represent the satisfiability problem, and the second are the soft constraints so that such problem is maximum. Hence, color code decoding can be done by solving such MaxSAT problem [126]. The authors discuss many MaxSAT solvers for decoding the color code. The MaxSAT decoder for bit-flip error noise in color codes was proven to have a very high threshold of 10.1% [126]. This is near the optimal 10.9% threshold obtained by the MPS or TN decoder, and substantially exceed the \(\approx 9\%\) of the MWPM, 8% of the Uf and 7.8% of the RG decoders. This excellent threshold does not come free of charge since it implies a higher decoding runtime. The authors do not discuss the complexity of the MaxSAT decoder, but they do explain that the runtime is slower than the ones of MWPM, UF and RG decoders, while it is faster than the TN decoder. This is consistent with the typical performances versus complexity trade-off discussed throughout this tutorial. The toll in runtime comes from the fact that the MaxSAT problem is an NP-hard problem [126]. Importantly, the authors argue that the MaxSAT decoder is faster whenever the physical error rate is lower, which is a desired feature as QECCs are expected to be working in subthreshold noise levels. In conclusion, the MaxSAT decoder is an efficient decoding method for the color code that exhibits a very high code threshold for bit-flip noise with a more reasonable complexity when compared with the tensor network decoder. However, at the time of writing, the MaxSAT decode is yet a limited approach, mainly due to the fact that only bit-flip noise is considered. This is very important since the LightsOut analogy proposed by the authors in [126] depends on such error model assumption. Additionally, no faulty measurements have been considered and it is rather unclear if the decoder will be able to operate whenever circuit Figure 25: LightsOut and color code (\(d=11\)) decoding analogy. The marked faces represent the initial configuration (syndrome), while the green data qubits represent a possible solution of the problem which is not necessarily minimal. Figure taken from [126]. noise level is considered. Nevertheless, this decoding method has been very recently proposed and generalizations of it for noises with other structures (importantly the depolarizing channel) and other families of topological codes can be expected as future work. ### Software Packages In this section an overview regarding the existing software packages for surface code decoding simulation will be covered. Fortunately, the scientific community has provided a large number of open source repositories for performing simulations of several decoders of the surface code and other topological codes and, thus, we will cover several of them (the most popular and used ones) as a reference for people on the community of quantum error correction. We provide a graphical overview in FIG. 26. Whenever considering numerical simulations of a surface code, it is important to be able to generate the samples of the noise following the distribution of interest for a certain scenario so that the decoder operates over such noise model. In this sense, generating samples for data qubit noise is pretty straightforward by considering the specific distribution of the Pauli channel for each of the qubits (See section 4). Nevertheless, whenever circuit-level noise for faulty check measuments is considered, sampling the noise is not a trivial task [136]. 
Therefore, the _Stim_ repository by Craig Gidney was developed for sampling circuit-level noise for the sake of obtaining samples of noise that accurately resemble the errors that a surface code experiences over such scenario [136]. Stim samples circuit noise to all qubits within the code, including measurement qubits, at a fast speed and is the simulator of choice when considering stabilizing circuit noise. This repository can analyze a \(d=100\) surface code circuits in approximately 15 seconds for sampling circuit shots at a rate of 1000 samples per second. Regarding software implementations of the minimum weight perfect matching decoder there are several open source repositories which successfully simulate realizations of the decoder at a sufficient speed. The most popular and quickest one considering serial computation is Oscar Higgot's the _Pymatching_ repository [57, 120], the latest version of which, implements the Sparse Blossom implementation of the MWPM decoder. The _Fusion Blossom_ algorithm [119] is also available at the GitHub repository by Yue Wu [118] which, as explained before, albeit having a worse serial performance than Pymatching, allows parallel decoding improving the complexity at each additional node and eventually surpassing the performance of Pymatching. Additionally, the _QECSIM_ repository by David Tuckett [180] also stands as a fair simulator for the minimum weight perfect matching which also allows other decoding methods. Another recent software package that incorporates Pymatching and Fusion Blossom (by calling those repositories) was posted in the _Plaquette_ repository by the QC Design team [216]. Interestingly, such repository incorporates Stim too, allowing the combination of all those repositories in a single one. An implementation of the UF decoder can be found in the _Qsurface_[217]. The package allows the simulation of the standard union-find decoder [109] as well as a modification of it by the authors of Qsurface which they name as Union-find Node-Suspension decoder. The authors claim that an improved threshold can be obtained by using such modified decoder while maintaining complexity. Moreover, such package includes an implementation of the MWPM algorithm. The previously discussed Plaquette package [216] also includes an implementation of the UF decoder by Delfosse and Nickerson. The inclusion of this decoder is specially interesting for the Plaquette package since the circuit-level noise obtained from Stim can be directly used for testing the UF decoder. The BPOSD algorithm was originally thought as a general decoder for QLDPC codes [46]. However, a surface code can be seen as a sparse code as the distance of the code increases (due to the fact that the weight of the stabilizer is constant independently of the size). In this sense, the _BP+OSD: A decoder for quantum LDPC codes_ package by Joschka Roffe [218] implements the BPOSD decoder for general QLDPC codes. For simulating surface codes, the parity check matrix [30] of such code should be used whenever defining the QLDPC code to be decoded by the BPOSD implementation. The aforementioned QECSIM repository [180] does also include an implementation of the tensor network decoder. The package include the possibility of tuning the truncation parameter that is used for the TN decoder. Another implementation of the tensor network decoder can be found in the _SweepContractor_ repository by Cristopher Chubb [219] which is based in the sweep line algorithm of [171]. 
In the case of cellular automaton decoders, the repository _Sweep-Decoder-Boundaries_ by Michael Vasmer [220] implements the version of the decoder using the sweep rule as presented in [185, 186]. As discussed in the papers, the implementation of the decoder admits simulations using circuit-level noise. Additionally, the repository _LocalToricDecoder_ by Kasper Duivenvoorden implements the version of the cellular automaton using Toom's rule as proposed in [183, 184]. We have been unable to find software packages implementing the other cellular automaton decoders discussed. Concerning the RG decoder, the _QTop_ repository by Jacob Marks [221] presents an implementation of it. However, it is noteworthy to say that the author stated in the repository that such software package is not being actively maintained and, thus, it may be unreliable. Anyway, the code is available there for Figure 26: Sketch representing the available software packages for each mentioned decoder. Each edge connects each decoder (green circle), with the software packages which support it (pink circles). The decoders mentioned in the figure are the Cellular Automata decoder (CA), the MaxSAT decoder, the Union Find decoder (UF), the Minimum Weight Perfect Matching decoder (MWPM), the Tensor Network decoder (TN) the Neural Network decoder (NN), the Belief Propagation + Order Statistics Decoder (BPOSD) and the Renormalized Group decoder (RG). anyone that may be interested in testing it or using it to code its own version of RG decoders. With respect to neural network decoders, the repository _Neural Network Decoders for Quantum Error Correcting Codes_ by Stefan Krastanov [222] implements the neural network decoder proposed in [204] for the toric code. Moreover, Pooya Ronagh developed a the repository _Deep Neural Decoders for Fault-Tolerant Quantum Error Correction_[223] for the deep neural network decoder proposed in [197] for decoding rotated planar codes. Interestingly, the authors of both repositories programmed the decoders so that other neural networks than the ones in [204, 197] can be incorporated. Hence, those repositories provide freedom to integrate other neural network decoders of the literature. Finally, the repository _neural_network_decoder_ by Paul Baireuther [224]contains the implementation of the neural network decoder for the color code in [212]. To finish with this section, there is a repository with the implementation of the MaxSAT decoder presented in [126]. The repository, developed at the Technical University of Munich, is named _QECC: An MQT tool for Quantum Error Correcting Codes written in C++_[225] and is intended to be a more general tool for QEC than just the MaxSAT decoder for color codes. At the time of writing, the authors of the repository state that the project is still in early development and, thus, will contain other QEC software tools. The main ideas used for the software package were presented in [226]. Either way, at the moment of writing, the MaxSAT decoder and a decoder for QLDPCs based on the UF decoder [140] are present in the software package. Note that since surface codes can be seen as QLDPC codes, the UF decoder implemented in [225] can in principle be used to decode such families of codes. ## 6 Discussion The construction of a fault-tolerant quantum computer remains as the Holy Grail of quantum computing so that the groundbreaking theoretical potential of such technology can be achieved. The key element to solve this quandary is quantum error correction. 
Quantum information has proven to be so frail that the scientific community has accepted that quantum computing will be unrealizable without QEC. One of the principal elements of a QECC is the decoder, or the classical algorithm that is used to estimate which error has corrupted the quantum information. Due to the urgent necessity of obtaining fast and accurate enough decoders that can operate in real-time for experimentally implemented QECCs, the number of papers related to this is increasing in a very fast pace. In this sense, the QEC community is strongly pushing the state-of-the-art of the topic so that fault-tolerance can be a reality as fast as possible. As presented in the main text, the main decoders for the rotated planar code family are the MWPM, UF, BPOSD and TN decoders. FIG. 27 is an illustrative representation of the accuracy versus runtime trade-off discussed through the text for those decoders. At the time of writing, it seems that the MWPM is an strong candidate due to its high threshold and the fact that it always returns the \(X\) and \(Z\)-errors of minimum weight matching the observed syndrome28. In addition, the MWPM decoder presents a considerable worst-case complexity when compared to other decoding methods, but recent implementations of the algorithm have proven to present an almost linear expected complexity [57, 119], making this method even more powerful yet. On the other hand, UF also stands as a fair contestant due to its decoding speed which comes from a linear worst-case complexity. Nevertheless, as discussed before, the MWPM implementations with an average complexity lower than \(\mathcal{O}(n^{2})\)[57], even reaching linearity given that enough nodes for parallelization are available [119], make the candidacy of the UF decoder to significantly lose strength. Anyway, it is noteworthy to state that many of those fast MWPM implementations were constructed based on many ideas introduced by UF decoding. The BPOSD-0 decoder lays in a middle ground between the MWPM and UF decoders whenever the rotated planar code is considered. As discussed before, increasing the search depth of the OSD algorithm has a considerable impact on the worst-case complexity of the decoder and, thus, the benefits of this decoder will probably be lost due to the fact that the threshold of the code would not increase as much. In this sense, it seems that the MWPM decoder is a better candidate for the specific instance of surface codes. Nonetheless, the BPOSD algorithm was proposed as a general decoder for QLDPC codes [46] and, therefore, it is the best candidate for decoding such family of codes which is being studied thoroughly at the time of writing. As seen in the earlier section, its implementation allows the circumnavigation of split beliefs in conventional belief propagation decoding, which have posed a great hurdle to conventional BP decoding for QLDPC codes [150, 46]. The TN decoder achieves the highest code threshold for the rotated planar code, albeit at the expense of the largest worst-case computational complexity. Both the benefit and drawback of this algorithm is a result of the fact that it is an effort to solve the maximum-likelihood (ML) problem explicitly, and this is why it is usually referred as the ML decoder29. 
In this sense, the TN decoder is an almost brute force approach where such brute force search is limited by the bond dimension, \(\chi\), which is the approximation parameter that is needed for this method to be actually implementable for codes of considerable size. Hence, the bond dimension is the parameter that will be in charge of the accuracy and speed of the algorithm. Due to the enormous growth in bond dimension in the surface code tensor network contraction, the tensor network is not really considered for real-time decoding but is studied cautiously as an effective degenerate quantum maximum likelihood decoder. Also this decoder presents the possibility of the experimental implementation of codes beyond break-even with poorer hardware capabilities due to its high threshold. The conclusion to this is that the choice of the decoder for a future real-time implementation of surface codes remains as an open question. This quandary is even more complex taking into account the existence of other decoding approaches such as the CA, RG, NN or MaxSAT decoders discussed in this tutorial. Each of those may provide interesting features that may be beneficial for specific families of codes or even adaptations of them may be interesting for the context of rotated planar codes. For example, machine learning base decoders are gaining popularity on the recent times, probably due to the huge interest in machine learning in general. In this line, recent breakthroughs have been achieved in the experimental implementations of surface codes over superconducting qubit platforms [63, 64]. Both of the experimental implementations were based on the rotated planar code discussed throughout this tutorial. In [63], Andreas Wallraff's group at ETH Zurich led an experiment based on the rotated planar code of distance three by means of a processor consisted of 17 transmon qubits. Krinner et al. used the obtained data after multiple measurement rounds in order to run the MWPM decoder for estimating the errors corrupting the quantum information. In [64], the Google Quantum AI realized an experiment based on the rotated planar codes of distances 3 and 5 by means of their expanded Sycamore device consisted of 72 transmon qubits. The authors actually run experiments for the distance-5 rotated planar codes since the actual data for the distance-3 code can be obtained from such code. Acharya et al. used the obtained data to decode the errors by means of the TN and belief-matching decoders30. In this way, the authors proved that the logical error rate obtained by the rotated planar code improved as the distance of the code increased for both decoders. This result is really important since the code was operating in a sub-threshold noise level and it represents an experimental proof that the performance of the codes can be increased by increasing the distance (recall the threshold theorem). Importantly, both of the experiments performed the execution of the decoding algorithm by dealing with the measured data in a post-processing stage, i.e. no real-time decoding was actually performed. However, both of these experiments represent the state-of-the-art of experimental implementations of surface codes and they are an effort to progress towards the ultimate goal of fault-tolerant quantum computes. Footnote 30: Recall that the belief-matching decoder combines the output of a BP decoder in order to reweight the edges of the graph in the MWPM problem [87]. 
Following the present discussion, and as it has been seen throughout the text, the decoding stage stands as a pivotal operation within the successful function Figure 27: Graphical illustration showcasing the complexity and probability threshold of the decoders studied in this work. This comparative figure is inspired by a figure in [126]. ing of, not only the surface code, but any correcting code in general. One should be aware of the fact that the named code threshold depends on three aspects: the code family in consideration, the decoding method for the code and the noise model in consideration [90, 91]. In this sense, it is straightforward to see that designing decoding algorithms is crucial in terms of the ability of a family of codes to correct errors. The intrinsic locality and degeneracy of the surface code allows for several decoding processes to present themselves as valid candidates for future real device implementation. Nevertheless, in addition to the code threshold for a decoding method, two other critical aspects stand as limiting factors for the implementation of surface codes decoding in a real-time fashion when implemented on experimental hardware: run time and circuit-level noise. Decoding runtime refers to the actual time that a decoder requires to output an estimate of the error that has corrupted the logical qubit encoded by a code. Hence, this inherently imposes a delay in the error correcting system that is hazardous for its actual operation. This comes from the fact that, once the syndrome measurement has been done, the data qubits will still continue to suffer from errors as they will not stop to decohere. Thus, if the delay between measuring the checks and applying the recovery operation is too high, then the algorithm will fail, with high probability, the actual error configuration when recovery is executed. This is especially important by taking into account that the decoherence times of state-of-the-art qubits are short31[30]. As seen in through this tutorial, it is pretty fair to state that maintaining code threshold while reducing the actual complexity of a decoder is a hard problem. This is a result of the trade-off of being fast against considering more error configurations, which actually relates to code performance. This logic comes from the fact that the traditional comparison of accurate and fast is done in terms of code threshold and worst case complexity. However, it is a recent trend to benchmark runtime by expected complexity, i.e. an average of the number of operation required to decode different error patterns. The rationale behind this is that worst-case events are generally exponentially rare, implying that their impact will not be as high in code performance as other more frequent cases. This effect should be more important whenever hardware noise improves lower to sub-threshold physical error rates and, thus, expected runtime might be a better benchmark for decoding algorithms. In this sense, designing faster decoding algorithms that are able to maintain the capabilities of the code in terms of decoding accuracy is a critical issue for the QEC field. It is important to state that significant advances in this direction are being obtained with proposals such as Sparse Blossom [57] or Fussion Blossom [119]. Footnote 31: Note that this is not true for all qubit technologies such as ion traps, but those present additional problems such as the fact that their operation times, quantum gates, are very slow. 
On the other side, considering circuit-level noise is fundamental so that surface codes are successful when implemented in real hardware. As explained before, the consequence of such noise associated to imperfect gates, SPAM and error propagation results in measurement errors that make the decoder not to be able to correctly estimate the actual error that has corrupted the data qubits of code (refer to Section 4). In this sense, considering circuit-level noise is sometimes referred as fault-tolerant error models as it is considering all the errors of the elements of the system [125]. Circuit-level noise is usually alleviated by means of multiple measurement rounds, in the order of \(\mathcal{O}(d)\), so that the additional errors that led to measurement errors can be taken into account for the decoding problem (refer to Appendix A for a more detailed description). Nevertheless, such multi-round measurement results in a toll for complexity and, hence, runtime. This directly hinders the previously discussed necessity of being fast in decoding so that error correction is successful. Additionally, even when the multiple measurement protocol is applied, the code threshold significantly decreases (usually an order of magnitude) in comparison to thresholds that only consider data qubit noise [87, 32]. The direct result of this is that hardware requirements for sub-threshold code implementation become even more stringent. Therefore, circuit-level noise adds another layer of complexity to the already difficult decoding implementation in real hardware. In this sense, improving gate fidelities and decoherence parameters, as well a other noise sources, of state-of-the-art quantum processors will be a very important for obtaining fault-tolerant quantum computers. Moreover, tailoring both codes and decoders to the specific structure of the noise that the qubits experience is being investigated so that the actual correcting performance for such noise. A widespread example of this is the bias towards \(Z\)-errors that many state of the art qubit technologies such as silicon spin qubits or ion traps exhibit [227, 30]. As described in section 4, this comes from the fact that the dephasing time, \(T_{2}\), of such qubits is much smaller than the energy relaxation time, \(T_{1}\). As previously discussed, running the aforementioned decoders over biased noise models usually contributes to a worse threshold than for the depolarizing channel. This, however, occurs due to the fact that the code/decoder is not tailored to work over noise exhibiting such tendency. Specifically, since \(X\) and \(Z\)-errors are decoded independently, then one of the decoding problems results to be more dense if the noise is biased, i.e. it will experience errors with higher weight more probably, and, thus, the actual threshold will be lower since such decoding sub-problem will fail more often. In this sense, the community has actually studied how to deal with some of those noise structures so that performance is maintained or even enhanced. Some examples include: * **The rectangular surface code:** using a surface code with rectangular shape makes the minimal weight logical error \(Z_{L}\) to require more physical \(Z\) operators, increasing the \(Z\)-distance \(d_{Z}\) and, consequently, increasing the number of \(Z\)-errors it can correct: \((d_{Z}-1)/2\)[131]. On the negative side, this is at the expense of increasing the number of necessary data qubits to \(d_{X}d_{Z}\). In FIG. 
28, a rectangular code of distances \(d_{X}=5,d_{Z}=7\) is shown, notice how the logical operator \(Z_{L}\) requires additional Pauli operators than for the \(X_{L}\) operator. This code construction may use the reviewed decoders in order to estimate the channel errors. * **Clifford deformation:** in Clifford deformed surface codes, the check qubits are conjugated respect to a unitary operator from the Clifford group [228] so that other valid stabilizers are obtained [106]. By means of this stabilizer modifications, the new stabilizer codes can be constructed so that they are more susceptible to \(Z\)-noise. For example, the XY-code is a surface code which, instead of having \(X\)- and \(Z\)-checks, has \(Y\)- and \(X\)-checks, both of which are susceptible to \(Z\)-noise. A graphical depiction of the XY-code is given in the top figure of FIG. 29. As a result of such susceptibility to \(Z\)-errors, the associated decoder can make use of the correlation between \(X\) and \(Y\)-errors, i.e. \(XY=Z\), in order to be more accurate in detecting \(Z\)-noise [170, 88, 87]. Another popular Clifford deformed surface code is the XZZX surface code [41, 229]. For this code, all check qubits act equally on their surrounding data qubits operating two \(X\) and two \(Z\) operators as shown in the bottom figure of FIG. 29. One can see that the commutation between the checks is maintained, since adjacent checks either anti-commute twice or directly commute. An interesting characteristic of the XZZX code is that there is only one set of \(Z\) physical operators which produce a logical operator. As is shown in FIG. 29, all the products of this logical operator with the stabilizer set produce physical operators which are composed of \(X\), \(Y\) and \(Z\)-operators. This feature complicates the probability of a logical phase-flip, \(Z_{L}\), error under a highly \(Z\)-biased noise model. Moreover, for the case in which the \(Z\)-bias is infinite, e.g. a pure dephasing channel, the XZZX surface code operates as a series of disjoint repetition codes that can be decoded independently. The data qubits of such repetition codes are the top-left to bottom-right diagonals of the overall XZZX code, as can be seen in the bottom of FIG. 29, where the diagonal consisting of two \(Z\) physical errors produces syndromes in their top-left and bottom-right sides. Given this condition, the threshold of the XZZX code reaches an outstanding \(50\%\)[41, 229], while reducing the amount of required qubits significantly. Therefore, it can be seen that a bias towards \(Z\)-errors is not only not problematic, but also beneficial if the code and decoder are tailored for such model. * \(X/Z\) **correlation:** another important approach to improve the performance of surface codes is to make decoders aware of the fact that \(Y\)-errors represent a correlation between the \(X\) and \(Z\) errors. One downside of the independent \(X\) and \(Z\) decoding of CSS codes such as the surface code is that it makes the decoding blind of the correlation among those errors, i.e. it is usually considered that \(p_{Y}=p_{X}p_{Z}\). However, as seen in Section 4, the Pauli noise models have \(p_{X}\approx p_{Y}\) (both in biased and depolarizing) and, thus, by not taking this into account the decoder underestimates the frequency in which \(Y\)-errors corrupt the data qubits of the code. 
A method in order to circumvent this problem (without explicitly changing the structure of the code) consists in first decoding for one of the errors and, afterwards, using the partial recovered error in order to decode the other error. E.g., given an error in a CSS surface code, first decode the \(X\)-error, and then consider that, for recovering for the \(Z\)-error, every qubit has an updated probability of Figure 28: Graphical representation of a \(d_{X}=5,d_{Z}=7\) rectangular rotated planar code. \(X_{L}\) and \(Z_{L}\) operators are also represented. undergoing a \(Z\)-error defined by: \[P(Z|X=1) =\frac{p_{Y}}{p_{X}+p_{Y}} \tag{22}\] \[P(Z|X=0) =p_{Z},\] where, \(P(Z|X=1)\) is the probability that a specific data qubit undergoes a phase-flip \(Z\) given that it has been considered to undergo a bit-flip \(X\), and \(P(Z|X=0)\) is the probability that a qubit which is considered to not have undergone a bit-flip has undergone a phase-flip \(Z\). This can be done once [134, 135, 87] or in a recursive manner [88, 230]. Moreover, it can be combined with the XY-code in order to enhance the susceptibility of the code towards \(Z\)-noise [88, 87]. Thus, it is important from the decoding (and code construction point of view) to consider the actual noise that the qubits of the surface code experience. As seen for the XZZX code, this tailoring does actually even significantly improve the code performance without needing to lose resources. Even if we have discussed noise bias here, there are many other important subtleties in the nature of the noise for each qubits technology that should be considered for integration in real hardware. For example, multi-qubit error correlation and time-varying noise are examples of this. The fact that the actual distribution of the errors in a real quantum processor is independent seems to be generally false. In this sense, studying the correlated nature of noise is an incipient sub-field in the topic and, thus, tailoring decoding to such correlated nature should improve the performance of the codes over such models [105]. Note that this quandary is not new for classical error correction [231], implying that many ideas of dealing with such noise can be extrapolated to the quantum domain. Modifications of decoders to take into account correlated noise have been already proposed for quantum turbo codes over channels with memory, resulting in a considerably improved performance of the code when compared to the decoder that is blind to the correlation [232]. Additionally, the noise experienced by superconducting qubits has been proven to be time-varying [133, 35], which implies that the performance of the codes experiences a degradation. In this sense, studying adaptive decoders that can estimate the noise level [233] at a certain time in order to follow such fluctuating nature of the noise should considerably improve the performance [234]. To conclude, quantum error correction is still in a primitive stage before the potential of quantum computing can be unleashed by fault-tolerant machines. In this sense, surface codes represent the most promising family of codes to be implemented in the early post-NISQ era, principally due to their locality feature (and, in the case of planar instances, two-dimensional qubit placing) and their high tolerance to Figure 29: On the top, a graphical representation of a rotated planar XY code with the operators of the stabilizer generators. 
On the bottom, the same representation but for the rotated planar XZZX code altogether with a \(Z_{L}\) operator quantum noise. As extensively reviewed in this article, decoders represent a central part of QEC methods as they are key elements in posing a threshold for a code. Moreover, we have discussed the importance of the runtime of this algorithms due to the accumulation of other errors if the estimation of the channel error results to be too slow. As a result of this, there exists an important trade-off between accuracy and speed of decoders which implies that a selection of a decoder for real-time decoding depends on many factors that go from the hardware noise level to the pace at which more errors accumulate. Hence, the selection of the best decoder for an experimental implementation of a surface code is still an open question that many research teams, both in academia and industry, are trying to resolve. Due to the extense zoo of possible qubit technologies being investigated nowadays, it is possible that many of the exisitng candidates, or new ones, are the best fit as a function of the specifics of each of the quantum computing platforms. There is much work left to do, and each of us on the field should contribute our share in this Herculean quest. We live in exciting times. ## Data availability The data that supports the findings of this study is available from the corresponding authors upon reasonable request. ## Competing Interests The authors declare no competing interests. ## Acknowledgements We want to acknowledge Nicolas Delfosse, Pavel Panteleev, Christopher Chubb, David Tuckett, Michael Newman, Manabu Hagiwara, Inigo Barasoain and Javier Oliva for fruitful discussions. This work was supported by the Spanish Ministry of Economy and Competitiveness through the ADELE (Grant No. PID2019-104958RB-C44) and MADDIE projects (Grant No. PID2022-137099NB-C44), by the Spanish Ministry of Science and Innovation through the project Few-qubit quantum hardware, algorithms and codes, on photonic and solid-state systems (PLEC2021-008251), by the Ministry of Economic Affairs and Digital Transformation of the Spanish Government through the QUANTUM ENIA project call - QUANTUM SPAIN project, and by the European Union through the Recovery, Transformation and Resilience Plan - NextGenerationEU within the framework of the Digital Spain 2025 Agenda.
2310.06942
Ab initio description of bcc iron with correlation matrix renormalization theory
We applied the ab initio spin-polarized Correlation Matrix Renormalization Theory (CMRT) to the ferromagnetic state of the bulk BCC iron. We showed that it was capable of reproducing the equilibrium physical properties and the pressure-volume curve in good comparison with experiments. We then focused on the analysis of its local electronic correlations. By exploiting different local fluctuation-related physical quantities as measures of electronic correlation within target orbits, we elucidated the different roles of $t_{2g}$ and $e_g$ states in both spin channels and presented compelling evidence to showcase this distinction in their electronic correlation.
Jun Liu, Yongxin Yao, Vladimir Antropov, Kai-Ming Ho, Cai-Zhuang Wang
2023-10-10T19:00:01Z
http://arxiv.org/abs/2310.06942v1
# _Ab initio_ description of bcc iron with correlation matrix renormalization theory ###### Abstract We applied the _ab initio_ spin-polarized Correlation Matrix Renormalization Theory (CMRT) to the ferromagnetic state of the bulk BCC iron. We showed that it was capable of reproducing the equilibrium physical properties and the pressure-volume curve in good comparison with experiments. We then focused on the analysis of its local electronic correlations. By exploiting different local fluctuation-related physical quantities as measures of electronic correlation within target orbits, we elucidated the different roles of \(t_{2g}\) and \(e_{g}\) states in both spin channels and presented compelling evidence to showcase this distinction in their electronic correlation. ## I Introduction Iron, a prototypical magnetic material, is integral to our daily lives. Experimental studies have ascertained that its low-temperature ground state, the \(\alpha\)-Fe phase, exhibits a bcc crystal structure with an equilibrium lattice volume of 11.7A\({}^{3}\) (equivalent to lattice constant \(a\)=2.86A) and a bulk modulus of 168GPa[1]. As a ferromagnetic substance, it possesses an ordered spin magnetic moment of 2.13\(\mu_{B}\) and orbital magnetic moment of 0.08\(\mu_{B}\)[2]. The system displays discernible electronic correlation. Specifically, an effective local Hubbard interaction U for 3d electrons in iron has been identified within a range of 1\(\sim\)3eV, and a definitive ratio of \(U/W\simeq 0.2\) was established, with \(W\) representing the bandwidth of 3d states[3; 4; 5]. This observation was later corroborated by a theoretical study coming up with a close \(U/W\) ratio[6]. Such characteristics were further evidenced in various experimental outcomes that diverge from their mean field-like theoretical predictions and interpretations[5; 7]. Presently, both experimental and theoretical efforts have categorized \(\alpha\)-Fe as a local moment system with a great tendency towards itinerancy[8]. But a consensus is yet to be reached on the underlying physical mechanism on the formation of the strong ferromagnetism in \(\alpha\)-Fe[9; 10; 11; 12]. Density Functional Theory (DFT), including Local Spin Density Approximation (LSDA) and its Generalized Gradient Approximation (GGA), has been applied to \(\alpha\)-Fe to understand its peculiar physical properties from a microscopic perspective. LSDA's predictions deviated from experimental findings and suggested a notably reduced equilibrium lattice constant for the ferromagnetic ground state of \(\alpha\)-Fe[13]. Adjusting this discrepancy involves enhancing the kinetic energy via nonlocal charge density variations and employing compatible exchange-correlation functionals akin to GGA. These modifications yielded commendably accurate depictions concerning the right ferromagnetic ground state and its innate properties[14; 11]. Broadly, DFT furnishes a reasonable portrayal of \(\alpha\)-Fe, including its energy ground state and quasiparticle characteristics[14; 15; 16]. Specifically, it validates the Stoner mechanism for the emergence of spontaneous ferromagnetism in BCC Fe[17]. Other weakly interacting techniques, for example, GW approximation[18] and quasiparticle self-consistent GW[19; 20], have also been applied to the system, purporting enhanced efficacy relative to GGA. 
A semi-_ab initio_ Hartree-Fock (HF) calculation, where local and nonlocal interaction operators were separately scaled, was also reported to have produced A quite consistent bandstructure as DFT[21]. Nevertheless, there is room for further refinement to illuminate the subtle aspects of \(\alpha\)-Fe like local moment formation and competition between localized and itinerant electrons, and to bridge the gap between theory and experiments, notably through addressing both local and nonlocal electronic correlations[22; 23; 24; 25]. Advanced _ab initio_ techniques, specifically designed to treat local electronic correlation, have been employed to investigate the BCC iron system. Notable methods included LDA+U[26], LDA+Dynamic Mean Field Theory (LDA+DMFT)[12; 24; 27; 28] and LDA+Gutzwiller (LDA+G)[29; 30; 31]. While LDA+DMFT is considered the state-of-the-art _ab initio_ method, it is also computationally demanding. It was shown to improve the agreement between theory and experiment, including very subtle aspects on quasiparticle properties like broadening of quasiparticle spectra[27], local spin splitting[12; 28] and the emergence of satellite subband[32]. Specifically, it gave numerical evidence on the distinct nature of the \(t_{2g}\) and \(e_{g}\) states in electronic[12] as well as magnetic contexts[33], and ascribed local moment mainly to \(e_{g}\) electrons[12]. LDA+G can be regarded as a simplified and accelerated version of LDA+DMFT with a different definition of the Baym-Kadanoff functional within the conserving approximation[34]. It made specific physical observations based on its output and produced information on quasiparticle dispersion. The engaged treatment OF local electronic interactions helped introduce new interpretations towards ferromagnetism from DFT methods[30]. However, The notable challenge with these methods is the variability in defining effective Hubbard \(U\) and exchange \(J\) parameters. These parameters are essential for outlining screened local electronic interactions. They could differ significantly across separate implementations and were often calibrated to align with certain experimental data[22; 23; 29; 30; 31]. Specifically, the \(U\) value can range from 2eV to 9eV, and \(J\) between 0.5eV and 1.2eV, a considerable spread for similar _ab initio_ techniques. Nevertheless, there were reassuring studies indicating that magnetic properties are more influenced by \(J\) than \(U\)[22; 31]. DFT and its embedding methods, including LDA+U, LDA+G, and LDA+DMFT mentioned above, enriched our knowledge for a better understanding of the microscopic origin of the ferromagnetism in the bulk bcc iron system by analyzing physical quantities coming out of the calculations and confirmed the importance of the role local electronic correlation plays in producing a more accurate theory to meet experiments. Local physical quantities analyzed include local self-energy, spectral function, and spin-spin susceptibilities mainly produced in LDA+DMFT[12; 24], local orbit occupation and mass renormalization factor[30], and local charge(spin) distribution[31]. They have provided direct evidence on existence of local moment, asymmetry between \(t_{2g}\) and \(e_{g}\) states, and notable influence from electronic correlation. In this work, we aim to delve deeper into some of these subjects, employing data from the recently introduced _ab initio_ method, Correlation Matrix Renormalization Theory (CMRT)[35; 36; 37]. 
Uniquely, CMRT utilizes Hartree-Fock (HF) rather than DFT for the foundational single-particle effective Hamiltonian. A strength of integrating HF into CMRT is its direct engagement with term-wise bare Coulomb interactions, eliminating the need for adjustable \(U,J\) energy parameters and double counting choices, and avoiding self-interaction complications. However, this approach also has drawbacks: HF offers a less realistic quasiparticle foundation for CMRT. Therefore, ensuring that the many-body screening effects are properly incorporated within CMRT is essential. We thus assessed the total energy of the system and compared the derived pressure-volume curve to experimental data to ensure they are closely aligned, a necessary step for CMRT to proceed further. We then devised a series of correlation metrics to discern distinct roles of \(t_{2g}\) and \(e_{g}\) states across spin channels. ## Methods CMRT is a fully _ab initio_ variational theory specifically tailored for strongly correlated electron systems utilizing a multiband Gutzwiller wavefunction as its trial state[37]. Notably, in the context of transition metal systems, CMRT offers a cohesive framework that accommodates both itinerant and localized electrons within the same electronic structure calculation, akin to DFT-embedded correlated _ab initio_ methodologies[30]. For a periodic bulk system with one atom per unit cell, the CMRT ground state total energy is \[E_{total}=\sum_{\begin{subarray}{c}ij\\ \alpha\beta,\sigma\end{subarray}}\tilde{t}_{i\alpha,j\beta,\sigma}\left\langle c ^{\dagger}_{i\alpha\sigma}c_{j\beta\sigma}\right\rangle+\frac{1}{2}\sum_{ \begin{subarray}{c}ijkl\\ \alpha\beta\gamma\delta,\sigma\sigma^{\prime}\end{subarray}}\tilde{U}^{\alpha \beta\gamma\delta}_{ijkl;\sigma\sigma^{\prime}}\left(\left\langle c^{\dagger}_ {i\alpha\sigma}c_{k\gamma\sigma}\right\rangle\left\langle c^{\dagger}_{j\beta \sigma^{\prime}}c_{l\delta\sigma^{\prime}}\right\rangle-\delta_{\sigma\sigma \sigma^{\prime}}\left\langle c^{\dagger}_{i\alpha\sigma}c_{l\delta\sigma^{ \prime}}\right\rangle\left\langle c^{\dagger}_{j\beta\sigma^{\prime}}c_{k \gamma\sigma}\right\rangle\right)+E_{local} \tag{1}\] with the local energy, \(E_{local}\), expressed as \[E_{local}=\sum_{i}\sum_{\Gamma}\tilde{E}_{i\Gamma}\left(p_{i\Gamma}-p_{i \Gamma_{0}}\right) \tag{2}\] and the dressed hopping and two-body interactions are defined as \[\tilde{t}_{i\alpha,j\beta;\sigma} =t_{i\alpha,j\beta}+\frac{N_{e}}{2}\lambda^{\alpha\beta\beta\alpha }_{ijji;\sigma\sigma} \tag{3}\] \[\tilde{U}^{\alpha\beta\gamma\delta}_{ijkl;\sigma\sigma^{\prime}} =U^{\alpha\beta\gamma\delta}_{ijkl}-\lambda^{\alpha\beta\gamma \delta}_{ijkl;\sigma\sigma^{\prime}} \tag{4}\] Here \(i,j,k,l\) represent site indices, \(\alpha,\beta,\gamma,\delta\) are orbital indices, and \(\sigma,\sigma^{\prime}\) correspond to spin indices. \(\Gamma\) denotes Fock states in the occupation number representation of local correlated orbitals on each atom in the unit cell, while \(N_{e}\) is the system's electron count per unit cell. The energy parameters, \(t_{i\alpha,j\beta}\) and \(U^{\alpha\beta\gamma\delta}_{ijkl}\) are the bare hopping and Coulomb integrals, respectively. The sum rule correction coefficient, \(\lambda^{\alpha\beta\gamma\delta}_{ijkl;\sigma\sigma^{\prime}}\), is introduced in CMRT to specifically enhance the accuracy of the total energy calculation. \(\tilde{E}_{i\Gamma}\) is the Fock state eigenvalues of the dressed local correlated Hamiltonian on each site. The initial two terms in Eq. 
1 yield the expectation value of the dressed lattice Hamiltonian under CMRT, where the expectation values of two-body operators expand following Wick's theorem in terms of one-particle density matrices, which is defined as \[\left\langle c^{\dagger}_{i\alpha\sigma}c_{i\beta\sigma}\right\rangle =f\left(z_{\alpha\sigma}\right)f\left(z_{\beta\sigma}\right) \left\langle c^{\dagger}_{i\alpha\sigma}c_{i\beta\sigma}\right\rangle_{0}\] \[+\left[1-\delta_{\alpha\beta}f^{2}\left(z_{\alpha\sigma}\right) \right]\bar{n}_{i\alpha\sigma} \tag{5}\] Here, \(z_{\alpha\sigma}\) represents the Gutzwiller renormalization factor while \(\left\langle\ldots\right\rangle_{0}\) indicates the one-particle non-interacting density matrix and \(\bar{n}_{i\alpha\sigma}\) the local electronic occupation of state \(\alpha\). The function \(f\left(z_{\alpha\sigma}\right)\) is integrated to ensure CMRT aligns with the solution of an exactly solvable model[36] under certain conditions. The third term in Eq. 1 is essential for preserving dominant local physics in CMRT by rigorously expressing the local correlated energy through the variational parameter \(p_{i\Gamma}\). This parameter \(p_{i\Gamma}\) denotes the occupational probability of Fock state \(\Gamma\) spanned by the correlated atomic orbits at site \(i\). The non-interacting counterpart, \(p_{i\Gamma_{0}}\), denotes the same quantity evaluated with the mean field approximation and correlates with the local energy components already assessed in the initial two terms of Eq. 1. The underlying local correlated Hamiltonian behind the third energy term of Eq. 1 encompasses primary two-body Hubbard-type Coulomb interaction terms dominating local spin and charge interactions. Its exact treatment particularly helps preserve intrinsic local spin and charge fluctuation effects and generate local magnetic moments. The Hund's coupling exchange interaction terms, which are believed to be physically relevant for bcc iron[12; 38], are approached in a mean field way in CMRT. The sum rule correction coefficients, provisionally represented as \[\lambda_{ijkl;\sigma\sigma^{\prime}}^{\alpha\beta\gamma\delta}=\lambda_{i \sigma}^{\alpha}\delta_{ik}\delta_{jl}\left(1-\delta_{ij}\right)\delta_{ \alpha\gamma}\delta_{\beta\delta}, \tag{6}\] are integrated explicitly into CMRT to aid in counteracting errors associated with the Fock terms in Eq. 1. These terms constitute a significant error source of CMRT. The sum rule correction coefficients serve to redistribute nonlocal Coulomb interactions onto local sites, thus further refining total energy by exactly treating these local interactions. The central term, \(\lambda_{i\sigma}^{\alpha}\), in Eq. 6 for each correlated orbit is tested out in this work for magnetic systems. Its optimal functional form is determined following the logic of cancellation of inter-site Fock contributions and is identified as \[\lambda_{i}^{\alpha}=\frac{\sum_{\sigma^{\prime}}\left[\sum_{j\neq i}\sum_{ \beta}U_{ijij}^{\alpha\beta\alpha\beta}\left|\left\langle c_{i\alpha,\sigma^{ \prime}}^{\dagger}c_{j\beta,\sigma^{\prime}}\right\rangle\right|^{2}\right]}{ \sum_{\sigma^{\prime}}\left[\sum_{j\neq i}\sum_{\beta}\left|\left\langle c_{i \alpha,\sigma^{\prime}}^{\dagger}c_{j\beta,\sigma^{\prime}}\right\rangle \right|^{2}\right]} \tag{7}\] One reassuring aspect of the above definition is the spin-independent nature of the term, which aligns with the system's bare _ab initio_ Hamiltonian. 
There, the energy coefficients of one-body and two-body operators are all spin-independent. Thus, whatever magnetization produced in CMRT is a genuine characteristic of the system but not endowed by certain pre-defined energy parameters. The variational minimization of the CMRT total energy, as given by Eq. 1, yields a set of Gutzwiller equations[37]. These are self-consistently solved to reach the optimal solution for the target system. For weakly correlated lattice systems, the volume-dependent total energy and related physical quantities produced by CMRT have been found to align closely with experimental results[37]. In the realm of strongly correlated systems, CMRT has demonstrated its prowess in capturing the correlated nature of 4f electrons in fcc Ce and fcc Pr [39]. By interfacing with the Hartree-Fock (HF) module of Vienna Ab Initio Simulation Package (VASP) [40], CMRT has been efficiently implemented with the QUAsi-atomic Minimal Basis set Orbitals (QUAMBO) basis set [41]. Its computational speed mirrors that of a minimal basis HF calculation [37; 39], marking a significant performance gain over the more time-consuming Quantum Monte Carlo methods. Specifically for this work, a plane-wave basis set was constructed in VASP with the default energy cutoff prescribed by the pseudopotential of Fe. Brillouin zone sampling was facilitated with VASP using an automatically generated K-point grid maintaining a \(R_{k}\) length of 40 (\(R_{k}=\)40), which amounts to a \(20\times 20\times 20\) uniform mesh at the experimental lattice constant. The local QUAMBO basis set of 3d4s4p states are projected from the LDA wavefunction preserving the low-energy LDA spectrum up to 1eV above the LDA Fermi energy. These localized orbits define the tight binding Hamiltonian and the bare Coulomb interactions. ## Results ### Total energy and its related physical quantities In the study of the ferromagnetic ground state of the bulk bcc Fe lattice, energy versus volume (E-V) curves are collected and compared in panel (a) and (b) of Fig 1. These curves contain results from several calculation methods, including HF, LSDA, GGA(PBE), LDA+U, LDA+G, LDA+DMFT and CMRT. Both HF and CMRT calculations share the same QUAMBO basis set, while LSDA, GGA(PBE) and LDA+U are evaluated with plane-wave basis set in this work. GGA data are cross-checked against the published results in Ref. [14]. To complement the E-V curves, the pressure versus volume (P-V) curves extracted from their Birch-Murnaghan Equation of State (BM-EOS) [43] fits are also showcased in panel (c) and (d) of Fig 1, side by side with the experimental measurements, while the accompanying fitted equilibrium volumes and bulk moduli as well as the calculated magnetic moments are collected in Table 1. By examining the intersection points of these curves with the volume axis, we can discern the distribution of equilibrium volumes for each method in relation to the experimental volume. This provides a clear illustration of the exemplary performance of both the GGA and CMRT methods, which operate without the need for adjustable energy parameters, and commendable outcomes of LDA+G and LDA+DMFT with appropriate \(U\), \(J\) energy parameters adapted. The alignment between the CMRT-generated data and experimental pressure-volume measurements stands out. Specifically, CMRT demonstrates a closer resemblance to experimental outcomes for the bcc iron phase when compared to GGA. 
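For concreteness, the following minimal Python sketch illustrates how equilibrium quantities of the kind listed in Table 1 can be extracted from an E-V curve via a third-order Birch-Murnaghan fit. The (V, E) values below are synthetic placeholders (not CMRT output), and the script only illustrates the fitting step, not the workflow actually used in this work.

```python
# Minimal BM-EOS fitting sketch with placeholder (V, E) data (eV, Angstrom^3 per atom).
import numpy as np
from scipy.optimize import curve_fit

EV_A3_TO_GPA = 160.21766  # 1 eV/Angstrom^3 in GPa

def bm_energy(V, E0, V0, B0, B0p):
    """Third-order Birch-Murnaghan energy; B0 in eV/Angstrom^3."""
    x = (V0 / V) ** (2.0 / 3.0) - 1.0
    return E0 + 9.0 * V0 * B0 / 16.0 * (x**3 * B0p + x**2 * (6.0 - 4.0 * (V0 / V) ** (2.0 / 3.0)))

def bm_pressure(V, V0, B0, B0p):
    """Third-order Birch-Murnaghan pressure P = -dE/dV (same units as B0)."""
    eta = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * B0 * (eta**7 - eta**5) * (1.0 + 0.75 * (B0p - 4.0) * (eta**2 - 1.0))

# Placeholder E-V points, roughly iron-like but entirely synthetic.
V = np.array([10.5, 11.0, 11.5, 12.0, 12.5, 13.0])
E = np.array([-8.120, -8.176, -8.209, -8.220, -8.209, -8.177])

p0 = [E.min(), V[np.argmin(E)], 1.0, 4.0]
(E0, V0, B0, B0p), _ = curve_fit(bm_energy, V, E, p0=p0)
a0 = (2.0 * V0) ** (1.0 / 3.0)  # bcc: one atom corresponds to half the conventional cubic cell
print(f"V0 = {V0:.2f} A^3, a0 = {a0:.3f} A, B0 = {B0 * EV_A3_TO_GPA:.0f} GPa, B0' = {B0p:.2f}")
print("P at 0.95*V0:", bm_pressure(0.95 * V0, V0, B0, B0p) * EV_A3_TO_GPA, "GPa")
```

The derived P-V curve in Fig. 1 follows from the same fitted parameters through the pressure form of the equation of state.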
One might wonder how local energy corrections resulting from electronic correlations might influence the total energy in CMRT calculations. This particular contribution is encapsulated in \(E_{local}\) as seen in Eq. 1. As described by Eq. 2, \(E_{local}\) encompasses predominant energy terms arising from \(\hat{n}_{i\alpha,\sigma}\hat{n}_{i\beta,\sigma^{\prime}}\) type of two-body operators, where \(\alpha\) and \(\beta\) represent the set of local correlated orbits. This term delineates the discrepancy between the strict expectation values and their corresponding mean field values. Typically, each term in \(E_{local}\) is negative, reflecting diminished Coulomb interaction stemming from the presence of local electronic repulsion. We've assigned an additional negative sign to these terms for a clearer visualization in Fig. 2. General understanding might suggest that local correlation energy gain amplifies with increasing volume expansion. Yet, contrary to this notion, the inset of the figure displays a different trend. The root of this behavior can be traced back to the terms that most significantly influence \(E_{local}\), as exemplified at three distinct volumes across the experimental equilibrium volume. These individual energy terms are segregated into separate spin-spin channels on the x-axis of Fig. 2: \(\uparrow\uparrow\) for majority-majority spin, \(\uparrow\downarrow\) for majority-minority spin, and so forth. A closer look reveals that energy corrections from the majority-majority spin channel remain minuscule across the considered terms following the x- axis. The majority-minority spin channel flourishes while it contributes to \(E_{local}\) at a reduced volume \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & Exp & HF & LSDA & PBE & LDA+U & LDA+G & LDA+DMFT & CMRT \\ \hline \(a_{0}\) (Γ…) & 2.867 & 3.0 & 2.746 & 2.833 & 2.76 & 2.85 & 2.853 & 2.887 \\ \(B_{0}\) (GPa) & 172 & 115 & 245 & 169 & 207 & 160 & 168 & 165 \\ \(M\) (\(\mu_{B}\)) & 2.2 & 2.92 & 2.00 & 2.2 & 2.13 & 2.30 & 2.2 & 2.6 \\ \hline \end{tabular} \end{table} Table 1: BM-EOS fitted equilibrium lattice constant \(a_{0}\), bulk modulus \(B_{0}\) and the calculated spin magnetic moment \(M\) of ferromagnetic BCC Fe obtained by various calculation methods are in comparison with experimental measurements. HF, LDA+U, and CMRT data were calculated in this work, while other data are sourced or adapted from relevant literature: Exp[25], LSDA and GGA(PBE) [15], LDA+DMFT[25], LDA+G[29]. Note that LDA+DMFT utilized \(U\)=4.3eV and \(J\)=1.0eV to match the experimental spin magnetic moment value of \(2.2\mu_{B}\) at T=290K. Meanwhile, while there are varied parameter choices for LDA+G in the literature, here it uses \(U\)=7eV and \(J\)=1.0eV. Also, note the spin magnetic moment tabulated here is linked to the equilibrium lattice constant specific to each method, and not pegged to the experimental lattice constant. Figure 1: Energy versus Volume (E-V) and Pressure versus Volume (P-V) curves for ferromagnetic bcc iron calculated with different _ab initio_ methods. Plot (a) compares CMRT against weakly interacting methods including LSDA, GGA(PBE) and HF while plot (b) compiles strongly correlated methods, including LDA+DMFT (\(U\)=4.3eV, \(J\)=1.0eV)[25], LDA+U (\(U\)=2.2eV, \(J\)=1.0eV), LDA+G (\(U\)=2.2eV, \(J\)=1.0eV)[29] and CMRT. The corresponding P-V curves are depicted in solid lines in plots (c) and (d). LSDA, PBE, and LDA+U are evaluated with VASP at an automatic K grid of \(R_{h}\)=40. 
HF used the identical QUAMBO basis set and K-grid as CMRT. Experimental measurements are symbolized as follows, : solid squares for bcc \(\alpha\)-Fe phase and empty squares for hcp \(\varepsilon\)-Fe phase[14; 42]. Vertical adjustments have been made for all the energy curves for a clearer view. Specifically, HF energy is downshifted an extra 5.5eV with respect to CMRT energy in the figure. Vertical dash dotted lines mark the experimental equilibrium volume of the ferromagnetic BCC Fe lattice. but diminishes rapidly beyond the experimental volume. Conversely, the minority-minority spin channel possesses a handful of two-body operators that notably amplify their contributions to \(E_{local}\), indicating a swift rise in electronic repulsion between specific states. The composite energy correction trajectory, presented in the inset, unveils that the gains from enhanced terms in the minority-minority spin channel fail to offset the dufling contributions from the majority-minority two-body terms. ### Local Orbital Occupations and Their Fluctuations A comprehensive examination of the local physics is presented in the ferromagnetic bcc iron lattice using the CMRT method. Fig 3 gives local orbital occupancies on the \(t_{2g}\) and \(e_{g}\) states of the 3d orbit at a lattice volume of 11.94A\({}^{3}\) (or \(a=2.88\)A) across various _ab initio_ methods. The orbital occupancies of CMRT align closely with most methods except for HF. For example, using the same QUAMBO local orbit basis set, both LDA and CMRT yield roughly 1.3 electrons in each of the \(t_{2g}\) and \(e_{g}\) states though CMRT exhibits a slightly greater ordered spin magnetic moment. On the other hand, a discrepancy in the HF orbital occupancy is evident in the minority spin channel, where the \(t_{2g}\) state occupation significantly surpasses that of the \(e_{g}\) state. This disparity may indicate that the local 3d energy components dominate the HF total energy. More details are provided in the discussion. The CMRT formalism, built upon the HF method, incorporates electronic correlation effects through both renormalizing effective single particle hoppings and rigorously treating local two-body interactions. Such a procedure successfully reduces electron occupancy in the majority spin channel and markedly redistributes electrons between the \(t_{2g}\) and \(e_{g}\) states in the minority spin channel, yielding more balanced orbital occupancies and tempering the pronouncedly high local spin moment returned by HF. To delve deeper into fluctuations, we introduce a local pseudo-charge correlator as \[\chi_{i\alpha\sigma,i\beta\sigma^{\prime}}=\langle\hat{n}_{i\alpha\sigma}\hat{n}_ {i\beta\sigma^{\prime}}\rangle-\bar{n}_{i\alpha\sigma}\bar{n}_{i\beta\sigma^{ \prime}}\text{ for }\left(\alpha\sigma\right)\neq\left(\beta\sigma^{\prime}\right) \tag{8}\] This correlator serves as an insightful metric to gauge the electronic correlation between two electronic states effectively capturing how one electron's presence might influence another's motion. In essence, this correlator quantifies the deviation in the likelihood of observing a specific electron pair, \(\langle\hat{n}_{i\alpha\sigma}\hat{n}_{i\beta\sigma^{\prime}}\rangle\), which can be thoroughly evaluated within CMRT, from a baseline uncorrelated value, \(\bar{n}_{i\alpha\sigma}\bar{n}_{i\beta\sigma^{\prime}}\). 
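As a toy numerical illustration of Eq. 8 (hand-picked numbers, not CMRT output): for a single orbital described by its local Fock-state probabilities, the correlator vanishes when the probabilities factorize and becomes negative when double occupancy is suppressed.

```python
# Toy evaluation of the pseudo-charge correlator of Eq. 8 for one orbital,
# using assumed Fock-state probabilities p(empty), p(up), p(down), p(up&down).
def chi_up_dn(p0, p_up, p_dn, p_updn):
    n_up = p_up + p_updn          # <n_up>
    n_dn = p_dn + p_updn          # <n_dn>
    nn = p_updn                   # <n_up n_dn> = probability of double occupancy
    return nn - n_up * n_dn       # Eq. 8 for (alpha, up) != (beta, down)

# Uncorrelated, half-filled orbital: probabilities factorize and chi = 0.
print(chi_up_dn(0.25, 0.25, 0.25, 0.25))   # 0.0
# Same <n_up> = <n_dn> = 1/2 but with double occupancy suppressed: chi < 0.
print(chi_up_dn(0.15, 0.35, 0.35, 0.15))   # about -0.1
```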
When the expectation value is evaluated with a single Slater determinant ground state wavefunction, the result would yield the Hartree term as the baseline value, and a much smaller Fock term if the working basis set possesses the correct lattice and orbital symmetry. Thus, this correlator would nearly vanish in a non-interacting system, as expected for two electrons being uncorrelated. Introduce local charge and spin (z component only) operators as \(\hat{n}\) and \(\hat{S}_{z}\) and we can write down the local static charge and spin (z component only) fluctuations, \(\chi_{\hat{n}}\) and \(\chi_{\hat{S}_{z}}\), as \[\hat{n}=\sum_{\alpha,\sigma}\hat{n}_{\alpha,\sigma}\Rightarrow \chi_{\hat{n}}=\left\langle\left(\hat{n}-\bar{n}\right)^{2}\right\rangle \tag{9}\] \[\hat{S}_{z}=\frac{1}{2}\sum_{\alpha,\sigma}\sigma\hat{n}_{\alpha, \sigma}\Rightarrow\chi_{\hat{S}_{z}}=\left\langle\left(\hat{S}_{z}-\bar{S}_{ z}\right)^{2}\right\rangle \tag{10}\] with \(\alpha\) indexing a set of local orbits and \(\sigma=\pm 1\) denoting majority and minority spins, respectively. A simple algebra establishes the following relationship between fluctuations and pseudo-charge correlator \[4\chi_{\hat{S}_{z}}-\chi_{\hat{n}}=4\sum_{\alpha\beta}\left(-\chi_{\alpha \uparrow,\beta\downarrow}\right) \tag{11}\] Given a single orbit, the above equation provides a way to gain insights into local double occupancy by taking the difference between the two fluctuations. Fig. 4 compiles the spin and charge fluctuations from various sets of local orbits and highlights the dominant pseudo-charge correlators. Panels (a) and (b) dissect the fluctuations within all 3d orbits, and within \(t_{2g}\) and \(e_{g}\) states respectively. The principal variability in spin fluctuation predominantly concerns the \(t_{2g}\) states, especially at smaller lattice volumes. Panel (c) provides a clearer perspective on the observation by representing fluctuations for individual states. By noting that local fluctuations of 4S state are not suppressible with increasing electronic correlation, we might reliably classify 4S state to be weakly correlated. Meanwhile, as volume increases, Panel (c) suggests that the \(t_{2g}\) and \(e_{g}\) states exhibit weak correlation, as indicated by \(\langle\hat{n}_{i\alpha\uparrow}\hat{n}_{i\alpha\downarrow}\rangle\simeq\bar{n }_{i\alpha\uparrow}\bar{n}_{i\alpha\downarrow}\) readily read out from the diminishing difference between the spin and charge fluctuations and with help of Eq. 11. This weak correlation arises from the nearly filled 3d orbits in the majority spin channel. The minority spin channel in the 3d orbits, however, pose to be the chief contributor to local electronic correlations. This observation stems from Fig. 2 and is corroborated by Panel (d) in Fig. 4. This panel showcases \(\chi_{i\alpha\sigma,i\beta\sigma^{\prime}}\) adjusted by \(\bar{n}_{i\alpha\sigma}\bar{n}_{i\beta\sigma^{\prime}}\) to account for variations in orbital occupation. Such an approach can compare electronic correlations across different state pairs, as is supported by two notable advantages. First, all state pairs maintain their numerical alignment at one with the non-interacting limit. Second, the visualization aptly highlights the few most significant electronic correlations and pinpoints the state pairs that generate them. These predominant correlations between \(t_{2g}\) and \(e_{g}\) could be the reason for their rebalanced occupations in CMRT which are otherwise significantly skewed in the HF calculation shown in Fig. 
3. ### Normalized Local Charge Fluctuation analysis While the Gutzwiller renormalization prefactors for the correlated orbits shown in Fig. 5 reveal some similarity between \(t_{2g}\) and \(e_{g}\) states in both spin channels, the difference might be explored through the Normalized Local Charge Fluctuation (NLCF), defined as \(\chi_{\hat{n}}/\bar{n}^{2}\)[39]. We evaluate this metric using CMRT and HF calculations, with HF serving as the reference for electronic correlation. Notable deviations between CMRT and HF indicate additional correlations captured by CMRT. For a balanced comparison, we introduce a standardized NLCF (sNLCF). Given the non-comparability of expectation values in the NLCF definition across methods, we adjusted their range to fall between 0 and 1, considering unique constant shifts. Fig 6 contrasts sNLCF values from CMRT and HF across subsets of local correlated orbits in a ferromagnetic bcc iron system. This figure presents relative charge fluctuations across different choices of orbits (rows) and spin channels (columns). The top row illustrates CMRT vs. HF for all five 3d orbits, and the middle and bottom rows focus on comparisons for individual \(t_{2g}\) and \(e_{g}\) states respectively. In interpreting Fig 6, it's evident that different treatments in electronic correlation between methods yield different sNLCF behaviors. Specifically, the majority spin channel in the second column reveals HF's near-linear descent as contrasted with CMRT's well-established curvatures. CMRT either further suppresses or enhances charge fluctuations on top of HF in the \(t_{2g}\) or \(e_{g}\) states for a better treatment of their electronic correlations. This qualitative difference in \(t_{2g}\) and \(e_{g}\) treatment supports the distinct correlation nature of both 3d states made in existing literature. The curve in the inset of plot (b), resulting from the difference between HF and CMRT there, peaks near the CMRT equilibrium volume. This might suggest a predominant role of majority spin electrons in shaping the interatomic bonds and the bulk bcc lattice structure. ## Discussion We demonstrated that CMRT can correctly predict both the energy versus volume (E-V) and pressure versus volume (P-V) curves for the bulk BCC iron ferromagnetic phase. Furthermore, it yields an equilibrium volume and bulk modulus consistent with experimental findings, as illustrated in Fig. 1. CMRT also produced other credible physical quantities like local orbital renormalization prefactors and orbital occupations. All these suggest that CMRT can capture the essential correlation physics inherent in the 3d orbits of this system. These extra correlations built into CMRT aid in redistributing the system's kinetic and potential energies, and orbital fillings. While there was analysis indicating that changes in these energy components correlate with the formation of ordered moments[30], we choose not to delve into such intricacies here, given that this information might be method specific. Meanwhile, CMRT predicts a local spin magnetic moment larger than experimental measurements. The local state occupations depicted in Fig 3 reveal that HF-based CMRT still allocates more electrons to the majority spin channel than LSDA/GGA, resulting in an exaggerated local spin magnetic moment. Interestingly, local interaction enhanced LSDA methods, such as DFT+U and DFT+G, display similar local state occupations as CMRT, even though they stem from distinct theoretical backgrounds, namely LSDA and HF. 
The local 3d occupation in HF significantly skews towards \(t_{2g}\) states in the minority spin channel compared to the other methods, as depicted in Fig. 3. Figure 3: Local charge occupation of the \(t_{2g}\) and \(e_{g}\) states within the 3d orbit collected from HF, CMRT, LDA+U (\(U\)=2.3eV, \(J\)=0.9eV) and LDA+G (\(U\)=2.5eV, \(J\)=1.2eV) [30]. Generally speaking, a preferred occupation of \(t_{2g}\) over \(e_{g}\) is consistent with the cubic crystal field splitting of 3d orbits[44]. However, this skew in the HF calculation seems excessively pronounced. Insight into this phenomenon may be gleaned by examining a simplified model of an isolated atom. This model replicates the local electron filling pattern observed in the ferromagnetic iron state, presupposing nearly fully filled 3d orbits in the majority spin channel and a predetermined number of 3d electrons in the minority spin channel. We focus on the \(\hat{n}_{\alpha\sigma}\hat{n}_{\beta\sigma^{\prime}}\) type of two-body operators, with \(\alpha,\beta\in\{t_{2g},e_{g}\}\), which are dominant in the energy Hamiltonian and possess very close Coulomb energy coefficients. The classical Coulomb potential energy pertinent to these operators is expressed as follows \[E_{p}\propto C_{3}^{2}\bar{n}_{t_{2g},\downarrow}\bar{n}_{t_{2g},\downarrow}+6\bar{n}_{t_{2g},\downarrow}\bar{n}_{e_{g},\downarrow}+C_{2}^{2}\bar{n}_{e_{g},\downarrow}\bar{n}_{e_{g},\downarrow} \tag{12}\] which may be thought of as a mean-field decomposition of \(\hat{n}_{\alpha\sigma}\hat{n}_{\beta\sigma^{\prime}}\) with the Fock terms, being quantum effects, dropped. In this equation, \(\bar{n}_{t_{2g}(e_{g}),\downarrow}\) represents the local orbital occupation of a \(t_{2g}\) or \(e_{g}\) state in the minority spin channel, while \(C_{n}^{m}\) is the standard binomial coefficient. Simple algebraic manipulation reveals three notable cases[45]. The two extremes, \(\left(\bar{n}_{t_{2g},\downarrow},0\right)\) and \(\left(0,\bar{n}_{e_{g},\downarrow}\right)\), are local minima at the boundary of the allowed occupation range, separated by a potential energy maximum, which defines the physically relevant third case of equal occupation in all 3d orbits for an isolated atom with nearly degenerate orbits. Given this scenario, it is reasonable to hypothesize that the HF solution likely corresponds to one of the two extreme cases in an effort to minimize the local potential energy. Confirmation of this hypothesis is obtained by applying HF to the local energy Hamiltonian constructed at a reference site on the BCC iron lattice with a unit cell volume of 25A\({}^{3}\) (or \(a_{0}=3.7\)A). The HF approach, contingent on specific initial orbital occupations, readily converges to the two extreme cases with vanishing occupation in either type of the 3d states. Comparing these solutions to the actual HF solution for the BCC iron lattice reveals that the extra nonlocal hoppings and interactions left out of the local energy Hamiltonian drive electron transitions into the empty states. With local correlation effects incorporated into the HF framework to establish CMRT (which effectively reduces the local Coulomb interaction, as showcased in Fig. 4), a greater number of electrons move into the empty 3d states. This results in a more balanced electron occupation among the 3d states, which would otherwise be energetically discouraged by the local energy Hamiltonian as seen through HF.
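The energy landscape behind these three cases can be visualized in a few lines; the sketch below is ours (not the authors' calculation) and reads Eq. 12 with per-state occupations \(a\), \(b\) for \(t_{2g}\) and \(e_{g}\), an assumed total minority-spin filling \(3a+2b=1.3\), and \(C_{3}^{2}=3\), \(C_{2}^{2}=1\).

```python
# Sketch of Eq. 12 along the constraint 3a + 2b = n_tot: the maximum sits at
# equal per-state filling a = b, the minima at the two boundary extremes.
import numpy as np

n_tot = 1.3                                       # assumed minority-spin 3d electron count
a = np.linspace(0.0, min(1.0, n_tot / 3.0), 201)  # per-state t2g occupation
b = (n_tot - 3.0 * a) / 2.0                       # per-state eg occupation
E = 3.0 * a**2 + 6.0 * a * b + b**2               # classical energy of Eq. 12 (up to a prefactor)

print("E at a = 0 (all e_g):  ", E[0])
print("E at b = 0 (all t_2g): ", E[-1])
print("maximum near a =", a[np.argmax(E)], "; equal filling a = b would be", n_tot / 5.0)
```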
Based on the classical potential energy depicted in Eq. 12 and the different orbital occupations between HF and CMRT, two key observations are made. Firstly, local correlation is crucial in reestablishing the correct physical picture of the BCC iron lattice with CMRT. While correlation may reduce nonlocal energy components through Gutzwiller renormalization, the overarching effect is an enhanced nonlocal effect, ensuring a steady electron flow into empty 3d states. Secondly, integrating the exchange-correlation functional into DFT markedly enhances its efficacy, as evidenced here by a correct depiction of the BCC iron lattice. Nevertheless, the similarity in electronic behaviors yielded by both the classical Coulomb repulsion and HF positions HF as a benchmark methodology for comparing treatments of electronic correlation effects, which are purely quantum in nature. These insights might be instrumental in resolving an inconsistent statement made in a QSGW calculation[20] stating that local physics is not relevant for describing the BCC iron lattice by taking DFT as its reference. Figure 4: Left panels show local spin and charge fluctuations with different sets of local states. Solid lines show four times the spin fluctuation and dashed lines the charge fluctuation. Panel (a) selects all \(t_{2g}\) and \(e_{g}\) states to define the spin and charge operators; panel (b) selects all \(t_{2g}\) states as well as all \(e_{g}\) states, while panel (c) picks individual \(4S\), \(t_{2g}\) and \(e_{g}\) states to evaluate their spin and charge fluctuations. The right panel (d) gives comparisons of \(\left\langle\hat{n}_{\alpha\sigma}\hat{n}_{\beta\sigma^{\prime}}\right\rangle\) normalized by \(\bar{n}_{\alpha\sigma}\bar{n}_{\beta\sigma^{\prime}}\) with \(\bar{n}_{\alpha\sigma}=\left\langle\hat{n}_{\alpha\sigma}\right\rangle\). The five largest terms have their labels placed near the curves in matching colors. In contrast, the rest of the terms have their labels collected at the bottom-left corner, following roughly the magnitude ordering at a small lattice volume. The curves are consistently labeled with two capital letters denoting the pair of local orbits involved. Specifically, \(T,E,S\) denote \(t_{2g}\), \(e_{g}\) and \(4S\) states averaged over their degenerate states, respectively. If the same type of state is involved in both local orbits, then each letter carries a number to distinguish whether they are the same state. For instance, T\({}_{1,\uparrow}\)T\({}_{2,\downarrow}\) denotes \(\hat{n}_{\alpha\uparrow}\hat{n}_{\beta\downarrow}\) with \(\alpha,\beta\in\left\{t_{2g}\right\}\) but \(\alpha\neq\beta\), while T\({}_{1,\uparrow}\)T\({}_{1,\downarrow}\) denotes \(\hat{n}_{\alpha\uparrow}\hat{n}_{\alpha\downarrow}\) for \(\alpha\in\left\{t_{2g}\right\}\), which is basically the averaged double occupancy of an individual \(t_{2g}\) state. The spin index, \(\sigma\in\left\{\uparrow,\downarrow\right\}\), denotes majority or minority spins, respectively. In both panels, the vertical dotted lines show the CMRT equilibrium volume. The local correlated energy, \(E_{local}\), as defined in Eq. 2 encapsulates the effect of correlation on the electronic Coulomb interaction energy. When this quantity is subtracted from the CMRT total energy, the equilibrium lattice volume shifts to approximately that of the HF equilibrium volume. This alignment might seem coincidental, given that CMRT and HF converge to distinct ground states with varying orbital occupations in the minority spin channel.
Nevertheless, this shifting trend underscores the significance of accurately addressing correlation effects for a precise depiction of a physical system. Segmenting \(E_{local}\) into two-body energy components reveals a competition of correlation energy across different spin-spin channels, as illustrated in Fig. 2. The dominant roles of the electronic correlation of \(t_{2g}\) and \(e_{g}\) states in the minority spin channel are further highlighted in Fig. 4. Concurrently, these figures emphasize the weak correlation present within the majority of spin channels of these states--a perspective somewhat at odds with the insights from \(f\left(z\right)\) in Fig. 5. One potential explanation is that \(\chi_{\alpha\sigma,\beta\sigma^{\prime}}\) provides static correlation data for two electrons in a system's final state, which emerges after the culmination of all inherent physical screening and damping effects. In contrast, \(f\left(z\right)\) may carry dynamical significance for individual orbits, facilitating quasiparticle motion renormalization and giving rise to necessary screening and damping effects. While \(\chi_{\alpha\sigma,\beta\sigma^{\prime}}\) could suggest the ease with which two electrons approach each other, it doesn't necessarily correlate straightforwardly with the single particle-related Gutzwiller renormalization factor \(f\left(z\right)\) under a mean field scenario. Such an interpretation might also reconcile a statement made in a DFT+DMFT calculation emphasizing a strong correlation effect in the majority spin channel[32] by noting an intricate connection between self-energy and Gutzwiller renormalization prefactor[34]. Analysis of local fluctuations and pseudo-charge correlators suggested distinct correlation patterns for 3d orbits in the majority and minority spin channels. A closer look at pseudo-charge correlators associated with \(t_{2g}\) and \(e_{g}\) states indicates that both orbits exhibit significant interactions within and among themselves in the minority spin channel, without major qualitative differences. Hence, the approach of categorizing \(t_{2g}\) and \(e_{g}\) states as purely itinerant and localized states or attributing them different electronic characteristics[12] isn't wholly corroborated by our findings. Subsequent analysis exploring local fluctuation was carried out. While NLCF can be insightful for analyzing electronic localization in strongly correlated systems, it didn't yield any substantial insights for the bulk BCC iron system. This aligns with the notion that localization-delocalization dynamics are not a primary concern here. On the other hand, by accessing the standardized NLCF for 3d orbits and contrasting them with HF computations, it becomes evident that \(t_{2g}\) and \(e_{g}\) states have distinct behaviors in the majority spin channel. While they almost retain their local orbit occupations, their local charge fluctuations are modulated in opposing directions, optimizing electronic correlation energy for the CMRT ground state. The profound difference in the behaviors of \(t_{2g}\) and \(e_{g}\) states within the majority spin channel warrants further investigation. Figure 5: Volume dependence of Gutzwiller renormalization factor, \(f\left(z_{\alpha\sigma}\right),\) for both spin channels for selected local states. Blue curves are for \(t_{2g}\), red for \(e_{g}\) and green for \(4S\) states. Solid lines with an upward triangle represent the majority spin, while dashed lines with a downward triangle represent the minority spin. 
The dashed vertical line indicates the CMRT equilibrium volume with the ferromagnetic bcc iron lattice. ## Summary In this study, we expanded the capabilities of CMRT, an entirely _ab initio_ approach for correlated electron systems, to accommodate magnetization by facilitating straightforward spin polarization within the system. We put this formalism to the test, benchmarking it against the established ferromagnetic system of bulk bcc Fe. Interestingly, we found that utilizing spin-independent sum rule energy coefficients yielded the most accurate results in the CMRT total energy computations. This finding is in harmony with a raw _ab initio_ Hamiltonian employing spin-independent energy parameters. We charted the E-V curve for this system, deriving equilibrium attributes like volume and bulk modulus. These values align closely with experimental data and compare positively to other ab initio methodologies. Furthermore, our constructed P-V curve not only mirrors experimental results but also demonstrates better concordance than GGA predictions. Diving deeper, we extensively examined local physical metrics, encompassing local orbit occupation, local spin and charge fluctuations, and local correlation effects using new measures introduced in this study. Our findings pinpointed the primary correlation impact to the 3d orbits within the minority spin channel and highlighted subtle distinctions between \(t_{2g}\) and \(e_{g}\) states. While the majority spin channel exhibits weak correlation, the behaviors of \(t_{2g}\) and \(e_{g}\) are notably different. This discrepancy might hinge on the method used, and its physical implications remain unclear. ## Acknowledgement _Acknowledgement_ We would like to thank F. Zhang and J. H. Zhang for valuable discussions. This work was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences, Materials Science and Engineering Division, including the computer time support from the National Energy Research Scientific Computing Center (NERSC) in Berkeley, CA. The research was performed at Ames Laboratory, which is operated for the U.S. DOE by Iowa State University under Contract No. DEAC02-07CH11358.
2305.02992
On the Mahler measure of $(1+x)(1+y)+z$
We prove a conjecture of Boyd and Rodriguez Villegas relating the Mahler measure of the polynomial $(1+x)(1+y)+z$ and the value at $s=3$ of the $L$-function of an elliptic curve of conductor $15$. The proof makes use of the computation by Zudilin and the author of the regulator of certain $K_4$ classes on modular curves.
François Brunault
2023-05-04T16:58:04Z
http://arxiv.org/abs/2305.02992v1
# On the Mahler measure of \((1+x)(1+y)+z\) ###### Abstract. We prove a conjecture of Boyd and Rodriguez Villegas relating the Mahler measure of the polynomial \((1+x)(1+y)+z\) and the value at \(s=3\) of the \(L\)-function of an elliptic curve of conductor \(15\). The proof makes use of the computation by Zudilin and the author of the regulator of certain \(K_{4}\) classes on modular curves. Key words and phrases:Mahler measure; motivic cohomology; modular curve; regulator; \(L\)-function 2020 Mathematics Subject Classification: Primary 19F27; Secondary 11F67, 11G16, 11R06, 19E15 The author was supported by the research project "Motivic homotopy, quadratic invariants and diagonal classes" (ANR-21-CE40-0015) operated by the French National Research Agency (ANR) where \(\eta\) is a differential \((n-1)\)-form on the zero locus \(V_{P}\) of \(P\) in \((\mathbf{C}^{\times})^{n}\), and \(\Gamma\) is the \((n-1)\)-dimensional _Deninger chain_, \[\Gamma=\big{\{}(x_{1},\ldots,x_{n})\in V_{P}:\,|x_{1}|=\cdots=|x_{n-1}|=1,|x_{n}| \geq 1\big{\}},\] endowed with the orientation coming from that of \(T^{n-1}\). We make here all necessary assumptions for this integral to make sense [13, Assumptions 3.2], in particular \(\Gamma\) must avoid the singular points of \(V_{P}\). Assume now that \(\Gamma\) is closed. Then (3) can be given a cohomological interpretation, since the class of \(\eta\) in de Rham cohomology is the image under the Beilinson regulator map of the cup-product \(\{x_{1},\ldots,x_{n}\}\) in the motivic cohomology group \(H^{n}_{\mathcal{M}}(V_{P},\mathbf{Q}(n))\). This situation is favourable and under certain conditions, the Beilinson conjectures predict a link between \(m(P)\) and some \(L\)-value associated to \(V_{P}\). The identity (2) is an example of this phenomenon (in reality, in this case the path \(\Gamma\) is not closed, but symmetries can be used to "close the path"). A more mysterious situation is when the form \(\eta\) is exact, in which case we say \(P\) is _exact_. Stokes's formula reduces the Mahler measure \(m(P)\) to an \((n-2)\)-dimensional integral over the boundary \(\partial\Gamma\), but Deninger's theory does not provide an intrinsic cohomological interpretation of this integral. Maillot suggested in 2003 that, in the exact case, \(m(P)\) should be related to the cohomology of the variety \[W_{P}:P(x_{1},\ldots,x_{n})=\overline{P}\Big{(}\frac{1}{x_{1}},\ldots,\frac{1} {x_{n}}\Big{)}=0.\] What makes it plausible is that \(\partial\Gamma\) is contained in \(W_{P}\), because \(V_{P}\cap T^{n}=W_{P}\cap T^{n}\). The relevant motivic cohomology group is now \(H^{n-1}_{\mathcal{M}}(W_{P},\mathbf{Q}(n))\), which is harder to deal with, as we cannot use cup-products. The identities (1) are of this type. For example, the polynomial \(1+x+y\) leads to the algebraic \(K\)-group \(K_{3}(\mathbf{Q}(\sqrt{-3}))\), which is known to have rank \(1\) by Borel's theorem. In general, motivic cohomology \(H^{i}_{\mathcal{M}}(\cdot,\mathbf{Q}(n))\) with \(i\neq n\) makes it more difficult to handle the Mahler measure. Following Maillot's insight, Boyd and Rodriguez Villegas discovered in 2003 several identities involving \(3\)-variable exact polynomials [6, 3, 4]. One example is: **Conjecture 1** (Boyd and Rodriguez Villegas [3]).: _We have the equality_ \[m((1+x)(1+y)+z)\overset{?}{=}-2L^{\prime}(E,-1), \tag{4}\] _where \(E:(1+x)(1+y)(1+\frac{1}{x})(1+\frac{1}{y})=1\) is an elliptic curve of conductor \(15\)._ Here \(E\) arises as the Maillot variety of \(P=(1+x)(1+y)+z\). 
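As a quick numerical sanity check of the left-hand side of (4) (a sketch, independent of the proof below): Jensen's formula applied in the variable \(z\) gives \(m(P)=\frac{1}{(2\pi)^{2}}\int_{0}^{2\pi}\!\!\int_{0}^{2\pi}\log^{+}\lvert(1+e^{it})(1+e^{is})\rvert\,dt\,ds\), where \(\log^{+}u=\max(\log u,0)\), and this double integral is easy to evaluate numerically.

```python
# Numerical estimate of m((1+x)(1+y)+z) via the Jensen-reduced double integral.
# (Illustrative sketch only; the right-hand side -2 L'(E,-1) would have to be
# computed separately, e.g. with PARI/GP's L-function machinery.)
import numpy as np

n = 1500
t = 2 * np.pi * (np.arange(n) + 0.5) / n      # midpoint grid on [0, 2*pi]
c = np.abs(1 + np.exp(1j * t))                # |1 + e^{it}| on the grid
grid = np.outer(c, c)                         # |(1 + e^{it})(1 + e^{is})|
m_numeric = np.log(np.maximum(grid, 1.0)).mean()   # average = (1/(2*pi)^2) * integral
print(f"m((1+x)(1+y)+z)  ~  {m_numeric:.6f}")
```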
The first result towards Conjecture 1 was obtained by Lalin [21], who related the Mahler measure of \(P\) to the regulator of a cocycle in the Goncharov complex \(\Gamma(E,3)\) (see Section 3 for the definition of this complex). Let us write \(\gamma_{E}=\partial\Gamma\) for the boundary of Deninger's chain \(\Gamma\); this is a closed path in \(E\). **Theorem 2** (Lalin).: _We have \(m(P)=\frac{1}{4\pi^{2}}\int_{\gamma_{E}}r_{3}(2)(\xi_{E})\), where \(\xi_{E}\) is the class of the cocycle \(\{-x\}_{2}\otimes y-\{-y\}_{2}\otimes x\) in \(\Gamma(E,3)\), and \(r_{3}(2)\) is the Goncharov regulator map._ In essence, Lalin's theorem reduces Conjecture 1 to the Beilinson conjecture for \(L^{\prime}(E,-1)\). In this article, we compute the above Goncharov regulator, leading to the following theorem. **Theorem 3**.: _The Boyd and Rodriguez Villegas conjecture (4) is true._ Another fascinating conjecture by Rodriguez Villegas concerns the Mahler measure of the polynomials \(1+x_{1}+\ldots+x_{n}\) for \(n=4\) and \(n=5\). These polynomials are also exact and their Mahler measures are expected to involve \(L\)-values of cusp forms of weight \(3\) and \(4\), respectively [10, Section 6.2]. Partial results have been obtained by Shinder and Vlasenko [26]. Here is a similar identity that we found recently: \[m((1+x)(1+y)(1+z)+t)\overset{?}{=}-6L^{\prime}(f_{7},-1)-\frac{48}{7}\zeta^{ \prime}(-2),\] where \(f_{7}(\tau)=\eta(\tau)^{3}\eta(7\tau)^{3}\) is the unique CM newform of weight \(3\) and level \(7\). The main ingredient in the proof of Theorem 3 is the computation by Zudilin and the author [11] of the Goncharov regulator of explicit classes \(\xi_{1}(a,b)\) in the motivic cohomology of the modular curve \(Y_{1}(N)\), which were introduced in [8]. A key fact here is that \(E\) is isomorphic to the modular curve \(X_{1}(15)\), something we make precise in Section 2. In Section 3, we recall Goncharov's theory of polylogarithmic complexes in weight \(2\) and \(3\) and, for modular curves, we define subcomplexes built out of modular units. These complexes are amenable to computation, and we partly implemented the weight \(3\) complex in PARI/GP [23]; the scripts are available at [9]. In Sections 4 and 5, we express Lalin's class \(\xi_{E}\) and the path \(\gamma_{E}\) in purely modular terms. The final computation is performed in Section 6, using the results of [11]. In the appendix, we give tables of (conjectural) identities relating \(3\)-variable Mahler measures and \(L(E,3)\) for a number of elliptic curves \(E\) over \(\mathbf{Q}\). **Acknowledgements.** I am grateful to Matilde Lalin, Riccardo Pengo, Wadim Zudilin and the International Groupe de travail on differential equations in Paris for exchanges which have been helpful in several parts of this paper. I would also like to thank Berend Ringeling for checking numerically several Mahler measure identities from the appendix. ## 2. The modular parametrisation Consider the polynomial \(P(x,y,z)=(1+x)(1+y)+z\). We keep the same notations as in the introduction, so that the Maillot variety \(W_{P}\) in \((\mathbf{C}^{\times})^{3}\) is defined as \[W_{P}\colon\begin{cases}(1+x)(1+y)+z=0,\\ (1+\frac{1}{x})(1+\frac{1}{y})+\frac{1}{z}=0.\end{cases}\] Eliminating \(z\), we see that \(W_{P}\) is isomorphic to the smooth curve in \((\mathbf{C}^{\times})^{2}\) given by \[C:(1+x)^{2}(1+y)^{2}=xy. \tag{5}\] Let \(E\) denote the closure of \(C\) in \(\mathbf{P}^{1}(\mathbf{C})\times\mathbf{P}^{1}(\mathbf{C})\). 
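Explicitly, the elimination of \(z\) goes as follows: the first equation gives \(z=-(1+x)(1+y)\), and substituting into the second yields
\[\Big(1+\frac{1}{x}\Big)\Big(1+\frac{1}{y}\Big)=-\frac{1}{z}=\frac{1}{(1+x)(1+y)},\qquad\text{while}\qquad\Big(1+\frac{1}{x}\Big)\Big(1+\frac{1}{y}\Big)=\frac{(1+x)(1+y)}{xy},\]
so that \((1+x)^{2}(1+y)^{2}=xy\), which is (5).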
We view \(E\) as a smooth projective curve defined over \(\mathbf{Q}\). It turns out that \(E\) is isomorphic to an elliptic curve of conductor \(15\)[21, (4.2)]. The PARI/GP commands E = ellfromeqn((1+x)^2*(1+y)^2-x*y) ellidentify(ellinit(E)) confirm that \(E\) is isomorphic to the elliptic curve with Cremona label \(15a8\). On the other hand, we know that the modular curve \(X_{1}(15)\) is isomorphic to \(15a8\), since they are both elliptic curves of conductor \(15\), and the period lattice of \(X_{1}(15)\) can be computed using modular symbols, agreeing with that of \(15a8\). Note that Stevens's conjecture [28, Conjecture II] is known in this case by [28, Section 7]. In Proposition 4 below we give an explicit isomorphism \(X_{1}(15)\cong E\) (note that the proof does not rely on floating point computations). An important feature of this parametrisation is that the functions \(-x\) and \(-y\) correspond to modular units on \(X_{1}(15)\). This is crucially used in Section 4 to relate Lalin's class \(\xi_{E}\) and the modular classes \(\xi_{1}(a,b)\) from [8, Section 6]. Even more, we need the functions \(-x\) and \(-y\) to be of the form \(u_{1}(a,b,c,d)\), a class of modular units introduced in [8] and whose definition we now recall. Let \(N\geq 1\) be an integer. For any \(\boldsymbol{a}=(a_{1},a_{2})\in(\mathbf{Z}/N\mathbf{Z})^{2}/\pm 1\), \(\boldsymbol{a}\neq(0,0)\), we define \[\wp_{\boldsymbol{a}}(\tau)=\wp\Big{(}\tau;\frac{a_{1}\tau+a_{2}}{N}\Big{)} \qquad(\tau\in\mathbf{C},\,\mathrm{Im}(\tau)>0),\] where \(\wp(\tau;z)\) is the Weierstrass function. The function \(\wp_{\boldsymbol{a}}\) is a modular form of weight \(2\) on the principal congruence group \(\Gamma(N)\). For any distinct elements \(\boldsymbol{a},\boldsymbol{b},\boldsymbol{c},\boldsymbol{d}\) of \((\mathbf{Z}/N\mathbf{Z})^{2}/\pm 1\), we then define \(u(\boldsymbol{a},\boldsymbol{b},\boldsymbol{c},\boldsymbol{d})\) as the cross-ratio \([\wp_{\boldsymbol{a}},\wp_{\boldsymbol{b}},\wp_{\boldsymbol{c}},\wp_{ \boldsymbol{d}}]\). This is a modular unit on \(\Gamma(N)\). For distinct elements \(a,b,c,d\) of \((\mathbf{Z}/N\mathbf{Z})/\pm 1\), we use the shortcut \(u_{1}(a,b,c,d)=u((0,a),(0,b),(0,c),(0,d))\), which is a modular unit on \(\Gamma_{1}(N)\). These units are defined over \(\mathbf{Q}\). The properties of \(u(\boldsymbol{a},\boldsymbol{b},\boldsymbol{c},\boldsymbol{d})\) needed in this article can be found in [8, Section 3]. In the following proposition, we take \(N=15\). **Proposition 4**.: _The curve \(E\) is parametrised by the following modular units on \(\Gamma_{1}(15)\):_ \[x(\tau)=-u_{1}(1,2,3,7)(\tau),\qquad y(\tau)=-u_{1}(2,4,6,1)(\tau). \tag{6}\] _Moreover, the map \(\tau\mapsto(x(\tau),y(\tau))\) induces an isomorphism \(\varphi:X_{1}(15)\xrightarrow{\tilde{u}}E\) defined over \(\mathbf{Q}\)._ Proof.: Let us show that \(u=-u_{1}(1,2,3,7)\) and \(v=-u_{1}(2,4,6,1)\) satisfy \((1+u)^{2}(1+v)^{2}=uv\). For this we may replace \(u\) and \(v\) by their transforms under the Atkin-Lehner involution \(W_{15}:\tau\mapsto-1/15\tau\) on \(X_{1}(15)\), as this does not affect the equation. The units \(\tilde{u}=u\circ W_{15}\) and \(\tilde{v}=v\circ W_{15}\) can be expressed in terms of Siegel units of level \(15\) using [8, eq. 
(6)]: \[\begin{split}\tilde{u}&=-\frac{\tilde{g}_{2}\tilde{g}_{4}}{\tilde{g}_{1}\tilde{g}_{7}}=-1-q+q^{4}+q^{5}-q^{7}+O(q^{8}),\\ \tilde{v}&=-\frac{\tilde{g}_{4}\tilde{g}_{7}}{\tilde{g}_{1}\tilde{g}_{2}}=-q^{-2}-q^{-1}-2-2q-2q^{2}-2q^{3}-2q^{4}+O(q^{5})\end{split} \tag{7}\] where, for \(a\in\mathbf{Z}/N\mathbf{Z}\), \(a\neq 0\), \[\tilde{g}_{a}(\tau)=q^{NB_{2}(\hat{a}/N)/2}\prod_{\begin{subarray}{c}n\geq 1\\ n\equiv a\bmod N\end{subarray}}(1-q^{n})\prod_{\begin{subarray}{c}n\geq 1\\ n\equiv-a\bmod N\end{subarray}}(1-q^{n})\qquad(q=e^{2\pi i\tau}).\] Here \(B_{2}(t)=t^{2}-t+\frac{1}{6}\) is the Bernoulli polynomial and \(\hat{a}\) is the lift of \(a\) in \(\{1,\dots,N-1\}\). We are now going to compute the divisors of \(\tilde{u}\) and \(\tilde{v}\). To this end, we recall the description of the cusps of the modular curve \(X_{1}(N)\). There is a bijection [14, Example 9.1.3] \[\{\text{cusps of }X_{1}(N)(\mathbf{C})\}\xrightarrow{\;\sim\;}\{(c,d):c\in\mathbf{Z}/N\mathbf{Z},\,d\in(\mathbf{Z}/(c,N)\mathbf{Z})^{\times}\}/\pm 1\] which associates to a cusp \(\gamma\infty\) with \(\gamma\in\operatorname{SL}_{2}(\mathbf{Z})\), the class of the bottom row \((c,d)\) of \(\gamma\). Moreover, by [14, Section 9.3, p. 79], the Galois action on the cusps is described as follows: for \(\sigma\in\operatorname{Aut}(\mathbf{C})\), we have \(\sigma\cdot(c,d)=(c,\chi(\sigma)d)\), where \(\chi(\sigma)\in(\mathbf{Z}/N\mathbf{Z})^{\times}\) is characterised by \(\sigma(e^{2\pi i/N})=e^{2\pi i\chi(\sigma)/N}\). As a consequence, a complete set of representatives of the Galois orbits is provided by the cusps \(\frac{1}{k}=\left(\begin{smallmatrix}1&0\\ k&1\end{smallmatrix}\right)\infty\) with \(0\leq k\leq\lfloor\frac{N}{2}\rfloor\). Now we can compute the divisor of \(u_{1}(a,b,c,d)\) for distinct \(a,b,c,d\in(\mathbf{Z}/N\mathbf{Z})/\pm 1\) as follows. Since this unit is defined over \(\mathbf{Q}\), it suffices to determine its order of vanishing at the cusps \(1/k\) just described. By [8, Proposition 3.6], we have \[u_{1}(a,b,c,d)|\begin{pmatrix}1&0\\ k&1\end{pmatrix}=u((ka,a),(kb,b),(kc,c),(kd,d)).\] The order of vanishing of this unit at \(\infty\) is deduced from the expression of \(u(\boldsymbol{a},\boldsymbol{b},\boldsymbol{c},\boldsymbol{d})\) in terms of Siegel units [8, Proposition 3.7], taking into account that it should be computed with respect to the uniformising parameter \(q^{(k,N)/N}\). Applying this in our situation, we obtain \[\operatorname{div}(u)=-2[1/2]+2[1/7],\qquad\qquad\operatorname{div}(v)=-2[0]+2[1/4].\] These cusps are rational and we see that \(F=(1+u)^{2}(1+v)^{2}-uv\) has poles of order at most \(4\) at \(0\) and \(1/2\), and is regular elsewhere. Moreover, we compute from (7) that \(F(-1/15\tau)=O(q^{5})\) when \(\operatorname{Im}(\tau)\to+\infty\). Therefore \(F\) vanishes at order \(\geq 5\) at \(0\), and consequently \(F=0\). It remains to show that \(\varphi:X_{1}(15)\to E\) is an isomorphism. Since \(X_{1}(15)\) and \(E\) are smooth, it suffices to check that \(\varphi\) is a birational map. We know that \(u\) has degree \(2\) as a function on \(X_{1}(15)\), while \(x\) has degree \(2\) as a function on \(E\). It follows that \(\varphi\) is birational.
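The identity \((1+\tilde{u})^{2}(1+\tilde{v})^{2}=\tilde{u}\tilde{v}\) underlying the proof can also be re-checked numerically to any finite \(q\)-precision directly from the product formula for \(\tilde{g}_{a}\). The following Python sketch is an independent re-derivation of the expansions (7) and of this identity; it is not the PARI/GP code accompanying the paper.

```python
# Re-check of (7) and of (1+u~)^2 (1+v~)^2 = u~ v~ from the product formula for g~_a.
from fractions import Fraction

N, PREC = 15, 40

def mul(f, g):
    """Product of two power series (coefficient lists), truncated at PREC."""
    h = [0] * PREC
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g[:PREC - i]):
                h[i + j] += a * b
    return h

def inv(f):
    """Inverse of a power series with constant term +1 or -1, truncated at PREC."""
    g = [0] * PREC
    g[0] = f[0]
    for n in range(1, PREC):
        g[n] = -f[0] * sum(f[k] * g[n - k] for k in range(1, n + 1))
    return g

def P(a):
    """prod_{n>=1, n = +-a mod N} (1 - q^n), truncated at PREC."""
    f = [1] + [0] * (PREC - 1)
    for n in range(1, PREC):
        if n % N in (a % N, (-a) % N):
            factor = [1] + [0] * (PREC - 1)
            factor[n] = -1
            f = mul(f, factor)
    return f

def lead_exp(a):
    """Leading exponent N*B_2(a/N)/2 of g~_a, as an exact rational."""
    t = Fraction(a % N, N)
    return Fraction(N, 2) * (t * t - t + Fraction(1, 6))

# The fractional prefactors of u~ = -g~_2 g~_4/(g~_1 g~_7) and v~ = -g~_4 g~_7/(g~_1 g~_2)
# combine to the integer powers q^0 and q^-2 respectively.
assert lead_exp(2) + lead_exp(4) - lead_exp(1) - lead_exp(7) == 0
assert lead_exp(4) + lead_exp(7) - lead_exp(1) - lead_exp(2) == -2

u = [-c for c in mul(mul(P(2), P(4)), inv(mul(P(1), P(7))))]   # u~
w = [-c for c in mul(mul(P(4), P(7)), inv(mul(P(1), P(2))))]   # q^2 * v~

print(u[:8])   # [-1, -1, 0, 0, 1, 1, 0, -1]   -> matches (7)
print(w[:7])   # [-1, -1, -2, -2, -2, -2, -2]  -> matches (7) after the shift by q^-2

# Check q^4 * [(1+u~)^2 (1+v~)^2 - u~ v~] = (1+u)^2 (q^2 + w)^2 - u * q^2 * w = 0.
one_plus_u = [u[0] + 1] + u[1:]
q2_plus_w = w[:]
q2_plus_w[2] += 1
lhs = mul(mul(one_plus_u, one_plus_u), mul(q2_plus_w, q2_plus_w))
rhs = mul(u, [0, 0] + w[:PREC - 2])
print(lhs[:PREC - 4] == rhs[:PREC - 4])   # True: all computed coefficients agree
```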
## 3. The weight \(3\) complex of the modular curve \(Y_{1}(N)\) Goncharov has defined in [16] polylogarithmic complexes which are expected to compute the motivic cohomology of arbitrary fields. We define in this section a modular complex \(\mathcal{C}_{N}(3)\), which is a subcomplex of the weight \(3\) polylogarithmic complex attached to the modular curve \(Y_{1}(N)\). It is generated (in a suitable sense) by the Siegel units and the modular units \(u_{1}(a,b,c,d)\) from Section 2. Our construction can be seen as a weight \(3\) analogue of the weight \(2\) Euler complex \(\mathbb{E}_{N}^{\bullet}\) introduced by Goncharov in [18]. We also explain how to manipulate \(\mathcal{C}_{N}(3)\) using PARI/GP. The constructions below work with no more effort for the modular curve \(Y(N)\) with full level \(N\) structure, using parameters in \((\mathbf{Z}/N\mathbf{Z})^{2}\) instead of \(\mathbf{Z}/N\mathbf{Z}\). However we have not implemented it, as the case of \(Y_{1}(N)\) suffices for our application. We briefly recall Goncharov's polylogarithmic complexes in weight \(2\) and \(3\). Let \(F\) be any field. Define \(B_{2}(F)\) to be the quotient of \(\mathbf{Q}[F^{\times}\backslash\{1\}]\) by the subspace generated by the \(5\)-term relations [16, Section 1.8]. The group \(B_{3}(F)\) is defined similarly as the quotient of \(\mathbf{Q}[F^{\times}\backslash\{1\}]\) by explicit relations [16, Section 1.8], whose definition will not be needed here. For \(x\in F^{\times}\backslash\{1\}\) and \(n\in\{2,3\}\), we denote by \(\{x\}_{n}\) the image of the generator \([x]\) in \(B_{n}(F)\). Then the complex \(\Gamma(F,2)\), in degrees \(1\) and \(2\), is defined as \[\Gamma(F,2):\qquad B_{2}(F)\longrightarrow\Lambda^{2}F^{\times}\otimes\mathbf{Q},\qquad\{x\}_{2}\longmapsto(1-x)\wedge x,\] and the complex \(\Gamma(F,3)\), in degrees \(1\) to \(3\), is defined as \[\Gamma(F,3):\qquad B_{3}(F)\longrightarrow B_{2}(F)\otimes F^{\times}\otimes\mathbf{Q}\longrightarrow\Lambda^{3}F^{\times}\otimes\mathbf{Q},\] \[\{x\}_{3}\longmapsto\{x\}_{2}\otimes x,\qquad\qquad\{x\}_{2}\otimes y\longmapsto(1-x)\wedge x\wedge y.\] Goncharov conjectures that \(H^{i}(\Gamma(F,n))\) is isomorphic to \(H^{i}_{\mathcal{M}}(F,\mathbf{Q}(n))\). In the case \(F\) is the function field of a smooth curve \(Y\) over a field \(k\), these complexes are endowed with residue maps \(\Gamma(F,n)\to\Gamma(k(x),n-1)[-1]\) for every closed point \(x\in Y\). Goncharov then defines the complex \(\Gamma(Y,n)\) as the simple complex (cone) of the morphism of complexes \(\Gamma(F,n)\to\bigoplus_{x\in Y}\Gamma(k(x),n-1)[-1]\), and he conjectures that \(H^{i}(\Gamma(Y,n))\) is isomorphic to \(H^{i}_{\mathcal{M}}(Y,\mathbf{Q}(n))\) [16, Section 1.15(b)]. We will consider these complexes in the case \(Y\) is the modular curve \(Y_{1}(N)\), and \(F\) is its function field. We will see, in particular, that they have natural subcomplexes built out of modular units. **Definition 5**.: Fix an integer \(N\geq 1\). We introduce the following sets of modular units on \(Y_{1}(N)\): * \(U_{1}\) consists of the Siegel units \(g_{0,a}\), \(a\in(\mathbf{Z}/N\mathbf{Z})\backslash\{0\}\), in \(\mathcal{O}(Y_{1}(N))^{\times}\otimes\mathbf{Q}\); * \(U_{2}\) consists of the modular units \(u_{1}(a,b,c,d)\) in \(\mathcal{O}(Y_{1}(N))^{\times}\), where \(a,b,c,d\) are distinct elements of \((\mathbf{Z}/N\mathbf{Z})/\pm 1\). Moreover, we associate to them the following spaces: * \(\langle U_{1}\rangle\) is the \(\mathbf{Q}\)-span of \(U_{1}\) in \(F^{\times}\otimes\mathbf{Q}\); * \(\langle U_{2}\rangle\) is the \(\mathbf{Q}\)-span of \(\{u\}_{2}\), \(u\in U_{2}\), in \(B_{2}(F)\); * \(\langle U_{2}\rangle_{3}\) is the \(\mathbf{Q}\)-span of \(\{u\}_{3}\), \(u\in U_{2}\), in \(B_{3}(F)\).
With these definitions, the weight \(2\) modular complex can be defined as \[\mathcal{C}_{N}(2):\qquad\langle U_{2}\rangle\longrightarrow\Lambda^{2}\langle U_{1}\rangle,\qquad\{u\}_{2}\longmapsto(1-u)\wedge u.\] This complex is well-defined because \(U_{2}\) is contained in \(\langle U_{1}\rangle\) by [8, Proposition 3.8], and \(U_{2}\) is stable under \(u\mapsto 1-u\) from the definition of \(u_{1}(a,b,c,d)\) as a cross-ratio. It would be interesting to compare \(\mathcal{C}_{N}(2)\) with the Euler complex \(\mathbb{E}_{N}^{\bullet}\) defined by Goncharov in [18, Section 2.5]. We are now ready to introduce a version of the weight \(3\) modular complex. **Definition 6**.: The complex \(\mathcal{C}_{N}(3)\) is the following subcomplex of \(\Gamma(F,3)\) in degrees \(1\) to \(3\): \[\mathcal{C}_{N}(3):\qquad\langle U_{2}\rangle_{3}\longrightarrow\langle U_{2}\rangle\otimes\langle U_{1}\rangle\longrightarrow\Lambda^{3}\langle U_{1}\rangle.\] We warn the reader that the group \(\langle U_{2}\rangle_{3}\) in degree \(1\) may not be the right one. Indeed, the unit \(u_{1}(a,b,c,d)\) is by definition a cross-ratio, hence is a natural argument for the dilogarithm, but _a priori_ not for the trilogarithm. However, the complex \(\mathcal{C}_{N}(3)\) will suffice for our needs. Since the construction of \(\mathcal{C}_{N}(3)\) involves only modular units, the elements of \(\langle U_{2}\rangle_{3}\), \(\langle U_{2}\rangle\otimes\langle U_{1}\rangle\) and \(\Lambda^{3}\langle U_{1}\rangle\) have trivial residues at every point of \(Y_{1}(N)\). In particular, \(\mathcal{C}_{N}(3)\) embeds as a subcomplex of \(\Gamma(Y_{1}(N),3)\), and we have natural maps in cohomology \(H^{i}(\mathcal{C}_{N}(3))\to H^{i}(\Gamma(Y_{1}(N),3))\) in degree \(i\in\{1,2,3\}\). The case of interest to us is \(i=2\). We have implemented part of the complex \(\mathcal{C}_{N}(3)\) in PARI/GP, with the specific aim of comparing cocycles in degree \(2\). Firstly, the following lemma gives a natural way to represent modular units in \(\langle U_{1}\rangle\). **Lemma 7**.: _A basis of \(\langle U_{1}\rangle\) is given by the Siegel units \(g_{0,a}\) with \(1\leq a\leq\lfloor N/2\rfloor\)._ Proof.: We have \(g_{0,-a}=g_{0,a}\) in \(F^{\times}\otimes\mathbf{Q}\), and by [29], the units \(g_{0,a}\) with \(1\leq a\leq\lfloor N/2\rfloor\) form a basis of \((\mathcal{O}(Y_{1}(N))^{\times}/\mathbf{Q}^{\times})\otimes\mathbf{Q}\). Each unit in \(U_{2}\) can be written in the basis of Lemma 7 using [8, Proposition 3.8]. Note that no computation of divisor is needed here, thanks to this choice of basis. We actually need to determine \(U_{2}\) as a set, and so to check whether two given units \(u_{1}(a,b,c,d)\) and \(u_{1}(a^{\prime},b^{\prime},c^{\prime},d^{\prime})\) are equal. We remark that the leading coefficient of \(u_{1}(a,b,c,d)\) at the cusp \(0\) is equal to \(1\) by the discussion after [8, Proposition 3.8]. Combining this with Lemma 7, we see that two units are equal if and only if their coordinates in the basis of \(\langle U_{1}\rangle\) are equal. We now consider the free vector space \(\mathbf{Q}[U_{2}]\), and we quotient it by the following subspaces, encoding the relations between the symbols \(\{u_{1}(a,b,c,d)\}_{2}\). From the definition of \(u_{1}(a,b,c,d)\) as a cross-ratio, the symmetric group \(\mathcal{S}_{4}\) acts on \(U_{2}\) by permuting the indices, and this action factors through \(\mathcal{S}_{3}\).
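A quick check of this last point (our sketch; the precise cross-ratio convention is immaterial, we fix one below): permuting the four entries of a cross-ratio \(u\) produces only the six values \(u\), \(1-u\), \(1/u\), \(1/(1-u)\), \(u/(u-1)\), \((u-1)/u\), and each transposition sends \(u\) to \(1/u\), \(1-u\) or \(u/(u-1)\), all of which have class \(-\{u\}_{2}\) in \(B_{2}(F)\); this is what makes the \(\mathcal{S}_{4}\)-action factor through \(\mathcal{S}_{3}\) and leads to the antisymmetry relation (8) below.

```python
# Exact check on four generic rational points that the 24 orderings of a
# cross-ratio give only 6 values, namely the standard S_3 orbit of u.
from itertools import permutations
from fractions import Fraction

def cross_ratio(p, q, r, s):
    return (p - r) * (q - s) / ((p - s) * (q - r))

pts = (Fraction(0), Fraction(1), Fraction(3), Fraction(10))
values = {cross_ratio(*perm) for perm in permutations(pts)}
u = cross_ratio(*pts)
print(len(values))                                                # 6
print(values == {u, 1 - u, 1/u, 1/(1 - u), u/(u - 1), (u - 1)/u})  # True
```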
Moreover, because of the relations \(\{1/u\}_{2}=\{1-u\}_{2}=-\{u\}_{2}\) in \(B_{2}(F)\) [30, VI, Lemma 5.4], we have the antisymmetry property: \[\{u_{1}(a_{\sigma(1)},a_{\sigma(2)},a_{\sigma(3)},a_{\sigma(4)})\}_{2}=\varepsilon(\sigma)\{u_{1}(a_{1},a_{2},a_{3},a_{4})\}_{2}\qquad(\sigma\in\mathcal{S}_{4}), \tag{8}\] for all distinct parameters \(a_{i}\) in \((\mathbf{Z}/N\mathbf{Z})/\pm 1\), where \(\varepsilon(\sigma)=\pm 1\) is the signature. It thus suffices to consider those parameters satisfying \(0\leq a<b<c<d\leq\lfloor N/2\rfloor\). The elements \(\{u_{1}(a,b,c,d)\}_{2}\) are also subject to the \(5\)-term relations [8, Lemma 4.7]: \[\sum_{j\in\mathbf{Z}/5\mathbf{Z}}\{u_{1}(a_{j},a_{j+1},a_{j+2},a_{j+3})\}_{2}=0\qquad\text{in }B_{2}(F), \tag{9}\] for any family \((a_{j})_{j\in\mathbf{Z}/5\mathbf{Z}}\) of distinct elements of \((\mathbf{Z}/N\mathbf{Z})/\pm 1\). We denote by \(R_{2}\) the subspace of \(\mathbf{Q}[U_{2}]\) generated by the antisymmetry relations (8) and the \(5\)-term relations (9). Finally, we denote by \(Q\) the space of degree \(2\) coboundaries in the complex \(\mathcal{C}_{N}(3)\), namely, the subspace of \(\mathbf{Q}[U_{2}]\otimes\langle U_{1}\rangle\) generated by the symbols \([u]\otimes u\) with \(u\in U_{2}\). In practice, in order to reduce the size of the objects, we only compute: * a set \(U_{2}^{\prime}\) of representatives of the quotient \(U_{2}/\mathcal{S}_{3}\); * the subspace \(R_{2}^{\prime}\) of \(\mathbf{Q}[U_{2}^{\prime}]\) generated by the \(5\)-term relations; * the subspace \(Q^{\prime}\) of \(\mathbf{Q}[U_{2}^{\prime}]\otimes\langle U_{1}\rangle\) of degree \(2\) coboundaries. The corresponding scripts are contained in the file K4-modular-complex.gp from [9]. They can be applied in the following way. Say we have two degree \(2\) cocycles \(\xi\) and \(\xi^{\prime}\) in \(\Gamma(Y_{1}(N),3)\). Assume that they are both linear combinations of symbols \(\{u_{1}(a,b,c,d)\}_{2}\otimes g_{0,x}\). We may then represent \(\xi-\xi^{\prime}\) by an element of \(\mathbf{Q}[U_{2}]\otimes\langle U_{1}\rangle\), and we check whether this element belongs to the subspace \(R_{2}\otimes\langle U_{1}\rangle+Q\). If so, then we can deduce that \(\xi\) and \(\xi^{\prime}\) are cohomologous, and thus have the same image in \(K_{4}^{(3)}(Y_{1}(N))\) under De Jeu's map [8, Theorems 5.3 and 5.4]. If \(\xi-\xi^{\prime}\) does not belong to the subspace, we cannot conclude anything, as \(R_{2}\) and \(Q\) may not contain all the relations in the respective groups. The linear system involved in the above computation has size \(O(N^{5})\times O(N^{6})\). Experimentally, we have found that the cardinality of \(U_{2}\) for \(N=p\) prime is \((p^{2}-1)(p^{2}-25)/192\), which is smaller by a factor of about \(3\) than what we could expect, namely \(6\binom{(p+1)/2}{4}\). Furthermore, it seems that the dimension of \(\mathbf{Q}[U_{2}]/R_{2}\) is equal to \((p-1)(p-5)/12\), which is also the number of triples \((a,b,c)\) with \(0<a<b<c<p\) and \(a+b+c\equiv 0\mod p\), where \(2b<p\); see [22, Sequence A242090]. If true, there should be a way to bypass the step of quotienting by \(R_{2}\). This would result in a much smaller linear system for the comparison of cocycles. ## 4.
The Lalin class Recall that Lalin's theorem (Theorem 2) expresses the Mahler measure of \((1+x)(1+y)+z\) as the regulator integral of the following cocycle in the weight \(3\) Goncharov complex of \(E\): \[\xi_{E}=\{-x\}_{2}\otimes y-\{-y\}_{2}\otimes x.\] Our aim in this section is to relate \(\xi_{E}\) to the classes \(\xi_{1}(a,b)\) on \(X_{1}(15)\), which were introduced in [8, Section 6]. This is a purely algebraic computation making use of our implementation of the weight \(3\) complex of \(X_{1}(15)\) explained in Section 3. We first pull back \(\xi_{E}\) to the modular curve \(X_{1}(15)\) using the modular parametrisation \(\varphi\). Using Proposition 4 and its proof, we have in the degree \(2\) cohomology of \(\Gamma(Y_{1}(15),3)\) \[\varphi^{*}\xi_{E}=\{u_{1}(1,2,3,7)\}_{2}\otimes\Big{(}\frac{g_{4}g_{7}}{g_{1 }g_{2}}\Big{)}-\{u_{1}(2,4,6,1)\}_{2}\otimes\Big{(}\frac{g_{2}g_{4}}{g_{1}g_{ 7}}\Big{)}, \tag{10}\] with the shortcut \(g_{k}=g_{0,k}\) for \(k\in\mathbf{Z}/15\mathbf{Z}\). Let us denote by \(\tilde{\xi}_{15}\) the cocycle in the right-hand side of (10). Lalin has shown that the cocycle \(\tilde{\xi}_{E}\) has trivial residues [21, Section 4.1, p. 213], hence \(\tilde{\xi}_{15}\) has trivial residues at the cusps. The next task is to express \(\tilde{\xi}_{15}\) in terms of the cocycles \(\tilde{\xi}_{1}(a,b)\) with \(a,b\in\mathbf{Z}/15\mathbf{Z}\). We do this using the modular complex \(\mathcal{C}_{15}(3)\) from Section 3. Using the function find_xi1ab from K4-modular-complex.gp[9], we detect the following simple expression for \(\tilde{\xi}_{15}\). **Proposition 8**.: _We have the equality of cocycles \(\tilde{\xi}_{15}=-20\tilde{\xi}_{1}(1,4)+\Xi\), where \(\Xi\) is a \(\mathbf{Q}\)-linear combination of coboundaries \(\{u\}_{2}\otimes u\) with \(u\in U_{2}\). In particular, we have \(\varphi^{*}(\xi_{E})=-20\xi_{1}(1,4)\)._ ## 5. The integration path In Theorem 2, the integration path \(\gamma_{E}=\partial\Gamma\) is a closed path in \(E\), and we would like to express it in terms of modular symbols on \(X_{1}(15)\), via the modular parametrisation from Section 2. This is a crucial ingredient in the computation of the regulator integral on \(E\). We will do this carefully in order to certify the relation (Proposition 9). Lalin [21, Section 4.1] has shown that \(\gamma_{E}\) is a generator of \(H_{1}(E,\mathbf{Z})^{+}\), where \((\cdot)^{+}\) denotes the subgroup of invariants under complex conjugation. So we first search for a generator \(\gamma_{15}\) of \(H_{1}(X_{1}(15),\mathbf{Z})^{+}\). We do this with the help of SageMath [25]; see the notebook ModularSymbolGamma15.ipynb in [9]. For any \(g\in\mathrm{SL}_{2}(\mathbf{Z})\), denote by \([g]=\{g0,g\infty\}\) the associated Manin symbol, viewed in the relative homology group \(H_{1}(X_{1}(15),\{\mathrm{cusps}\},\mathbf{Z})\). We obtain \[\gamma_{15}=2\left[\begin{pmatrix}1&9\\ 2&19\end{pmatrix}\right]-\left[\begin{pmatrix}0&-1\\ 1&11\end{pmatrix}\right]-\left[\begin{pmatrix}0&-1\\ 1&4\end{pmatrix}\right]+2\left[\begin{pmatrix}0&-1\\ 1&2\end{pmatrix}\right]. \tag{11}\] We therefore have \(\gamma_{E}=\pm\varphi_{*}(\gamma_{15})\). The precise sign is not strictly needed in what follows, as the Mahler measure is a positive real number and the final identity will fix the sign for us. However, we want to sketch a method to determine the sign rigorously, as it could be useful in more general situations, where the integration path \(\gamma\) need not be a generator of the homology group. 
In such a scenario, one wishes to ascertain an identity of the form \(\gamma=c\cdot\varphi_{*}(\gamma_{0})\), where \(\varphi\) is the modular parametrisation, \(\gamma_{0}\) is a modular symbol, and \(c\in\mathbf{Z}\) is to be determined. The idea is to integrate an invariant differential form over the cycles to be compared. By [21, Section 4.1], an invariant differential form on \(E\) is given by \[\omega_{E}:=\frac{-dx}{2(x+1)^{2}(y+1)-x}.\] Using (7), we can compute the Fourier expansion of the pull-back of \(\omega_{E}\) to \(X_{1}(15)\): \[W_{15}^{*}(\varphi^{*}\omega_{E})=-(q-q^{2}-q^{3}+O(q^{4}))\frac{dq}{q}.\] A basis of \(\Omega^{1}(X_{1}(15))\) is given by \(\omega_{15}:=2\pi if_{15}(\tau)d\tau\), where \(f_{15}=q-q^{2}-q^{3}+O(q^{4})\) is the newform of weight \(2\) on \(\Gamma_{1}(15)\). Therefore \(W_{15}^{*}(\varphi^{*}\omega_{E})=-\omega_{15}\). Moreover, the involution \(W_{15}\) has a fixed point \(\tau=i/\sqrt{15}\) in the upper half-plane, so it must act on the complex torus underlying \(X_{1}(15)\) as \(z\mapsto z_{0}-z\) for some \(z_{0}\) (it cannot be a translation). It follows that \(W_{15}\) acts as \(-1\) on \(\Omega^{1}(X_{1}(15))\), and we conclude that \(\varphi^{*}\omega_{E}=\omega_{15}\). Now let us integrate the forms \(\omega_{E}\) and \(\omega_{15}\), and compare the signs of the integrals. Following [21, Section 4.1], the path \(\gamma_{E}\) is described using polar coordinates \(x=e^{i\theta}\), \(y=e^{i\psi}\) with \(\theta,\psi\in\left[-\pi,\pi\right]\), and is given by the equation \(\cos(\theta/2)\cos(\psi/2)=1/4\). Since the orientation of the Deninger chain \(\Gamma\) is induced by the product orientation of \([-\pi,\pi]^{2}\), its boundary \(\gamma_{E}\) is oriented counterclockwise in this square (see Figure 1). We can use the symmetries of \(\gamma_{E}\) to reduce the integration path. For any automorphism \(\sigma\) of \(E\) defined over \(\mathbf{R}\), we have \(\sigma^{*}\omega_{E}=\varepsilon(\sigma)\omega_{E}\), where \(\varepsilon(\sigma)=1\) if \(\sigma\) preserves the orientation of \(E(\mathbf{R})\), and \(\varepsilon(\sigma)=-1\) otherwise. Equivalently, \(\varepsilon(\sigma)=1\) if and only if \(\sigma=\mathrm{id}\) or \(\sigma\) has no fixed point. Applying this with the symmetries \((x,y)\mapsto(1/x,y)\) and \((x,y)\mapsto(x,1/y)\), which reverse the orientation of \(E(\mathbf{R})\) as well as that of \(\gamma_{E}\), we obtain that \(\int_{\gamma_{E}}\omega_{E}\) is \(4\) times the integral over the path \(\gamma\) pictured in Figure 1. After some computation, we get \[\int_{\gamma_{E}}\omega_{E}=4\int_{\gamma}\omega_{E}=4\int_{0}^{2\arccos(1/4 )}\frac{d\theta}{\sqrt{16\cos^{2}(\theta/2)-1}}>0. \tag{12}\] Now with the modular curve \(X_{1}(15)\), we wish to determine the sign of \(\int_{\gamma_{15}}\omega_{15}\). For this, consider the linear map \(H_{1}(X_{1}(15),\{\mathrm{cusps}\},\mathbf{Z})\to H_{1}(X_{1}(15),\mathbf{Q})\) provided by the Manin-Drinfeld theorem [15]. Again with SageMath, we compute that the image of \(\{0,\infty\}\) is equal to \(-\frac{1}{16}\gamma_{15}\) (see ModularSymbolGamma15.ipynb [9]). It follows that \[\int_{\gamma_{15}}\omega_{15}=-16\int_{0}^{\infty}\omega_{15}=16L(f_{15},1)>0. \tag{13}\] That \(L(f_{15},1)\) is positive can be ascertained without much effort using the rapidly convergent series \(L(f_{15},1)=2\sum_{n=1}^{\infty}a_{n}e^{-2\pi n/\sqrt{15}}/n\)[12, Proposition 7.5.8]. 
Namely, one may use the bound \(|a_{n}|\leq n\) for \(n\geq 1\), which follows from the Hasse bound on \(E\) and the inspection of the coefficients \(a_{n}\) for small \(n\). Combining (12) and \[\int_{\varphi_{*}(\gamma_{15})}\omega_{E}=\int_{\gamma_{15}}\varphi^{*}\omega_ {E}=\int_{\gamma_{15}}\omega_{15}>0,\] we come to the following conclusion. Figure 1. The Deninger chain \(\Gamma\), its boundary \(\gamma_{E}\) and the path \(\gamma\). **Proposition 9**.: _We have \(\gamma_{E}=\varphi_{*}(\gamma_{15})\)._ To be fully accurate (and in order to handle more general situations), ascertaining this equality requires to compute numerically the integrals (12) and (13). And since the ratio of these integrals is known to be an integer, it suffices to compute them with rigorous error bounds. The integral (12) is a complete elliptic integral which can be dealt with the Arb library [19, 20]. On the other hand, (13) involves integrating a modular form over a modular symbol. We can do it in the present situation thanks to the rapidly convergent series. In general, although PARI/GP [23] and Magma [2] can evaluate such integrals efficiently, we are not aware of implementations that prove error bounds for them. ## 6. Final computation We denote by \(r_{3}(2)\) the Goncharov regulator map in degree \(2\) for the weight \(3\) complex of a curve (see [17]). It sends a degree \(2\) cocycle to an explicit closed \(1\)-form on this curve. By Proposition 9, we have \[\int_{\gamma_{E}}r_{3}(2)(\xi_{E})=\int_{\varphi_{*}(\gamma_{15})}r_{3}(2)(\xi _{E})=\int_{\gamma_{15}}r_{3}(2)(\varphi^{*}\xi_{E})=\int_{\gamma_{15}}r_{3}(2 )(\tilde{\xi}_{15}). \tag{14}\] Note that the differential form \(r_{3}(2)(\tilde{\xi}_{15})\) is defined only on the open modular curve \(Y_{1}(15)\). However, it has trivial residues at the cusps since the same is true for \(\tilde{\xi}_{15}\), see Section 4. We may therefore compute the integral by choosing the representative of \(\gamma_{15}\) given by (11). Note that this integral involves cusps but it is absolutely convergent by [8, Corollary 7.3]. The technical details of this procedure are explained at the end of [8, Section 8]. **Lemma 10**.: _Let \(u\) be a modular unit on \(X_{1}(N)\) such that \(1-u\) is also a modular unit. For any two cusps \(\alpha\neq\beta\) in \(\mathbf{P}^{1}(\mathbf{Q})\), we have \(\int_{\alpha}^{\beta}r_{3}(2)(\{u\}_{2}\otimes u)=\hat{\mathcal{L}}_{3}(u( \beta))-\hat{\mathcal{L}}_{3}(u(\alpha))\), where \(\hat{\mathcal{L}}_{3}:\mathbf{P}^{1}(\mathbf{C})\to\mathbf{R}\) is the single-valued trilogarithm defined in [17, Section 2.1]._ Proof.: By [17, Theorem 2.2], we have \[r_{3}(2)(\{u\}_{2}\otimes u)=r_{3}(2)(\delta(\{u\}_{3}))=dr_{3}(1)(\{u\}_{3}) =d\hat{\mathcal{L}}_{3}(u).\qed\] Since the path \(\gamma_{15}\) is closed, Lemma 10 implies that \(\int_{\gamma_{15}}r_{3}(2)(\{u\}_{2}\otimes u)=0\) for any \(u\in U_{2}\). Using Proposition 8, the computation (14) continues as \[\int_{\gamma_{E}}r_{3}(2)(\xi_{E})=-20\int_{\gamma_{15}}r_{3}(2)(\tilde{\xi}_ {1}(1,4)). \tag{15}\] We are now in position to apply the main result of [11], which computes \[\mathcal{G}(\boldsymbol{a},\boldsymbol{b})=\int_{0}^{\infty}r_{3}(2)(\tilde{ \xi}(\boldsymbol{a},\boldsymbol{b}))\qquad(\boldsymbol{a},\boldsymbol{b}\in( \mathbf{Z}/N\mathbf{Z})^{2}),\] under the assumption that the coordinates of \(\boldsymbol{a}\), \(\boldsymbol{b}\) and \(\boldsymbol{a}+\boldsymbol{b}\) are non-zero. 
We may integrate along Manin symbols \([g]=\{g0,g\infty\}\) as well, noting that \[\int_{g0}^{g\infty}r_{3}(2)(\tilde{\xi}(\boldsymbol{a},\boldsymbol{b}))=\int_ {0}^{\infty}r_{3}(2)(\tilde{\xi}(\boldsymbol{a}g,\boldsymbol{b}g))=\mathcal{ G}(\boldsymbol{a}g,\boldsymbol{b}g)\qquad(g\in\mathrm{SL}_{2}(\mathbf{Z})).\] Recall also that \(\tilde{\xi}_{1}(a,b)=\tilde{\xi}((0,a),(0,b))\). Expanding (15), we get \[\int_{\gamma_{E}}r_{3}(2)(\xi_{E})=-20\big{(}2\mathcal{G}((2,4),(8,1))- \mathcal{G}((1,11),(4,14))-\mathcal{G}((1,4),(4,1))+2\mathcal{G}((1,2),(4,8)) \big{)}.\] The assumption on the coordinates of the parameters is satisfied, and [11, Theorem 1] gives \[\int_{\gamma_{E}}r_{3}(2)(\xi_{E})=\pi^{2}L^{\prime}(F,-1) \tag{16}\] with \[F =-8(G_{2,1}G_{8,-4}+G_{2,-1}G_{8,4})+4(G_{1,14}G_{4,-11}+G_{1,-14} G_{4,11})\] \[\quad+4(G_{1,1}G_{4,-4}+G_{1,-1}G_{4,4})-8(G_{1,8}G_{4,-2}+G_{1,-8 }G_{4,2}). \tag{17}\] Here \(G_{a,b}\) is a shortcut for the Eisenstein series \(G_{a,b}^{(1);15}\) defined in [11, Introduction] for arbitrary level \(N\) by \[G_{a,b}^{(1);N}(\tau)=a_{0}(G_{a,b}^{(1);N})+\sum_{\begin{subarray}{c}m,n\geq 1 \\ (m,n)\equiv(a,b)\bmod N\end{subarray}}q^{mn/N}-\sum_{\begin{subarray}{c}m,n \geq 1\\ (m,n)=(-a,b)\bmod N\end{subarray}}q^{mn/N}\qquad(a,b\in\mathbf{Z}/N\mathbf{Z}).\] In our situation the indices \(a,b\) are non-zero modulo \(15\), so that the constant terms \(a_{0}(G_{a,b})\) vanish. The functions \(G_{a,b}\) are Eisenstein series of weight \(1\) on \(\Gamma(15)\). Note that the products \(G_{\boldsymbol{x}}G_{\boldsymbol{y}}\) appearing in (17) are actually power series in \(q\), because \(x_{1}x_{2}+y_{1}y_{2}\) is divisible by \(15\) for each such product. It follows that \(F\) belongs to \(M_{2}(\Gamma_{1}(15))\). We have written a script K4-reg-Lvalue.gp [9] to automate the application of [11, Theorem 1] and compute the \(q\)-expansion of the resulting modular form to arbitrary precision. We find that \(F=-8f_{15}+O\big{(}q^{21}\big{)}\), where \(f_{15}\) is the newform associated to \(E\). Moreover, the Sturm bound for the space \(M_{2}(\Gamma_{1}(15))\) is equal to \(16\) (apply [27, Sturm's theorem, 9.4.1.2] with the group \(\Gamma=\pm\Gamma_{1}(15)\), which has index \(96\) in \(\operatorname{SL}_{2}(\mathbf{Z})\)). This means that if two modular forms \(F_{1}\) and \(F_{2}\) in this space satisfy \(F_{1}=F_{2}+O(q^{17})\), then \(F_{1}=F_{2}\). In our situation, this allows us to certify that \(F=-8f_{15}\). Using Theorem 2 and (16), the Mahler measure finally equals \[m(P)=\frac{1}{4\pi^{2}}\int_{\gamma_{E}}r_{3}(2)(\xi_{E})=\frac{1}{4\pi^{2}} \cdot\pi^{2}L^{\prime}(-8f_{15},-1)=-2L^{\prime}(E,-1).\] This concludes the proof of Theorem 3. ## Appendix. Tables of \(3\)-variable Mahler measures We would like to give here a list of conjectural identities for \(3\)-variable Mahler measures involving \(L(E,3)\) for several elliptic curves \(E\) over \(\mathbf{Q}\). It is possible that our methods can be applied to prove at least some of these identities. The success of the approach will depend very much on the modular parametrisation of the elliptic curve; in our case, Proposition 4 was crucial. This is similar to what happens for the \(2\)-variable Mahler measures, where the proofs using the Rogers-Zudilin method require the curve to be parametrised by modular units [10, Section 8.4 and Chapter 9]. 
Boyd and Rodriguez Villegas [3] discovered several identities of type \(m(P(x,y,z))=r\cdot L^{\prime}(E,-1)\) with \(r\in\mathbf{Q}^{\times}\) by looking at polynomials of the form \(P=A(x)+B(x)y+C(x)z\) where \(A\), \(B\), \(C\) are products of cyclotomic polynomials. Boyd found further examples in [4, 5]. We extended Boyd's search with \(A\), \(B\), \(C\) of degree up to \(5\) and found a few other examples, see Table 1 below (we do not claim to have spotted all identities for this range of \(A,B,C\)). Table 2 displays two Mahler measures which involve a combination of \(L(E,3)\) and \(\zeta(3)\) with polynomials \(P\) of the same type. Note that \(\zeta(3)\) terms also appear in [11, Theorem 1]. In the tables below, the curve \(E\) is given by its Cremona label, and the integer \(g\) is the genus of the Maillot variety \(W_{P}\) (or a component of it) whose Jacobian has \(E\) as an isogeny factor. We also looked at polynomials \(P(x,y,z)\) which have degree \(1\) in each variable \(x,y,z\), and all of whose coefficients are \(\pm 1\) (or zero). It seems to be the case that every such polynomial is exact. The identities found are collected in Table 3. The first entry in this table is not of this shape but we include it for completeness; it already appears in [10]. Ringeling computed numerically the Mahler measures below, and the identities seem to hold to at least \(100\) digits. A particular feature of Table 3 is the appearance of the elliptic curve \(36a1\), which has complex multiplication. The elliptic curve \(450c1\) is also the first example with a curve of rank \(1\).
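The identities in these tables were verified numerically to high precision. As a rough, low-precision illustration of the quantities involved, the following self-contained Python sketch (independent of the PARI/GP and SageMath code referenced above) numerically compares the period integrals (12) and (13) behind Proposition 9, and evaluates \(m((1+x)(1+y)+z)\) directly after one application of Jensen's formula in the variable \(z\). The coefficients of \(f_{15}\) are generated here from the classical eta-product \(\eta(\tau)\eta(3\tau)\eta(5\tau)\eta(15\tau)\); that identity is an assumption of this sketch and is not used in the proof above, and the accuracy is only a few digits, far below the 100 digits mentioned for the tables.

```python
import math

# --- (1) the period integrals (12) and (13) behind Proposition 9 -----------

def newform_level15(prec):
    """Coefficients a_1..a_prec of f15 = q - q^2 - q^3 - q^4 + ..., generated
    from the eta-product eta(q)eta(q^3)eta(q^5)eta(q^15) (an assumption of
    this sketch); the first terms match the newform quoted in Section 5."""
    s = [0] * (prec + 1)                      # running product of (1 - q^(d*n))
    s[0] = 1
    for d in (1, 3, 5, 15):
        n = 1
        while d * n <= prec:
            step = d * n
            for k in range(prec, step - 1, -1):
                s[k] -= s[k - step]
            n += 1
    a = [0] * (prec + 1)
    for k in range(prec):
        a[k + 1] = s[k]                       # the eta prefactors contribute one power of q
    return a

prec = 60
a = newform_level15(prec)
x = math.exp(-2 * math.pi / math.sqrt(15))
L_value = 2 * sum(a[n] * x ** n / n for n in range(1, prec + 1))

theta_max = 2 * math.acos(0.25)

def period_integral(nsteps=200000):
    """Integral in (12); theta = theta_max - s^2 removes the endpoint singularity."""
    smax, total = math.sqrt(theta_max), 0.0
    h = smax / nsteps
    for i in range(nsteps):
        s = (i + 0.5) * h                     # midpoint rule avoids s = 0 exactly
        theta = theta_max - s * s
        total += 2.0 * s / math.sqrt(16.0 * math.cos(theta / 2) ** 2 - 1.0)
    return total * h

print("16 * L(f15, 1)     =", 16 * L_value)            # equation (13)
print("4  * integral (12) =", 4 * period_integral())   # equal and positive, as in Prop. 9

# --- (2) m((1+x)(1+y)+z) via Jensen's formula in z --------------------------
# Integrating over z gives log^+ |(1+x)(1+y)|, leaving a 2-dimensional
# integral over the torus; note |1 + e^{it}| = 2|cos(t/2)|.

def mahler_measure(n=1200):
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * 2.0 * math.pi / n
        ct = 2.0 * abs(math.cos(t / 2))
        for jj in range(n):
            u = (jj + 0.5) * 2.0 * math.pi / n
            A = ct * 2.0 * abs(math.cos(u / 2))
            if A > 1.0:
                total += math.log(A)
    return total / (n * n)

print("m((1+x)(1+y)+z)    ~", mahler_measure())        # Theorem 3: equals -2 L'(E,-1)
```

The coefficients produced this way begin \(q-q^{2}-q^{3}-q^{4}+\cdots\), in agreement with the expansion of \(f_{15}\) quoted in Section 5, and the region \(|(1+x)(1+y)|\geq 1\) appearing in the last integral is exactly the region bounded by the path \(\gamma_{E}\) described in Section 5.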
2308.13863
Full-scale ab initio simulations of laser-driven atomistic dynamics
The coupling of excited states and ionic dynamics is the basic and challenging point for the response of materials at extreme conditions. In the laboratory, intense lasers produce transient, highly nonequilibrium states whose complexity makes them extremely difficult and interesting for both experimental measurements and theoretical methods. With the inclusion of laser-excited states, we extended the ab initio method to the direct simulation of the whole laser-driven microscopic dynamics from solid to liquid. We constructed a framework combining an electron-temperature-dependent deep neural network potential energy surface with a hybrid atomistic-continuum approach, controlling the non-adiabatic energy exchange and the atomistic dynamics, which enables a consistent interpretation of experimental data. By large-scale ab initio simulations, we demonstrate that the nonthermal effects introduced by hot electrons play a dominant role in modulating the lattice dynamics, the thermodynamic pathway, and the structural transformation. We highlight that the present work provides a path to realistic computational studies of laser-driven processes, thus bridging the gap between experiments and simulations.
Qiyu Zeng, Bo Chen, Shen Zhang, Dongdong Kang, Han Wang, Xiaoxiang Yu, Jiayu Dai
2023-08-26T12:46:45Z
http://arxiv.org/abs/2308.13863v2
# Full-scale _ab initio_ simulations of laser-driven atomistic dynamics ###### Abstract The coupling of excited states and ionic dynamics is the basic and challenging point for the materials response at extreme conditions. In laboratory, the intense laser produces transient nature and complexity with highly nonequilibrium states, making it extremely difficult and interesting for both experimental measurements and theoretical methods. With the inclusion of laser-excited states, we extended _ab initio_ method into the direct simulations of whole laser-driven microscopic dynamics from solid to liquid. We constructed the framework of combining the electron-temperature-dependent deep neural network potential energy surface with hybrid atomistic-continuum approach, controlling non-adiabatic energy exchange and atomistic dynamics, which enables consistent interpretation of experimental data. By large scale _ab inito_ simulations, we demonstrate that the nonthermal effects introduced by hot electrons play a dominant role in modulating the lattice dynamics, thermodynamic pathway, and structural transformation. We highlight that the present work provides a path to realistic computational studies of laser-driven processes, thus bridging the gap between experiments and simulations. ## I Introduction Intense laser-matter interaction plays an important role in many applications including inertial confinement fusion [1], laser micromachining [2], and material synthesis [3]. Ultrafast laser excitation can drive matter into extremely non-equilibrium states, in which the hot electron and cold lattice coexist. The subsequent atomistic dynamics is therefore a long-standing challenge, because it is governed by the interplay between excited-electron-modulated potential energy surface (PES) [4], electron-ion coupling [5], and geometric characteristics of irradiated samples [6]. Tremendous efforts based on time-resolved probing techniques and simulations have provided valuable insights into the nonthermal behaviors [7; 8; 9; 10; 11], kinetics of laser-driven melting [12; 13; 14; 15], and electron-phonon coupling [16; 17; 18]. The related processes from cold solid to hot liquid and plasma are the typical multiscale dynamics due to the cascade of interrelated processes triggered by the laser excitation, both in time scale and size scale. Therefore, it is of great difficulty and importance to construct a well-coordinated picture between experimental and theoretical efforts. For example, the dynamics of laser-excited Au is still under debates [4; 7; 19], regarding the phonon behaviors driven by laser excitation. In these cases, different priori assumptions on material response were usually made [20; 21]. The above obscure stems from the technical limitations that the present methods can not capture both the nonthermal and intrinsic scale of laser-induced process at the same time. For _ab initio_ methods such as time-dependent density functional theory (DFT), the sizes are limited to \(10^{1}\sim 10^{3}\) atoms and \(10^{2}\sim 10^{4}\) fs, unable to access realistic representation of structural transformations of irradiated samples. 
While for classical molecular dynamics simulations coupled with two-temperature model (TTM-MD) [22; 23], the implementation of empirical potential like embedded-atom-method (EAM) is limited in prior knowledge and model complexity, thus can hardly capture the high dimensional dependence of PES on both atomic local environments and electron occupations for a wide range of temperature and density [19; 24], leading to the inadequate description of nonthermal nature of laser-driven processes. Therefore, bringing the advantage of _ab initio_ and large-scale molecular dynamics including nonthermal effect becomes the route one must take. In this paper, we developed _ab initio_ atomistic-continuum model by combining two-temperature-model (TTM) with an extended deep potential molecular dynamics (DPMD), as illustrated in Fig.1. When the ultrafast laser interacts with solids, the electrons quickly thermalized at timescases of femtoseconds, producing highly non-equilibrium states (electron temperature \(T_{e}\gg\) ion temperature \(T_{i}\)). The hot \(T_{e}\) will result in the redistributed charge density firstly and then modifies the PES of ions. To capture this physics, we introduced the laser-excited PES by constructing electron-temperature-dependent deep neural network potential firstly, and coupled the PES into additional electron continuum subsystem via TTM-MD framework. By this way, we can directly simulate the whole electron-ion coupled dynamics during the laser-driven processes with large-scale simulations within _ab initio_ accuracy. We take tungsten as an example and systematically validate the accuracy of our model in describing lattice dynamics, thermophysical properties, and laser heating process in both equilibrium and laser-excited states, by comparing with the related experimental results recently [13]. ## Results **Construction of laser-excited PES.** Recent efforts have demonstrated the success of machine learning model towards large-scale simulations of _ab initio_ quality at extreme conditions [25, 26, 27, 28, 29, 30, 31, 32], but most of the studies focus on the equilibrium-state and ground-state applications. Here the electron-temperature-dependent deep potential (ETD-DP) model is implemented in the framework of deep potential method [25, 33, 34] to model the laser-driven dynamics. To avoid constructing hand-crafted features or kernels for different types of bulk systems, a general end-to-end symmetry preserving scheme is adopted [35]. As illustrated in Fig.1(a), the ETD-DP model consists of an embedding network and a fitting network. The embedding network is designed to transform the coordinate matrices \(\mathcal{R}_{I}\) to symmetry preserving features, encoded in the descriptor \(\mathcal{D}_{I}\). And the fitting network is a standard fully connected feedforward neural network, mapping the descriptor to the atomic contribution of total energies. The newly introduced parameter, electron temperature \(T_{e}\), is used to characterize the laser modulation on PES, in which electron occupation distribution is far away from electron-ion equilibrium states. 
This ETD-DP is defined as \[A=A(\mathcal{R},T_{e})=\sum_{i}\mathcal{N}_{\alpha_{i}}(\mathcal{D}_{\alpha_{ i}}(r_{i},\{r_{j}\}_{j\in n(i)}),T_{e}) \tag{1}\] where \(A(\mathcal{R},T_{e})\) is the potential energy depends on the local atomic environment (\(\mathcal{R}\)) and \(T_{e}\), \(\mathcal{N}_{\alpha_{i}}\) denotes the neural network of specified chemical species of \(\alpha_{i}\) of atom \(i\), and the descriptors \(\mathcal{D}_{\alpha_{i}}\) describes the symmetry preserved local environment of atom \(i\) with its neighbor list \(n(i)=\{j|r_{ji}<r_{cut}\}\), respectively. To generate an ETD-DP, the new degree of freedom, \(T_{e}\), will dramatically expand the sampling space in the data labelling process, introducing expensive computational costs. Therefore, an iterative concurrent learning scheme [36] is highly required to efficiently sample atomic configurations under both equilibrium (\(T_{e}=T_{i}\)) and non-equilibrium conditions (\(T_{e}\neq T_{i}\)). As shown in Fig.1(b), to explore the density-temperature space with different electron occupations (\(\rho,T_{i},T_{e}\)), a variety of crystal structures are used as the initial configurations to run multiple DPMD simulations. And an ensemble of ETD-DP is trained with the same dataset but with different parameter initializations. The model deviation, denoted as the maximum standard deviation of the predicted atomic forces by the ensemble of ETD-DP, is used to evaluate whether the explored atomic configurations should be send to generate referenced _ab initio_ energies, forces, and virial tensors. **Two-temperature model coupled DPMD (TTM-DPMD).** To model the whole ultrafast laser-driven processes from cold solid to plasma, we should couple quantum electron subsystem and strongly coupled ionic subsystem. Here, we implemented our laser-excited Figure 1: **Schematic diagram of workflow for efficient and accurate simulation of laser-driven atomistic dynamics.** (a) ETD-DP model. \(T_{e}\) is the electron temperature, regarding to the electron occupation distribution. The free energy \(A\), force \(\mathbf{F}\), virial \(\Xi\), electronic entropy \(S_{e}\), and electronic heat capacity \(C_{e}\) can be inferred through backpropagation algorithm. (b) iterative concurrent learning scheme is used to efficiently sample atomic configurations for a wide range of equilibrium and non-equilibrium conditions. (c) hybrid atomistic continuum approach. The evolution of electron subsystem allows atomistic system transits between different PES, and the Langevin thermostat is introduced to mimic non-adiabatic energy exchange between electron and lattice. PES into the TTM-MD framework [22; 23; 37], going beyond traditional ground-state EAM and neural-network-driven PES descriptions. As shown in Fig.1(c), the heat conduction equation of electron continuum characterizes the temporal evolution of electron occupations, thus governing the transition of ionic system between different \(T_{e}\)-dependent PES. Langevin dynamics is incorporated to mimic the dynamic electron-ion collisions [37; 38; 23; 39]. 
The TTM-DPMD is defined as follows, \[C_{e}(T_{e})\frac{\partial T_{e}}{\partial t}=\nabla\cdot(\kappa_{e}\nabla T_{ e})-g_{ei}(T_{e})(T_{e}-T_{i})+S(\mathbf{r},t) \tag{2}\] \[m_{i}\frac{d^{2}\mathbf{r}_{i}}{dt^{2}}=-\nabla_{i}A(T_{e})-\gamma_{i}\mathbf{ v}_{i}+\mathbf{\tilde{F}}_{i}(t) \tag{3}\] where \(C_{e}\) is the electronic heat capacity, \(\kappa_{e}\) the electronic thermal conductivity, \(g_{ei}\) the electron-phonon coupling constant, \(S(\mathbf{r},t)\) the laser source. The ions evolves on the \(T_{e}\)-dependent PES \(A(\mathcal{R},T_{e})\), and suffers fluctuation-dissipation forces \(-\gamma_{i}\mathbf{v}_{i}+\mathbf{\tilde{F}}_{i}(t)\) from electron sea. Here \(\gamma_{i}\) is the friction parameter that characterizes the electron-ion equilibration rate, relating to the electron-phonon coupling constant through \(\gamma=g_{ei}m_{i}/2n_{i}k_{B}\), where \(n_{i}\) the ion number density. The \(\mathbf{\tilde{F}}_{i}(t)\) term is a stochastic force term with a Gaussian distribution, whose mean and variance is given by \(\langle\mathbf{\tilde{F}}_{i}(t)\rangle=0\) and \(\langle\mathbf{\tilde{F}}_{i}(t)\cdot\mathbf{\tilde{F}}_{i}(t^{\prime}) \rangle=2\gamma_{i}k_{B}T_{e}\delta(t-t^{\prime})\). In TTM-DPMD, by practically choosing the electron temperature or ionic temperature in the meshgrid as the additional parameter in ETD-DP model, the ions can evolve under laser-excited PES (\(T_{e}\gg T_{i}\)) or ground-state PES (\(T_{e}=T_{i}\)), so that we can separate the nonthermal effects defined by the electronic excitation from thermally driven atomic dynamics and phase transformation. **Validating neural network model for laser-excited tungsten.** To validate the effectiveness of extended DP model, we chose tungsten as our target system. Tungsten is a typical transition metal, with half-filled \(d\) bands that is sensitive to \(T_{e}\). Upon laser excitation, tungsten is expected to go through a complicated dynamic process including possible nonthermal solid-solid phase transition [20; 21; 8; 24], attracting much attention but remains ambiguous. Here we generate a \(T_{e}\)-dependent deep-neural-network tungsten model by learning from DFT data calculated with the generalized gradient approximation (GGA) of the exchange-correlation functional [40] using VASP package [41]. The atomic configurations used in the training set are collected from a wide range of \((\rho,T_{i},T_{e})\) condition, covering the phase space of the body-centered-cubic (BCC), close-packed structure, uniaxially distorted crystalline, and the liquid structures. More details about DP training can be found in the supplemental materials [42]. Here we pay special attention to thermodynamic properties of equilibrium tungsten that are closely related to laser heating process. The melting temperature predicted by ground-state DPMD (3550 K [42]) is in consistence to the previous DFT-MD (3450 \(\pm\) 100 K [46]) and Gaussian approximation potentials simulations (3540 K [47]), which confirms that the present PES can reproduce melting with DFT accuracy. Furthermore, the dependence of DPMD-predicted enthalpy on temperature along isobaric heating condition is shown in Fig.2(a), and the experimental data agree very well with our DPMD predictions, especially in the liquid regime [44]. The estimated enthalpy of fusion at the melting point (\(\Delta H_{m}=237\pm 20\) kJ/kg) is also close to the DFT-MD values (250 kJ/kg) [43] and other experimental values (see Table.S1 in [42]). 
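Before turning to the laser-heating simulations, the energy exchange contained in Eqs. (2)-(3) can be illustrated with a spatially homogeneous toy version of the two-temperature model (the conduction term is dropped, since the thin film is heated uniformly by ballistic electrons, see Methods). The parameters below are only order-of-magnitude placeholders, not the DFT-derived \(C_{e}(T_{e})\) and coupling used in the actual TTM-DPMD runs; in particular, the linear (Sommerfeld-like) form of \(C_{e}\) is an assumption of this sketch.

```python
import math

# Homogeneous two-temperature toy model:
#   C_e(T_e) dT_e/dt = S(t) - g (T_e - T_i)
#   C_i      dT_i/dt =        g (T_e - T_i)
# All parameters are illustrative order-of-magnitude values (volumetric units).
gamma_e = 1.0e2          # assumed Sommerfeld-like coefficient, C_e = gamma_e*T_e [J m^-3 K^-2]
C_i     = 2.5e6          # approximate lattice heat capacity of tungsten          [J m^-3 K^-1]
g_ei    = 2.0e17         # electron-phonon coupling factor                        [W m^-3 K^-1]
E_abs   = 1.5e9          # absorbed energy density, ~0.08 MJ/kg times the density [J m^-3]
sigma_t = 130e-15 / 2.355    # 130 fs FWHM Gaussian pulse

def source(t):
    """Laser source normalised so that its time integral equals E_abs."""
    return E_abs * math.exp(-0.5 * ((t - 4 * sigma_t) / sigma_t) ** 2) \
        / (sigma_t * math.sqrt(2.0 * math.pi))

Te, Ti = 300.0, 300.0
dt = 1.0e-16
for step in range(1, 200001):                  # integrate 20 ps with explicit Euler
    t = step * dt
    coupling = g_ei * (Te - Ti)
    Te += dt * (source(t) - coupling) / (gamma_e * Te)
    Ti += dt * coupling / C_i
    if step % 20000 == 0:
        print(f"t = {t*1e12:5.1f} ps   T_e = {Te:7.1f} K   T_i = {Ti:6.1f} K")
```

The qualitative behaviour, a sub-picosecond spike of \(T_{e}\) followed by electron-ion equilibration over picoseconds, is the regime probed in the low-fluence simulations discussed below.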
Based on the calculated thermophysical properties, we can determine the complete melting threshold \(\epsilon_{m}\), i.e., the laser energy density that is sufficient to drive complete melting of the sample. We find \(\epsilon_{m}=0.92\pm 0.04\) MJ/kg, corresponding to an absorbed pump fluence of \(53.0\pm 2.2\) mJ/cm\({}^{2}\) for a 30-nm-thick tungsten film [42]. These values are in agreement with the values estimated from experimental results [44; 48; 49; 50; 51; 13], in which the energy density is approximately 0.94 MJ/kg (pump fluence of 53.8 mJ/cm\({}^{2}\)). The density decrease at elevated temperature, shown in Fig. S3 [42], is also consistent with the experimental measurements [50; 51; 52]. The lattice dynamics, which requires high-order derivatives of the PES, was investigated further. As shown in Fig. 2(b), the phonon dispersion curves of BCC tungsten under both equilibrium and non-equilibrium states are well reproduced compared with the DFT results. In particular, compared with the phonon dispersion at \(T_{e}=300\) K, a directional phonon softening is observed along the \(\mathrm{H-N}\) and \(\mathrm{N-\Gamma}\) paths in the first Brillouin zone at elevated electron temperature (\(T_{e}=10000\) K), which can be attributed to the delocalization of the half-filled \(d\) bands [8; 24]. The depopulation of such a strongly directional component of the electronic bonding weakens the directional forces and may drive the crystalline structure towards close-packed forms. These results indicate that the neural-network PES provides faithful predictions of the related properties, consistent with experiments and the _ab initio_ method.

Figure 2: **Validating accuracy of ETD-DP model.** (a) Temperature dependence of enthalpy under isobaric heating (\(p=1\) bar) with the reference temperature of 300 K. The blue line, the black cross, and the grey square denote the DPMD results, the previous DFT-MD prediction [43], and the isobaric expansion experimental data [44], respectively. (b) Phonon dispersion of laser-excited tungsten (\(\rho_{0}=19.15\) g cm\({}^{-3}\)). The black crosses and white squares represent the individual KS-DFT calculation and the experimental measurements [45].

**Direct _ab initio_ simulations of laser-driven dynamics.** It is stressed that our explicitly electron-temperature-dependent PES can capture the non-thermal nature of laser-excited metals. When implemented in the TTM-DPMD framework, it allows us to establish a comprehensive understanding of laser-induced non-equilibrium states with _ab initio_ accuracy. Recent time-resolved ultrafast electron and X-ray diffraction experiments collect direct quantitative structural information on laser-driven processes [13], providing a benchmark for the validation of the newly developed method. Here, we apply TTM-DPMD to directly simulate the dynamic response of a tungsten nanofilm under different absorbed laser energy densities. In the TTM-DPMD simulations, a full-scale _ab initio_ description in one dimension of a polycrystalline (PC) 30-nm-thick tungsten nanofilm is considered, following the relevant UED experiment [13]. For the PC system, a large simulation cell containing 752,650 atoms is used to describe crystal grains with random shapes, orientations, and different types of boundaries. The size of each grain ranges from \(\sim 5\) nm to \(\sim 7\) nm and each grain contains more than \(10^{4}\) atoms (approaching a million atoms in total), which cannot be achieved by traditional time-dependent DFT simulations.
Moreover, extra 30 nm vacuum space perpendicular to laser incident direction is set to allow free surface response to the internal stress relaxation, and extra spring forces are introduced for atoms in the bottom regime relating to their initial lattice site, to present bonding to the substrate (see supplementary materials [42]). Considering the ballistic transportation of excited electron in tungsten (the mean free path \(\sim 33\) nm), we assumed the uniform deposition of laser energy with relatively low energy density of 0.08 MJ/kg (corresponding to absorbed laser fluence of 4.8 mJ/cm\({}^{2}\)). In this case, a moderate two-temperature state is created at the initial stage, where maximum electron temperature can reach to 4400 K. Through electron-ion energy exchange, the system quickly reaches thermal equilibrium (\(T_{e}=T_{i}\sim 920\) K) at t = 5 ps. The structure factor is calculated to extract the decay dynamics of Laue diffraction peak (LDP) [42; 53], which is an important quantity to diagnose the structural dynamics in experiments [13]. As shown in Fig.3(a), based on TTM-DPMD simulations with the inclusion of laser-driven excited states, the temporal evolution of normalized intensity of (211) LDP agrees well with UED measurements. Conversely, the results from simulations by ground-state PES deviate from experimentally measured values significantly. It is interesting to say that the thermal process (\(T_{e}=T_{i}\)) exhibits remarkably slower decay dynamics than the process with excited states (\(T_{e}\geq T_{i}\)) upon such laser fluence. By further investigating the lattice vibration of bulk tungsten, we note that even under moderate non-equilibrium state (\(T_{e}=5000\) K), a Figure 3: **Capturing nonthermal effect with TTM-DPMD approach.** Comparison of (a) temporal evolution of (211) diffraction peak intensity in structure factor under absorbed laser energy density of 0.08 MJ/kg, compared with experimental data [13]. (b) Temperature dependence of mean square displacement with isobaric constrains and (c) phonon density of states (PDOS), obtained under equilibrium condition (blue) and nonequilibrium condition (orange). relative increase of over 10% in mean square displacement (MSD) can be observed under isobaric heating condition, as shown in Fig.3(b). The enhancement of lattice vibration can be attributed to the hot-electron-induced phonon softening (Fig.3(c)). Such nonequilibrium and nonthermal effects therefore modify the dynamics of diffraction signals according to Debye-Waller formula [42], in which the decay of LDP is relating to temporal evolution of lattice temperature and temperature dependence of MSD. The quantitative consistency between our simulations and experiments validates our model further, and then provide a chance to further elucidate laser excitation effects. By increasing laser energy density up to 0.80 MJ/kg, the irradiated tungsten nanofilm starts with more severe nonequilibrium states (\(T_{e}=11200~{}\rm{K},T_{i}=300~{}\rm{K}\)). As presented in Fig.4(a)(b), the evolution of tungsten nanofilm predicted by ground-state PES (\(T_{e}=T_{i}\)) is a purely thermal process governed by electron-ion coupling. With increased lattice temperature, the system firstly evolves along the equilibrium isochore in the first 4 ps, where the ionic kinetic pressure accumulates to \(\sim 10~{}\rm{GPa}\). Then the thermal pressure is gradually released due to existence of free surface. 
Although the thermal expansion process leads to temperature and density decrease, the gradient in thermodynamic profile is slight and the whole system can be considered as homogeneous. When laser-induced changes in the PES is included (\(T_{e}\geq T_{i}\)), the thermodynamic pathway and thermodynamic profile is totally different. As shown in the Fig.4(c)(d), the ultrafast excitation of electrons results in the buildup of extra pressure on a sub-picosecond timescale (more details in Fig.S7). Such hot-electron- contributed pressure increases monotonically with increased laser energy density, from \(\sim 1~{}\rm{GPa}\) with \(T_{e,0}=4400~{}\rm{K}\) (\(\epsilon=0.08~{}\rm{MJ/kg}\)) to \(\sim 17~{}\rm{GPa}\) with \(T_{e,0}=11200~{}\rm{K}\) (\(\epsilon=0.80~{}\rm{MJ/kg}\)). The tungsten nanofilm then quickly responds to this nonthermal internal stress, triggering anisotropic volume relaxation dynamics. As a result, a significant inhomogeneity is demonstrated in the thermodynamic profiles. In Fig.4(d), the propagation and reflection of stress waves can be identified with velocity of \(\sim 4.3~{}\rm{km/s}\), accompanied with the density decrease of \(\sim 1~{}\rm{g/cm^{3}}\). With existence of free surface, the build up and uniaxial relaxation of nonthermal stress can strongly influence the thermodynamic pathway especially under high laser fluence, which cannot simply be assumed to be isochoric or isotropically isobaric. We highlight that such real-time material response captured by TTM-DPMD simulation provides unique insights into previous controversial issues on nonthermal behavior of laser-excited matter [19; 20]. Figure 4: **Hot electron modifies the thermodynamic pathway.** Comparison of (a)(c) thermodynamic pathway (b)(d) temporal evolution of thermodynamic profile of nanofilm, predicted by ground-state PES and laser-excited PES, respectively. In (a)(c), the red arrows indicate the evolution path of selected regime (z = 14.0 nm) in the tungsten nanofilm, and the thermodynamic state is highlighted by colored stars every 1 ps. In (d), the black dashed lines are used to highlight the propagation of stress waves, whose slope represents a constant propagation speed of \(\sim 4.3~{}\rm{km/s}\). Discussion In this work, we developed the deep learning model to perform large-scale _ab inito_ simulations on the laser-induced atomistic dynamics, with quantum accuracy on the non-thermal effects. To validate the accuracy, special attention is paid to recent experiments. We successfully reproduce the experimental data with our model. It is therefore verified that the laser-excited states have profound effects on the thermodynamic evolution and structural transformation dynamics. More importantly, the combination of deep learning techniques with hybrid continuum-atomistic approach bridges the theoretical method and experimental observations, providing a new path to establish accurate and complete understanding of the atomistic dynamics under ultrafast laser interactions. ## Method **DP training.** The ETD-DP models for tungsten are generated with DeePMD-kit packages [54] by considering \(T_{e}\) as atomic parameter. Deep Potential Generator (DP-GEN) [36], has been adopted to sample the most compact and adequate data set that guarantees the uniform accuracy of ETD-DP in the explored configuration space. 
We consider a BCC structure (54 atoms) and a liquid structure (54 atoms) as the initial configurations and run DPMD under the NVT and NPT ensembles (both isotropic and uniaxial constraints are considered), where the temperature ranges from 100 K to 6000 K, the pressure ranges from -15 to 60 GPa, and the corresponding electronic temperature ranges from 100 K to 25000 K. The training sets consist of 6366 configurations under equilibrium conditions (\(T_{e}=T_{i}\)) and 6820 configurations sampled under two-temperature states (\(T_{e}>T_{i}\)). For the DP training, the embedding network is composed of three layers (25, 50, and 100 nodes) while the fitting network has three hidden layers with 240 nodes in each layer. The total number of training steps is set to 400 000. The radius cutoff \(r_{c}\) is chosen to be 6.0 Å. The weight parameters in the loss function for energies \(p_{e}\), forces \(p_{f}\), and virials \(p_{V}\) are set to \((0.02,1000,0.02)\) at the beginning of training and gradually change to \((1.0,1.0,1.0)\). The self-consistent calculations are all performed with the VASP package [55]. The Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional is used [56], and the pseudopotential takes the projector augmented-wave (PAW) formalism [57; 58]. The Brillouin-zone sampling is chosen as 0.2 Å\({}^{-1}\) under ambient conditions (\(T\leq 300\) K), and 0.5 Å\({}^{-1}\) at high temperature.

**TTM-DPMD simulation setting.** We perform TTM-DPMD simulations with the LAMMPS package [59] through a modified EXTRA-FIX package [23]. The electronic heat capacity is calculated from individual DFT calculations as \(C_{e}=T_{e}\frac{\partial S_{e}}{\partial T_{e}}\), which is consistent with previous calculations [60]. The electron-phonon coupling factor is set to a constant (\(G_{0}=2.0\times 10^{17}\) W m\({}^{-3}\) K\({}^{-1}\)) according to the relevant ultrafast electron diffraction experiments [13]. The electron thermal conductivity is described by the Drude-model relationship, \(\kappa_{\rm e}(T_{e},T_{i})=\frac{1}{3}v_{F}^{2}C_{e}(T_{e})\tau_{e}(T_{e},T_{i})\), where \(v_{F}\) is the Fermi velocity and \(\tau_{e}(T_{e},T_{i})\) is the total electron scattering time defined by the electron-electron and electron-phonon scattering rates, \(1/\tau_{e}=1/\tau_{e-e}+1/\tau_{e-ph}=AT_{e}^{2}+BT_{i}\). The coefficients \(A=2.11\times 10^{-4}\) K\({}^{-2}\) ps\({}^{-1}\), \(B=8.4\times 10^{-2}\) K\({}^{-1}\) ps\({}^{-1}\), and \(v_{F}=9710\) Å ps\({}^{-1}\) are adopted [61]. The duration of the laser pulse is set to 130 fs. Since the mean free path of laser-excited electrons in tungsten is \(\sim 33\) nm [62], the electrons are heated uniformly owing to ballistic transport. Therefore, the optical penetration of the laser energy can be neglected for simplicity. For the atomic system, the simulation cell of the polycrystalline sample is set to 30 nm \(\times\) 20 nm \(\times\) 20 nm, containing 752650 atoms, with an extra 30 nm of vacuum space along the x direction to mimic the free boundary condition. Extra spring forces are introduced for atoms in the bottom 5 Å, relative to their initial lattice sites, to represent bonding to the substrate.

**Phonon spectra calculation.** To validate the accuracy of the ETD-DP model, we investigate the lattice dynamics, which requires high-order derivatives of the PES. We use the finite-displacement method to calculate the phonon dispersion with the ALAMODE package [63] as a post-processing code. The forces are calculated in a \(5\times 5\times 5\) supercell with lattice parameter \(a_{0}=3.17104\) Å.
The atomic displacement is set to 0.01 Å, and the interatomic force constants are extracted from the KS-DFT and DPMD calculations, respectively. The dynamical matrices are derived from these force-displacement data to obtain the phonon dispersion spectra. **Ultrafast electron diffraction pattern.** To extract the decay of the Laue diffraction peak (LDP) intensities as in the UED experiments, we performed ultrafast electron diffraction simulations with the DIFFRACTION package [53] to obtain the structure factor \(S(Q_{x},Q_{y})\) defined as follows, \[S = \frac{F^{*}F}{N} \tag{4}\] \[F(\mathbf{Q}) = \sum_{i}f_{i}(\mathbf{Q};\lambda)e^{i2\pi\mathbf{Q}\cdot\mathbf{r}_{i}} \tag{5}\] where \(\mathbf{Q}=(Q_{x},Q_{y},Q_{z})\) is the wave vector, \(f_{i}\) the atomic scattering factor, \(\lambda\) the wavelength of the incident electrons, and \(\mathbf{r}_{i}\) the coordinates of atom \(i\). Here, simulated 3.2 MeV electron radiation (\(\lambda\sim 0.34\) pm) is used to create selected-area electron diffraction (SAED) patterns according to the relevant experiments [13], and the SAED patterns aligned on the [100] axis (\(Q_{z}=0\)) are constructed by selecting reciprocal-lattice points intersecting a \(0.01\) Å\({}^{-1}\) thick Ewald-sphere slice. A detailed discussion can be found in Fig. S5 and Fig. S6 in the SI.
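As a minimal numerical illustration of Eqs. (4)-(5) and of the Debye-Waller-type decay of a Laue peak discussed in the main text, the sketch below evaluates \(S(\mathbf{Q})\) for a small ideal BCC supercell and for the same supercell with Gaussian thermal displacements. The scattering factors are set to unity and the rms displacement is an arbitrary illustrative value; this is not the DIFFRACTION/LAMMPS pipeline used in the paper.

```python
import numpy as np

# Toy check of Eqs. (4)-(5): Laue peak of a BCC lattice with Gaussian thermal
# displacements, compared with the Debye-Waller attenuation.  Unit scattering
# factors f_i = 1; a0 and the (211)-type reflection follow the text.
a0 = 3.17104            # BCC lattice parameter of tungsten [Angstrom]
ncell = 12              # 12 x 12 x 12 conventional cells, 2 atoms each
sigma = 0.08            # rms displacement per Cartesian component [Angstrom], illustrative

# build the BCC supercell positions
grid = np.arange(ncell)
cells = np.array(np.meshgrid(grid, grid, grid, indexing="ij")).reshape(3, -1).T
basis = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])
r = (cells[:, None, :] + basis[None, :, :]).reshape(-1, 3) * a0

Q = np.array([2.0, 1.0, 1.0]) / a0      # (211) reciprocal-lattice point (allowed for BCC)

def laue_intensity(positions):
    phase = np.exp(2j * np.pi * positions @ Q)
    F = phase.sum()
    return (F * F.conjugate()).real / len(positions)   # S(Q) of Eq. (4)

rng = np.random.default_rng(0)
S_cold = laue_intensity(r)
S_hot = np.mean([laue_intensity(r + rng.normal(0.0, sigma, r.shape)) for _ in range(20)])

dw = np.exp(-4 * np.pi**2 * (Q @ Q) * sigma**2)        # Debye-Waller estimate of I/I0
print("S(211), ideal lattice :", S_cold)               # equals the number of atoms
print("S(211)/S0, displaced  :", S_hot / S_cold)
print("Debye-Waller estimate :", dw)
```

For this reflection the simulated intensity ratio follows the analytic estimate \(\exp(-4\pi^{2}|\mathbf{Q}|^{2}\sigma^{2})\) closely, which is the mechanism behind the LDP decay extracted from the TTM-DPMD trajectories above.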
2307.12320
Action for classical, quantum, closed and open systems
It is well known that the action functional can be used to define classical, quantum, closed, and open dynamics, through a generalization of the variational principle in the classical case and of the path integral formalism in the quantum case. These schemes are based on an unusual feature, a formal redoubling of the degrees of freedom. Several arguments motivating the redoubling are put forward in classical and quantum mechanics to demonstrate that such a formalism is natural.
Janos Polonyi
2023-07-23T13:16:10Z
http://arxiv.org/abs/2307.12320v2
# Action for classical, quantum, closed and open systems ###### Abstract The action functional can be used to define classical, quantum, closed, and open dynamics in a generalization of the variational principle and in the path integral formalism in classical and quantum dynamics, respectively. These schemes are based on an unusual feature, a formal redoubling of the degrees of freedom. Five arguments to motivate such a redoubling are put forward to demonstrate that such a formalism is natural. The common elements of the different arguments is the causal time arrow. Some lessons concerning decoherence, dissipation and the classical limits are mentioned, too. ## I Introduction To understand the transition between the classical and the quantum physics one needs at least a common formalism, applicable in both domains. This is a wonderful problem because the classical limit of quantum system is driven by the interactions with a large environment, in other word the quantum system obeys open dynamics. Hence we need a CQCO formalism in mechanics which can handle Classical, Quantum, Closed and Open systems. The search for simpler schemes, in particuler for a CCO formalism covering classical closed and open systems, has started a century ago by attempting to describe the forces acting on electric charges by the variational principle without the electromagnetic field [1; 2; 3]. However an inconsistency arises because the force obtained in such a manner is the sum of retarded and advanced contributions and to recover the usual retarded interactions one has to give up the starting point, the variational principle. The idea of the electromagnetic force resulting from a time reversal invariant action at a distance has been advanced further by putting the burden on the absorber, by arguing that the charges completely absorb the in-falling electromagnetic radiation [4]. This assumption bears the imprint of open dynamics and the original problem, the untenability of the variational principle returns. Summarising in contemporary terms: The naive relativistic generalization of local forces without fields is doomed to a failure due to the lack of retardation [5; 6; 7]. Though the problem can formally be solved by the introduction of constraints [8; 9] the existence of electromagnetic radiation requires the introduction of non-mechanical degrees of freedom. The use of the variational principle to capture their contributions leads to the time reversal invariant near field interaction [1; 2; 3]. To incorporate retardation one needs the far field component but the full representation of the field degrees of freedom renders the charge dynamics open, non-accessible for the traditional variational principle. In another standard CQC formalism the action is used in the variational principle and in the path integral formalism in the classical and the quantum case, respectively. However this bridge between the quantum and the classical domains is formal since the quantum dynamics must contain open channels to reach the classical limit. The extension of a formalism over open systems represents a challenge both on classical and quantum levels. In the deterministic world of classical mechanics one is tempted to retreat to probabilistic description like in kinetic theory. The construction of an effective open quantum theory starts with the assumption that the observed system and its environment together obey a closed dynamics with known quantization rules. 
The next step is the extraction of the time dependence of the reduced density matrix of the observed sub-system by projection operators and the result is an integro-differential equation of motion [10; 11; 12; 13]. The complexity of this equation restricts its application for Markovian weak coupling expansion [14; 15; 16; 17; 18; 19; 20]. Another level of difficulties appears by recalling that the density matrix has to satisfy more stringent relations than the wave function of a pure state and the positivity can not be assured in the non-Markovian case [21; 22; 23]. The usual solution of this problem is the phenomenological characterization of the most general Markovian master equation to produce physically acceptable density matrix [24]. Another approach to effective quantum dynamics is the Closed Time Path (CTP) scheme [25; 26] and the resulting QCO formalism allows us to employ the standard perturbation expansion based on the physically appealing Feynman graphs [27; 28]. This method can be applied even in the projector operator formalism [29; 30], as well. To appreciate the importance of this scheme one should realize that the usual UV divergences of quantum field theory make the introduction of an UV cutoff necesary which in turn opens the bare cutoff theory [31]. A distinguished feature of this formalism is a redoubling of the degrees of freedom. This step is non-intuitive and renders the mathematics unusually involved and slowed down the spread of the applications. But any practitioner of CTP can convince himself or herself that the complications of this scheme always represent true physical elements of the rich dynamics of open systems. Our intuition arises from the macroscopic world which is supposed to be derived from the underlying and only formally known quantum dynamics. If the redoubling is indeed an inherent part of quantum physics then its trace should be visible in classical mechanics, too. The CTP formalism with redoubling has been introduced in classical Lagrangian [32] and Hamiltonian [33] formalism. Hence we already have the elements of a CQCO scheme, the CTP formalism, where the dynamics is defined by the action functional. The goal of the present work is to identify the points showing the need of redoubling in the CQCO formalism based on the action functional and highlight some insights gained from such a scheme. To find an intuitive explanation of the redoubling within classical physics we start with the standard variational principle, a CC scheme, and underline the need of redoubling during its extensions to a CCO formalism in five steps. The time arrow plays an important role in the arguments as we have seen in the discussion of the action at a distance interaction of charges above hence we start in section II with the discussion of the direction of time and its relation with the auxiliary conditions of the Newton equations. In section III the redoubling is used (i) to replace the acausal auxiliary conditions of the traditional variational principle by the causal initial conditions. The way the time arrow is encoded by the modified variation principle is demonstrated by the Green functions, introduced in section IV. The action of the modified variation principle is generalized in section V for open systems and the redoubling is found crucial (ii) to parametrize the non-conservative forces, and (iii) to represent these forces in the action. 
The vision of the redoubling as an ancilla (iv) to preserve the Noether theorem in a non-conservative dynamics, and (v) to obtain any equation of motion from the variational principle is described in section VI. The extension CCO \(\rightarrow\) CQCO over the quantum domain starts in section VII with pointing out the need of the introduction of internal time reversal parity followed by the identification of the quantum origin of argument (i). The effective dynamics for the reduced density matrix of an open system, introduced in section VIII, shows clearly the origin of arguments (iii) and (v) in the quantum dynamics. In section IX the Ward identities are used to derive the Noether theorem for the expectation value of conserved quantities in open systems, the latter being the key point of step (iv). Section X is devoted to some genuine quantum issues, namely the complexification of the action, the decoherence, its relation to dissipation, and finally the nature of the semiclassical limit. The results are briefly summarised in section XI. Some technical details are collected in three appendices, namely appendix A contains the calculation of the CTP Green function for an open classical harmonic oscillator, B is about the the derivation of the equation of motion in quantum mechanics, and appendix C outlines the construction of an interpolating trajectory needed in deriving the energy balance equation in quantum mechanics. Classical equations of motion and their time arrows Physical laws have two components, an equation of motion, and some auxiliary conditions. The latter installs a time arrow for the former. These two components and their relations are briefly surveyed in this section. ### Auxiliary conditions An equation of motion alone is not sufficient to make predictions in physics because it contains time derivatives hence one has to impose auxiliary conditions. These two components, the equations of motion and its auxiliary conditions, are strictly separate. The former comes from our theories and the latter is chosen by the experimentalists. Such a strong separation may explain that a slight inconsistency of the variational principle about causality remained unnoticed: On the one hand, we use non-causal auxiliary conditions in the variational method by fixing the initial and the final coordinates in classical mechanics. On the other hand, the Euler-Lagrange equations are used with causal initial conditions in physical applications. No problem arises from the change of the auxiliary conditions as long as they are indeed independent of the equations of motions. However one can never observe a genuinely closed system where the auxiliary conditions are fully under our control. When a subsystem of a closed dynamics is observed then the equations of motion of the observed system and the auxiliary conditions of its invisible environment are irrevocably mixed. In fact, let us denote the observed and the unobserved coordinates of a closed bipartite system of classical particles by \(x\) and \(y\), respectively and assume the equations of motion \(\ddot{x}=F(x,y)\), \(\ddot{y}=G(x,y)\) with the auxiliary conditions \(y(t_{a})=y_{i}\), and \(\dot{y}(t_{a})=v_{i}\). The effective equation of motion of the observed system is found by first solving the environment equation of motion for an arbitrary system trajectory \(y=y[x;y_{i},v_{i}]\) and inserting the result into the system equation of motion \(\ddot{x}=F(x,y[x;y_{i},v_{i}])\). 
The resulting effective equation displays an explicit dependence on the environment auxiliary conditions. If the open dynamics is derived from an action principle, the best scheme to describe a large system of particles, then that principle must be based on initial conditions. ### Time arrows Several time arrows can be defined [34; 35; 36; 37] and they are usually classified according to the domains of physics where they appear, time arrows are known in cosmology, quantum mechanics, a thermodynamics, and electrodynamics. It remains to be seen if these time arrows are independent of each other or they stem from a common origin. A time arrow can be informational or causal. The former points in the direction we loose information, eg. the auxiliary conditions degrade, examples being the quantum mechanical and the thermodynamical time arrows. The causal time arrow is directed from the cause to its effects, an example being the electrodynamical time arrow, and will simply be called "time arrow" below. Finally, the time arrow can be internal or external relative to the dynamics where it is observed. The latter is introduced by the auxiliary conditions and the former corresponds to equations of motions with broken time reversal symmetry. An external time arrow which is generated by the auxiliary conditions imposed at a given time \(t_{c}\) points away from \(t_{c}\). Such a double, time-dependent time arrow is a characteristic feature of the solution of local equations of motion and has caused some complications in finding the origin of the thermodynamical time arrow since the second law of thermodynamics applies in either directions. To avoid such a pathological cases one employs causal auxiliary conditions, either initial or final. One would think that the experimental determination of the time arrow is trivial but it is actually rather challenging. The reason is that the concept of cause is not defined in physics since it implies an external intervention into the physical world. For instance the Newton equation describes a correlation between the states of a particle at different times rather than referring to cause and consequence. The cause is usually replaced by the supposedly the free will of the physicist in selecting the initial conditions for the experiment. In fact, the choice of the initial conditions must be arbitrary in some range to prove or to disprove an equation of motion. After having granted the independence of the experimentalists from the observed system one can identify the causal time arrow by the help of a time dependent external source \(j(t)\) coupled to the system in a finite time interval identified by some reference clock. The causal time arrow \(\tau=\pm 1\) relative to the reference time, is determined by the direction of the time the external intervention leads to changes in the state of the system. The existence of an internal time arrow, irreversibility, can be observed by recording the motion and by checking wether the time reversed motion, seen by the replaying the recording backward, satisfy the same equation of motion. The orientation, usually defined by the direction of the stable, relaxing motion, is used to define the internal time arrow. ## III Action principle with time arrow The goal of the variational principle is the selection of the observed trajectory from a set of possible trajectories, called variational trajectory space. 
This space consists of trajectories which are at least twice differentiable and satisfy the desired auxiliary conditions to make the choice unique and well defined. The problematic feature of this scheme is that the variational trajectory space is defined by non-causal auxiliary conditions, by fixing the initial and the final coordinates, hence no time arrow can be introduced in this scheme. The generalization of the action principle to allow the dynamics to handle its time arrow is presented in this section for the Lagrangian \(L=m\dot{x}^{2}/2-U(x)\) of a one dimensional particle for the sake of simplicity. To keep track of the independent equations we discretize the time interval \(t_{i}\leq t\leq t_{f}\) by introducing a small time step \(\Delta t\) and represent the trajectories \(x(t)\) as vectors \(\vec{x}\) with components \(x_{n}=x(t_{n})\), where \(t_{n}=t_{i}+n\Delta t\), \(n=0,\ldots,N+1\), \(\Delta t=(t_{f}-t_{i})/(N+1)\). The action \[S(\vec{x})=\Delta t\sum_{n=1}^{N+1}\left[\frac{m}{2}\left(\frac{x_{n}-x_{n-1}} {\Delta t}\right)^{2}-U(x_{n})\right] \tag{1}\] yields the variational equations \[0=\frac{\partial S(\vec{x})}{\partial x_{n}}=\begin{cases}x_{0}-x_{1}&n=0,\\ 2x_{n}-x_{n+1}-x_{n-1}-\frac{\Delta t^{2}}{m}U^{\prime}(x_{n})&1\leq n\leq N, \\ x_{N+1}-x_{N}-\Delta t^{2}U^{\prime}(x_{N+1})/m&n=N+1.\end{cases} \tag{2}\] It is easy to check that the evaluation of the potential energy at an intermediate point \(x_{n}^{(\eta)}=(1-\eta)x_{n}+\eta x_{n-1}\) leads to \(\mathcal{O}(\Delta t^{2})\) changes in the equation of motion at the end points and to a \(\mathcal{O}(\Delta t^{3})\) correction at the intermediate points and brings no changes in the limit \(\Delta t\to 0\). The velocity is vanishing at the end points as \(\Delta t\to 0\) because there is no kinetic energy contribution in the Lagrangian before the initial and after the final time. This is not a problem if \(x_{0}\) and \(x_{N}\) are provided by the auxiliary conditions and the variational equation is used only for the remaining \(N\) intermediate points to find \(N\) unknowns. But in the case of initial conditions \(x_{0}\) and \(x_{1}\) are fixed by the initial coordinate and velocity and we must use the variational equation for \(x_{N+1}\) which is incomplete. How can we complete it? One can not follow the time evolution in the absence of trajectory therefore we must turn back, make a time inversion \(\Delta t\to-\Delta t\), and follow the motion backward in time. The \(x_{N+1}\)-dependent part of the action, \[S(x_{N})=\Delta t\left[\frac{m}{2}\left(\frac{x_{N+1}-x_{N}}{\Delta t}\right)^{2} -U(x_{N+1})-\frac{m}{2}\left(\frac{x_{N+2}-x_{N+1}}{\Delta t}\right)^{2}+U(x_{N +2})\right], \tag{3}\] yields \(x_{N+2}=x_{N}+\Delta t^{2}U^{\prime}(x_{N+1})/m\), in other words turns the motion back in time with \(\mathcal{O}(\Delta t)\) precision in the velocity. However we still have a problem with the last point, \(x_{N+2}\), since there is no kinetic energy and its variational equation from the action (3), \(x_{N+2}=x_{N+1}\), stops the motion. Thus we have to add more and more points \(x_{n}\), \(n>N+2\) to the backward moving part of the trajectory, \[S\to S-\Delta t\sum_{n}\left[\frac{m}{2}\left(\frac{x_{n}-x_{n-1}}{\Delta t} \right)^{2}-U(x_{n})\right]. \tag{4}\] The last coordinate still enters into an with an incomplete equation but luckily we arrive back to the initial condition at \(n=2(N+1)\). 
We stop there and take the last two coordinates from the time reversed form of the known initial condition rather than solving variational equations. Let us check quickly the consistency. The number of coordinates of the two trajectories is \(2(N+2)\), which is reduced by the initial conditions, fixing two coordinates at each end, and by the common end point to \(2N-1\). Since we have \(2N\) variational equations this seems to be an overdetermined problem. However the recursive solution of the equation of motion from the two end points, starting with \(n=0,1\) and \(n=2N+3,2N+2\) forward and backward in time, respectively, yields the same end point, thus the number of independent equations is indeed \(2N-1\). Therefore the proposal is that we trace the trajectory twice: first forward in time, then we make a time reversal and revisit the motion backward in time until we arrive back at the time reversed initial conditions, as depicted in Fig. 1.

Figure 1: The motion is followed by the trajectory \(\tilde{x}(t)\) in the generalized action principle from the initial to the final time. A time reversal is performed at the latter and we follow the motion until it retakes its time reversed initial conditions.

Redoubling (i) arises from breaking the trajectory of the roundtrip into two pieces, \[\tilde{x}(t)=\begin{cases}x_{+}(t)=x(t)&t_{i}\leq t\leq t_{f},\\ x_{-}(t)=x(2t_{f}-t)&t_{f}\leq t\leq 2t_{f}-t_{i},\end{cases} \tag{5}\] where the time as a parameter is reversed, \(t\rightarrow-t\), in the second phase of the motion, creating the illusion that the time flows in the same direction in both trajectories. The action for the trajectory doublet \(x_{\pm}(t)\) is \[S[\tilde{x}]=S[x_{+}]-S[x_{-}]. \tag{6}\] The variational trajectory space is defined by identical initial conditions for \(x_{+}(t)\) and \(x_{-}(t)\) and the final condition \[x_{+}(t_{f})=x_{-}(t_{f}). \tag{7}\] Note that the choice of the final time does not matter, the trajectory is \(t_{f}\)-independent for \(t<t_{f}\). The common final point justifies the name Closed Time Path of this scheme. The traditional action principle will be called the Single Time Path (STP) formalism. Note that the introduction of the CTP doublet \(x\rightarrow\hat{x}=(x_{+},x_{-})\) is _not_ a redoubling of the physical degrees of freedom since we observe a single physical degree of freedom for twice as long a time, and even an irreversible equation of motion sets \(x_{+}(t)=x_{-}(t)\). It is instructive to check the presence of a time arrow. Since the solution of the equation of motion makes the trajectories of the CTP copies identical, an external source \(j(t)=j_{0}\delta(t-t_{0})\) with \(t_{i}<t_{0}<t_{f}\) in the action induces a response for \(t_{0}<t<2t_{f}-t_{0}\) on the trajectory \(\tilde{x}(t)\) of Fig. 1, for \(t_{0}<t<t_{f}\) in \(\hat{x}(t)\), and a causal structure is formed. While the redoubling makes the use of causal initial conditions possible in the variational principle, it comes with a surprisingly high price. In fact, the redoubling of the coordinates seems to be out of proportion compared with the original problem, the change of the auxiliary conditions. But this is actually a reasonable price since the two trajectories satisfy the same equation of motion, hence the effort to obtain them remains the same as in the traditional scheme.
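As an illustration of this scheme, the following minimal numerical sketch (my own, not part of the original argument; the potential \(U(x)=x^{4}/4\), the grid size, and the initial data are arbitrary choices) iterates the intermediate variational equations (2) forward in time and then checks that the time reversed trajectory satisfies the same equations, which is what allows the backward branch of the roundtrip to retrace the motion.

```python
import numpy as np

# Minimal sketch of the roundtrip of Sec. III; U(x) = x^4/4, m, dt and the
# initial data are arbitrary illustrative choices.
m, dt, N = 1.0, 1e-3, 5000
U_prime = lambda x: x**3

x = np.empty(N + 2)
x[0], x[1] = 1.0, 1.0 + 0.5 * dt       # initial coordinate and velocity fix x_0, x_1

# forward branch: recursion implied by the intermediate equations in (2)
for n in range(1, N + 1):
    x[n + 1] = 2 * x[n] - x[n - 1] - dt**2 / m * U_prime(x[n])

# backward branch x_-(t) = x(2 t_f - t): the same points in reversed order
x_minus = x[::-1]

# the reversed trajectory obeys the same variational equations, so the
# roundtrip ends on the time reversed initial conditions
residual = x_minus[2:] - (2 * x_minus[1:-1] - x_minus[:-2]
                          - dt**2 / m * U_prime(x_minus[1:-1]))
print(np.max(np.abs(residual)))        # 0.0: the recursion is time-reversal symmetric
```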
## IV Green functions

A system of infinitely many Green functions can be introduced for a classical dynamics and it offers two important advantages: It shows the role of the time arrow in an especially clear manner, and incorporates the initial conditions within the action. We discuss the case of a one dimensional system for the sake of simplicity where \(x\) denotes the coordinate of a particle or the component of a plane wave of a field with a given wave vector whose dynamics is defined by the action \(S[x]\).

### STP Green functions

We start with the traditional variation method where we perform a functional Legendre transformation, \(x(t)\to j(t)\), \(S[x]\to W[j]\), \[W[j]=S[x]+\int_{t_{i}}^{t_{f}}dtx(t)j(t), \tag{8}\] where the trajectory \(x(t)\) is chosen by solving the equation of motion \[\frac{\delta S[x]}{\delta x(t)}=-j(t) \tag{9}\] with fixed auxiliary conditions. The variational equation for \(j\), \[\frac{\delta W[j]}{\delta j(t)}=x(t), \tag{10}\] can be used to express \(j\) in terms of \(x\) and to construct the inverse functional Legendre transform \[S[x]=W[j]-\int_{t_{i}}^{t_{f}}dtx(t)j(t). \tag{11}\] By restricting the book-keeping variable \(j\) to be infinitesimal the functional \(W[j]\) can be considered as a formal functional power series, \[W[j]=\sum_{n=1}^{\infty}\frac{1}{n!}\int_{t_{i}}^{t_{f}}dt_{1}\cdots dt_{n}D_{n}(t_{1},\ldots,t_{n})j(t_{1})\cdots j(t_{n}), \tag{12}\] where the coefficient functions \(D_{n}\) define the Green functions. To find the physical roles of the Green functions let us consider the action \[S[x]=\frac{1}{2}\int_{t_{i}}^{t_{f}}dtdt^{\prime}x(t)K(t,t^{\prime})x(t^{\prime})-\frac{g}{4}\int_{t_{i}}^{t_{f}}dtx^{4}(t)+\int_{t_{i}}^{t_{f}}dtj(t)x(t) \tag{13}\] where the kernel of the first integral is local in time with time translation invariance, \(K(t,t^{\prime})=K(d/dt)\delta(t-t^{\prime})\), \(K(z)\) being a polynomial of order \(2n_{d}\) in \(z\). The kernel is assumed to be symmetrical, \(K(z)=K(-z)\), because the odd powers of the time derivative produce a boundary term in the action and drop out from the variational equations. The iterative solution of the equation of motion \[\int_{t_{i}}^{t_{f}}dt^{\prime}K(t,t^{\prime})x(t^{\prime})=gx^{3}(t)-j(t), \tag{14}\] which is reliable for sufficiently short time, is the infinite sum of tree graphs, the first three of which are shown in Fig. 2. Such a representation reveals that the Green functions \(D_{n}\) describe the \({\cal O}(j^{n-1})\) contribution to the trajectory.

### Mass-shell and off-shell modes

The null space of the kernel \(K\) consists of the trajectories \(K(d/dt)x_{h}(t)=0\), the general solutions of the homogeneous equation of motion. The trajectories in the null-space will be called mass-shell modes as in field theory. To assess the role of the mass-shell modes we restrict our attention to a harmonic dynamics with \(g=0\). The mass-shell modes drop out from the action and thereby from the variational principle, they remain present only to assure the auxiliary conditions. The variational trajectory space is now defined by the generalized Dirichlet conditions \(x(t_{i})=x(t_{f})=0\) and \(d^{n}x(t_{i})/dt^{n}=d^{n}x(t_{f})/dt^{n}=0\) with \(n=1,\ldots,n_{d}-1\); such trajectories are called off-shell modes.
To regain the desired auxiliary conditions we write the physical trajectory as the sum of the general solution of the homogeneous equation of motion and a particular solution of the inhomogeneous case \[x(t)=x_{h}(t)+x_{ih}(t) \tag{15}\] where \(x_{h}(t)\) and \(x_{ih}(t)\) are a mass-shell and an off-shell mode, respectively. The kernel is invertible in the space of the off-shell modes where we can define the near Green function \(D^{n}(t,t^{\prime})=D_{2}(t,t^{\prime})\) by \[\delta(t_{1}-t_{2})=\int_{t_{i}}^{t_{f}}dt^{\prime}D^{n}(t_{1},t^{\prime})K(t^{\prime},t_{2})=\int_{t_{i}}^{t_{f}}dt^{\prime}K(t_{1},t^{\prime})D^{n}(t^{\prime},t_{2}). \tag{16}\] Being the inverse of a symmetric operator, \(D^{n}\) is symmetric as well, \(D^{n}(t,t^{\prime})=D^{n}(t^{\prime},t)\). The solution of the equation of motion of the harmonic model is given by \[x_{ih}(t)=-\int_{t_{i}}^{t_{f}}dt^{\prime}D^{n}(t,t^{\prime})j(t^{\prime}). \tag{17}\]

Figure 2: The iteration of the equation of motion (14) in terms of tree-graphs. The lines represent the Green function \(D_{2}\), the dots stand for the vertex \(g\), and the crosses denote the source, \(j\).

To recover translation symmetry in time we perform the limits \(t_{i}\rightarrow-\infty\) and \(t_{f}\rightarrow\infty\) where \(D^{n}(t,t^{\prime})=D^{n}(t-t^{\prime})\) and \[D^{n}(t)=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}e^{-it\omega}D^{n}(\omega). \tag{18}\] Partial fraction decomposition can be used to bring the inverse of the kernel within the off-shell modes into the form \[D^{n}(\omega)=\sum_{j=1}^{2n_{d}}\frac{Z_{j}}{\omega-\omega_{j}} \tag{19}\] where \(\omega_{j}\) are the normal frequencies. The extension of this expression over the whole frequency axis is carried out by the help of the principal value prescription \(1/(\omega-\omega_{j})\rightarrow(\omega-\omega_{j})/[(\omega-\omega_{j})^{2}+\epsilon^{2}]\), \[D^{n}(\omega)=\sum_{\omega_{j}\in C}\frac{Z_{cj}}{\omega^{2}-\omega_{j}^{2}}+\sum_{\omega_{j}\in R}\frac{Z_{rj}(\omega-\omega_{j})}{(\omega-\omega_{j})^{2}+\epsilon^{2}} \tag{20}\] where the limit \(\epsilon\to 0\) is to be performed after the Fourier integral over the frequency, and the first and the second sum include the complex and real normal frequencies, respectively. The latter come in doublets \((\omega_{j},-\omega_{j})\) and the former in quadruplets \((\omega_{j},-\omega_{j},\omega_{j}^{*},-\omega_{j}^{*})\) and we have \[D^{n}(t)=2i\sum_{\text{Re}\omega_{cj},\text{Im}\omega_{cj}>0}Z_{\omega_{cj}}\cos\text{Re}\omega_{cj}te^{-\text{Im}\omega_{cj}|t|}+\sum_{\omega_{rj}>0}Z_{\omega_{rj}}\sin\omega_{rj}|t| \tag{21}\] with imaginary \(Z_{\omega_{cj}}\) and real \(Z_{\omega_{rj}}\).

### Causal Green functions

The variational dynamics for the off-shell modes has non-oriented time due to the symmetry \(D^{n}(t,t^{\prime})=D^{n}(t^{\prime},t)\). The time arrow arises from the mass-shell modes in the decomposition (15) which are introduced "by hand", beyond the STP variational scheme. This decomposition can be realized by the use of the retarded Green function \(D^{r}=D^{n}+D^{f}\) where the far Green function \(D^{f}\) acts within the null-space. According to eq. (16) \[K\left(\frac{d}{dt}\right)D^{n}(t)=0,\quad t\neq 0 \tag{22}\] implying that \(D^{n}(t)\) is given by two linear superpositions of the mass-shell modes, one for \(t>0\) and another for \(t<0\), cf. (21).
There are two symmetrical and real combinations for each mode, \(\cos\bar{\omega}t\) and \(\sin\bar{\omega}|t|\), where \(\bar{\omega}\) stands for the normal frequency. The former is regular at \(t=0\) and is omitted, but the latter can reproduce the singularity in eq. (16), and the retarded Green function with time arrow \(\tau_{c}=1\) results by the choice \(D^{f}(t)=\text{sign}(t)D^{n}(t)\) and the restriction that the normal frequencies must be real.

### CTP Green functions

The variational space of the CTP formalism is defined by the initial conditions, thus the time arrow can be introduced on the level of the variational trajectory space. One employs independent sources \(\hat{j}=(j_{+},j_{-})\) for the CTP copies and defines the functional Legendre transformation \[W[\hat{j}]=S[\hat{x}]+\int_{t_{i}}^{t_{f}}dt\hat{x}(t)\hat{j}(t), \tag{23}\] where \(\hat{x}\hat{j}=x_{+}j_{+}+x_{-}j_{-}\) and \(\hat{x}(t)\) solves the equation of motion \[\frac{\delta S[\hat{x}]}{\delta\hat{x}(t)}=-\hat{j}(t) \tag{24}\] with identical initial conditions for \(x_{+}\) and \(x_{-}\) at \(t_{i}\) and the final condition \(x_{+}(t_{f})=x_{-}(t_{f})\). The Green functions are defined by the functional Taylor series \[W[\hat{j}]=\sum_{n=0}^{\infty}\frac{1}{n!}\int_{t_{i}}^{t_{f}}dt_{1}\cdots dt_{n}D_{n,\sigma_{1},\ldots,\sigma_{n}}(t_{1},\ldots,t_{n})j_{\sigma_{1}}(t_{1})\cdots j_{\sigma_{n}}(t_{n}). \tag{25}\] The iterative solution of the equation of motion results in the series of tree graphs of Fig. 2 as in the case of closed dynamics. The variational equation for \(\hat{j}\), \[\frac{\delta W[\hat{j}]}{\delta\hat{j}(t)}=\hat{x}(t), \tag{26}\] and its successive derivatives establish the same interpretation of the Green functions as in the case of closed dynamics. The two external sources \(j_{\pm}\) are treated as independent in the functional Legendre transformation, but for physical systems, where \(x_{+}(t)=x_{-}(t)\) holds for the solution of the equation of motion, one has to use \(j_{+}(t)=-j_{-}(t)\). A harmonic dynamics is defined by the action \[S[\hat{x}]=\frac{1}{2}\int_{t_{i}}^{t_{f}}dtdt^{\prime}\hat{x}(t)\hat{K}(t-t^{\prime})\hat{x}(t^{\prime})+\int_{t_{i}}^{t_{f}}dt\hat{x}(t)\hat{j}(t) \tag{27}\] and the solution of the equation of motion is \[\hat{x}(t)=-\int_{t_{i}}^{t_{f}}dt^{\prime}\hat{D}(t-t^{\prime})\hat{j}(t^{\prime}), \tag{28}\] where \(\hat{D}=\hat{D}_{2}\). The Green function \(\hat{D}\) is symmetrical, \(D_{\sigma_{1},\sigma_{2}}(t_{1},t_{2})=D_{\sigma_{2},\sigma_{1}}(t_{2},t_{1})\). To find identical and real copies of \(x\) for physical sources, \(j_{\pm}=\pm j\), the Green function must satisfy the conditions \(D^{++}+D^{--}=D^{+-}+D^{-+}\), \(\mathrm{Im}(D^{++})=\mathrm{Im}(D^{+-})\), and \(\mathrm{Im}(D^{-+})=\mathrm{Im}(D^{--})\). These equations restrict the Green function to the form \[D_{\sigma\sigma^{\prime}}=\begin{pmatrix}D^{n}+iD^{i}&-D^{f}+iD^{i}\\ D^{f}+iD^{i}&-D^{n}+iD^{i}\end{pmatrix} \tag{29}\] in terms of three real functions \(D^{n}(t,t^{\prime})=D^{n}(t^{\prime},t)\), \(D^{i}(t,t^{\prime})=D^{i}(t^{\prime},t)\), and \(D^{f}(t,t^{\prime})=-D^{f}(t^{\prime},t)\). The solution (28) for a physically realizable source is \[x(t)=-\int_{t_{i}}^{t_{f}}dt^{\prime}D^{r}(t-t^{\prime})j(t^{\prime}) \tag{30}\] and the time arrow is properly installed even in the iterative solution of the anharmonic model.
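As a simple check of eq. (30), the following sketch (my own illustration; the oscillator parameters and the Gaussian source are arbitrary choices, and \(D^{r}(t)=-\Theta(t)\sin(\omega_{0}t)/(m\omega_{0})\) is the familiar retarded Green function of a harmonic oscillator written in the sign conventions of eq. (30)) convolves a localized source with the retarded Green function and verifies that the response appears only after the source acts.

```python
import numpy as np

# Sketch: the retarded response x(t) = -∫ dt' D^r(t-t') j(t') of a harmonic
# oscillator; parameters and source profile are arbitrary illustrative choices.
m, w0, dt = 1.0, 2.0, 0.01
t = np.arange(-5.0, 5.0, dt)
j = np.exp(-((t - 1.0) / 0.05) ** 2)                 # pulse switched on around t0 = 1

lag = t[:, None] - t[None, :]                        # t - t'
Dr = -np.where(lag > 0, np.sin(w0 * lag), 0.0) / (m * w0)
x = -dt * Dr @ j                                     # x(t) = -∫ D^r(t-t') j(t') dt'

# the time arrow is in place: no response before the source acts
print(np.max(np.abs(x[t < 0.8])))                    # ~ 0
print(np.max(np.abs(x[t > 1.5])))                    # finite oscillating tail
```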
The inverse can be written in a similar form \[K_{\sigma\sigma^{\prime}}=\begin{pmatrix}K^{n}+iK^{i}&K^{f}-iK^{i}\\ -K^{f}-iK^{i}&-K^{n}+iK^{i}\end{pmatrix} \tag{31}\] with \[K^{\stackrel{r}{a}} = K^{n}\pm K^{f}=(D^{\stackrel{r}{a}})^{-1},\qquad K^{i} = -D^{r-1}D^{i}D^{a-1}, \tag{32}\] and \[D^{\stackrel{r}{a}} = D^{n}\pm D^{f}=(K^{\stackrel{r}{a}})^{-1},\qquad D^{i} = -K^{r-1}K^{i}K^{a-1}. \tag{33}\]

### Generalized \(\epsilon\)-prescription

To complete the CTP variational principle guided by the Green functions and use the inversions (32)-(33) one needs an invertible kernel of the harmonic model. Such a regularization of the Green functions is achieved in the traditional STP scheme by the usual \(\epsilon\)-prescription, the introduction of an infinitesimal imaginary term in the action, \(S[x]\to S[x]+i\epsilon\int dtx^{2}(t)/2\), which implements a particular treatment of the discrete spectrum embedded into the continuum. However the CTP action (6) possesses a much larger degeneracy: it is vanishing for arbitrary \(x_{+}(t)=x_{-}(t)\). To lift this degeneracy the infinitesimal imaginary term is added with the same sign for both copies, \[S[\hat{x}]=S[x_{+}]-S[x_{-}]+i\frac{\epsilon}{2}\int_{t_{i}}^{t_{f}}dt[x_{+}^{2}(t)+x_{-}^{2}(t)]. \tag{34}\] The Green function (A17) of a harmonic oscillator becomes time translation invariant in the limits \(t_{i}\rightarrow-\infty\), \(t_{f}\rightarrow\infty\), \[\hat{D}(\omega)=\frac{1}{m}\begin{pmatrix}\frac{1}{\omega^{2}-\omega_{0}^{2}+i\epsilon}&-i2\pi\Theta(-\omega)\delta(\omega^{2}-\omega_{0}^{2})\\ -i2\pi\Theta(\omega)\delta(\omega^{2}-\omega_{0}^{2})&-\frac{1}{\omega^{2}-\omega_{0}^{2}-i\epsilon}\end{pmatrix}. \tag{35}\] The variational trajectory space is defined by the generalized Dirichlet boundary conditions because the desired initial conditions at finite time can be achieved by an appropriate adiabatic turning on of the external source. The use of the Lorentzian regulated Dirac-delta gives \[D^{n}(\omega) = P\frac{1}{m(\omega^{2}-\omega_{0}^{2})},\qquad D^{f}(\omega) = -i\frac{\text{sign}(\omega)\epsilon}{m[(\omega^{2}-\omega_{0}^{2})^{2}+\epsilon^{2}]},\qquad D^{i}(\omega) = -\frac{\epsilon}{m[(\omega^{2}-\omega_{0}^{2})^{2}+\epsilon^{2}]}, \tag{36}\] and the inversion (32) can be used to arrive at the kernel \[\hat{K}(\omega)=m\begin{pmatrix}\omega^{2}-\omega_{0}^{2}+i\epsilon&-2i\epsilon\Theta(-\omega)\\ -2i\epsilon\Theta(\omega)&-\omega^{2}+\omega_{0}^{2}+i\epsilon\end{pmatrix}, \tag{37}\] where \(K^{n}=m(\omega^{2}-\omega_{0}^{2})\), \(K^{f}=im\epsilon\,\text{sign}(\omega)\), and \(K^{i}=m\epsilon\), and to define the action of the harmonic oscillator \[S = \frac{m}{2}\int_{-\infty}^{\infty}dt[\dot{x}_{+}^{2}(t)-\dot{x}_{-}^{2}(t)-\omega_{0}^{2}(x_{+}^{2}(t)-x_{-}^{2}(t))]+\frac{\epsilon}{\pi}\int_{-\infty}^{\infty}dtdt^{\prime}\frac{x_{-}(t)x_{+}(t^{\prime})}{t-t^{\prime}+i\epsilon}+\frac{i\epsilon}{2}\int_{-\infty}^{\infty}dt[x_{+}^{2}(t)+x_{-}^{2}(t)] \tag{38}\] with the generalized \(\epsilon\)-prescription terms given by the last two integrals. The time translation symmetry breaking coupling (7) of the CTP doublet trajectories at the final time is spread out into an infinitesimal, time translation symmetric coupling between the trajectories.

## V Open Classical Mechanics

The traditional variational principle was extended in closed dynamics to incorporate the time arrow. The next step is the generalization of the action for open dynamics.

### External time arrow

We start with the bipartite system mentioned in the Introduction.
By imposing initial or final conditions one can install a time arrow independently for the system and for the environment as long as they do not interact. Hence we can formally prepare parallel or anti-parallel flow of time for the non-interacting subsystems. The four possible orientations of the time arrows are displayed in Fig. 3.

Figure 3: The time arrows of the system and the environment. The fat horizontal line indicates the time when the auxiliary conditions are imposed, and the corresponding time arrows are shown, as well. (a): \(\tau_{s}=\tau_{e}=1\); (b): \(\tau_{s}=\tau_{e}=-1\); (c): \(\tau_{s}=-1\), \(\tau_{e}=1\); (d): \(\tau_{s}=1\), \(\tau_{e}=-1\). The system coordinate is changed by a small amount at the time of the right oriented horizontal dashed line, the arrow representing the impact of the change on the environment. At another time, corresponding to the left oriented horizontal dashed line, the change of the environment trajectory feeds back to the system itself and appears as a self interaction within the system, induced by the environment.

### Semi-holonomic forces

The conservative holonomic forces of the action principle are represented by a potential \(U(x,\dot{x})\) and are of the form \[F(x,\dot{x})=-\partial_{x}U(x,\dot{x})+\frac{d}{dt}\partial_{\dot{x}}U(x,\dot{x}). \tag{39}\] To find their generalization, the open semi-holonomic forces, we start with the full closed dynamics of the observed system and its environment characterized by the action \(S[x,y]\). The elimination of the environment is achieved by solving the equation of motion \(\delta S[x,y]/\delta y(t)=0\) together with the environment initial conditions imposed at \(t_{i}\) for a general system trajectory \(x(t)\). The effective system action is obtained by inserting the solution, \(y[x]\), into the action, \(S_{eff}[x]=S[x,y[x]]\). The resulting effective equation of motion \[\frac{\delta S_{eff}[x]}{\delta x(t)}=\frac{\delta S[x,y]}{\delta x(t)}_{|y=y[x]}+\int_{t_{i}}^{t_{f}}dt^{\prime}\frac{\delta S[x,y[x]]}{\delta y(t^{\prime})}\frac{\delta y[t^{\prime};x]}{\delta x(t)}=\frac{\delta S[x,y]}{\delta x}_{|y=y[x]}=0 \tag{40}\] shows clearly the double role the system coordinate plays in the effective dynamics: It appears twice on the list of variables of \(S[x,y[x]]\), first as a virtual variational parameter to deduce forces and second as a position defining parameter, and only the second role is taken up outside of the variational calculation. This is redoubling (ii) and suggests the generalization of eq. (39), \[F(x,\dot{x})=-\partial_{x}U(x,\dot{x},x^{\prime},\dot{x}^{\prime})_{|x^{\prime}=x}+\frac{d}{dt}\partial_{\dot{x}}U(x,\dot{x},x^{\prime},\dot{x}^{\prime})_{|x^{\prime}=x}, \tag{41}\] for semi-holonomic forces. The structure \(S[x,y[x]]\) of the effective action assures that the semi-holonomic forces cover all possible open forces existing within a subsystem of a closed dynamics.

### Action of an open system

To find the action and the variational trajectory space in the presence of semi-holonomic forces we write the full action in the form \(S[x,y]=S_{s}[x]+S_{e}[y]+S_{i}[x,y]\) and assume a simple system-environment interaction corresponding to the interaction Lagrangian \(L_{i}=gxy\), \(g\) being a coupling constant. We shall use initial conditions for the system and seek the action corresponding to the case of either parallel or antiparallel system and environment time arrows, shown in Fig. 3 (a) and (d), respectively.
In the case of a causal effective dynamics of Fig. 3 (a), the perturbative elimination of the environment, the iterative solution of the environment equation of motion, generates the non-local potential energy \(U^{(a)}=g^{2}x^{\prime}(t_{2})D_{e}^{r}(t_{2},t_{1})x^{\prime}(t_{1})\) at leading order in the effective action, where \(t_{1}\) and \(t_{2}\) correspond to the times of the right and the left oriented horizontal dashed lines in Fig. 3 (a), respectively, and \(D_{e}^{r}\) stands for the retarded Green function of the environment. In the case of an acausal effective dynamics, shown in Fig. 3 (d), the leading order interaction is represented by \(U^{(d)}=g^{2}x^{\prime}(t_{2})D_{e}^{a}(t_{2},t_{1})x^{\prime}(t_{1})\), \(D_{e}^{a}\) being the advanced environment Green function. Thus the leading order interaction is given by the action \[S_{i}^{(a)}=g^{2}\int_{t_{i}}^{t_{f}}dt_{1}dt_{2}x^{\prime}(t_{2})[D_{e}^{n}(t_{2},t_{1})\pm D_{e}^{f}(t_{2},t_{1})]x^{\prime}(t_{1}). \tag{42}\] This expression shows the need of redoubling (iii): The contribution of the far Green function is vanishing because the Green function is sandwiched between the same trajectory. This is a well known problem of the traditional STP scheme which has difficulties in supporting interactions with odd time reversal parity. For instance the Lorentz force in electrodynamics has negative time reversal parity; however, it can be derived from an STP variational principle owing to its negative space inversion parity. The representation of the time inversion asymmetric part of a one dimensional harmonic force needs "another" system trajectory handled independently in the variation. The action for the trajectory doublet has to be introduced in such a manner that the solution of the equation of motion brings the two trajectories to overlap. To this end we follow the procedure outlined in section III and introduce the action \(S[\hat{x},\hat{y}]=S[x_{+},y_{+}]-S[x_{-},y_{-}]\), up to the infinitesimal generalized \(\epsilon\)-prescription terms, for the full closed system and define the variational trajectory space with the same initial conditions for the two trajectories and with the identification of the final coordinates, \(x_{+}(t_{f})=x_{-}(t_{f})\), \(y_{+}(t_{f})=y_{-}(t_{f})\). The derivation of the effective action follows the steps outlined above and one arrives at \[S_{eff}[\hat{x}]=S_{s}[x_{+}]+S_{e}[y_{+}[\hat{x}]]+S_{i}[x_{+},y_{+}[\hat{x}]]-S_{s}^{*}[x_{-}]-S_{e}^{*}[y_{-}[\hat{x}]]-S_{i}^{*}[x_{-},y_{-}[\hat{x}]] \tag{43}\] where \[\frac{\delta}{\delta\hat{y}(t)}\{S_{e}[y_{+}]+S_{i}[x_{+},y_{+}]-S_{e}^{*}[y_{-}]-S_{i}^{*}[x_{-},y_{-}]\}=0. \tag{44}\] The effective action (43) can be written as \[S_{eff}[\hat{x}]=S_{s}[x_{+}]-S_{s}^{*}[x_{-}]+S_{infl}[x_{+},x_{-}], \tag{45}\] the sum of the closed system CTP action and the influence functional [38] \[S_{infl}[\hat{x}]=S_{e}[y_{+}[\hat{x}]]+S_{i}[x_{+},y_{+}[\hat{x}]]-S_{e}^{*}[y_{-}[\hat{x}]]-S_{i}^{*}[x_{-},y_{-}[\hat{x}]] \tag{46}\] representing the effective interactions. The simplest open system, a damped harmonic oscillator, corresponds to the Lagrangian \[L=\frac{m}{2}[\dot{x}_{+}^{2}-\dot{x}_{-}^{2}-\omega_{0}^{2}(x_{+}^{2}-x_{-}^{2})+\nu(\dot{x}_{+}x_{-}-x_{+}\dot{x}_{-})], \tag{47}\] with the equation of motion [39] \[\ddot{x}_{\pm}=-\omega_{0}^{2}x_{\pm}-\nu\dot{x}_{\mp}. \tag{48}\]
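For illustration, the following sketch (my own, with arbitrary parameters and a simple explicit integrator) integrates the two coupled equations (48) from identical initial conditions for the copies; the copies stay glued together and the common trajectory is the familiar damped oscillation, so the energy defined by the closed part of the dynamics decreases.

```python
import numpy as np

# Sketch: integrate the CTP equations of motion (48) of the damped oscillator
# with identical initial conditions for the two copies (parameters arbitrary).
w0, nu, dt, steps = 2.0, 0.3, 1e-3, 20000
x = np.array([1.0, 1.0])      # (x_+, x_-)
v = np.array([0.0, 0.0])      # (v_+, v_-)

energy0 = 0.5 * (v[0] ** 2 + w0 ** 2 * x[0] ** 2)
for _ in range(steps):
    a = -w0 ** 2 * x - nu * v[::-1]   # each copy is damped by the *other* copy's velocity
    v = v + dt * a
    x = x + dt * v

# the copies coincide and the common trajectory dissipates energy
print(abs(x[0] - x[1]))                                       # 0.0
print(0.5 * (v[0] ** 2 + w0 ** 2 * x[0] ** 2) / energy0)      # < 1: energy is lost
```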
A few remarks are in order at this point: (1) One can now better understand the necessity of the time inversion performed at \(t_{f}\) in the intuitively introduced scheme of section III: It yields the opposite sign in front of the action of the two CTP doublet trajectories, needed to retain the contribution of the far Green function term in eq. (42). (2) There is a clear difference between closed and open dynamics: While the action (6) of a closed dynamics couples the copies \(x_{+}\) and \(x_{-}\) only by the infinitesimal \({\cal O}(\epsilon)\) terms, the closing of the final environment coordinates \(y_{+}(t_{f})=y_{-}(t_{f})\) during the elimination (44) couples \(x_{+}\) and \(x_{-}\) with a finite strength. (3) The effective action can be written by separating the STP and the genuine CTP terms, \[S_{eff}[\hat{x}]=S_{1}[x_{+}]-S_{1}^{*}[x_{-}]+S_{2}[\hat{x}] \tag{49}\] where \(\delta^{2}S_{2}[\hat{x}]/\delta x_{+}(t)\delta x_{-}(t^{\prime})\neq 0\). The role of \(S_{1}\) and \(S_{2}\) can be revealed by the help of the parametrization \(x_{\pm}=x\pm x_{d}/2\): The equation of motion arising from the variation of \(x\), ignoring the infinitesimal imaginary terms, \[\frac{\delta S_{1}[x+\frac{x_{d}}{2}]}{\delta x}-\frac{\delta S_{1}[x-\frac{x_{d}}{2}]}{\delta x}+\frac{\delta S_{2}[x+\frac{x_{d}}{2},x-\frac{x_{d}}{2}]}{\delta x}=0, \tag{50}\] is trivially satisfied since \(x_{+}(t)=x_{-}(t)\), hence \(x_{d}(t)=0\). The variation of \(x_{d}\) yields the physical equation of motion, \[\frac{\delta S_{1}[x]}{\delta x}+\frac{\delta S_{2}[x_{+},x]}{\delta x_{+}}_{|x_{+}=x}=0, \tag{51}\] and shows that the holonomic and the semi-holonomic forces arise from \(S_{1}\) and \(S_{2}\), respectively, as shown in Fig. 4 where the dashed line stands for the environment far Green function. The interactions are separated into two classes by the expressions (45) and (49), the difference being that the system interaction is defined by the original system dynamics (\(S_{s}\)) in the former and by all closed system interactions (\(S_{1}\)) in the latter. The classification of (49) is more natural since the separation of the full system into the observed subsystem and its environment is introduced only by us. (4) The time reversal can be realized in two different manners in open systems: The full time reversal acts on both the system and its environment and is a trivial symmetry, amounting to a reparametrization of the motion. It is represented by the exchange of the trajectories, \((x_{+},x_{-})\rightarrow(x_{-},x_{+})\) and \(S[x_{+},x_{-}]\rightarrow-S^{*}[x_{-},x_{+}]\), where the complex conjugation corresponds to the flip \(\epsilon\rightarrow-\epsilon\), the swap of the initial and the final conditions. Therefore \[S[x_{+},x_{-}]=-S^{*}[x_{-},x_{+}] \tag{52}\] is a formal symmetry of the CTP scheme. A partial time reversal is performed only on the observed system and is represented by \(t\rightarrow-t\) in, and a complex conjugation of, the effective action. The symmetry with respect to the partial time reversal transformation is violated by \(S_{2}\) in (49), in agreement with the remark made in section V.1 that the time of an open dynamics is always oriented.

## VI Ancilla

The argument of redoubling (iii) of the previous section suggests that the CTP copy of the system actually represents the environment. We follow this point of view and seek a non-trivial extension of the original variational principle to cover open systems by treating the copy as an ancilla.
Figure 4: An open interaction channel of \(x_{+}\) with the environment is represented as an interaction with \(x_{-}\). The dashed line stands for the environment induced self interaction and (a): \(\tau_{e}=1\); (b): \(\tau_{e}=-1\).

### Environment as a copy

If a small system is interacting with a large environment then it is not necessary to possess the full information about the environment to find its imprint on the dynamics of the system. The selection and the representation of the necessary information can be achieved by introducing a sufficiently simple ancilla as a new, reduced environment. How to select the ancilla and its interaction with the system? The solution comes from looking into the conservation laws whose violation is the hallmark of open dynamics. The energy is decreased by a Newtonian friction force which is proportional to the velocity despite the explicit time translation invariance of the equation of motion. How can we modify the variational principle in order to reproduce such a non-conservative force with unbroken time translation symmetry of the effective dynamics? The answer is rather obvious: the ancilla should absorb the lost energy. Rather than trying to circumvent the second law of thermodynamics and accumulate the dissipated energy in a small ancilla we can arrive at a solution in two simple steps. First, it is trivial to compare the energy stored in the observed system and the ancilla if the latter is chosen to be a copy of the former. This is redoubling (iv). As a side remark, such an _ancilla = observed system_ construction is optimal for the representation of the effective dynamics since the environment is reduced to an ancilla with the same complexity as the system. Next, the energy conservation can trivially be achieved by assuring that the energy is defined with the opposite sign in the original system and its copy. Since the conserved quantities are linear in the Lagrangian according to the Noether theorem, the Lagrangian of the full system is chosen to be antisymmetric with respect to the exchange of the system and the ancilla as in eq. (46). The procedure can easily be demonstrated by the derivation of the Noether theorem for the Lagrangian \(L=L_{1}(x_{+},\dot{x}_{+},t)-L_{1}(x_{-},\dot{x}_{-},t)+L_{2}(x_{+},\dot{x}_{+},x_{-},\dot{x}_{-},t)\). By analogy with field theory the trajectory \(x(t)\) can be viewed as a mapping of the external space, the time, into the internal space, the coordinate space. Hence the infinitesimal changes \(x\to x+\delta x\) and \(t\to t+\delta t\) are called internal and external space transformations, respectively. The momentum balance equation arises by performing an infinitesimal translation in the internal space, \(\hat{x}\rightarrow\hat{x}+\hat{\xi}\), and by treating the time dependent \(\delta\hat{x}=\hat{\xi}\) with \(\hat{\xi}(t_{i})=\hat{\xi}(t_{f})=0\) as a special variation. The corresponding linearized action \[S[\hat{\xi}]=\int dt\hat{\xi}\left(\frac{d}{dt}\frac{\partial L}{\partial\dot{\hat{x}}}-\frac{\partial L}{\partial\hat{x}}\right) \tag{53}\] has a trivial \(0=0\) equation of motion for \(\xi=(\xi_{+}+\xi_{-})/2\) in agreement with eq. (50). This triviality indicates that the total momentum of both copies is vanishing because the momentum is defined with the opposite sign in the copies.
The Noether theorem corresponding to the opposite variation of the two trajectories, the equation of motion for \(\xi_{d}=\xi_{+}-\xi_{-}\), adds rather than subtracts the contributions of the two momenta and is actually a balance equation, \[\frac{d}{dt}\frac{\partial L_{1}}{\partial\dot{x}}=\frac{\partial L_{1}}{\partial x}+\left(\frac{\partial L_{2}}{\partial x_{+}}-\frac{d}{dt}\frac{\partial L_{2}}{\partial\dot{x}_{+}}\right)_{|\dot{x}_{\pm}=\dot{x},x_{\pm}=x}. \tag{54}\] The change of the generalized momentum is due to the violation of the translation invariance of the closed dynamics and the semi-holonomic forces. Another form of this equation states that the change of the momentum renormalized by the open interactions, \[p_{r}=\frac{\partial L_{1}}{\partial\dot{x}}+\frac{\partial L_{2}}{\partial\dot{x}_{+}}\bigg|_{\dot{x}_{\pm}=\dot{x},x_{\pm}=x}, \tag{55}\] is due to the breakdown of the translation invariance of the effective dynamics, \[\dot{p}_{r}=\frac{\partial L_{1}}{\partial x}+\frac{\partial L_{2}}{\partial x_{+}}\bigg|_{\dot{x}_{\pm}=\dot{x},x_{\pm}=x}. \tag{56}\] The second term in this expression represents the image of the system, the "polarization" of the environment, generated by the system-environment interactions, which moves with the system. In particular, the momentum balance equation for the damped harmonic oscillator (47), with \(p_{r}=m(\dot{x}+\nu x/2)\), is \(\dot{p}_{r}=-m(\omega_{0}^{2}x+\nu\dot{x}/2)\). The balance equation for angular momentum comes from infinitesimal internal space rotations of a multi-component coordinate vector, \(\delta x_{\sigma}=\xi_{\sigma}\tau x_{\sigma}\), where \(\tau\) stands for a generator of the rotation group. The linearized action of the infinitesimal angle \(\xi\), \[S[\hat{\xi}]=-\sum_{\sigma}\int dt\left[\xi_{\sigma}\frac{\partial L}{\partial x_{\sigma}}\tau x_{\sigma}+\frac{\partial L}{\partial\dot{x}_{\sigma}}(\dot{\xi}_{\sigma}\tau x_{\sigma}+\xi_{\sigma}\tau\dot{x}_{\sigma})\right] \tag{57}\] results in the equation of motion for \(\xi_{d}\), \[\frac{d}{dt}(p_{r}\tau x)=\frac{\partial L_{1}}{\partial x}\tau x+\frac{\partial L_{1}}{\partial\dot{x}}\tau\dot{x}+\left(\frac{\partial L_{2}}{\partial x_{+}}\tau x+\frac{\partial L_{2}}{\partial\dot{x}_{+}}\tau\dot{x}\right)_{|\dot{x}_{\pm}=\dot{x},x_{\pm}=x}, \tag{58}\] showing that the non-conservation of the renormalized angular momentum arises from the non-rotational invariant part of the effective Lagrangian. The energy equation is obtained by performing an infinitesimal external space translation, \(x_{\pm}(t)\to x_{\pm}(t-\xi_{\pm})\), \(\xi_{\pm}(t_{i})=\xi_{\pm}(t_{f})=0\), and treating \(\delta x_{\sigma}=-\xi_{\sigma}\dot{x}_{\sigma}\) as a variation with the linearized action \[S[\hat{\xi}]=-\sum_{\sigma}\int dt\left(\xi_{\sigma}\dot{x}_{\sigma}\frac{\partial L}{\partial x_{\sigma}}+\frac{d}{dt}(\xi_{\sigma}\dot{x}_{\sigma})\frac{\partial L}{\partial\dot{x}_{\sigma}}\right). \tag{59}\]
The triviality of the equation of motion for \(\xi\) renders the linearized action \(\xi\)-independent, and its \(\xi_{d}\)-dependence, \[S[\xi_{d}]=\int dt\left[\xi_{d}\left(\partial_{t}L_{1}-\frac{d}{dt}L_{1}\right)-\dot{\xi}_{d}\dot{x}\frac{\partial L_{1}}{\partial\dot{x}}\right]-\frac{1}{2}\sum_{\sigma}\sigma\int dt\left(\xi_{d}\dot{x}_{\sigma}\frac{\partial L_{2}}{\partial x_{\sigma}}+\dot{\xi}_{d}\dot{x}_{\sigma}\frac{\partial L_{2}}{\partial\dot{x}_{\sigma}}+\xi_{d}\ddot{x}_{\sigma}\frac{\partial L_{2}}{\partial\dot{x}_{\sigma}}\right), \tag{60}\] is obtained by the help of the relation \[\frac{d}{dt}L_{1}=\dot{x}\frac{\partial L_{1}}{\partial x}+\ddot{x}\frac{\partial L_{1}}{\partial\dot{x}}+\partial_{t}L_{1}. \tag{61}\] The equation of motion \[\frac{d}{dt}\left(\dot{x}\frac{\partial L_{1}}{\partial\dot{x}}-L_{1}\right)=-\partial_{t}L_{1}+\dot{x}\left(\frac{\partial L_{2}}{\partial x_{+}}-\frac{d}{dt}\frac{\partial L_{2}}{\partial\dot{x}_{+}}\right)_{|\dot{x}_{\pm}=\dot{x},x_{\pm}=x} \tag{62}\] is the energy equation, showing that the energy defined by the closed dynamics is changed by the explicit time-dependence of the closed dynamics and the work of the semi-holonomic forces. It follows from the structure of the semi-holonomic forces that the internal symmetry conservation laws are retained by an appropriate renormalization of the conserved quantities if the open interactions, encoded by \(L_{2}\), do not violate the underlying symmetry. This is not the case with external symmetries which are always broken by the semi-holonomic forces.

### Generalized variational equation

The equations of motion of closed systems are restricted by the variational principle: they are the canonical equations of classical mechanics. Open equations of motion are non-conservative hence non-canonical. It is natural to ask whether the possible equations of motion of a subsystem of closed systems cover the set of all imaginable equations or belong to a restricted class. Let us assume that the equation of motion for the coordinate \(x\) at the time \(t^{\prime}\) can be written in the form \(F[x,t^{\prime}]=0\) where \(F[x,t^{\prime}]\) is an arbitrary functional of the trajectory \(x(t)\). We introduce another coordinate \(x_{d}\) and define the action \[S_{F}[x,x_{d}]=\int dt^{\prime}x_{d}(t^{\prime})F[x,t^{\prime}]+S^{\prime}[x_{d}] \tag{63}\] for the two trajectories where \(S^{\prime}\) is an arbitrary odd functional, \(S^{\prime}[-x_{d}]=-S^{\prime}[x_{d}]\). It is easy to see that the action \[S[\hat{x}]=S_{F}\left[\frac{1}{2}(x_{+}+x_{-}),x_{+}-x_{-}\right] \tag{64}\] equipped with the generalized \(\epsilon\)-prescription terms generates the variational equations \(F[x,t^{\prime}]=0\) and \(x_{d}=0\) within the variational space of section III. Argument (v) for the redoubling is that it introduces a copy of the system as an ancilla to obtain any equation of motion by the variational principle.

## VII Time arrow in quantum mechanics

Perhaps the first indication of the redoubling in quantum mechanics is the way the time arrow is introduced. The auxiliary conditions for the Newton equation, a second order differential equation, can be used to orient the time for the classical motions. In fact, neither the boundary conditions, the initial and the final coordinates, nor the initial conditions, the initial coordinate and the velocity, are time reversal invariant. The initial condition for the first order Schrödinger equation is the initial state.
How to encode the direction of time in a state? The solution of this problem is well known: it is the introduction of an internal time reversal parity for the quantum states. The usual procedure is to use positive time reversal parity coordinate eigenstates with real wave functions and to represent the time reversal transformation by complex conjugation. The real and imaginary parts of the wave function in coordinate representation stand for the positive and the negative time reversal parity components of the states. As a result the bra and the ket develop in opposite directions in time. This remark brings us to the quantum origin of argument (i) mentioned above in closed classical dynamics. To recover time independent physical quantities in an eigenstate of the Hamiltonian, the time dependence of the ket in expectation values must be neutralized by that of the bra. The appearance of the bra and the ket components as multiplicative factors in the expectation values is required by Gleason's theorem, as well. The expectation value of the coordinate dependent operator \(A(x)\) at time \(t\) is given by \[\langle\psi_{i}|U^{\dagger}(t,t_{i})AU(t,t_{i})|\psi_{i}\rangle=\int d\hat{x}_{i}dx_{f}A(x_{f})\langle x_{i+}|\psi_{i}\rangle\langle\psi_{i}|x_{i-}\rangle\int_{\hat{x}(t_{i})=\hat{x}_{i}}^{x_{\pm}(t)=x_{f}}D[\hat{x}]e^{\frac{i}{\hbar}(S[x_{+}]-S^{*}[x_{-}])} \tag{65}\] in the path integral representation. Therefore the integration over the trajectories \(x_{+}(t)\) and \(x_{-}(t)\) of Fig. 1 represents the quantum fluctuations within \(U\) and \(U^{\dagger}\) and the action is given by eq. (34). The redoubling of the coordinate is the result of the non-trivial representation of the time arrow on the space of pure states, \(\psi\rightarrow(|\psi\rangle,\langle\psi|)\). The CTP formalism was actually first introduced to deal with the perturbation expansion in the Heisenberg representation [25; 26] where the redoubling arises from the independent perturbation series of the time evolution operators \(U(t)\) and \(U^{\dagger}(t)\). A similar argument applies to Thermal Field Theory [40].

## VIII Quantum effective actions

The distribution of the quantum fluctuations in a state is described by the density matrix, defined by \(\rho(x_{+},x_{-})=\langle x_{+}|\rho|x_{-}\rangle\) in the coordinate representation, and the generalization of the expectation value (65) for a mixed state is \[\mathrm{Tr}[\rho A]=\int dx_{\pm}\rho(x_{+},x_{-})\langle x_{-}|A|x_{+}\rangle. \tag{66}\] The density matrix is factorizable in a pure state, \(\rho(x_{+},x_{-})=\langle x_{+}|\psi\rangle\langle\psi|x_{-}\rangle\), making the quantum fluctuations in the bra and the ket independent. These fluctuations become correlated in a mixed state where the density matrix is not factorizable, \[\rho(x_{+},x_{-})=\sum_{n}\langle x_{+}|\psi_{n}\rangle p_{n}\langle\psi_{n}|x_{-}\rangle \tag{67}\] where the sum extends over more than one pairwise orthogonal state. Therefore the mixing, the uncertainty about the actual state of the system, appears as a correlation between the quantum fluctuations of the bra and the ket. The correlations appear as a coupling between the CTP copies, represented by the contribution \(S_{2}\) in (49).
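A toy numerical illustration of this point (my own, with an arbitrary two-state mixture) builds the density matrix (67) for a two-state system and checks that it is normalized but not pure, i.e. it cannot be written as a single bra-ket product:

```python
import numpy as np

# Toy illustration of eq. (67): a mixed two-state density matrix is not
# factorizable into a single bra-ket pair (states and weights arbitrary).
psi0 = np.array([1.0, 0.0])
psi1 = np.array([0.0, 1.0])
p = [0.7, 0.3]

rho = p[0] * np.outer(psi0, psi0.conj()) + p[1] * np.outer(psi1, psi1.conj())

print(np.trace(rho))            # 1.0: normalization
print(np.trace(rho @ rho))      # 0.58 < 1: mixed, bra and ket fluctuations are correlated
```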
The construction of the effective action, point (iii) in classical mechanics, is based on the reduced density matrix of the coordinate \(x\) within a closed system of section V.3, \[\rho(t,\hat{x}) = \langle x_{+}|\mathrm{Tr}_{e}[U(t,t_{i})\rho_{i}U^{\dagger}(t,t_{i})]|x_{-}\rangle = \int d\hat{x}_{i}d\hat{y}_{i}dy_{f}\rho_{i}(x_{i+},y_{i+},x_{i-},y_{i-})\int_{\hat{x}(t_{i})=\hat{x}_{i},\hat{y}(t_{i})=\hat{y}_{i}}^{\hat{x}(t)=\hat{x},y_{\pm}(t)=y_{f}}D[\hat{x}]D[\hat{y}]e^{\frac{i}{\hbar}(S[x_{+},y_{+}]-S^{*}[x_{-},y_{-}])}, \tag{68}\] where the integration over the final coordinate of the environment trajectory stands for the trace \(\mathrm{Tr}_{e}\) over the environment Hilbert space. One can introduce the influence functional by the equation \[e^{\frac{i}{\hbar}S_{infl}[\hat{x}]}=\int d\hat{y}_{i}dy_{f}\rho_{e}(y_{i+},y_{i-})\int_{\hat{y}(t_{i})=\hat{y}_{i}}^{y_{\pm}(t)=y_{f}}D[\hat{y}]e^{\frac{i}{\hbar}(S_{e}[y_{+}]+S_{i}[x_{+},y_{+}]-S_{e}^{*}[y_{-}]-S_{i}^{*}[x_{-},y_{-}])}, \tag{69}\] where the initial density matrix is assumed to be factorizable, \(\rho_{i}(x_{i+},y_{i+},x_{i-},y_{i-})=\rho_{s}(x_{i+},x_{i-})\rho_{e}(y_{i+},y_{i-})\), and rewrite the reduced density matrix as \[\rho(t,\hat{x})=\int d\hat{x}_{i}\rho_{s}(x_{i+},x_{i-})\int_{\hat{x}(t_{i})=\hat{x}_{i}}^{\hat{x}(t)=\hat{x}}D[\hat{x}]e^{\frac{i}{\hbar}S_{Beff}[\hat{x}]}, \tag{70}\] where the bare effective action \(S_{Beff}\) is given by (45). The environment and the system are treated in the CTP scheme with identical final points of the doublet trajectories and in the Open Time Path (OTP) scheme with different, fixed end points, respectively. In the case of a pure state in closed, unitary dynamics the bra and the ket follow independent time evolutions related by a trivial time inversion, thus it is sufficient to solve the Schrödinger equation for one of them to find the time dependence of the expectation values. This is similar to classical mechanics where it is sufficient to solve the Euler-Lagrange equation for the unique trajectory in the traditional variational principle. However the correlation between the bra and the ket fluctuations in an open dynamics makes it necessary to find the non-factorizable density matrix. In a similar manner one has to solve the equation of motion for the trajectories of both copies in classical open systems. The origin of point (v), the possibility of reproducing any classical time dependence by an appropriately chosen open dynamics, is not easy to find in quantum mechanics because the positivity of the reduced density matrix restricts the effective equation of motion in a rather complicated manner. But at least a partial analogy can be found: The wave function of a pure state of a closed dynamics is given by the dependence of the path integral on the final point. In a similar manner the density matrix should be represented by a path integral over redoubled trajectories. This is just what the double CTP path integral (70) is doing. To assure the Hermiticity of the density matrix one needs condition (52).
The relation with the classical dynamics can easily be established by the help of the generator functional \[e^{\frac{i}{\hbar}W[\hat{j}]} = {\rm Tr}[U(t,t_{i};j_{+})\rho_{i}U^{\dagger}(t,t_{i};-j_{-})] = \int D[\hat{x}]e^{\frac{i}{\hbar}S[\hat{x}]+\frac{i}{\hbar}\int dt\hat{j}(t)\hat{x}(t)} \tag{71}\] for the connected Green functions, defined by using (25) for \(W[\hat{j}]\) rather than \(Z[\hat{j}]\), where \(U(t,t_{i};j)\) is the time evolution operator for the closed dynamics of the system and its environment in the presence of the external source \(j\) coupled to \(x\). The inverse functional Legendre transform \(W[\hat{j}]\to S[\hat{x}]\) defined by eqs. (23) and (26) can be used to introduce the effective action \(S_{eff}[\hat{x}]\). The trajectory \(\hat{x}(t)\) defined by (26) at \(\hat{j}=0\) yields the physical trajectory \[x(t)=\int D[\hat{x}]e^{\frac{i}{\hbar}S_{Beff}[\hat{x}]}x_{\pm}(t)=\langle x(t)\rangle \tag{72}\] which satisfies the variational equation of \(S_{eff}[\hat{x}]\) according to eq. (24). Hence the effective action defined by the Legendre transformation (23) and (26) is the classical CTP action.

## IX From Ward identities to Noether Theorem

Point (iv) of redoubling in classical mechanics is about the reconciliation of the violation of conservation laws of open dynamics with the Noether theorem. To understand the relation of this argument with quantum mechanics one has to start at an earlier point, at the difference between conservation laws in quantum and classical mechanics. An obvious difference is the existence of quantum conservation laws related to discrete symmetries. This is easy to understand by noting that such a symmetry is realized by discrete linear superpositions. The absence of such states in classical mechanics explains the lack of classical conservation laws belonging to discrete symmetries. The relation between the Noether theorem and quantum conservation laws is a more difficult question in the case of a continuous symmetry. In fact, the proof of the Noether theorem goes by considering the parameters of the continuous symmetry as new coordinates, and the reparametrization invariant Euler-Lagrange equation, written in this new coordinate system, yields the conservation of the generalized momentum. One can not translate this argument into quantum mechanics because the quantization rules are not invariant under non-linear coordinate transformations and the equation of motion can not be used as an operator equation. The relation between the classical and quantum conservation laws goes rather by comparing the Ward identities, the expression of a continuous symmetry in terms of Green functions, with the Noether theorem. The variation of the trajectory is interpreted as a change of the integration variable in (71) and the invariance of the integral, the generator functional, produces a functional equation in the source. The successive functional derivatives of this equation generate a hierarchical set of equations. This procedure is briefly summarized in appendix B for a generic variation. The resulting hierarchical set of equations gives the insertion of the equation of motion into the Green functions. In the case of a continuous symmetry the variation is chosen to be the local, gauged version of the symmetry and the resulting hierarchical set of equations are called Ward identities. They describe the insertion of the conservation law resulting from the Noether theorem into the Green functions.
The translation \(\delta x_{\pm}=\pm\xi\) treated as a time-dependent variation yields the momentum balance equation (56) in classical mechanics. Treated as a change of variable, the repetition of the steps (B3)-(B6) yields \[\frac{d}{dt}\langle T[p_{r}(t)x_{\sigma_{1}}(t_{1})\cdots x_{\sigma_{n}}(t_{n})]\rangle=\langle T\left[\left(\frac{\partial L_{1}}{\partial x(t)}+\frac{\partial L_{2}}{\partial x_{+}(t)}_{|\dot{x}_{\pm}=\dot{x},x_{\pm}=x}\right)x_{\sigma_{1}}(t_{1})\cdots x_{\sigma_{n}}(t_{n})\right]\rangle-i\hbar\sum_{j=1}^{n}\delta(t-t_{j})\langle T[x_{\sigma_{1}}(t_{1})\cdots x_{\sigma_{j-1}}(t_{j-1})x_{\sigma_{j+1}}(t_{j+1})\cdots x_{\sigma_{n}}(t_{n})]\rangle. \tag{73}\] The linear transformation of a multi-component coordinate, \(\delta x_{\pm}=\pm\xi\tau x_{\pm}\), leads to the classical balance equation (58) for the renormalized angular momentum but, treated as a change of variable in the path integral, generates the equations \[\frac{d}{dt}\langle T[p_{r}(t)\tau x(t)x_{\sigma_{1}}(t_{1})\cdots x_{\sigma_{n}}(t_{n})]\rangle = \langle T\left[\left(\frac{\partial L_{1}}{\partial x}\tau x+\frac{\partial L_{1}}{\partial\dot{x}}\tau\dot{x}+\left(\frac{\partial L_{2}}{\partial x_{+}}\tau x+\frac{\partial L_{2}}{\partial\dot{x}_{+}}\tau\dot{x}\right)_{|\dot{x}_{\pm}=\dot{x},x_{\pm}=x}\right)x_{\sigma_{1}}(t_{1})\cdots x_{\sigma_{n}}(t_{n})\right]\rangle-i\hbar\sum_{j=1}^{n}\delta(t-t_{j})\langle T[x_{\sigma_{1}}(t_{1})\cdots x_{\sigma_{j-1}}(t_{j-1})x_{\sigma_{j+1}}(t_{j+1})\cdots x_{\sigma_{n}}(t_{n})]\rangle. \tag{74}\] The expectation value of the renormalized momentum or angular momentum is non-conserved for a translation or rotation non-invariant Lagrangian, respectively, or when the instantaneous creation or annihilation of elementary excitations violates the equation of motion. The energy conservation is a more subtle issue because the underlying external symmetry is broken by the \(\Delta t\neq 0\) regulator of the path integral which restricts the time to a discrete set. To recover the continuous translations, used to derive (61), we replace the regulated bare action of the discrete trajectory defined by the set of points \(\{(t_{i}+n\Delta t,x_{n})\}\) by the continuous action of the interpolating trajectory defined in appendix C, which is constructed in such a manner that the relative difference of the two actions is bounded by a freely chosen small number. The steps followed in appendix B can now be repeated for the interpolating trajectories, with eq. (61) holding, with the result \[\frac{d}{dt}\int D[x]e^{\frac{i}{\hbar}S[x]+\frac{i}{\hbar}\int xj}(H-jx) = \int D[x]e^{\frac{i}{\hbar}S[x]+\frac{i}{\hbar}\int xj}\left[\dot{x}\left(\frac{d}{dt}\frac{\partial L}{\partial\dot{x}}-\frac{\partial L}{\partial x}-j\right)-\partial_{t}L\right]\] \[\frac{d}{dt}\langle T[\left(\dot{x}\frac{\partial L_{1}}{\partial\dot{x}(t)}-L_{1}\right)x_{1}\cdots x_{n}]\rangle = \langle T\left[\left[\dot{x}\left(\frac{d}{dt}\frac{\partial L}{\partial\dot{x}}-\frac{\partial L}{\partial x}\right)-\partial_{t}L\right]_{t}x_{1}\cdots x_{n}\right]\rangle \tag{75}\]

## X Quantum extensions

We turn finally to the aspects of open dynamics which are specific to the quantum level: the complexification of the action, decoherence, dissipation, and the semiclassical limit.
### Finite life-time and preservation of the total probability

The open interaction channels lead to more radical changes in the quantum dynamics than in the classical case: While the action of a classical open system is real up to the infinitesimal terms of the generalized \(\epsilon\)-prescription, the bare quantum action, whose parameters are given in terms of Green functions, may assume complex values. The physical origin of the complexification in a unitary full dynamics of a many-body system is identified by the optical theorem as the forward scattering, the presence of the mass-shell excitations in the intermediate states of the perturbation series. The complexification of the action enriches the dynamics which now contains more free parameters: The number of the closed interaction parameters is doubled since any parameter of \(S_{1}\) can be complex, and the new open interaction channels in \(S_{2}\) represent additional free parameters restricted by the formal symmetry (52). The imaginary part of the closed parameters introduces a finite life-time for excitations which renders the time evolution non-unitary. One would expect a violation of the conservation of the norm of the state, the total probability, in that case. However the open channels restore the conservation of the total probability. In fact, the system and its environment together obey closed unitary dynamics hence \(W[\hat{j}]=0\) for physically realizable sources, \(j_{+}=-j_{-}\), which implies \(\mathrm{Tr}\rho_{s}=1\) for the system reduced density matrix according to eq. (71).

### Decoherence and dissipation

The excitations of the environment not only destabilize the state of a subsystem, they generate system-environment entanglement and decohere the subsystems. The non-factorizability of the density matrix (67) arises in this formalism when the excitations of the environment produce several contributions to the trace \(\mathrm{Tr}_{e}\) in (68). The decoherence follows since the imaginary on-shell contributions which dominate the parameters of the influence action in eq. (69) for long time are positive according to the optical theorem and suppress the integrand on the right hand side of eq. (70). A detailed space-time picture of the build up of the decoherence can be obtained from the path integral (70): The integrand \(\exp(iS[\hat{x}]/\hbar)\) is the contribution of the pair of trajectories \(x_{\pm}\) to the density matrix and the decoherence in the coordinate basis consists of the suppression of contributions with \(x_{d}=x_{+}-x_{-}\neq 0\), induced by \(\mathrm{Im}S>0\). The physical origin of the decoherence of a subsystem in a large, closed many-body system is identical with that of dissipation. In fact, the internal on-shell excitations establish interactions among well separated spatial regions and generate long range open dynamics. The instability of the subsystem states appears as an irresistible "leakage" into the environment and generates dissipative effective dynamics. A condition for strong decoherence/dissipation is a dense excitation spectrum of the system and the environment. The small and frequent energy exchanges make the mixing contributions of the reduced density matrix large and thereby the system-environment entanglement and the decoherence become strong. It is instructive to consider the case of a harmonic oscillator where the second order Green functions are identical in the classical and the quantum case.
The action (38) of the \(\epsilon\)-prescription can be used for finite \(\epsilon\), and the first and the second \({\cal O}(\epsilon)\) terms describe decoherence and dissipation, respectively, but an \({\cal O}(\epsilon)\) acausal contribution arises in \(D^{r}\). A better strategy is to use the most general quadratic Lagrangian compatible with the full time inversion symmetry (52) [41], \[L=\frac{m}{2}(\dot{x}_{+}^{2}-\dot{x}_{-}^{2})-\frac{m\omega_{0}^{2}}{2}(x_{+}^{2}-x_{-}^{2})+\frac{m\nu}{2}(\dot{x}_{+}x_{-}-\dot{x}_{-}x_{+})+\frac{i}{2}[d_{0}(x_{+}-x_{-})^{2}+d_{2}(\dot{x}_{+}-\dot{x}_{-})^{2}] \tag{76}\] with \(K^{n}=m(\omega^{2}-\omega_{0}^{2})\), \(K^{i}=d_{0}+d_{2}\omega^{2}\), and \(K^{f}=im\nu\omega\), which gives \[D^{\stackrel{r}{a}} = \frac{1}{m[\omega^{2}-\omega_{0}^{2}\pm i\omega\nu]},\qquad D^{i} = -\frac{d_{0}+d_{2}\omega^{2}}{m[(\omega^{2}-\omega_{0}^{2})^{2}+\omega^{2}\nu^{2}]}, \tag{77}\] where the Heaviside function is smeared. The closed limit is \(\nu=\epsilon/\omega_{0}\), \(d_{0}=m\epsilon\), and \(d_{2}=0\). This Lagrangian is better suited for phenomenological applications because \(\nu\), \(d_{0}\) and \(d_{2}\) may be finite without acausality. The open oscillator has a Newton friction force \(F_{f}=-m\nu\dot{x}\) and the decoherence is controlled by the parameters \(d_{0}\) and \(d_{2}\) which drop out from the classical equation of motion. The dissipative time scale, the life-time of the excitations, is given by the imaginary part of the pole of \(D^{r}\), which depends only on \(\nu\), a parameter of the real part of the action, in agreement with the classical origin of the friction force. However the decoherence, being a genuine quantum effect, is controlled by the imaginary part of the action. Hence dissipation and decoherence may have independent scales despite their common dynamical origin [42]. Another remark is that the CTP symmetry (52) makes the Green functions for \(x_{d}\) vanish, rendering \(x_{d}\) "invisible" on the level of expectation values. However in the mixed Green function for \(x\) and \(x_{d}\) the latter brings an \({\cal O}(\sqrt{\hbar})\) multiplication factor.

### Semiclassical limit

It is a widespread view that in phenomena where \(\hbar\) can be treated as a small parameter the quantum effects are weak, up to some macroscopic quantum effects. One usually argues by mentioning that the Heisenberg commutation relations are \({\cal O}(\hbar)\) or by pointing out that the traditional path integral expression for the transition amplitude between coordinate eigenstates \[\langle x_{f}|e^{-\frac{i}{\hbar}Ht}|x_{i}\rangle=\int_{x(0)=x_{i}}^{x(t)=x_{f}}D[x]e^{\frac{i}{\hbar}S[x]} \tag{78}\] is dominated by the classical trajectory as \(\hbar\to 0\). Such a view is too naive [43; 44] and one may look for more reliable signatures of the classical limit in a CQCO scheme. Though the full set of conditions for the classical limit remains unknown, there are a few well known necessary conditions, such as the decoherence, the suppression of the interference between macroscopically different states [45; 46; 47; 48], and the return of the determinism, the narrowing of the probability distributions for the observables. The first condition excludes closed systems where the time evolution is unitary and their fully resolved dynamics remains forever quantum. The classical description can be efficient for open dynamics where only partial information is available.
### Semiclassical limit It is a widespread view that in phenomena where \(\hbar\) can be treated as a small parameter the quantum effects are weak, up to some macroscopic quantum effects. One usually argues by mentioning that the Heisenberg commutation relations are \({\cal O}(\hbar)\) or by pointing out that the traditional path integral expression for the transition amplitude between coordinate eigenstates \[\langle x_{f}|e^{-\frac{i}{\hbar}Ht}|x_{i}\rangle=\int_{x(0)=x_{i}}^{x(t)=x_{f}}D[x]e^{\frac{i}{\hbar}S[x]} \tag{78}\] is dominated by the classical trajectory as \(\hbar\to 0\). Such a view is too naive [43; 44] and one may look for more reliable signatures of the classical limit in a CQCO scheme. Though the precise conditions for the classical limit remain unknown, there are a few well-known necessary conditions, such as the decoherence, the suppression of the interference between macroscopically different states [45; 46; 47; 48], and the return of determinism, the narrowing of the probability distributions for the observables. The first condition excludes closed systems where the time evolution is unitary and their fully resolved dynamics remains forever quantum. The classical description can be efficient for open dynamics where only partial information is available. We have argued above that the decoherence is strong when the system and the environment have a dense excitation spectrum. The second condition can be satisfied if the observable is the macroscopic average of microscopic quantities, according to the central limit theorem [49]. The conflict between the simplistic \(\hbar\to 0\) condition and the more involved arguments about the need for decoherence and the macroscopic limit can partially be understood by comparing the path integrals (70) and (78). The strong decoherence in the coordinate basis implies strong suppression of trajectory pairs with large \(x_{d}=x_{+}-x_{-}\) at long times in (70). Hence the trajectory pairs with \(x_{d}\sim 0\) dominate the path integral and the two copies "stick together" in the classical limit, yielding the action \(S_{Beff}[x,x]=i\,\text{Im}S_{Beff}[x,x]\) according to (52) for the common trajectory \(x=x_{+}=x_{-}\). The imaginary part of the bare effective action arises from the system-environment interactions, hence it is assumed to be small, and the trajectories contribute to the functional integral (70) in an approximately identical manner. In other words the fluctuations are large and the dynamics is soft. The limit \(\hbar\to 0\) localizes the dominant contributions in (78) around the classical trajectory, hence the fluctuations are small and the dynamics is hard. We are led to the question of whether the classical limit is soft or hard. The answer depends on the auxiliary conditions. When the initial state of the system and its environment is fixed, as in a realistic situation, then the soft path integral of the CTP formalism reflects the easy excitability owing to the dense excitation spectrum. When pure initial and final states are fixed, then the transition is dominated by classical physics if the corresponding action is large compared to \(\hbar\). But notice that (78), a transition amplitude in a closed dynamics between pure states, is neither an observable nor relevant for the classical limit of open systems. A measurable transition probability between initial and final coordinate states is given with the help of the CTP path integral (70) by integrating over trajectories with given initial and final points. The naive argument about the dominance of the classical trajectory may remain valid for short time evolution, in agreement with the general expectation that short time, high energy motion of particles is semiclassical. However the decoherence may build up during a long time evolution, invalidating the naive argument about (78). ## Summary To understand realistic mechanical systems from first principles we need a CQCO formalism which is equally applicable to classical, quantum, closed and open dynamics. This condition is satisfied by the CTP scheme where the dynamics can be defined by an action functional. However such a wide applicability is achieved by an unusual feature, a formal redoubling of degrees of freedom. The following possible origins of redoubling were presented in classical mechanics: (i): A generalization of the traditional variational principle of classical mechanics can be constructed for causal initial conditions. The variational determination of the final coordinate of the motion is possible by following the motion backward in time from the final to the initial time. The description of the motion in both directions in time yields the redoubling. 
(ii): The necessary and sufficient generalization of the classical conservative interactions, the semi-holonomic forces, is based on assigning a double role to the coordinate. It stands for the location of the particle and denotes the variational parameter. The separation of these two roles leads to the redoubling. (iii): Causal interactions with the environment generate time-reversal odd terms in the effective action, which can be retained by the redoubling. (iv): A simple way to encode the relevant information of a large environment for a small system is to represent the environment by an ancilla which is comparable with the system in its complexity. In particular, the energy exchange with the environment can trivially be reproduced by using a copy of the system as ancilla where the energy is defined with the opposite sign. (v): The effective equation of motion of an open system is non-canonical. Any differential equation can be derived by the variational principle with redoubling. These points can be justified by starting from quantum mechanics. Point (i) follows from the presence of the bra and the ket components in the expectation values; the origin of arguments (iii) and (v) can be identified in the path integral representation of the reduced density matrix and in the Noether theorem for classical open systems; point (iv) can be derived from the Ward identities. The CTP formalism offers a simple way to derive decoherence, shows its common origin with dissipation, and is helpful in establishing the classical limit. An important difference between the quantum and classical levels is that the redoubling is purely formal in classical physics because the copies are separate only in the virtual variations and the equations of motion send them along the same trajectory. This is not the case anymore in quantum dynamics where the difference between the two trajectories contains the quantum fluctuations and is \(\mathcal{O}(\sqrt{\hbar})\). In other words, the quantum fluctuations appear as the difference between the copies, and quantum and thermal averages have the right number of degrees of freedom. The redoubling makes a wide class of physical phenomena accessible and offers new points of view, suggesting that it should be included in our standard toolbox of mechanics.
2310.07484
Certifying long-range quantum correlations through routed Bell tests
Losses in the transmission channel, which increase with distance, pose a major obstacle to photonics demonstrations of quantum nonlocality and its applications. Recently, Chaturvedi, Viola, and Pawlowski (CVP) [arXiv:2211.14231] introduced a variation of standard Bell experiments with the goal of extending the range over which quantum nonlocality can be demonstrated. In these experiments, which we call 'routed Bell experiments', Bob can route his quantum particle along two possible paths and measure it at two distinct locations - one near and another far from the source. The idea is that a Bell violation in the short-path should weaken the conditions required to detect nonlocal correlations in the long-path. Indeed, CVP showed that there are quantum correlations in routed Bell experiments such that the outcomes of the remote device cannot be classically predetermined, even when its detection efficiency is arbitrarily low. In this paper, we show that the correlations considered by CVP, though they cannot be classically predetermined, do not require the transmission of quantum systems to the remote device. This leads us to define the concept of 'short-range' and 'long-range' quantum correlations in routed Bell experiments. We show that these correlations can be characterized through standard semidefinite programming hierarchies for non-commutative polynomial optimization. We then explore the conditions under which short-range quantum correlations can be ruled out. We point out that there exist fundamental lower-bounds on the critical detection efficiency of the distant device, implying that routed Bell experiments cannot demonstrate long-range quantum nonlocality at arbitrarily large distances. However, we do find that routed Bell experiments allow for reducing the detection efficiency threshold. The improvements, though, are significantly smaller than those suggested by CVP's analysis.
Edwin Peter Lobo, Jef Pauwels, Stefano Pironio
2023-10-11T13:30:52Z
http://arxiv.org/abs/2310.07484v4
# Certifying long-range quantum correlations through routed Bell tests ###### Abstract Losses in the transmission channel, which increase with distance, pose a major obstacle to photonics demonstrations of quantum nonlocality and its applications. Recently, Chaturvedi, Viola, and Pawlowski (CVP) [arXiv:2211.14231] introduced a variation of standard Bell experiments with the goal of extending the range over which quantum nonlocality can be demonstrated. These experiments, which we call 'routed Bell experiments', involve two distant parties, Alice and Bob, and allow Bob to route his quantum particle along two possible paths and measure it at two distinct locations - one near and another far from the source. The premise is that a high-quality Bell violation in the short-path should constrain the possible strategies underlying the experiment, thereby weakening the conditions required to detect nonlocal correlations in the long-path. Building on this idea, CVP showed that there are certain quantum correlations in routed Bell experiments such that the outcomes of the remote measurement device cannot be classically predetermined, even when its detection efficiency is arbitrarily low. In this paper, we show that the correlations considered by CVP, though they cannot be classically predetermined, do not require the transmission of quantum systems to the remote measurement device. This leads us to define and formalize the concept of'short-range' and 'long-range' quantum correlations in routed Bell experiments. We show that these correlations can be characterized through standard semidefinite-programming hierarchies for non-commutative polynomial optimization. We then explore the conditions under which short-range quantum correlations can be ruled out and long-range quantum nonlocality can be certified in routed Bell experiments. We point out that there exist fundamental lower-bounds on the critical detection efficiency of the distant measurement device, implying that routed Bell experiments cannot demonstrate long-range quantum nonlocality at arbitrarily large distances. However, we do find that routed Bell experiments allow for reducing the detection efficiency threshold necessary to certify long-range quantum correlations. The improvements, though, are significantly smaller than those suggested by CVP's analysis. ## 1 Introduction Losses in the transmission channel, which increase with the distance (exponentially in fibers and quadratically in free-space), are a major obstacle for demonstrating the violation of Bell inequalities in photonic experiments [1, 2, 3, 4, 5, 6, 7]. They represent a daunting challenge for long-distance applications that rely on quantum nonlocality, such as entanglement certification between distant parties or device-independent quantum key distribution (DIQKD) [8, 9]. Chaturvedi, Viola, and Pawlowski (CVP) recently proposed an interesting idea that could potentially address this limitation [10]. They looked at a Bell experiment that involves the usual two distant parties, Alice and Bob. However, in their setup, Bob can measure his quantum particles at two distinct locations - one close to the source, \(B_{\mathtt{S}}\), and another far away, \(B_{\mathtt{L}}\), as illustrated in Fig. 1. (S and L stand for'short' and 'long' distance, respectively). 
This can be accomplished, for example, by using a switch that directs Bob's quantum particle either to the nearby measurement device \(B_{\mathtt{S}}\) or to the distant one \(B_{\mathtt{L}}\), depending on a classical input \(z\in\{\mathtt{S},\mathtt{L}\}\). Like all other components of the experiment (the source, the transmission channel, the measurement devices), the switch does not need to be trusted. As in any Bell experiment, certain causality constraints are assumed: events at Alice's side cannot causally influence those at Bob's side, and vice versa. In particular, the switch input \(z\) cannot affect the quantum state or measurement of Alice. We refer to such Bell experiments with selective routing of quantum particles to different locations as _routed Bell experiments_. In their paper, CVP examine a situation where Alice's measurement device \(A\) receives a binary input \(x\in\{0,1\}\) and produces a binary outcome \(a\in\{\pm 1\}\). Similarly, the measurement device \(B_{\mathsf{S}}\) or \(B_{\mathsf{L}}\), selected by the switch, receives a binary input \(y\in\{0,1\}\) and produces a binary output \(b\in\{\pm 1\}\). We denote Alice's observables as \(A_{x}\), those of Bob as \(B_{yz}\) (which may depend on the switch setting \(z\)), and the expectation of their product as \(\langle A_{x}B_{yz}\rangle\). Alice and Bob can then evaluate two CHSH expressions, \[\mathcal{C}_{z}=\langle A_{0}B_{0z}\rangle+\langle A_{0}B_{1z}\rangle+\langle A _{1}B_{0z}\rangle-\langle A_{1}B_{1z}\rangle\,, \tag{1}\] involving either the nearby or faraway measurement devices, depending on the switch value \(z\in\{\mathsf{S},\mathsf{L}\}\). We refer to \(\mathcal{C}_{\mathsf{S}}\) as the short-path (_SP_) CHSH value and \(\mathcal{C}_{\mathsf{L}}\) as the long-path (_LP_) CHSH value. In a standard Bell experiment, the condition for ruling out classical models and thus certifying genuine quantum properties in the _LP_ test would be the violation of the well-known local bound: \(\mathcal{C}_{\mathsf{L}}\leq 2\). CVP argue, however, that in any quantum model where the output of the distant measurement device \(B_{\mathsf{L}}\) is predetermined by classical variables, the _LP_ value is upper-bounded by \[\mathcal{C}_{\mathsf{L}}\leq\sqrt{8-\mathcal{C}_{\mathsf{S}}{}^{2}}\,, \tag{2}\] whenever the _SP_ value violates the local bound, i.e., whenever \(\mathcal{C}_{\mathsf{S}}>2\). This bound implies that the _SP_ test can be used to weaken the conditions for ruling out classical models in the _LP_ test, since the right-hand side of (2) is strictly smaller than \(2\) for any value of \(\mathcal{C}_{\mathsf{S}}>2\). This result suggests that routed Bell experiments might provide a way to dramatically extend the range over which nonlocality can be demonstrated. Indeed, assume, as an illustration, an ideal CHSH realization where the source prepares the maximally entangled two-qubit state \(|\phi_{+}\rangle\), \(A\) measures in the bases \(Z,X\), and \(B_{\mathsf{S}}\) and \(B_{\mathsf{L}}\) in the bases \((Z\pm X)/\sqrt{2}\). Then \(\mathcal{C}_{\mathsf{S}}=\mathcal{C}_{\mathsf{L}}=2\sqrt{2}\). Typically, however, the _LP_ CHSH value will be lower than the _SP_ CHSH value due to additional losses and noise in the transmission channel. 
For instance, let's assume that \(B_{\mathsf{S}}\) has a device with global detection efficiency \(\eta_{\mathsf{S}}\), while \(B_{\mathsf{L}}\), being farther away from the source, has a smaller efficiency \(\eta_{\mathsf{L}}<\eta_{\mathsf{S}}\) (and for simplicity that \(A\) has a measurement device with unit detection efficiency \(\eta_{A}=1\)). Hence, with some non-zero probability, \(B_{\mathsf{S}}\) and \(B_{\mathsf{L}}\) will occasionally fail to click. Simply discarding the 'no-click' outcomes \(\varnothing\) can lead to the detection loophole and is only valid under the fair sampling assumption [1, 2]. However, one can deal with these 'no-click' results by mapping them to one of the \(\pm 1\) outcomes, say, \(+1\) [2, 7, 11]. Taking into account losses in this way, the _SP_ and _LP_ CHSH values become \[\mathcal{C}_{\mathsf{S}}=\eta_{\mathsf{S}}2\sqrt{2}\quad\text{ and }\quad \mathcal{C}_{\mathsf{L}}=\eta_{\mathsf{L}}2\sqrt{2}\,. \tag{3}\] Figure 1: Routed Bell experiment. Depending on the value \(z\in\{\mathsf{S},\mathsf{L}\}\), a switch directs Bob’s quantum particle either to (a) a nearby measurement device \(B_{\mathsf{S}}\) or (b) to a distant one \(B_{\mathsf{L}}\). The experiment is characterized by the joint output/input probabilities \(p(a,b|x,y,z)\). If we substitute these values into (2), we find that classical models for \(B_{\rm L}\) are ruled out if \[\eta_{\rm L}>\sqrt{1-\eta_{\rm S}^{2}}\,. \tag{4}\] For instance, if \(\eta_{\rm S}=1-\delta\), then an efficiency \(\eta_{\rm L}>\sqrt{2\delta}\) for the far-away device is sufficient and can be made arbitrarily small as \(\delta\to 0\). Taking into account that detection efficiencies decrease with transmission distance, the implication is that by performing high-quality CHSH tests close to the source (which are achievable with current technology), the bound (2) can be violated even if the measurement device \(B_{\rm L}\) is at an arbitrarily large distance from the source. 
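The following short numerical sketch (an added illustration, not part of the original analysis; it assumes the ideal CHSH realization described above, \(\eta_{A}=1\), and no-click outcomes mapped to \(+1\)) reproduces Eq. (3) and evaluates the right-hand side of the CVP bound (2):

```python
import numpy as np

# Ideal CHSH realization: |phi+>, Alice measures Z, X; Bob measures (Z +- X)/sqrt(2)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(phi, phi)

A = [Z, X]
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]

def corr(OA, OB):
    """Expectation value <OA (x) OB> in the state |phi+>."""
    return np.real(np.trace(rho @ np.kron(OA, OB)))

def chsh(eta):
    """CHSH value when Bob's no-click events (probability 1 - eta) are mapped to +1,
    i.e. Bob's effective +-1-valued observables are eta*B_y + (1 - eta)*I."""
    Be = [eta * b + (1 - eta) * I2 for b in B]
    return corr(A[0], Be[0]) + corr(A[0], Be[1]) + corr(A[1], Be[0]) - corr(A[1], Be[1])

eta_S, eta_L = 0.95, 0.60
C_S, C_L = chsh(eta_S), chsh(eta_L)
print(C_S, eta_S * 2 * np.sqrt(2))                    # matches eq. (3)
print(C_L, eta_L * 2 * np.sqrt(2))
print("rhs of CVP bound (2):", np.sqrt(8 - C_S**2))   # a larger C_L violates (2), cf. eq. (4)
```

For these illustrative values, \(\mathcal{C}_{\mathsf{L}}\approx 1.70\) exceeds \(\sqrt{8-\mathcal{C}_{\mathsf{S}}^{2}}\approx 0.88\), so the bound (2) is violated even though \(\mathcal{C}_{\mathsf{L}}<2\).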
The significance of this result, however, hinges on the assumptions used to derive the bound (2) and in particular on what one means by 'demonstrating nonlocality' and 'ruling out classical models' in routed Bell experiments. Evidently, the aim is not to rule out local hidden-variable models à la Bell for the _entire_ routed Bell experiment. This is because such models are already ruled out by the _SP_ test, without any need to even consider the _LP_ test. Furthermore, the relation (2) explicitly assumes that there is no local hidden-variable model for the _SP_ test, since it relies on the violation \({\cal C}_{\rm S}>2\) of the local bound. The idea is thus to take for granted that the devices \(A\) and \(B_{\rm S}\), which are located close to the source, behave quantumly and ask whether the observed correlations can, or cannot, be explained if the faraway device \(B_{\rm L}\) behaves classically. However, various definitions are possible for what it means for \(B_{\rm L}\) to behave classically. In this paper, we adopt the following view. We assume that on Bob's side the transmission of quantum information is only possible at short distance, where by 'short distance' we mean that it cannot reach the remote device \(B_{\rm L}\). Thus, beyond some point on the line connecting the source to \(B_{\rm L}\), quantum data can no longer be transmitted or processed and the experimental setup becomes entirely classical. In particular, the device \(B_{\rm L}\) functions as a purely classical device (e.g., a classical computer) receiving classical data through a classical transmission channel (e.g., radio waves from a Wi-Fi emitter). If such a model can reproduce the experimental data, then it is not possible to claim in a device-independent way that long-range quantum correlations have been demonstrated, since they can be replicated without any entanglement or quantum communication reaching \(B_{\rm L}\). Conversely, if no such model can reproduce the experimental data, then long-range quantum correlations can be certified. We view this framing of the problem as the relevant one in the context of transmission losses and their impact on the demonstration of quantum nonlocality over long distances. Indeed, if at short distance quantum nonlocality is already established and quantum resources are known to be transmitted (to perform the _SP_ CHSH test), then the pertinent question is whether transmission of quantum resources is also necessary over longer distances, or whether it is possible to achieve the same results without them. The above standpoint differs from the one adopted by CVP. CVP's definition of what it means for the distant device \(B_{\rm L}\) to behave classically, which is used to derive the bound (2), is that \(B_{\rm L}\)'s outcome is determined by classical variables already set at the source. We point out in this paper that this definition does not encompass the most general class of short-range quantum models in the sense outlined above. We do this by presenting a simple strategy in which the remote device \(B_{\rm L}\) is entirely classical and no quantum information ever reaches it, yet which achieves the standard classical bound \({\cal C}_{\rm L}=2\) for any value of \({\cal C}_{\rm S}>2\), i.e., that violates the bound (2) satisfied by CVP models. This motivates us to introduce a proper definition of _short-range quantum correlations_, which suitably captures long-range nonlocality and the potential tradeoff between _SP_ and _LP_ tests in routed Bell experiments. We then show that short-range quantum correlations (and incidentally the correlations considered by CVP) can be characterized through standard semidefinite-programming hierarchies for non-commutative polynomial optimization. Based on this formulation, we derive new _LP_ bounds and show that _SP_ tests do lead to weakened conditions for _LP_ tests. Although they allow for reducing the detection efficiency threshold necessary to certify long-range quantum correlations, the improvements are considerably smaller than those suggested by CVP's analysis and the bound (2). In particular, we point out that there exist fundamental lower bounds on the critical detection efficiency \(\eta_{\rm L}\) of the distant measurement device \(B_{\rm L}\), namely \(\eta_{\rm L}>1/M\) where \(M\) is the number of measurement settings of \(B_{\rm L}\). The same lower bounds hold for regular Bell tests and imply that routed Bell experiments cannot be used to demonstrate long-range quantum nonlocality at arbitrarily large distances. This paper is organized as follows. We first briefly review, in Section 2, the class of models considered by CVP and present an example of a short-range quantum correlation that violates the bound (2). In Section 3, we introduce more formally our definition of short-range and long-range quantum correlations in routed Bell experiments. 
We also analyze more generally various classes of correlations that can be obtained in routed Bell experiments and show how they can be characterized through standard semidefinite-programming hierarchies for non-commutative polynomial optimization. In Section 4, we derive new \(\mathit{SP}/\mathit{LP}\) relations valid according to our definition. In Section 5, we analyze the detection efficiencies required to demonstrate long-range quantum correlations in routed Bell experiments. We conclude with a discussion of our results. ## 2 CVP models vs short-range quantum correlations As they form the initial motivation for the present paper, we begin by examining CVP models as a potential mechanism for the observed correlations in routed Bell experiments. Assume, for concreteness, a routed Bell experiment characterized by quantum correlations as in (3) where \(\eta_{\mathsf{S}}\) and \(\eta_{\mathsf{L}}\) are such that \(\mathcal{C}_{\mathsf{S}}>2\), but \(\mathcal{C}_{\mathsf{L}}\leq 2\). A natural question to ask is: Can such correlations be reproduced by a classical remote device \(B_{\mathsf{L}}\) without any distribution of entanglement between \(A\) and \(B_{\mathsf{L}}\)? In a standard Bell experiment, this would be the case if and only if the output of \(B_{\mathsf{L}}\) were fully determined by classical variables \(\lambda\) already set at the source and shared with \(A\). Hence, it seems reasonable to make the same assumption here. However, since the \(\mathit{SP}\) test does violate the CHSH inequality, the measurement devices \(A\) and \(B_{\mathsf{S}}\) must, as discussed earlier, share and exploit quantum entanglement. This leads us to consider a hybrid quantum-classical model where the source generates an entangled quantum state \(\rho_{AB_{\mathsf{S}}}\) that can reach \(A\) and \(B_{\mathsf{S}}\), along with classical variables \(\lambda\) that determine \(B_{\mathsf{L}}\)'s measurement outcomes. In full generality, these classical variables can also be correlated with the quantum system of \(A\) and \(B_{\mathsf{S}}\) and (partly) determine their outcomes, i.e., the state \(\rho_{AB_{\mathsf{S}}}^{\lambda}\) can also depend on \(\lambda\). For such a model, we can then write the correlations generated in a routed Bell experiment as \[p(a,b|x,y,z)=\begin{cases}\sum_{\lambda}p(\lambda)\,\operatorname{Tr}\left[\rho_{AB_{\mathsf{S}}}^{\lambda}M_{a|x}\otimes M_{b|y,\mathsf{S}}\right]&\text{ if }z=\mathsf{S}\\ \sum_{\lambda}p(\lambda)\,p(b|y,\lambda)\,p(a|x,\lambda)&\text{ if }z=\mathsf{L}\end{cases} \tag{5}\] where the first line describes quantum correlations between \(A\) and \(B_{\mathsf{S}}\) and the second line classical correlations between \(A\) and \(B_{\mathsf{L}}\). These two lines should be coupled by the condition that \(p(a|x,\lambda)=\operatorname{Tr}\left[\rho_{A}^{\lambda}\,M_{a|x}\right]\), since what Alice does cannot causally depend on what happens on Bob's side, and in particular on whether \(z=\mathsf{S}\) or \(\mathsf{L}\). This is the formulation used in [10], which leads to the bound (2). The intuition behind the derivation of this bound is as follows. Assume first for simplicity that the \(\mathit{SP}\) CHSH expression reaches the maximal quantum value \(\mathcal{C}_{\mathsf{S}}=2\sqrt{2}\). 
Then, by standard self-testing results [12, 13, 14], it can be inferred that the measurement \(A_{x}\) corresponds to a Pauli measurement on a two-dimensional subspace of \(A\) that is maximally entangled with \(B_{\mathsf{S}}\) and acts as the identity on any other degrees of freedom. In particular, the measurement outcome of \(A_{x}\) must be fully random and uncorrelated with the classical instructions \(\lambda\) shared with \(B_{\mathsf{L}}\). Consequently, we have \(p(a|x,\lambda)=p(a|x)=1/2\) for all \(\lambda\). Substituting this condition in (5) implies that the correlations between \(a\) and the output \(b\) of \(B_{\mathsf{L}}\) vanish: \(\langle A_{x}B_{y\mathsf{L}}\rangle=\sum_{a,b\in\{\pm 1\}}ab\,p(a,b|x,y,\mathsf{L})=\frac{1}{2}\sum_{a,b\in\{\pm 1\}}ab\,p(b|y,\mathsf{L})=0\) for all \(x,y\). This in turn implies that \(\mathcal{C}_{\mathsf{L}}=0\). The hypothesis \(\mathcal{C}_{\mathsf{S}}=2\sqrt{2}\) is obviously too strong in any real-life experiment. However, the above argument can be refined using the fact that for any value \(\mathcal{C}_{\mathsf{S}}>2\), there is a bound on how much the measurement outcomes of \(A_{x}\) can be correlated to any other system besides \(B_{\mathsf{S}}\), and in particular to the classical instructions \(\lambda\) shared with \(B_{\mathsf{L}}\): specifically \(|p(a|x,\lambda)|\leq 1/2+\sqrt{8-\mathcal{C}_{\mathsf{S}}^{2}}/4\) for all \(\lambda\) [15]. Building on this result, CVP arrive at the bound (2). ### 2.1 A strategy based on a fully classical \(B_{\text{L}}\) that violates the bound (2) We now present a simple strategy where \(B_{\text{L}}\) is entirely classical and which aligns with the intuitive notion of short-range quantum correlations discussed in the introduction, but that violates (2). This shows that CVP models do not correspond to the notion taken here of what it means for \(B_{\text{L}}\) to behave classically. We start from the ordinary quantum strategy that yields the _SP_ and _LP_ CHSH expectations (3). We recall that in this strategy, the source prepares the two-qubit entangled state \((|00\rangle+|11\rangle)/\sqrt{2}\). The first qubit is measured by \(A\) in the bases \(Z,X\), while the second qubit is directed to either \(B_{\text{S}}\) or \(B_{\text{L}}\), depending on the switch setting, and is then measured in the bases \((Z\pm X)/\sqrt{2}\). If the second qubit is directed to \(B_{\text{S}}\), it has a probability \(\eta_{\text{S}}\) of being detected, whereas if it is directed to \(B_{\text{L}}\), it has a lower probability \(\eta_{\text{L}}\) of being detected. Consider now an alternative strategy where the source, the measurement device \(A\), the switch, and the measurement device \(B_{\text{S}}\) all behave as in the ordinary quantum strategy described above. Thus any value \(\mathcal{C}_{\text{S}}\in[0,2\sqrt{2}]\) can be obtained by tuning \(\eta_{\text{S}}\). We only modify what happens in the experiment _after_ the second qubit has been directed towards \(B_{\text{L}}\) when the switch has been set to \(z=\text{L}\). In this case, at some location between the switch and \(B_{\text{L}}\) - possibly just after the switch, but in any case before reaching \(B_{\text{L}}\) - the second qubit gets measured in the \(Z\) basis yielding an outcome \(\lambda\in\{\pm 1\}\), as illustrated in Fig. 2. 
This classical outcome is then transmitted to \(B_{\text{L}}\) through some purely classical channel and, upon receiving \(\lambda\), \(B_{\text{L}}\) simply outputs it, irrespective of which input \(y\in\{0,1\}\) is selected. We then have \(p(\lambda)=1/2\), \(p(b|y,\lambda)=\delta_{b,\lambda}\), \(p(a|x,\lambda)=\text{Tr}\left[\rho_{\lambda}A_{x}\right]\), where \(\rho_{1}=|0\rangle\langle 0|\) and \(\rho_{-1}=|1\rangle\langle 1|\) are the reduced states for Alice conditioned upon \(\lambda\). We can then evaluate the probabilities \(p(a,b|x,y,\text{L})\) by inserting these expressions in (5) or more simply directly evaluate the _LP_ correlators \(\langle A_{x}B_{y\text{L}}\rangle\) through \[\langle A_{x}B_{y\text{L}}\rangle=\langle\phi_{+}|A_{x}\otimes Z|\phi_{+}\rangle\,, \tag{6}\] since whatever the choice of \(y\), the 'measurement' \(B_{y\text{L}}\) corresponds to an effective \(Z\) measurement on Bob's particle. If \(x=0\), i.e., \(A_{0}=Z\), we find \(\langle A_{0}B_{y\text{L}}\rangle=1\), while if \(x=1\), i.e., \(A_{1}=X\), we find \(\langle A_{1}B_{y\text{L}}\rangle=0\). As claimed, this implies the _LP_ CHSH value \(\mathcal{C}_{\text{L}}=2\). ### 2.2 Discussion The above example shows that the notion of short-range quantum strategies considered in the present paper does not coincide with the set of CVP strategies. The assumption that the outcomes of \(B_{\text{L}}\) are determined by classical variables that are already specified at the source is too strong to capture properly the notion of short-range quantum correlations and to detect long-range quantum nonlocality. If one were to take CVP's original definition as a definition for long-range nonlocality in routed Bell experiments, then one would have to agree that a standard Bell experiment done on an optical table in Gdansk and whose classical measurement outcomes are sent by email to Sydney would feature nonlocality between Gdansk and Sydney, since such a procedure would allow for implementing the strategy of Fig. 2 which violates the CVP bound (2). Figure 2: A strategy yielding both \(\mathcal{C}_{\text{S}}=2\sqrt{2}\) and \(\mathcal{C}_{\text{L}}=2\). The source prepares the Bell state \(|\phi_{+}\rangle\), Alice performs \(Z\) or \(X\) measurements on her qubit, and if \(z=\text{S}\), Bob performs the measurements \((Z\pm X)/\sqrt{2}\) at \(B_{\text{S}}\). On the other hand, if \(z=\text{L}\), Bob’s qubit gets measured in the \(Z\) basis and the classical outcome \(\lambda\) is transmitted to \(B_{\text{L}}\), which simply outputs it. The entire quantum part of this experiment, enclosed by the dotted line, may happen, for instance, on an optical table in Gdansk and the classical outcome \(\lambda\) sent by email to \(B_{\text{L}}\) located in Sydney. In particular, the above example shows that from the observation of a violation of the bound (2), it _cannot_ be inferred that * long-range quantum resources are necessary to reproduce the correlations, * entanglement had to be distributed from the source to the faraway device \(B_{\mathsf{L}}\), * the device \(B_{\mathsf{L}}\) behaves quantumly, e.g., it performs incompatible measurements, * fresh quantum randomness is generated after the input \(y\) is given to \(B_{\mathsf{L}}\), * the correlations can be used for secure DIQKD between \(A\) and \(B_{\mathsf{L}}\), * or that any other property typically associated with quantum nonlocality is present at the remote device \(B_{\mathsf{L}}\). 
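As a concrete cross-check of the strategy of Fig. 2 discussed above (an added numerical sketch assuming ideal, lossless detectors), the intermediate \(Z\) measurement makes every long-path 'observable' an effective \(Z\), which reproduces \(\mathcal{C}_{\mathsf{S}}=2\sqrt{2}\) together with \(\mathcal{C}_{\mathsf{L}}=2\):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |phi+>
rho = np.outer(phi, phi)

def corr(OA, OB):
    return np.real(np.trace(rho @ np.kron(OA, OB)))

A = [Z, X]                                          # A_0 = Z, A_1 = X
B_S = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]  # short-path measurements
B_L = [Z, Z]                                        # long path: B_L just outputs the Z result lambda

def chsh(Bs):
    return corr(A[0], Bs[0]) + corr(A[0], Bs[1]) + corr(A[1], Bs[0]) - corr(A[1], Bs[1])

print("C_S =", chsh(B_S))   # 2*sqrt(2) ~ 2.83
print("C_L =", chsh(B_L))   # 2: saturates the classical bound and violates the CVP bound (2)
```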
There are actually two distinct questions that can be raised regarding the correlations observed in a routed Bell experiment. **Question 1**.: _Can we trace back the outcomes of the remote device \(B_{\mathsf{L}}\) to a genuine quantum measurement?_ **Question 2**.: _Can we trace back the outcomes of the remote device \(B_{\mathsf{L}}\) to a genuine quantum measurement occurring at or near \(B_{\mathsf{L}}\)?_ Question 1 is addressed by CVP models. If it is not possible to account for the outcomes of the remote device by classical variables predetermined at the source, then some quantum measurement must have taken place between the source and \(B_{\mathsf{L}}\). However, CVP models say nothing about _where_ this quantum measurement happened. It could have taken place near the source, the switch, or in proximity to \(B_{\mathsf{L}}\). Question 2, on the other hand, focuses not only on the fact that a quantum measurement took place, but in addition that it took place at the remote location \(B_{\mathsf{L}}\). We view this question as the relevant one in the context of routed Bell experiments and the impact of transmission losses on the demonstration of quantum nonlocality over long distances. Indeed, in a routed Bell experiment that exhibits a _SP_ violation, it is already clear that the experiment possesses the capability to demonstrate quantum effects at short distances. The interesting question in most applications is not just whether the outcomes of \(B_{\mathsf{L}}\) also depend on such quantum effects, but whether they depend on quantum effects arising far away from the source. In the Gdansk-Sydney experiment, it is true that based on the information available in Sydney, and if the bound (2) is violated, one can infer that a quantum measurement must have taken place on the particle sent by the source. But this does not imply that nonlocality has been established between Gdansk and Sydney. In particular, the measurement performed on the particle sent by the source might have happened well before the input \(y\) was provided to \(B_{\mathsf{L}}\) in Sydney. In contrast, the objective of the present paper is to identify conditions under which one can conclude that a quantum measurement took place in Sydney after the input \(y\) was provided to \(B_{\mathsf{L}}\). The distinction between these scenarios is illustrated in Fig. 3, which depicts a spacetime diagram representing three types of correlations that can be observed in a routed Bell experiment: genuine long-range quantum correlations, short-range quantum correlations, and CVP correlations. We note that the strategy depicted in Fig. 2 is fully consistent with a device-independent setting, where all components including the switch, the communication channel, and the device \(B_{\mathsf{L}}\) are untrusted. The intermediate \(Z\) measurement that is performed when \(z=\mathsf{L}\), could for instance be implemented by the switch device itself. However, it is worth emphasizing that even if the switch device were to be fully trusted, the strategy could still be applied by performing the intermediate \(Z\) measurement somewhere on the transmission channel connecting the switch to \(B_{\mathsf{L}}\). Again, this would be fully consistent with device-independent scenarios, where transmission channels between devices are usually assumed to be untrusted and may not behave as expected (in our case, the lossy channel characterizing the transmission line would be replaced by a quantum-classical channel). 
One might argue that in a semi-device-independent setting where both the switch and the communication channel are trusted, and losses are nonmalicious, our strategy would not be applicable. However, losses in Bell experiments are a problem only in a scenario where they are untrusted. The detection loophole in standard Bell experiments arises from the possibility for classical models to replace the existing transmission channel with an alternative one where losses are not merely passive events but explicitly depend on the inputs of the devices. If losses are instead assumed to be innocent and cannot be exploited by underlying classical models, or can only be exploited in a limited way [16] - this is the famous fair sampling assumption [1, 2] that one typically aims to avoid when analyzing Bell experiments - then nonlocality can already be demonstrated in standard Bell experiments regardless of the extent of such losses. Thus if the aim is to establish nonlocality over long distances, there is no clear incentive for considering routed Bell experiments with trusted transmission channels. But even in a hypothetical scenario where both the switch and the transmission channel are trusted (or a scenario based on certain assumptions preventing any quantum measurement between the source and the measurement device \(B_{\text{L}}\)), the conclusion reached by ruling out CVP models would still be limited. Indeed, the strategy depicted in Fig. 2 could still be applied with the measurement \(Z\) taking place inside the measurement device \(B_{\text{L}}\) independent of the input \(y\). Thus, a violation of the bound (2) would indicate that quantum entanglement has been established between \(A\) and \(B_{\text{L}}\); however, it would not imply other quantum properties typically associated with quantum nonlocality, such as the fact that different input choices \(y\) correspond to incompatible measurements. The strategy of Fig. 2 also has implications for an alternate interpretation of routed Bell experiments, discussed in [10]. Instead of considering experiments with a switch that alters the path of the quantum particle and routes it to different measurement devices, one could also consider a (more impractical) scenario in which Bob has a single measurement device that is physically moved in each experimental run either to the close location \(B_{\text{S}}\) or the faraway location \(B_{\text{L}}\). However, in light of Fig. 2, to achieve a violation of the bound (2), it is unnecessary to actually move the measurement device. The same violation can be obtained by consistently leaving the measurement device at the close location \(B_{\text{S}}\), performing the intermediate \(Z\) measurement there, and relaying its classical outcome to the remote location \(B_{\text{L}}\). Finally, we point out that the strategy depicted in Fig. 2 can also be interpreted from a more foundational perspective. One might for instance consider alternative theories to the standard quantum theory, where entangled particles undergo spontaneous collapse after travelling a certain distance. For example, a pair in the \(\ket{\phi_{+}}=(\ket{00}+\ket{11})/\sqrt{2}\) state might collapse with probability \(1/2\) either to the \(\ket{00}\) state or the \(\ket{11}\) state after the particles have travelled a distance \(d>D\). 
Such an alternative theory could then violate Bell inequalities when the measurement devices of Alice and Bob are within distance \(D\), but would satisfy standard Bell inequalities when they are at a distance larger than \(D\). One could attempt to rule out such theories by performing routed Bell experiments with the remote device \(B_{\text{L}}\) located at a distance \(d>D\). Since the spontaneous collapse described above effectively amounts to doing the intermediate \(Z\) measurement in Fig. 2, such an alternative theory could in principle reproduce an _LP_ CHSH value of \(\mathcal{C}_{\mathsf{L}}=2\) and thus cannot be falsified by a violation of the bound (2). Figure 3: Spacetime diagram of routed Bell experiments in the case where \(z=\text{L}\). Red lines represent the transmission of quantum information, blue dotted lines that of classical variables. _(a)_ Long-range quantum correlations: a quantum measurement is performed by Bob within the future-light cone of the input choice \(y\) and beyond a distance \(D\) from the source, corresponding to the point where the future-light cone of \(y\) and the past-light cone of \(b\) intersect. _(b)_ Short-range quantum correlations: a measurement occurs before the particle reaches \(B_{\text{L}}\), at a distance smaller than \(D^{\prime}\), and outside the future-light cone of \(y\). _(c)_ CVP correlations: the outcomes of \(B_{\text{L}}\) are determined by classical variables predetermined at the source. Correlations that cannot be represented by _(c)_ models belong to either _(b)_ or _(a)_ types. Correlations that cannot be represented by _(b)_ models are of the _(a)_ type. Because of all the reasons above, we consider in the present paper models that are alternative to those introduced by CVP and which accommodate strategies such as those depicted in Fig. 2. ### 2.3 Monogamy of quantum correlations and the bound (2) Before introducing more formally our formulation of short-range and long-range quantum correlations in routed Bell experiments, we point out that the bound (2) was already established in a more general setting in [17]. This follows from the fact that CVP's assumption that the measurement results of \(B_{\mathsf{L}}\) are predetermined at the source is equivalent to the assumption that the source prepares a tripartite \(qqc\)-state \[\rho_{AB_{\mathsf{S}}B_{\mathsf{L}}}=\sum_{\lambda}p(\lambda)\,\rho_{AB_{\mathsf{S}}}^{\lambda}\otimes|\lambda\rangle\langle\lambda|_{B_{\mathsf{L}}} \tag{7}\] yielding, when measurements are performed on \(A\) and on either \(B_{\mathsf{S}}\) or \(B_{\mathsf{L}}\), correlations of the form \[p(a,b|x,y,z)=\begin{cases}\operatorname{Tr}\left[\rho_{AB_{\mathsf{S}}B_{\mathsf{L}}}\,M_{a|x}\otimes M_{b|y,\mathsf{S}}\otimes\mathbb{I}\right]&\text{if $z=\mathsf{S}$}\\ \operatorname{Tr}\left[\rho_{AB_{\mathsf{S}}B_{\mathsf{L}}}\,M_{a|x}\otimes\mathbb{I}\otimes M_{b|y,\mathsf{L}}\right]&\text{if $z=\mathsf{L}$}\,.\end{cases} \tag{8}\] Indeed, using the explicit \(qqc\)-form (7) of the state \(\rho_{AB_{\mathsf{S}}B_{\mathsf{L}}}\), it is easily seen that the above correlations are equivalent to those in (5). More generally, one could consider correlations obtained by measuring a genuine tripartite \(qqq\)-state \(\rho_{AB_{\mathsf{S}}B_{\mathsf{L}}}\). 
Such correlations would correspond to a routed Bell experiment where the source produces on Bob's side a pair of quantum systems \((B_{\mathsf{S}},B_{\mathsf{L}})\), but where the nearby measurement device only measures the \(B_{\mathsf{S}}\) system and the remote measurement device only measures the \(B_{\mathsf{L}}\) system. It was already proven in [17] that the relation (2) holds for correlations arising from measurements on such a tripartite \(qqq\)-state \(\rho_{AB_{\mathsf{S}}B_{\mathsf{L}}}\), and thus also for the more restricted case of \(qqc\)-states corresponding to CVP correlations. It was furthermore already proven in [17] (below the proof of Lemma 4) that when \(\mathcal{C}_{\mathsf{S}}>2\), it is sufficient to consider \(qqc\)-states to saturate the bound (2). In [17], the bound (2) is interpreted as a monogamy relation: a pair of systems (\(A\) and \(B_{\mathsf{S}}\)) in a general tripartite quantum state can lead to a violation of the CHSH inequality only if the CHSH value for the other pair of systems (\(A\) and \(B_{\mathsf{L}}\)) is limited, even if \(B_{\mathsf{L}}\) is quantum. It is indeed easy to see that the intuition behind the derivation of the bound (2) presented at the beginning of Section 2 does not rely on the fact that \(B_{\mathsf{L}}\) is classical, but on such monogamy of quantum correlations. In Section 4, we will derive new relations between _SP_ and _LP_ tests valid for our general definition of short-range quantum correlations. Unlike the bound (2), such relations will not follow from the monogamy of quantum correlations. This is because routed Bell experiments should, in general, be considered as bipartite experiments where Bob's entire quantum system is routed either to one measurement device or the other measurement device, and cannot always be viewed as tripartite experiments by dividing Bob's system into a pair of subsystems, one for each measurement location. ## 3 Routed Bell experiments and short-range quantum correlations In this Section and the subsequent ones, we undertake a more formal and systematic analysis of correlations in routed Bell experiments. We consider a general routed Bell experiment as depicted in Fig. 1. We introduce the following notation, which was already implicitly used in the previous sections. The state generated by the source is denoted as \(\rho_{AB}\) and Alice's measurements are denoted as \(M_{x}\) with POVM elements \(M_{a|x}\). On Bob's side, the measurements are denoted as \(M_{yz}\) with POVM elements \(M_{b|yz}\); these operators depend not only on the local input \(y\), but also on the switch value \(z\), since the measurements made by the _SP_ device \(B_{\tt S}\) and the _LP_ device \(B_{\tt L}\) are not necessarily identical. We assume that Alice has \(m_{A}\) input choices and \(d_{A}\) possible output results, i.e., \(x\in\{0,1,\ldots,m_{A}-1\}\) and \(a\in\{0,1,\ldots,d_{A}-1\}\). Similarly, Bob's measurement device on the short path has \(m_{B_{\tt S}}\) inputs and \(d_{B_{\tt S}}\) outputs, while the device on the long path has \(m_{B_{\tt L}}\) inputs and \(d_{B_{\tt L}}\) outputs. As before, the switch input is binary, taking values \(z\in\{{\tt S},{\tt L}\}\) to determine the routing of Bob's particle. The correlations in routed Bell experiments are characterized by the conditional probabilities \(p=\{p(a,b|x,y,z)\}_{a,b,x,y,z}\). Note that, in any given run, a measurement is performed at Bob's side in only one of the two measurement locations. 
Hence the conditional probabilities involve a single input \(y\) and a single output \(b\), with the switch value \(z\) indicating the corresponding location. As in traditional Bell experiments, no-signalling constraints between Alice and Bob are assumed to hold. Specifically, the input and output of Alice should not causally influence Bob's outcome, and vice versa. In particular, the switch input \(z\) should not have any causal influence on Alice. Such conditions can be enforced in a relativistic framework by appropriately configuring the spacetime setup. In device-independent applications, it is commonly assumed that devices are internally described as black boxes, but are unable to transmit arbitrary external information. Relativistic constraints are typically not necessary in such scenarios. However, caution must be exercised in a routed Bell experiment, particularly in adversarial settings, to ensure that information about the switch input \(z\) cannot be obtained after the switch operation has taken place. For instance, it is important to prevent situations where an adversary monitoring the transmission lines could determine whether a quantum particle has taken the short path or the long path and manipulate Alice's particle based on this knowledge before it reaches her measurement device. Such actions would violate the no-signalling condition. ### 3.1 Correlations in routed Bell experiments We now define various types of quantum correlations that can be observed in routed Bell experiments. We begin by considering the most general case, where no restrictions are imposed on the long-path device \(B_{\tt L}\). #### 3.1.1 General quantum correlations For a general quantum strategy, the correlations in a routed Bell experiment can be expressed as follows \[p(a,b|x,y,z)=\operatorname{Tr}\left[\left(\mathcal{I}\otimes C_{z}\right)(\rho_{AB})\,M_{a|x}\otimes M_{b|yz}\right]\,, \tag{9}\] where \(C_{z}\) is the CPTP map describing the transmission of Bob's system on the short path (\(z={\tt S}\)) or the long path (\(z={\tt L}\)). The quantum channel describing the transmission of the quantum system on Alice's side is independent of all the input variables \(x,y,z\) and can thus be absorbed in the definition of the state \(\rho_{AB}\). The adjoints of the channels \(C_{z}\) map the POVM elements \(M_{b|yz}\) to valid POVM elements \(C_{z}^{\dagger}(M_{b|yz})=\tilde{M}_{b|yz}\). Consequently, the above correlations can also be expressed as \[p(a,b|x,y,z)=\operatorname{Tr}\left[\rho_{AB}\,M_{a|x}\otimes C_{z}^{\dagger}(M_{b|yz})\right]=\operatorname{Tr}\left[\rho_{AB}\,M_{a|x}\otimes\tilde{M}_{b|yz}\right]\,. \tag{10}\] Thus general correlations in a routed Bell experiment coincide with those of a regular bipartite Bell experiment where Bob has \(m_{B_{\tt S}}+m_{B_{\tt L}}\) inputs represented by the pairs \((y,z)\in\{(0,{\tt S}),\ldots,(m_{B_{\tt S}}-1,{\tt S}),(0,{\tt L}),\ldots,(m_{B_{\tt L}}-1,{\tt L})\}\). This simply expresses that the combined effect of the channel \(C_{z}\) and the subsequent measurement \(M_{yz}\) represents an effective measurement \(\tilde{M}_{yz}\). This is illustrated in Fig. 4. We denote by \(\mathcal{Q}\) the set of general quantum correlations (9) or (10). 
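The equivalence between the Schrödinger-picture expression (9) and the Heisenberg-picture expression (10) can be checked numerically. The following sketch is an added illustration (the amplitude-damping channel standing in for the long-path transmission, and the particular POVM elements, are arbitrary choices made here for concreteness):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)      # |phi+>
rho = np.outer(phi, phi)                        # rho_AB

g = 0.4                                         # damping strength (illustrative value)
K = [np.array([[1, 0], [0, np.sqrt(1 - g)]]),   # Kraus operators of an amplitude-damping
     np.array([[0, np.sqrt(g)], [0, 0]])]       # channel modelling the lossy long path

def C_L(sigma):
    """Channel acting on Bob's qubit (Schroedinger picture)."""
    return sum(k @ sigma @ k.conj().T for k in K)

def C_L_adj(E):
    """Adjoint map acting on POVM elements (Heisenberg picture)."""
    return sum(k.conj().T @ E @ k for k in K)

def id_tensor_C(rho_AB):
    """Apply C_L to the second qubit of a two-qubit state, block by block."""
    out = np.zeros((4, 4), dtype=complex)
    for i in range(2):
        for j in range(2):
            out[2*i:2*i+2, 2*j:2*j+2] = C_L(rho_AB[2*i:2*i+2, 2*j:2*j+2])
    return out

M = (I2 + X) / 2                        # a POVM element on Alice's side
N = (I2 + (Z + X) / np.sqrt(2)) / 2     # a POVM element of the long-path device

lhs = np.trace(id_tensor_C(rho) @ np.kron(M, N))   # eq. (9): channel, then measurement
rhs = np.trace(rho @ np.kron(M, C_L_adj(N)))       # eq. (10): effective measurement M-tilde
print(np.real(lhs), np.real(rhs))                  # identical numbers
```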
#### 3.1.2 Short-range quantum correlations We now define the set of _short-range quantum_ (SRQ) correlations, denoted \(\mathcal{Q}_{SR}\), as the subset of the correlations (9) that can be obtained without any entanglement being distributed to \(B_{\tt L}\), i.e., as those where the channel \(C_{\tt L}\) in (9) is entanglement-breaking: \[p(a,b|x,y,z)=\operatorname{Tr}\left[\left(\mathcal{I}\otimes C_{z}\right)(\rho_{AB})\,M_{a|x}\otimes M_{b|yz}\right]\text{ with }C_{\tt L}\text{ entanglement-breaking}. \tag{11}\] An entanglement-breaking channel \(C_{\mathsf{L}}\) can be understood as first performing a measurement described by POVM elements \(N_{\lambda}\) on the input system, and then preparing one of the states \(\{\rho_{\lambda}\}\) [18]: \[C_{\mathsf{L}}(\rho)=\sum_{\lambda}\operatorname{Tr}\left[N_{\lambda}\rho\right]\,\rho_{\lambda}\,. \tag{12}\] The adjoint \(C_{\mathsf{L}}^{\dagger}\) maps the POVM elements \(M_{b|y\mathsf{L}}\) to \[\tilde{M}_{b|y\mathsf{L}}=C_{\mathsf{L}}^{\dagger}(M_{b|y\mathsf{L}})=\sum_{\lambda}p(b|y,\lambda)N_{\lambda}\,,\quad\text{where }p(b|y,\lambda)=\operatorname{Tr}\left[\rho_{\lambda}\,M_{b|y\mathsf{L}}\right]\,. \tag{13}\] This is equivalent to the statement that the measurements \(\tilde{M}_{y\mathsf{L}}\) defined by the operators \(\{\tilde{M}_{b|y\mathsf{L}}\}\) are jointly-measurable [19, 20, 21]. That is, they can be reproduced by measuring a parent POVM \(\{N_{\lambda}\}\), regardless of the input \(y\), which yields a classical outcome \(\lambda\). The final outcome \(b\) is then generated according to the probability distribution \(p(b|y,\lambda)\), which depends on both \(y\) and \(\lambda\). As for the case of general quantum correlations, we can thus view SRQ correlations as bipartite Bell correlations with \(m_{B_{\mathsf{S}}}+m_{B_{\mathsf{L}}}\) inputs on Bob's side, but with the additional restriction that the subset of measurements corresponding to the input pairs \((y,\mathsf{L})\) are jointly-measurable, i.e., the correlations (11) can also be expressed as \[p(a,b|x,y,z)=\operatorname{Tr}\left[\rho_{AB}\,M_{a|x}\otimes\tilde{M}_{b|yz}\right]\text{ with the operators }\tilde{M}_{b|y\mathsf{L}}\text{ jointly-measurable}\,. \tag{14}\] Using the definition of joint-measurability provided by (13), we can explicitly express SRQ correlations as follows \[p(a,b|x,y,z)=\begin{cases}\operatorname{Tr}\left[\rho_{AB}\,M_{a|x}\otimes\tilde{M}_{b|y\mathsf{S}}\right]&\text{ if }z=\mathsf{S}\\ \sum_{\lambda}p(b|y,\lambda)\,\operatorname{Tr}\left[\rho_{AB}\,M_{a|x}\otimes N_{\lambda}\right]&\text{ if }z=\mathsf{L}\,.\end{cases} \tag{15}\] Operationally, this can be understood as follows: if the switch selects the short path (\(z=\mathsf{S}\)), then the correlations are obtained by measuring a shared entangled state \(\rho_{AB}\) as in a regular Bell experiment. If the switch selects the long path (\(z=\mathsf{L}\)), then a fixed measurement \(\{N_{\lambda}\}\) is performed on Bob's system yielding a classical outcome \(\lambda\). This classical outcome is then transmitted to \(B_{\mathsf{L}}\), which based on the input \(y\) selects the output \(b\) according to the distribution \(p(b|y,\lambda)\). The example of Section 2.1 clearly falls in this category. This aligns perfectly with our notion, formulated in the Introduction, that SRQ correlations should not allow for the distribution of quantum information to \(B_{\mathsf{L}}\). Hence, the most general thing to do is to measure Bob's particle beyond the switch, and subsequently send a classical message to \(B_{\mathsf{L}}\). Figure 4: General correlations \(p(a,b|x,y,z)\) in a routed Bell experiment can be viewed as those of a regular bipartite Bell experiment where the combined effect of the switch and of the subsequent measurement (either \(B_{\mathsf{L}}\) or \(B_{\mathsf{S}}\)) acts as one effective measurement device for Bob, with input pairs \((y,z)\in\{(0,\mathsf{S}),\ldots,(m_{B_{\mathsf{S}}}-1,\mathsf{S}),(0,\mathsf{L}),\ldots,(m_{B_{\mathsf{L}}}-1,\mathsf{L})\}\). 
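To make the joint-measurability condition (13) concrete, here is a small added sketch (it uses the standard textbook example of noisy \(X\) and \(Z\) qubit measurements at visibility \(1/\sqrt{2}\), an illustrative choice not taken from the text): a single parent POVM \(\{N_{\lambda}\}\) plus classical post-processing reproduces both effective long-path measurements.

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

eta = 1 / np.sqrt(2)   # visibility of the noisy X and Z measurements (compatibility threshold)

# Effective long-path POVMs: noisy +-1-valued X (y = 0) and Z (y = 1) measurements
M_tilde = {(b, 0): (I2 + b * eta * X) / 2 for b in (+1, -1)}
M_tilde.update({(b, 1): (I2 + b * eta * Z) / 2 for b in (+1, -1)})

# Parent POVM N_lambda with lambda = (beta_0, beta_1): a single fixed measurement, cf. eq. (12)
N = {(b0, b1): (I2 + (b0 * X + b1 * Z) / np.sqrt(2)) / 4
     for b0, b1 in product((+1, -1), repeat=2)}

# Deterministic post-processing p(b | y, lambda) = delta(b, beta_y) reproduces eq. (13)
for (b, y), M in M_tilde.items():
    reconstructed = sum(N[lam] for lam in N if lam[y] == b)
    assert np.allclose(reconstructed, M)

print("noisy X/Z POVMs are jointly measurable: parent POVM + classical post-processing suffice")
```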
#### 3.1.3 Fully quantum marginal correlations As mentioned in Section 2.3, a subclass of general quantum correlations in routed Bell scenarios consists of those where the source prepares on Bob's side a pair of systems \(B=(B_{\mathsf{S}},B_{\mathsf{L}})\) and the switch routes the first subsystem to the nearby device \(B_{\mathsf{S}}\) if \(z=\mathsf{S}\) and the second subsystem to \(B_{\mathsf{L}}\) if \(z=\mathsf{L}\). The resulting correlations are \[p(a,b|x,y,z)=\begin{cases}\operatorname{Tr}\left[\rho_{AB_{\mathsf{S}}B_{\mathsf{L}}}\,M_{a|x}\otimes M_{b|y,\mathsf{S}}\otimes I\right]&\text{if }z=\mathsf{S}\\ \operatorname{Tr}\left[\rho_{AB_{\mathsf{S}}B_{\mathsf{L}}}\,M_{a|x}\otimes I\otimes M_{b|y,\mathsf{L}}\right]&\text{if }z=\mathsf{L}\,,\end{cases} \tag{16}\] and correspond to bipartite marginals of the tripartite \(qqq\)-correlations \[p(a,b_{\mathsf{S}},b_{\mathsf{L}}|x,y_{\mathsf{S}},y_{\mathsf{L}})=\operatorname{Tr}\left[\rho_{AB_{\mathsf{S}}B_{\mathsf{L}}}\,M_{a|x}\otimes M_{b_{\mathsf{S}}|y_{\mathsf{S}},\mathsf{S}}\otimes M_{b_{\mathsf{L}}|y_{\mathsf{L}},\mathsf{L}}\right]\,. \tag{17}\] We refer to such correlations as \(qq\)-marginal correlations and denote the set of such correlations as \(\mathcal{M}_{qq}\). #### 3.1.4 Quantum-classical marginal correlations If we further restrict the state \(\rho_{AB_{\mathsf{S}}B_{\mathsf{L}}}\) in the above correlations to be a \(qqc\)-state as in (7), i.e., \[\rho_{AB_{\mathsf{S}}B_{\mathsf{L}}}=\sum_{\lambda}p(\lambda)\,\rho_{AB_{\mathsf{S}}}^{\lambda}\otimes|\lambda\rangle\langle\lambda|_{B_{\mathsf{L}}}\,, \tag{18}\] then we get the class of correlations considered in [10], as pointed out in Section 2.3. From now on, we will refer to such correlations as \(qc\)-marginal correlations and denote the corresponding set as \(\mathcal{M}_{qc}\). #### 3.1.5 Relations between the above correlations The relations between the sets of correlations defined above can be summarized by the inclusion diagram \[\mathcal{M}_{qc}\ \subset\ \mathcal{M}_{qq},\,\mathcal{Q}_{SR}\ \subset\ \mathcal{Q}\,,\qquad\mathcal{M}_{qq}\ \text{and}\ \mathcal{Q}_{SR}\ \text{incomparable}. \tag{19}\] The inclusions \(\mathcal{M}_{qc}\subseteq\mathcal{M}_{qq}\subseteq\mathcal{Q}\) and \(\mathcal{M}_{qc}\subseteq\mathcal{Q}_{SR}\subseteq\mathcal{Q}\) are obvious. They are actually strict, i.e., \(\mathcal{M}_{qc}\subset\mathcal{M}_{qq}\subset\mathcal{Q}\) and \(\mathcal{M}_{qc}\subset\mathcal{Q}_{SR}\subset\mathcal{Q}\). That \(\mathcal{M}_{qc}\neq\mathcal{M}_{qq}\) and \(\mathcal{Q}_{SR}\neq\mathcal{Q}\) is easy to see. The example of Section 2.1 and the fact that \(\mathcal{M}_{qc}\) and \(\mathcal{M}_{qq}\) satisfy the bound (2), as pointed out in Section 2.3, imply \(\mathcal{M}_{qc}\neq\mathcal{Q}_{SR}\) and \(\mathcal{M}_{qq}\neq\mathcal{Q}\). It is further not difficult to see that \(\mathcal{M}_{qq}\) and \(\mathcal{Q}_{SR}\) are incomparable, meaning that there exist correlations that belong to one set but not the other, and vice versa. 
Indeed, the example of Section 2.1 shows that \(\mathcal{Q}_{SR}\nsubseteq\mathcal{M}_{qq}\), while \(\mathcal{M}_{qq}\nsubseteq\mathcal{Q}_{SR}\) follows for instance from the fact that correlations in \(\mathcal{M}_{qq}\) can yield an _LP_ CHSH value \(\mathcal{C}_{\mathsf{L}}>2\), while correlations in \(\mathcal{Q}_{SR}\) cannot. ### 3.2 Characterization through semidefinite programming hierarchies All of the above sets can be outer-approximated through semidefinite programming (SDP) hierarchies for noncommutative polynomial optimization [22, 23, 24]. This is because we can express the different types of correlations \(p\) above as \[p(a,b|x,y,z)=\operatorname{Tr}\left[\rho\,M_{a|x}\,M_{b|yz}\right] \tag{20}\] where the \(d_{A}\times m_{A}\) measurement operators \(M_{a|x}\) and the \(d_{B_{\mathsf{S}}}\times m_{B_{\mathsf{S}}}+d_{B_{\mathsf{L}}}\times m_{B_{\mathsf{L}}}\) measurement operators \(M_{b|y,z}\) are projectors and satisfy specific commutation relations depending on the type of correlations \(p\): \[[M_{a|x},M_{b|yz}] =0 \text{if }p\in\mathcal{Q}\,, \tag{21}\] \[[M_{a|x},M_{b|yz}] =0,\,[M_{b|y\mathsf{L}},M_{b^{\prime}|y^{\prime}\mathsf{L}}] =0 \text{if }p\in\mathcal{Q}_{SR}\,,\] (22) \[[M_{a|x},M_{b|yz}] =0,\,[M_{b|y\mathsf{S}},M_{b^{\prime}|y^{\prime}\mathsf{L}}] =0 \text{if }p\in\mathcal{M}_{qq}\,,\] (23) \[[M_{a|x},M_{b|yz}] =0,\,[M_{b|y\mathsf{S}},M_{b^{\prime}|y^{\prime}\mathsf{L}}] =0,\,[M_{b|y\mathsf{L}},M_{b^{\prime}|y^{\prime}\mathsf{L}}] =0 \text{if }p\in\mathcal{M}_{qc}\,. \tag{24}\] These representations naturally fit in the framework of noncommutative polynomial optimization. The above representations follow from the fact that in each of the representations (10), (14), (16), the tensor product structure between different subsystems can be replaced by commutation relations. Specifically, the tensor product structure between \(A\) and \(B\), common to all types of correlations, can be replaced by the commutation relations \([M_{a|x},M_{b|yz}]=0\). In the case of \(\mathcal{M}_{qq}\) and \(\mathcal{M}_{qc}\), the additional product structure between \(B_{\mathsf{S}}\) and \(B_{\mathsf{L}}\) leads to the commutation relations \([M_{b|y\mathsf{S}},M_{b^{\prime}|y^{\prime}\mathsf{L}}]=0\). Furthermore, we can without loss of generality assume the measurements to be projective by dilating the local Hilbert spaces if necessary. The requirement of joint-measurability in (14) to define \(\mathcal{Q}_{SR}\) is then equivalent to the condition that the operators \(M_{b|y\mathsf{L}}\) commute with each other [25], i.e., to \([M_{b|y\mathsf{L}},M_{b^{\prime}|y^{\prime}\mathsf{L}}]=0\). Lastly, the condition that the subsystem \(B_{\mathsf{L}}\) is classical in the case of \(\mathcal{M}_{qc}\) is equivalent to the condition that the operators \(M_{b|y\mathsf{L}}\) commute with all other operators, leading also to the additional commutation relations \([M_{b|y\mathsf{L}},M_{b^{\prime}|y^{\prime}\mathsf{L}}]=0\) in that case. Note that, strictly speaking, the replacement of the tensor product structure by commutation relations represents a relaxation for infinite-dimensional quantum systems [26, 27]. However, any commuting infinite-dimensional quantum correlations that are described by our current physical theories can be approximated arbitrarily well by tensor-product quantum correlations. 
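To give a flavour of how these hierarchies are used in practice, here is a minimal added sketch (not code from the paper): it implements the first level of the moment-matrix relaxation for the plain bipartite CHSH expression, where projectivity and the commutation relation \([M_{a|x},M_{b|yz}]=0\) enter through the unit diagonal and the symmetry of the moment matrix. Characterizing \(\mathcal{Q}_{SR}\), \(\mathcal{M}_{qq}\) or \(\mathcal{M}_{qc}\) proceeds in the same way, with additional operators for the long-path device and the extra relations (22)-(24).

```python
import numpy as np
import cvxpy as cp

# Level-1 moment matrix indexed by the monomials (1, A0, A1, B0, B1) built from
# dichotomic (+-1-valued) observables.  Positive semidefiniteness and the unit
# diagonal (from A_x^2 = B_y^2 = 1) are the only constraints at this level.
G = cp.Variable((5, 5), symmetric=True)
constraints = [G >> 0, cp.diag(G) == 1]

# CHSH = <A0 B0> + <A0 B1> + <A1 B0> - <A1 B1>
chsh = G[1, 3] + G[1, 4] + G[2, 3] - G[2, 4]

prob = cp.Problem(cp.Maximize(chsh), constraints)
prob.solve()
print(prob.value)   # ~ 2*sqrt(2) = 2.828..., the Tsirelson bound recovered by the SDP
```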
Indeed, in (15), we can assume without loss of generality that the outcome \(\lambda\) of the measurement \(\{N_{\lambda}\}\) specifies deterministically the outcome \(b\) for each input \(y\) of \(B_{\text{L}}\). That is, we can assume that \(\lambda=\boldsymbol{\beta}=(\beta_{0},\ldots,\beta_{m_{B_{\text{L}}}-1})\), where \(\beta_{y}\) is the output for input \(y\in\{0,\ldots,m_{B_{\text{L}}}-1\}\). We can then write SRQ correlations as \[p(a,b|x,y,z)=\begin{cases}\operatorname{Tr}\left[\rho_{AB}\,M_{a|x}\otimes M_{b|y\text{S}}\right]&\text{if }z=\text{S}\\ \sum_{\boldsymbol{\beta}}\delta_{\beta_{y},b}\,\operatorname{Tr}\left[\rho_{AB}\,M_{a|x}\otimes N_{\boldsymbol{\beta}}\right]&\text{if }z=\text{L}\,.\end{cases} \tag{25}\] In other words, SRQ correlations are linear combinations of regular bipartite quantum correlations that involve on Bob's side \(m_{B_{\text{S}}}\) measurements with \(d_{B_{\text{S}}}\) outcomes, corresponding to the operators \(M_{b|y\text{S}}\), and one additional measurement with \(d_{B_{\text{L}}}^{m_{B_{\text{L}}}}\) outcomes, corresponding to the operators \(N_{\boldsymbol{\beta}}\). They can thus be approximated from the outside, like regular bipartite quantum correlations, using the SDP hierarchies [22, 23, 24]. ## 4 _Sp_-enhancement of _Lp_ tests As the example of Section 2.1 shows, the short-path CHSH value \(\mathcal{C}_{\text{S}}\) does not constrain the long-path CHSH value \(\mathcal{C}_{\text{L}}\), according to our definition of SRQ correlations. We show in this section, though, that there exist other _LP_ tests for which a _SP_ CHSH violation does weaken the conditions under which they witness long-range quantum correlations. We refer to this as a "_SP_-enhancement" of the _LP_ test. Throughout this section, we assume that all inputs and outputs of the measurement devices are binary. We denote the input values as \(x,y\in\{0,1\}\), and for convenience the output values as \(a,b\in\{\pm 1\}\). We define \(A_{x}=\sum_{a=\pm 1}a\,M_{a|x}\) as the observable corresponding to the average value of \(a\) for given input \(x\) and we define similarly \(B_{yz}\) based on the effective POVMs \(\tilde{M}_{b|yz}\) that appear in definition (14). In the case where the POVM elements \(M_{a|x}\) and \(\tilde{M}_{b|yz}\) are projective (which we can assume without loss of generality), the observables \(A_{x}\), \(B_{yz}\) are unitary and square to the identity. We use these observables to define the observed quantities \(\langle A_{x}\rangle=\operatorname{Tr}\left[\rho_{AB}\,A_{x}\right]=\sum_{a=\pm 1}a\,p(a|x)\), \(\langle B_{yz}\rangle=\operatorname{Tr}\left[\rho_{AB}\,B_{yz}\right]=\sum_{b=\pm 1}b\,p(b|y,z)\), and \(\langle A_{x}B_{yz}\rangle=\operatorname{Tr}\left[\rho_{AB}\,A_{x}B_{yz}\right]=\sum_{a,b=\pm 1}ab\,p(a,b|x,y,z)\). The knowledge of \(\langle A_{x}\rangle\), \(\langle B_{yz}\rangle\), and \(\langle A_{x}B_{yz}\rangle\) is equivalent to the knowledge of the full set of probabilities \(p(a,b|x,y,z)\). ### A family of _lp_ tests The _lp_ tests we are going to consider in this section are based on the following Bell expressions \[\mathcal{J}_{\text{L}}^{\theta}=t_{\theta}\langle A_{0}B_{0\text{L}}\rangle+\langle A_{0}B_{1\text{L}}\rangle+\langle A_{1}B_{0\text{L}}\rangle-t_{\theta}\langle A_{1}B_{1\text{L}}\rangle\,, \tag{26}\] where \(t_{\theta}=\tan\theta\) and \(\theta\) is a parameter in \([0,\pi/4]\).
The expressions \(\mathcal{J}_{\text{L}}^{\theta}\) for other possible values of \(\theta\) can be obtained by relabelling the input and/or outputs of the observables \(A_{x}\) and \(B_{y\text{L}}\). Seen as standard Bell expressions, \(\mathcal{J}_{\text{L}}^{\theta}\) satisfy the following local and quantum bounds (see Appendix A): \[\mathcal{J}_{\text{L}}^{\theta} \leq 2\qquad\text{(local bound)}\,, \tag{27}\] \[\mathcal{J}_{\text{L}}^{\theta} \leq 2/c_{\theta}\qquad\text{(quantum bound)}\,, \tag{28}\] where \(c_{\theta}=\cos\theta\). The following states and observables \[\rho_{AB}=|\phi_{+}\rangle\!\langle\phi_{+}|\,\quad A_{0}=X\,,\,A_{1}=Z\,, \quad B_{0\text{L}}=s_{\theta}X+c_{\theta}Z\,,\,B_{1\text{L}}=c_{\theta}X-s_{ \theta}Z\,, \tag{29}\] define the optimal quantum strategies reaching the maximal quantum bound \(2/c_{\theta}\). Note that the measurements on Bob's side are anticommuting and the angle \(\theta\) can be seen as a global rotation along the \(Y\) axis on the Bloch sphere. For \(\theta=\pi/4\), the expression (26) simply corresponds to the CHSH expression \(\mathcal{J}_{\text{L}}^{\pi/4}=\mathcal{C}_{\text{L}}\) and the local bound (27) and quantum bounds (28) are the usual ones, i.e., \(2\) and \(2\sqrt{2}\). For \(\theta=0\) the local and quantum bounds are both equal to \(2\), i.e., \(\mathcal{J}_{\text{L}}^{0}\) cannot be used to detect any quantum nonlocality. Values of \(\theta\) between \(0\) and \(\pi/4\) lead to a gap between the local and quantum bounds, i.e., the inequalities \(\mathcal{J}_{\text{L}}^{\theta}\leq 2\) correspond to standard Bell inequalities. In the next two subsections, we will see how the above family of _lp_ tests can be enhanced by a _sp_ CHSH test. ### _Sp_-enhancement with a maximal short-path CHSH value Assume that in addition to the _lp_ expression \(\mathcal{J}_{\text{L}}^{\theta}\), we observe a _sp_ CHSH value, which, as a starting point, we assume to be maximal, i.e., \(\mathcal{C}_{\text{S}}=2\sqrt{2}\). **Proposition 1**.: _When \(\mathcal{C}_{\text{S}}=2\sqrt{2}\), SRQ correlations satisfy the following bound_ \[\mathcal{J}_{\text{L}}^{\theta}\leq\sqrt{2}/c_{\theta}\qquad\text{(SRQ bound)}\,. \tag{30}\] Proof.: To prove this bound, we need to maximize the _lp_ expression \(\mathcal{J}_{\text{L}}^{\theta}\) for SRQ correlations of the form (14). The fact that \(\mathcal{C}_{\text{S}}=2\sqrt{2}\) fixes the shared state \(\rho_{AB}\) and the observables \(A_{x}\) and \(B_{y\text{S}}\). Indeed by self-testing [12, 13, 14], they must be, up to local isometries, equivalent to the two-qubit state \(|\phi_{+}\rangle=(|00\rangle+|11\rangle)/\sqrt{2}\) and the qubit observables \(A_{0}=X\), \(A_{1}=Z\), \(B_{0\text{S}}=(Z+X)/\sqrt{2}\), \(B_{1\text{S}}=(X-Z)/\sqrt{2}\). Consequently, the _lp_ correlators are equal to \[\langle A_{0}B_{y\text{L}}\rangle =\langle\phi_{+}|XB_{y\text{L}}|\phi_{+}\rangle=\frac{1}{2}\, \text{Tr}\left[XB_{y\text{L}}\right], \tag{31}\] \[\langle A_{1}B_{y\text{L}}\rangle =\langle\phi_{+}|ZB_{y\text{L}}|\phi_{+}\rangle=\frac{1}{2}\,\text {Tr}\left[ZB_{y\text{L}}\right].\] We can thus write the _lp_ expression \(\mathcal{J}_{\text{L}}^{\theta}\) as \[\mathcal{J}_{\text{L}}^{\theta}=\frac{1}{2}\left[t_{\theta}\,\text{Tr}(XB_{0 \text{L}})+\text{Tr}(XB_{1\text{L}})+\text{Tr}(ZB_{0\text{L}})-t_{\theta}\, \text{Tr}(ZB_{1\text{L}})\right]\,. \tag{32}\] We now need to bound the above expressions for observables \(B_{0\text{L}}\) and \(B_{1\text{L}}\) that are jointly-measurable. 
In the case \(\theta=0\), it is shown in [18] that \[\frac{1}{2}\left[\text{Tr}(XB_{1\text{L}})+\text{Tr}(ZB_{0\text{L}})\right] \leq\sqrt{2}\,, \tag{33}\] whenever \(B_{\texttt{0L}}\) and \(B_{\texttt{1L}}\) are jointly-measurable. We rederive in Appendix B.1 this result for completeness. The above joint-measurability inequality holds for any pair of observables \(B_{\texttt{0L}}\) and \(B_{\texttt{1L}}\) and is independent of the basis in which we write it. Making the change of basis \(Z\to s_{\theta}X+c_{\theta}Z\), \(X\to c_{\theta}X-s_{\theta}Z\), where \(c_{\theta}=\cos\theta\) and \(s_{\theta}=\sin\theta\), we can rewrite the above inequality as \[\frac{1}{2}\left[s_{\theta}\operatorname{Tr}(XB_{\texttt{0L}})+c_{\theta} \operatorname{Tr}(XB_{\texttt{1L}})+c_{\theta}\operatorname{Tr}(ZB_{\texttt{0L }})-s_{\theta}\operatorname{Tr}(ZB_{\texttt{1L}})\right]\leq\sqrt{2}\,. \tag{34}\] Dividing by \(c_{\theta}\), the left-hand side becomes equal to (32) and the right-hand side to \(\sqrt{2}/c_{\theta}\), proving (30). The intuition behind the above Proposition and its proof is that when Alice and Bob observe the maximal value \(\mathcal{C}_{\texttt{S}}=2\sqrt{2}\), their _LP_ correlators are, by self-testing, directly associated to the Pauli expectations \(\operatorname{Tr}\{[PB_{y\texttt{L}}]\}\), where \(P=I,X,Z\), of the observables \(B_{y\texttt{L}}\). This means that they can perform tomography, restricted to the \(Z\operatorname{-}X\) plane, of these observables. If from the knowledge of these Pauli expectations it is possible to conclude that the two observables \(B_{\texttt{0L}}\) and \(B_{\texttt{1L}}\) are not jointly measurable, then, following the definition of SRQ correlations in Subsection 3.1.2, the resulting correlations are not SRQ. Inequalities that rule out SRQ correlations are thus directly related to inequalities that rule out joint-measurability, such as (33) and (34). In fact, it is the existence of these inequalities for joint-measurability that led us to introduce the _LP_ expressions (26). The SRQ bound (30), the local bound (27), and the general quantum bound (28) of \(\mathcal{J}_{\texttt{L}}^{\theta}\) are plotted in Fig. 5 as a function \(\theta\). For any value \(0\leq\theta<\pi/4\), the SRQ bound is strictly smaller than the local bound, i.e., the _SP_ CHSH test weakens the condition to witness long-range quantum correlations based on the \(\mathcal{J}_{\texttt{L}}^{\theta}\) test. When \(\theta=\pi/4\), the expression \(\mathcal{J}_{\texttt{L}}^{\pi/4}\) is simply the CHSH expression \(\mathcal{C}_{\texttt{L}}\). In this case, the SRQ bound and local bound coincide, i.e., the _SP_ CHSH test does not weaken the condition to witness long-range quantum correlations using the \(\mathcal{C}_{\texttt{L}}\) test, as already noted in Subsection 2.1. Note that by adding the short-path measurements \[B_{\texttt{0S}}=\frac{X+Z}{\sqrt{2}},\quad B_{\texttt{1S}}=\frac{X-Z}{\sqrt{2} }\,, \tag{35}\] to the strategy (29), both \(\mathcal{C}_{\mathsf{S}}\) and \(\mathcal{J}_{\mathsf{L}}^{\theta}\) can simultaneously reach their maximal quantum values of \(2\sqrt{2}\) and \(2/c_{\theta}\), respectively. Interestingly, in the case \(\theta=0\), the _SP_-enhancement of the _LP_ test based on \(\mathcal{J}_{\mathsf{L}}\equiv\mathcal{J}_{\mathsf{L}}^{0}\) is maximal. Indeed, in this case the standard local bound (27) is \[\mathcal{J}_{\mathsf{L}}=\langle A_{0}B_{\mathsf{1L}}\rangle+\langle A_{1}B_{ \mathsf{0L}}\rangle\leq 2\,. 
\tag{36}\] But this coincides with the quantum bound (28) and thus the above inequality cannot be violated by quantum theory. That is, it does not represent a proper Bell inequality and cannot be used to witness long-range quantum nonlocality. However, when supplemented with a maximal _SP_ test, the local bound has to be replaced with the more strict SRQ bound \[\mathcal{J}_{\mathsf{L}}=\langle A_{0}B_{\mathsf{1L}}\rangle+\langle A_{1}B_{\mathsf{0L}}\rangle\leq\sqrt{2}<2\,. \tag{37}\] This bound is now smaller than the quantum bound, hence it does represent a proper witness of long-range quantum nonlocality. The strategy that reaches the maximal quantum values \(\mathcal{C}_{\mathsf{S}}=2\sqrt{2}\) and \(\mathcal{J}_{\mathsf{L}}=2\) is defined by eqs. (29) and (35). It is obtained by measuring a maximally entangled two-qubit state \(|\phi_{+}\rangle\) with \(Z,X\) observables for \(A\) and the remote device \(B_{\mathsf{L}}\). These correlations arise in many contexts in quantum information - e.g. in entanglement-based BB84 [29, 30, 31], or for witnessing the entanglement of the \(|\phi_{+}\rangle\) state [32, 33] - but they cannot certify any quantum property in a device-independent way since they can be reproduced by purely classical strategies. The result above shows that if we append to this BB84 scenario intermediate \(B_{\mathsf{S}}\) measurements in the \((X\pm Z)/\sqrt{2}\) bases, then the \(\langle AB_{\mathsf{L}}\rangle\) correlations certify long-range quantumness in a device-independent setting. To illustrate the interest of the _SP_-enhanced _LP_ tests represented by the family of inequalities (30), consider the optimal correlations defined by eqs. (29) and (35), but assume that the quantum particle going to the remote measurement device \(B_{\mathsf{L}}\) is characterized by a visibility \(v\), i.e., with probability \(1-v\) it undergoes completely depolarizing noise. Then the corresponding long-range quantum correlations are \[\langle A_{0}B_{\mathsf{1L}}\rangle=\langle A_{1}B_{\mathsf{0L}}\rangle=vc_{\theta},\quad\langle A_{0}B_{\mathsf{0L}}\rangle=-\langle A_{1}B_{\mathsf{1L}}\rangle=vs_{\theta}\,. \tag{38}\] Without the switch, the condition to witness non-locality using the \(\mathcal{J}_{\mathsf{L}}^{\theta}\) expression is \(\mathcal{J}_{\mathsf{L}}^{\theta}>2\), i.e., for the correlations (38) \(2v/c_{\theta}>2\). That is, a visibility \(v>c_{\theta}\) is required. However, we know that in the two-input, two-output regular Bell scenario, a necessary and sufficient condition to witness non-locality is the violation \(\mathcal{C}>2\) of the CHSH inequality. In the case of the correlations (38), this condition is equivalent to \(2v(c_{\theta}+s_{\theta})>2\), or \(v>1/(c_{\theta}+s_{\theta})\). The required visibility threshold thus goes from \(v>1\) when \(\theta=0\) (i.e., the correlations do not exhibit any nonlocality) to \(v>1/\sqrt{2}\simeq 0.71\) when \(\theta=\pi/4\) (corresponding to the case of maximally robust Tsirelson's correlations). In a routed Bell experiment, and since the _SP_ CHSH test is maximally violated, we can witness long-range quantum correlations whenever \(\mathcal{J}_{\mathsf{L}}^{\theta}>\sqrt{2}/c_{\theta}\) for the _LP_ test, instead of the more constraining criterion \(\mathcal{J}_{\mathsf{L}}^{\theta}>2\). This happens when \(2v/c_{\theta}>\sqrt{2}/c_{\theta}\), i.e., \(v>1/\sqrt{2}\simeq 0.71\) for all \(\theta\).
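As a quick numerical sanity check of the above, the following sketch (Python with numpy; only the strategy of eqs. (29) and (35) and the thresholds quoted in the text are used, and the helper names are ours) evaluates \(\mathcal{C}_{\mathsf{S}}\) and \(\mathcal{J}_{\mathsf{L}}^{\theta}\) on the maximally entangled state and compares the visibility thresholds \(v>c_{\theta}\), \(v>1/(c_{\theta}+s_{\theta})\) and \(v>1/\sqrt{2}\) discussed above.

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
phi_plus = np.array([1., 0., 0., 1.]) / np.sqrt(2)          # (|00> + |11>)/sqrt(2)
rho = np.outer(phi_plus, phi_plus)

def corr(A, B, state=rho):
    """Expectation value <A (x) B> on the shared state."""
    return float(np.real(np.trace(state @ np.kron(A, B))))

for theta in [0.0, np.pi / 8, np.pi / 4]:
    c, s, t = np.cos(theta), np.sin(theta), np.tan(theta)
    A0, A1 = X, Z                                            # Eq. (29)
    B0S, B1S = (X + Z) / np.sqrt(2), (X - Z) / np.sqrt(2)    # Eq. (35)
    B0L, B1L = s * X + c * Z, c * X - s * Z                  # Eq. (29)

    C_S = corr(A0, B0S) + corr(A0, B1S) + corr(A1, B0S) - corr(A1, B1S)
    J_L = t * corr(A0, B0L) + corr(A0, B1L) + corr(A1, B0L) - t * corr(A1, B1L)
    assert abs(C_S - 2 * np.sqrt(2)) < 1e-12 and abs(J_L - 2 / c) < 1e-12

    # visibility thresholds for the noisy long-path correlations (38)
    v_std_J    = c                     # J_L^theta > 2 without the switch
    v_std_chsh = 1 / (c + s)           # CHSH > 2 without the switch
    v_routed   = 1 / np.sqrt(2)        # J_L^theta > sqrt(2)/c with C_S = 2*sqrt(2)
    print(f"theta={theta:.3f}: v>{v_std_J:.3f} (J test), "
          f"v>{v_std_chsh:.3f} (CHSH), v>{v_routed:.3f} (routed)")
```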
Thus all correlations defined by the family of strategies (29) and (35) have the same noise tolerance in a routed Bell experiment, which is moreover equal to the noise tolerance of the maximally robust Tsirelson's correlations corresponding to \(\theta=\pi/4\) in a regular Bell scenario. ### _Sp_-enhancement with a non-maximal short-path CHSH value Obviously, the assumption that \(\mathcal{C}_{\mathsf{S}}=2\sqrt{2}\) is too strong in any realistic experimental setting. We now derive new bounds on the value of \(\mathcal{J}_{\mathsf{L}}^{\theta}\) achievable by short-range quantum correlations when the CHSH value in the short-path is not maximal. We first consider the case of \(\theta=0\), for which we analytically derive the SRQ bound for \(\mathcal{J}_{\mathsf{L}}\equiv\mathcal{J}_{\mathsf{L}}^{0}\) as a function of \(\mathcal{C}_{\mathsf{S}}\). **Proposition 2**.: _For any short-range quantum (SRQ) correlations, the following inequality holds_ \[\mathcal{J}_{\mathsf{L}}\leq\frac{\mathcal{C}_{\mathsf{S}}+\sqrt{8-\mathcal{C }_{\mathsf{S}}^{2}}}{2}\qquad\text{when }\mathcal{C}_{\mathsf{S}}\in\left[2,2\sqrt{2}\right]\,. \tag{39}\] Proof.: In the \((\mathcal{C}_{\mathsf{S}},\mathcal{J}_{\mathsf{L}})\) plane, the region delimited by (39) is convex, hence it is equivalent to a family of linear bounds given by the tangent to the curve \(\mathcal{J}_{\mathsf{L}}=\frac{c_{\mathsf{S}}+\sqrt{8-c_{\mathsf{S}}^{2}}}{2}\). These linear bounds are \[\sin u\ \mathcal{C}_{\mathsf{S}}+(\cos u-\sin u)\mathcal{J}_{\mathsf{L}}\leq 2 \,\ \ u\in\left[0,\frac{\pi}{4}\right]. \tag{40}\] Instead of directly proving (39), we will thus prove the bounds (40) for every \(u\in[0,\pi/4]\). Let us define the Bell operators \[\mathtt{C}_{\mathsf{S}} =A_{0}B_{0\mathsf{S}}+A_{0}B_{1\mathsf{S}}+A_{1}B_{0\mathsf{S}}-A _{1}B_{1\mathsf{S}} \tag{41}\] \[\mathtt{J}_{\mathsf{L}} =A_{0}B_{1\mathsf{L}}+A_{1}B_{0\mathsf{L}}\,. \tag{42}\] Then since for any quantum state \(\ket{\psi}\), \(\mathcal{C}_{\mathsf{S}}=\bra{\psi}\mathtt{C}_{\mathsf{S}}\ket{\phi}\) and \(\mathcal{J}_{\mathsf{L}}=\bra{\psi}\mathtt{J}_{\mathsf{L}}\ket{\phi}\), proving (40) is equivalent to proving the operator semidefinite constraint \[\mathtt{I}_{u}=2\mathbf{1}-\sin u\ \mathtt{C}_{\mathsf{S}}-(\cos u-\sin u )\mathtt{J}_{\mathsf{L}}\geq 0\qquad\text{for all }u\in\left[0,\frac{\pi}{4}\right]\,. \tag{43}\] We will prove this positivity constraint using a Sum of Squares (SoS) decomposition. Define the hermitian operators: \[\mathtt{C}_{ij\mathsf{S}} =\sum_{x,y}(-1)^{\delta_{x,i}\delta_{y,j}}A_{x}B_{y\mathsf{S}}\, \tag{44a}\] \[\mathtt{J}_{ij\mathsf{L}} =A_{0}B_{(0\oplus i)\mathsf{L}}+(-1)^{j+1}A_{1}B_{(1\oplus i) \mathsf{L}}, \tag{44b}\] for \(i,j\in\{0,1\}\), where \(\oplus\) denotes addition modulo \(2\). Thus \(\mathtt{C}_{\mathsf{S}}=\mathtt{C}_{11\mathsf{S}}\) and \(\mathtt{J}_{\mathsf{L}}=\mathtt{J}_{11\mathsf{L}}\). Define further \[P_{1}(u) =-c_{u}(c_{u}-s_{u})\mathtt{C}_{11\mathsf{S}}+(c_{u}^{2}-s_{u}^{ 2})\mathtt{J}_{11\mathsf{L}}\, \tag{45a}\] \[P_{2}(u) =-c_{u}(c_{u}+s_{u})\mathtt{C}_{01\mathsf{S}}+(c_{u}^{2}-s_{u}^{ 2})\mathtt{J}_{01\mathsf{L}}\,\] (45b) \[P_{3}(u) =s_{u}(c_{u}+s_{u})\mathtt{C}_{10\mathsf{S}}+(c_{u}^{2}-s_{u}^{ 2})\mathtt{J}_{10\mathsf{L}}\,\] (45c) \[P_{4}(u) =s_{u}(c_{u}-s_{u})\mathtt{C}_{00\mathsf{S}}+(c_{u}^{2}-s_{u}^{ 2})\mathtt{J}_{00\mathsf{L}}. \tag{45d}\] where \(c_{u}=\cos u\) and \(s_{u}=\sin u\). 
Then using that the operators \(A_{x}\), \(B_{y\mathsf{S}}\) obey the commutation relations in (22) and that they square to the identity, it is easily verified that \[\mathtt{I}_{u}=\frac{1}{4}\mathtt{I}_{u}^{2}+\frac{s_{u}P_{1}(u)^{2}}{8c_{u}(c_{u}^{2}-s_{u}^{2})}+\frac{s_{u}P_{2}(u)^{2}}{8c_{u}(c_{u}+s_{u})^{2}}+\frac{P_{3}(u)^{2}}{8(c_{u}+s_{u})^{2}}+\frac{P_{4}(u)^{2}}{8(c_{u}^{2}-s_{u}^{2})},\quad\text{for }u\in[0,\frac{\pi}{4}[. \tag{46}\] The right-hand side of the above expression is a SoS, hence is positive, which proves (43). The SoS (46) is not valid at \(u=\pi/4\). But for this point (40) is simply the well-known bound \(\mathcal{C}_{\mathsf{S}}\leq 2\sqrt{2}\) for CHSH. To find the above SoS decomposition, we followed the approach in [34]. We briefly describe this method in Appendix C, where we also show that the bound (40) is tight. For other values of \(\theta\), a SRQ bound on \(\mathcal{J}_{\mathsf{L}}^{\theta}\) as a function of \(\mathcal{C}_{\mathsf{S}}\) can be obtained numerically. Maximizing \(\mathcal{J}_{\mathsf{L}}^{\theta}\) for a fixed value of \(\mathcal{C}_{\mathsf{S}}\) over \(\mathcal{Q}_{SR}^{n}\), the set of correlations corresponding to the \(n^{\text{th}}\)-level NPA relaxation of the set of SRQ correlations [22, 23, 24] (see Section 3.2), gives an upper bound on the maximum value of \(\mathcal{J}_{\mathsf{L}}^{\theta}\). Lower bounds on the maximum value of \(\mathcal{J}_{\mathsf{L}}^{\theta}\) can be obtained using a see-saw algorithm searching over explicit quantum strategies. However, fixing the value of \(\mathcal{C}_{\mathsf{S}}\) is not convenient when running the see-saw algorithm, since we initialize it with a random state and measurements. Instead, we maximize the expressions \(\cos u\ \mathcal{C}_{\mathsf{S}}+\sin u\ \mathcal{J}_{\mathsf{L}}^{\theta}\) for different values of \(u\), yielding the solution \(k_{u}\). The envelope of the curves \(\cos u\ \mathcal{C}_{\mathsf{S}}+\sin u\ \mathcal{J}_{\mathsf{L}}^{\theta}=k_{u}\) yields lower-bounds on \(\mathcal{J}_{\mathsf{L}}^{\theta}\) for given \(\mathcal{C}_{\mathsf{S}}\). A plot of such bounds for different values of \(\theta\) is given in Fig. 6. The upper bounds were obtained using level '\(AB\)' of the NPA hierarchy. The upper bounds match the see-saw lower bounds up to numerical precision. As before, we illustrate the interest of such bounds on the noise robustness of the correlations induced by the quantum strategies (29) and (35). We assume this time a depolarizing noise characterized by a visibility \(v_{\mathsf{S}}\) on the path from the source to the devices \(A\) and \(B_{\mathsf{S}}\), which are located close to the source, and a visibility \(v_{\mathsf{L}}\) on the path to the remote device \(B_{\mathsf{L}}\), with \(v_{\mathsf{L}}\leq v_{\mathsf{S}}\). The resulting noisy correlations, the analogue of (38) with these two visibilities, are given in (47). In a regular Bell scenario, the condition for the correlations (47) to exhibit non-locality is the violation of the CHSH inequality, i.e., \(\mathcal{C}_{\mathsf{L}}>2\). The conditions obtained in a routed Bell scenario exploiting the above bounds on \(\mathcal{J}_{\mathsf{L}}^{\theta}\) induced by the value of \(\mathcal{C}_{\mathsf{S}}\) are depicted in Fig. 7a for different values of \(\theta\). These conditions are based on a specific family of _LP_ tests, i.e., (39), which are not necessarily optimal. We thus also directly determined the conditions on \(v_{\mathsf{S}}\) and \(v_{\mathsf{L}}\) required to demonstrate long-range quantum correlations using the full set of correlations (47) and the NPA relaxation of the SRQ set at a level intermediate between 3 and 4. These results are depicted in Fig. 7b. We notice that for very high values of \(v_{\mathsf{S}}\), routed Bell experiments based on a _SP_ CHSH test tolerate lower values of \(v_{\mathsf{L}}\) than standard Bell experiments.
The range of values for which this _SP_-enhancement is obtained increases as. We note that for the BB84 correlations corresponding to \(\theta=0\), using the LP inequality (39) or the full set of correlations gives the same results, hence this inequality seems to be optimal for these correlations. The inequality (39) is violated by the noisy BB84 correlations whenever. The minimal value for which this arises, assuming further \(v_{\mathsf{S}}\geq v_{\mathsf{L}}\) (since the _SP_ visibility is higher than the _LP_ visibility), is. Figure 6: \(\mathcal{J}_{\mathsf{L}}^{\theta}\) vs \(\mathcal{C}_{\mathsf{S}}\) for SRQ strategies, obtained at level '\(AB\)' of the NPA hierarchy and saturated using a see-saw algorithm. ## 5 Detection efficiency thresholds in routed Bell experiments We now consider the effect of detection inefficiencies in routed Bell experiments, the original motivation for considering such experiments. We denote \(\vec{\eta}=(\eta_{A},\eta_{B_{\text{S}}},\eta_{B_{\text{L}}})\), the detector efficiencies of the three measurement devices, that is, the probabilities that these measurement devices 'click' and provide a valid output. Disregarding the 'no-click' events \(\varnothing\) in the statistical analysis opens the so-called detection loophole, and is only valid assuming fair sampling [1, 2]. There are two common ways to take no-click events into account: we can either count them as a separate, additional outcome \(\varnothing\), or we can bin them with one of the other outcomes, say \(\varnothing\mapsto+1\).\({}^{1}\) Footnote 1: We can also mix these two ways for different detectors. Given some correlations \(p\) in the ideal situation where measurement devices always click, the non-ideal correlations \(p^{\vec{\eta}}\) keeping the no-click outcomes \(\varnothing\) as an additional outcome are \[p^{\vec{\eta}}(a,b|x,y,z) =\eta_{z}\eta_{A}p(a,b|x,y,z)\,, \tag{48}\] \[p^{\vec{\eta}}(a,\varnothing|x,y,z) =\eta_{A}(1-\eta_{z})p(a|x)\,,\] \[p^{\vec{\eta}}(\varnothing,b|x,y,z) =(1-\eta_{A})\eta_{z}p(b|y,z)\;,\] \[p^{\vec{\eta}}(\varnothing,\varnothing|x,y,z) =(1-\eta_{A})(1-\eta_{z})\,,\] where \(p(a|x)\) and \(p(b|y,z)\) are the marginal probabilities of respectively \(A\) and \(B_{z}\) in the target implementation. If instead the no-click outcome is binned according to \(\varnothing\mapsto+1\), the ideal correlations \(p\) are modified to \[p^{\vec{\eta}}(a,b|x,y,z)=\eta_{A}\eta_{z}p(a,b|x,y,z) +(1-\eta_{A})\eta_{z}p(b|y,z)\delta_{a,+1}\] \[+\eta_{A}(1-\eta_{z})p(a|x)\delta_{b,+1}+(1-\eta_{A})(1-\eta_{z})\delta_{a,+1}\delta_{b,+1}\,, \tag{49}\] where \(\delta\) is the Kronecker delta. ### Universal bounds on critical detection efficiency Lower bounds on critical detection efficiencies in regular Bell experiments were derived in [6]. These bounds depend only on the number of measurement settings of Alice and Bob and are independent of the quantum implementation. We now generalize these bounds to i) the case where each party has a different detection efficiency (as [6] considered only the symmetric case where Alice and Bob have the same detection efficiency) and ii) routed Bell experiments. Figure 7: Value of \(v_{\text{L}}\) required to demonstrate long-range quantum correlations as a function of \(v_{\text{S}}\) in a standard Bell scenario (solid lines) and routed Bell scenario (dashed lines) for the correlations (47), determined (a) using the \(LP\) test based on \(\mathcal{J}_{\text{L}}^{\theta}\) and (b) using the full set of correlations (47) and the NPA relaxation of the SRQ set at a level between 3 and 4.
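The two ways of accounting for no-click events translate into simple maps on the ideal correlations. The following sketch (Python; a minimal illustration for a single choice of inputs \((x,y,z)\), with binary outcomes \(\pm 1\) and `None` standing in for \(\varnothing\); the function names are ours) implements the maps (48) and (49).

```python
def lossy_keep_noclick(p_ab, p_a, p_b, eta_A, eta_z):
    """Eq. (48): keep the no-click event (None) as an extra outcome.
    p_ab[(a, b)], p_a[a], p_b[b] are the ideal distributions for fixed (x, y, z)."""
    q = {}
    for a in (+1, -1):
        for b in (+1, -1):
            q[(a, b)] = eta_A * eta_z * p_ab[(a, b)]
        q[(a, None)] = eta_A * (1 - eta_z) * p_a[a]
    for b in (+1, -1):
        q[(None, b)] = (1 - eta_A) * eta_z * p_b[b]
    q[(None, None)] = (1 - eta_A) * (1 - eta_z)
    return q

def lossy_bin_noclick(p_ab, p_a, p_b, eta_A, eta_z):
    """Eq. (49): bin the no-click event with the outcome +1."""
    q = {}
    for a in (+1, -1):
        for b in (+1, -1):
            q[(a, b)] = (eta_A * eta_z * p_ab[(a, b)]
                         + (1 - eta_A) * eta_z * p_b[b] * (a == +1)
                         + eta_A * (1 - eta_z) * p_a[a] * (b == +1)
                         + (1 - eta_A) * (1 - eta_z) * (a == +1) * (b == +1))
    return q

# example: perfectly correlated, unbiased outcomes; both maps return normalized distributions
p_ab = {(+1, +1): 0.5, (-1, -1): 0.5, (+1, -1): 0.0, (-1, +1): 0.0}
p_a = p_b = {+1: 0.5, -1: 0.5}
assert abs(sum(lossy_keep_noclick(p_ab, p_a, p_b, 0.9, 0.7).values()) - 1) < 1e-12
assert abs(sum(lossy_bin_noclick(p_ab, p_a, p_b, 0.9, 0.7).values()) - 1) < 1e-12
```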
**Proposition 3**.: _Consider a routed Bell experiment where Alice's measurement device has \(m_{A}\) measurement settings and detection efficiency \(\eta_{A}\), and Bob's remote device has \(m_{B_{\text{\tiny{L}}}}\) measurement settings and detection efficiency \(\eta_{B_{\text{\tiny{L}}}}\). Then, there exists an SRQ model when the following condition is satisfied,_ \[\eta_{B_{\text{\tiny{L}}}}\leq\frac{\eta_{A}(m_{A}-1)}{\eta_{A}(m_{A}m_{B_{ \text{\tiny{L}}}}-1)-(m_{B_{\text{\tiny{L}}}}-1)}\,, \tag{50}\] _independently of the number of measurement settings \(m_{B_{\text{\tiny{L}}}}\) and detection efficiency \(\eta_{B_{\text{\tiny{S}}}}\) of Bob's close device \(B_{\text{\tiny{S}}}\). In particular, this bound also applies to standard Bell experiment, which can be seen as the particular case \(m_{B_{\text{\tiny{S}}}}=0\)._ Proof.: We prove the above result by constructing an explicit SRQ model for the correlations \(p^{\vec{\eta}}\) given by (48) when (50) is satisfied. This model is based on a mixture of three different strategies with respective weights \(st\), \(s(1-t)\), and \(1-s\) with \(0\leq s,t\leq 1\). The first strategy is purely local (hence SRQ) and based on a hidden variable \((a^{\prime},x^{\prime})\) shared with all measurement devices and chosen with probability \(p(a^{\prime}|x^{\prime})/m_{A}\) where \(p(a^{\prime}|x^{\prime})\) is the marginal distribution of Alice's and Bob's ideal correlations \(p\). If \(x=x^{\prime}\), Alice outputs \(a^{\prime}\), otherwise she outputs \(\varnothing\). Bob's device (whether \(B_{\text{\tiny{S}}}\) or \(B_{\text{\tiny{L}}}\)), on the other hand, outputs \(b\) with probability \(p(b|a^{\prime},x^{\prime},y)=p(a^{\prime},b|x^{\prime},y,z)/p(a^{\prime}|x^{ \prime})\) when his input is \(y\). The second strategy is a SRQ strategy in which the source, Alice's measurement device \(A\) and Bob's nearby measurement device \(B_{\text{\tiny{S}}}\) are as in the quantum strategy yielding the ideal correlations \(p\). On Bob's side, if the switch is set to \(z=\text{\tiny{L}}\), then an input \(y^{\prime}\) is selected at random with probability \(1/m_{B_{\text{\tiny{L}}}}\) and the corresponding ideal measurement (the one leading to the correlations \(p\)) is performed yielding the outcome \(b^{\prime}\). Both \(y^{\prime}\) and \(b^{\prime}\) are relayed to \(B_{\text{\tiny{L}}}\) as a classical message. If \(B_{\text{\tiny{L}}}\)'s actual input \(y\) matches \(y^{\prime}\), it outputs \(b=b^{\prime}\), otherwise, it outputs \(\varnothing\). Note that if \(m_{B_{\text{\tiny{S}}}}=0\), i.e, there is no switch and no nearby measurement device, then this strategy is actually equivalent to a purely local one, since it involves a single quantum measurement on Bob's side. In the third strategy, \(A\) and \(B_{\text{\tiny{L}}}\) always output \(\varnothing\), while \(B_{\text{\tiny{S}}}\) outputs \(b\) with probability \(p(b|y,\text{\tiny{S}})\). 
The correlations obtained by this mixture of three strategies are \[p^{\text{\tiny{srq}}}(a,b|x,y,\text{\tiny{S}}) =s\left(\frac{t}{m_{A}}+1-t\right)p(a,b|x,y,\text{\tiny{S}}), \tag{51}\] \[p^{\text{\tiny{srq}}}(a,\varnothing|x,y,\text{\tiny{S}}) =0,\] \[p^{\text{\tiny{srq}}}(\varnothing,b|x,y,\text{\tiny{S}}) =\left(s\ t\ \frac{m_{A}-1}{m_{A}}+1-s\right)p(b|y,\text{\tiny{S}}),\] \[p^{\text{\tiny{srq}}}(\varnothing,\varnothing|x,y,\text{\tiny{S}}) =0,\] \[p^{\text{\tiny{srq}}}(a,b|x,y,\text{\tiny{L}}) =s\left(\frac{t}{m_{A}}+\frac{1-t}{m_{B_{\text{\tiny{L}}}}}\right)p (a,b|x,y,\text{\tiny{L}}),\] \[p^{\text{\tiny{srq}}}(a,\varnothing|x,y,\text{\tiny{L}}) =s(1-t)\left(\frac{m_{B_{\text{\tiny{L}}}}-1}{m_{B_{\text{\tiny{L} }}}}\right)p(a|x),\] \[p^{\text{\tiny{srq}}}(\varnothing,b|x,y,\text{\tiny{L}}) =s\ t\left(\frac{m_{A}-1}{m_{A}}\right)p(b|y,\text{\tiny{L}}),\] \[p^{\text{\tiny{srq}}}(\varnothing,\varnothing|x,y,\text{\tiny{L}}) =1-s.\] These correlations match the correlations \(p^{\vec{\eta}}\) given in (48) when \(\eta_{B_{\text{\tiny{S}}}}=1\) and \(\eta_{B_{\text{\tiny{L}}}}\) is equal to the right hand side of (50). It is of course possible in the above model to locally lower the detection efficiencies of \(B_{\text{\tiny{S}}}\) and \(B_{\text{\tiny{L}}}\) by instructing part of the time the measurement device to output \(\varnothing\), thereby achieving arbitrary values of \(\eta_{B_{\text{\tiny{S}}}}\) and any \(\eta_{B_{\text{\tiny{L}}}}\) satisfying (50). The above bound places fundamental limits on the distance at which nonlocal correlations can be observed, both for standard and routed Bell experiments. In particular, the right-hand side of (50) is always greater than \(1/m_{B_{\text{\tiny{L}}}}\), implying that the detection efficiency of the remote measurement device cannot be lower than \(1/m_{B_{\text{\tiny{L}}}}\), even when all other detectors are perfect. This lower-bound of \(1/m_{B_{\text{\tiny{L}}}}\) can also be seen as a consequence of the universal bounds derived in [35] for general (semi-)device-independent protocols. In the case where Bob's remote measurement device is doing two measurements, as for the explicit examples considered in this paper and in [10], the detection efficiency of the remote measurement device cannot be lower than \(1/2\). Though the bound (50) applies both to standard and routed Bell experiments, this does not mean that routed Bell experiment cannot be more robust to photon losses than standard Bell experiment. First, the above bound is universal and applies to any quantum strategy. For specific strategies, based on specific states and quantum measurements, routed Bell experiment may provide an advantage over standard Bell experiments, as we will demonstrate in the next subsection. Second, a more stringent bound than (50) might actually apply for standard Bell experiments, leaving the possibility of a gap between the optimal (i.e., optimized over all quantum strategies) photon loss resistance of standard and routed Bell experiments. In the case of two measurements per party, \(m_{A}=m_{B_{\mathrm{L}}}=2\), however, this cannot be the case. Indeed, the bound (50) then reduces to \[\eta_{B_{\mathrm{L}}}\leq\frac{\eta_{A}}{3\eta_{A}-1}\,. \tag{52}\] It was shown in [36] that there are quantum implementations that violate the CH inequality [2], and thus demonstrate standard nonlocality, whenever (52) is violated, i.e., the bound (52) is tight for standard Bell experiments - and routed Bell experiments cannot improve it. 
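For reference, the bound (50) is easy to evaluate; the short sketch below (Python; the function name is ours) checks that it reduces to (52) for \(m_{A}=m_{B_{\mathsf{L}}}=2\) and approaches \(1/m_{B_{\mathsf{L}}}\) when the other detectors are perfect.

```python
def srq_threshold(eta_A, m_A, m_BL):
    """Right-hand side of Eq. (50): if eta_BL is at or below this value, the
    explicit SRQ model of Proposition 3 reproduces the lossy correlations,
    so no long-range nonlocality can be certified."""
    return eta_A * (m_A - 1) / (eta_A * (m_A * m_BL - 1) - (m_BL - 1))

# m_A = m_BL = 2 reduces to Eq. (52): eta_A / (3 eta_A - 1)
for eta_A in (0.8, 0.9, 1.0):
    assert abs(srq_threshold(eta_A, 2, 2) - eta_A / (3 * eta_A - 1)) < 1e-12

# with perfect detectors on Alice's side the bound tends to 1 / m_BL
print(srq_threshold(1.0, 2, 2))   # 0.5
print(srq_threshold(1.0, 5, 3))   # 1/3
```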
Whether there exists a quantum strategy that demonstrates standard nonlocality when \(\eta_{B}\) violates (50) for general values of \(m_{A}\) and \(m_{B_{\mathrm{L}}}\) is an open question. ### Analytical detection thresholds for implementations based on an ideal short-path CHSH test We now derive the detection efficiency threshold \(\eta_{B_{\mathrm{L}}}\) of the remote measurement devices for specific quantum correlations. We focus on the natural implementation depicted in Fig. 8 in which the close detectors have the same efficiency \(\eta_{A}=\eta_{B_{\mathrm{S}}}=\eta_{\mathrm{S}}\) and the distant detector has a lower efficiency \(\eta_{B_{\mathrm{L}}}=\eta_{\mathrm{L}}\leq\eta_{\mathrm{S}}\). We assume, as in the previous section, that the state produced by the source and the observables implemented by the nearby devices \(A\) and \(B_{\mathrm{S}}\) are \[\rho_{AB}=|\phi_{+}\rangle\!\langle\phi_{+}|\;,\quad A_{0}=X\,,\,A_{1}=Z\,, \quad B_{0\mathrm{S}}=\frac{X+Z}{\sqrt{2}},\quad B_{1\mathrm{S}}=\frac{X-Z}{ \sqrt{2}}\,, \tag{53}\] yielding (in the ideal case \(\eta_{\mathrm{S}}=1\)) a maximal CHSH violation in the short path. For the remote device \(B_{\mathrm{L}}\), we consider two families of strategies. Those, again as in the previous section, where \(B_{0\mathrm{L}}\) and \(B_{1\mathrm{L}}\) anti-commute, i.e., \[B_{0\mathrm{L}}=s_{\theta}X+c_{\theta}Z,\;B_{1\mathrm{L}}=c_{\theta}X-s_{ \theta}Z\quad\text{(anti-commuting $B_{\mathrm{L}}$)}\,, \tag{54}\] where \(c_{\theta}=\cos\theta\) and \(s_{\theta}=\sin\theta\) and \(\theta\in[0,\pi/4]\). These strategies include the CHSH strategy (\(\theta=\pi/4\)) and the BB84 strategy (\(\theta=0\)) as special cases. The second families of strategies are those where \(B_{0\mathrm{L}}\) and \(B_{1\mathrm{L}}\) correspond to arbitrary measurement in the \(X-Z\) plane, which we parametrize as \[B_{0\mathrm{L}}=s_{\theta_{0}}X+c_{\theta_{0}}Z,\;B_{1\mathrm{L}}=s_{\theta_{ 1}}X+c_{\theta_{1}}Z\quad\text{(general $B_{\mathrm{L}}$)}\,, \tag{55}\] where \(\theta_{0}=\theta_{+}-\theta_{-}\) and \(\theta_{1}=\theta_{+}+\theta_{-}\). Thus \(\theta_{-}\) corresponds to half the angle between the two measurement directions of \(B_{\mathrm{L}}\) in the Bloch sphere, and \(\theta_{+}\) corresponds to a global rotation. Without loss of generality (by relabelling the outcomes of the measurements), we can restrict the angles \(\theta_{-}\) to the interval \(]0,\pi/2[\) and \(\theta_{+}\) to the interval \(]0,\pi/4[\). The anti-commuting strategies (54) are a special case of the general strategies (55) with \(\theta_{-}=\pi/4\) and \(\theta_{+}=\theta+\pi/4\). We will now derive analytically the critical efficiencies \(\eta_{\mathrm{L}}\) necessary to exhibit long-range quantum correlations when the nearby measurement devices are perfect, i.e., \(\eta_{\mathrm{S}}=1\). In the next section, we will use numerical methods for the general case \(\eta_{\mathrm{S}}<1\). #### 5.2.1 Binned no-click outcomes Let us start by considering anticommuting measurements (54) for the remote device \(B_{\mathrm{L}}\) and that the no-click outcome \(\varnothing\) is binned with the outcome \(+1\). Then a necessary and sufficient condition for the long-path quantum correlations \(p(a,b|x,y,\mathsf{L})\) to be nonlocal according to the standard notion of nonlocality (i.e, ignoring the switch) is to violate the CHSH inequality \(\mathcal{C}_{\mathsf{L}}>2\). The quantum implementation given by eqs. 
(53) and (54) yields for \(\eta_{\mathsf{S}}=1\) the CHSH value \(\mathcal{C}_{\mathsf{L}}=2\eta_{\mathsf{L}}(c_{\theta}+s_{\theta})\), which violates the local bound when \[\eta_{\mathsf{L}}>\frac{1}{c_{\theta}+s_{\theta}}\,. \tag{56}\] The required efficiency thus goes from \(\eta_{\mathsf{L}}>1\) when \(\theta=0\) (i.e., BB84 correlations do not exhibit nonlocality) to \(\eta_{\mathsf{L}}>1/\sqrt{2}\approx 0.71\) when \(\theta=\pi/4\) (CHSH correlations). Let us now exploit the extra constraints following from the short-path correlations. Since \(\mathcal{C}_{\mathsf{S}}=2\sqrt{2}\), SRQ correlations satisfy the _LP_ inequality \(\mathcal{J}_{\mathsf{L}}^{\theta}\leq\sqrt{2}/c_{\theta}\) given in (26). The value of \(\mathcal{J}_{\mathsf{L}}^{\theta}\) for the anti-commuting implementations (54) is \(\mathcal{J}_{\mathsf{L}}^{\theta}=2\eta_{\mathsf{L}}/c_{\theta}\,\). Long-range quantum correlations are certified when this value exceeds the SRQ bound, i.e., when \[\eta_{\mathsf{L}}>1/\sqrt{2}\approx 0.71\qquad\text{for all }\theta\,. \tag{57}\] Thus, all the anti-commuting implementations (54) have the same tolerance to detection losses, which is moreover equal to the tolerance of the maximally loss-tolerant CHSH correlations. We can reduce the critical efficiency in a routed Bell experiment further by using the following _LP_ inequality \[J_{\mathsf{L}}^{\theta_{+},\theta_{-}}=(c_{\theta_{+}}+s_{\theta _{-}}s_{\theta_{+}})\langle A_{1}B_{0\mathsf{L}}\rangle+(c_{\theta_{+}}-s_{ \theta_{-}}s_{\theta_{+}})\langle A_{1}B_{1\mathsf{L}}\rangle+(s_{\theta_{+}} -s_{\theta_{-}}c_{\theta_{+}})\langle A_{0}B_{0\mathsf{L}}\rangle+\\ (s_{\theta_{+}}+s_{\theta_{-}}c_{\theta_{+}})\langle A_{0}B_{1 \mathsf{L}}\rangle+c_{\theta_{-}}(\langle B_{0\mathsf{L}}\rangle+\langle B_{ 1\mathsf{L}}\rangle)\leq 2, \tag{58}\] where the SRQ bound \(J_{\mathsf{L}}^{\theta_{+},\theta_{-}}\leq 2\) is derived assuming \(\mathcal{C}_{\mathsf{S}}=2\sqrt{2}\) (see Appendix B.2 for a proof). The value of \(J^{\theta+\pi/4,\pi/4}\) for the anti-commuting implementation (54) is \(\eta_{\mathsf{L}}+\sqrt{2}\) which violates the SRQ bound when \[\eta_{\mathsf{L}}>2-\sqrt{2}\approx 0.586\qquad\text{for all }\theta, \tag{59}\] which represents a significant improvement over (57). Let us now consider general projective measurements for \(B_{\mathsf{L}}\) of the form (55). We then find that the CHSH value is \(\mathcal{C}_{\mathsf{L}}=2\eta_{\mathsf{L}}c_{\theta_{+}}(c_{\theta_{-}}+s_{ \theta_{-}})\), which violates the standard local bound when \[\eta_{\mathsf{L}}>\frac{1}{c_{\theta_{+}}(c_{\theta_{-}}+s_{\theta_{-}})}\,. \tag{60}\] This critical efficiency is plotted in Fig. 9 as a function of \(\theta_{-}\) for different values of \(\theta_{+}\). If we instead use the SRQ inequality (58), we find that the value of \(J_{\mathsf{L}}^{\theta_{+},\theta_{-}}\) is \(2(c_{\theta_{-}}+\eta_{\mathsf{L}}s_{\theta_{-}}^{2})\), which violates (58) when \[\eta_{\mathsf{L}}>\frac{1}{1+c_{\theta_{-}}}\,. \tag{61}\] As \(\theta_{-}\to 0\), this critical efficiency approaches \(1/2\) (see Fig. 9), which saturates the universal lower bound for the detection threshold (50). We provide in Appendix D an explicit SRQ strategy that reproduces the correlations obtained using (55) when \(\eta=1/(1+c_{\theta_{-}})\), which proves that the bound (61) is tight. 
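Since, in the two-input/two-output scenario, locality of the binned correlations can be decided exactly by linear programming over the \(16\) deterministic strategies, the standard-Bell thresholds (56) and (60) can also be recovered numerically. Below is a sketch of such a check (Python with numpy and scipy; it assumes \(\eta_{\mathsf{S}}=1\) and binning as above, and the function names are ours); for the CHSH angles \(\theta=\pi/4\) the bisection should reproduce the threshold \(1/\sqrt{2}\) of (56).

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

def is_local(p):
    """Exact local-model membership test for a two-input/two-output behavior.
    p[(x, y, a, b)] with x, y in {0, 1} and a, b in {+1, -1}."""
    strategies = list(product([+1, -1], repeat=4))      # lambda = (a0, a1, b0, b1)
    keys = sorted(p)
    A_eq = np.array([[float(a == lam[x] and b == lam[2 + y]) for lam in strategies]
                     for (x, y, a, b) in keys])
    b_eq = np.array([p[k] for k in keys])
    res = linprog(c=np.zeros(len(strategies)), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.status == 0                              # feasible -> a local model exists

def binned_lp_behavior(theta, eta_L):
    """Binned long-path behavior for the anti-commuting strategy (53)-(54) with
    eta_S = 1: Alice's marginals vanish, Bob's binned marginal is 1 - eta_L and
    the correlators are eta_L times the ideal ones."""
    c, s = np.cos(theta), np.sin(theta)
    E = {(0, 0): s, (0, 1): c, (1, 0): c, (1, 1): -s}   # ideal <A_x B_yL>, cf. Eq. (38)
    return {(x, y, a, b): (1 + b * (1 - eta_L) + a * b * eta_L * E[(x, y)]) / 4
            for x, y in product(range(2), repeat=2) for a, b in product([+1, -1], repeat=2)}

theta = np.pi / 4                                       # CHSH angles
lo, hi = 0.5, 1.0                                       # lo: local, hi: nonlocal
while hi - lo > 1e-4:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if is_local(binned_lp_behavior(theta, mid)) else (lo, mid)
print(f"numerical threshold ~ {hi:.4f}, Eq. (56): {1 / (np.cos(theta) + np.sin(theta)):.4f}")
```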
Note that as \(\theta_{-}\) gets smaller, the target strategies become increasingly close to being jointly measurable, and the corresponding correlations close to SRQ correlations, since \(B_{\mathsf{0L}}\approx B_{\mathsf{1L}}\). Somewhat counterintuitively, the implementations that are most robust to detection losses are precisely the ones that are the least robust against white noise, as happens for Eberhard correlations in the case of standard nonlocality [4]. #### 5.2.2 No-click outcomes kept as an additional output Let us now consider the situation where the no-click output \(\varnothing\) of \(B_{\mathsf{L}}\) is kept as an additional outcome instead of binning it with \(+1\). In a standard Bell scenario with two inputs and two outputs per party, this does not improve the analysis, as follows from [37], i.e. the critical detection efficiencies for anticommuting and general measurements are still given by (56) and (60), respectively. In a routed Bell scenario, however, we do find an improvement by retaining non-detection events in the statistics. In fact, we find that for any anticommuting strategy from the family (54), long-range quantum correlations can be certified whenever \[\eta_{\mathsf{L}}>1/2\quad\text{for all $\theta$}. \tag{62}\] This is optimal, as follows from the bound (50). Thus, the maximally noise-robust anti-commuting measurements are also maximally loss-tolerant in a routed Bell experiment when non-detection events are included in the statistics. To derive this result, we make use of the following _LP_ inequality \[\tilde{\mathcal{J}}_{\mathsf{L}}^{\theta}=c_{\theta}\langle A_{1}B_{\mathsf{0L}}\rangle-s_{\theta}\langle A_{1}B_{\mathsf{1L}}\rangle+s_{\theta}\langle A_{0}B_{\mathsf{0L}}\rangle+c_{\theta}\langle A_{0}B_{\mathsf{1L}}\rangle-\frac{\langle T_{\mathsf{0L}}\rangle+\langle T_{\mathsf{1L}}\rangle}{2}\leq\frac{1}{2}, \tag{63}\] where \(T_{y\mathsf{L}}=M_{b=+1|y\mathsf{L}}+M_{b=-1|y\mathsf{L}}\), and the SRQ bound \(\tilde{\mathcal{J}}_{\mathsf{L}}^{\theta}\leq 1/2\) is derived assuming \(\mathcal{C}_{\mathsf{S}}=2\sqrt{2}\) (see Appendix B.3 for proof). The value of \(\tilde{\mathcal{J}}_{\mathsf{L}}^{\theta}\) for the implementation (54) is \(\tilde{\mathcal{J}}_{\mathsf{L}}^{\theta}=\eta_{\mathsf{L}}\), which violates (63) when (62) holds. The results obtained in this subsection are summarized in Table 1. ### Numerical detection thresholds for implementations based on a lossy short-path CHSH test We now analyze numerically the setup considered in the previous section in the case where \(\eta_{\mathsf{S}}<1\), i.e., when the short-path CHSH violation is no longer maximal. We will focus on the anticommuting strategies given by (53) in the cases \(\theta=0\), corresponding to BB84 correlations, and \(\theta=\pi/4\), corresponding to CHSH correlations. Figure 9: Critical efficiency \(\eta_{\mathsf{L}}\) for the quantum implementations (55). The solid lines correspond to the critical efficiency in a standard Bell experiment, while the dashed lines correspond to the critical efficiency in a routed Bell experiment. Values for anti-commuting measurements, corresponding to \(\theta_{-}=\pi/4\), are indicated on the graph. Given a value \(\eta_{\mathsf{S}}\) for the short-path detector efficiency, we ask what is the maximal value \(\eta_{\mathsf{L}}\) for the long-path detector efficiency, so that the corresponding correlations \(p^{\vec{\eta}}\) can be reproduced by an SRQ model, i.e., we aim to solve \[\max\;\eta_{\mathsf{L}}\quad\text{s.t.}\ p^{\vec{\eta}}\in\mathcal{Q}_{\text{SR}}\,.
\tag{64}\] Replacing the set \(\mathcal{Q}_{\text{SR}}\) by its \(n\)th-level NPA relaxation \(\mathcal{Q}_{\text{SR}}^{n}\), we can obtain an upper-bound on the critical long-path detection efficiency through \[\max\;\eta_{\mathsf{L}}\quad\text{s.t.}\ p^{\vec{\eta}}\in\mathcal{Q}_{\text{SR}}^{n}\,. \tag{65}\] Since the entries of the correlation vector \(p^{\vec{\eta}}\) depend linearly on \(\eta_{\mathsf{L}}\) (once \(\eta_{\mathsf{S}}\) is fixed), the above problem is an SDP and can be solved numerically using standard packages. The results for an NPA level intermediate between the 3rd and the 4th are plotted in Fig. 10 (see Appendix E for details). In the case of CHSH correlations, we find that routed Bell experiments provide a significant improvement over standard Bell tests, especially for high-quality short-range tests. For a standard CHSH test, the critical efficiency is \(\eta_{\mathsf{L}}=\frac{\eta_{\mathsf{S}}}{(\sqrt{2}+1)\eta_{\mathsf{S}}-1}\). This value is significantly reduced for routed Bell tests for short-path efficiencies above \(\eta_{\mathsf{S}}\sim 96.5\%\) in the binning case and above \(\eta_{\mathsf{S}}\sim 91\%\) in the no-binning case. The BB84 correlations do not exhibit any nonlocality in standard Bell experiments, but, as already noted in the previous sections, do exhibit long-range quantum nonlocality in routed Bell experiments. Interestingly, for high values of \(\eta_{\mathsf{S}}\), BB84 correlations exhibit nonlocality for lower values of \(\eta_{\mathsf{L}}\) than CHSH correlations. The effect is much more pronounced in the no-binning case, but it is also present in the binning case (although it is too small to be visible in the figure). We note that the critical efficiencies for \(\eta_{\mathsf{L}}\) obtained by solving (65) are not necessarily optimal, but only represent an upper-bound on the lowest admissible \(\eta_{\mathsf{L}}\), as we rely on an NPA relaxation at a finite level to compute them. We therefore also used heuristic see-saw algorithms to obtain lower-bounds on the critical efficiencies. Given efficiencies \(\eta_{\mathsf{S}}\) and \(\eta_{\mathsf{L}}\), we search for explicit SRQ strategies that reproduce the corresponding correlations \(p^{\vec{\eta}}\). If we are able to do so, the given value \(\eta_{\mathsf{L}}\) represents a lower-bound on the critical efficiency necessary to exhibit long-range nonlocality. In the binning case, the lower-bounds obtained through our heuristic search over explicit quantum strategies are at most \(1\%\) below the upper-bounds obtained through the NPA method and thus not represented in the figure. In the no-binning case, a small gap exists between the upper-bounds and lower-bounds, which are both plotted in Fig. 10. Note that the existence of this gap does not affect our finding that BB84 correlations are more robust than CHSH correlations to losses in the long-path for high values of \(\eta_{\mathsf{S}}\). Another important experimental consideration for Bell test implementations is noise. Here, we focus on the simplest case of local white noise: we assume that each party has local visibility \(\nu_{B_{z}}=\nu_{A}\equiv\nu\). This can be modelled by replacing the state \(|\phi_{+}\rangle\) in the ideal implementation by \[\phi_{+}^{\nu}=\nu^{2}|\phi_{+}\rangle\langle\phi_{+}|+(1-\nu^{2})\frac{\mathbf{1}}{4}\,. \tag{66}\] We repeat the analysis for the no-binning case of the previous section for different local visibilities \(\nu\). The results are shown in Fig. 11.
For CHSH we find numerically that the switch becomes useless \begin{table} \begin{tabular}{l l l l} \hline \hline Strategies & standard Bell (bin. or not bin.) & routed Bell, bin. & routed Bell, not bin. \\ \hline anti-commuting & \(0.71\lesssim\frac{1}{c_{\theta}+s_{\theta}}\leq 1\) & \(2-\sqrt{2}\approx 0.586\) & \(1/2\) \\ general & \(0.71\lesssim\frac{1}{c_{\theta_{+}}(c_{\theta_{-}}+s_{\theta_{-}})}\leq 1\) & \(0.5\leq\frac{1}{1+c_{\theta_{-}}}<1\) & \(1/2^{*}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Critical efficiency \(\eta_{\mathsf{L}}\) for the strategies given by eqs. (53), and (54) (anti-commuting) or (55) (general) in a standard Bell test or a routed Bell test in the case where the no-click outcome is binned or not. All values are derived analytically and proven to be tight, except for the value in the bottom-right corner which is conjectured and supported by numerical evidence (for each fixed value of \(\theta\), the critical efficiency can be determined by a SDP; we computed it for 1000 values of \(\theta\) in the domain). for local visibilities below \(\sim 0.940\). The BB84 correlations become local, even for perfect efficiency \(\eta_{\sf{B}}=1\) for local visibilities below \(\sim 0.960\). ## 6 Discussion We presented a systematic analysis of routed Bell experiments, an idea recently introduced by CVP [10] with the goal of extending the distance over which nonlocality can be demonstrated in lossy experiments. We argued that certifying genuine nonlocality between the remote parties in routed experiments requires ruling out a more general class of 'classical' models than those considered by CVP, which we termed short-range quantum models. Indeed, even if the behavior of the remote device might not be predetermined by classical variables set at the source, this does not imply that entanglement or quantum information needs to be distributed to it. After formulating the notions of short-range and genuine long-range correlations in routed experiments, we showed how these different correlation sets can be characterized through standard semidefinite programming hierarchies. With these definitions, we find that a short-path CHSH test does not lower the requirements for witnessing long-range nonlocality for a long-path CHSH test. However, borrowing intuition from self-testing and joint-measurability, we identified other long-path tests that do exhibit a'short-path enhancement', i.e. the requirements for observing a long-path quantum advantage are weakened. We then applied this analysis to the original question, namely whether Figure 10: Upper and lower bounds on the critical efficiency \(\eta_{\sf{L}}\) for the distant device \(B_{\sf{L}}\) as a function of the efficiency \(\eta_{\sf{S}}\) for the devices \(A\) and \(B_{\sf{S}}\) closer to the source, for the CHSH correlations (black) and BB84 correlations (blue). The red curve correspond to the case of CHSH correlations in a standard Bell scenario, i.e., ignoring the switch (BB84 correlations in a standard Bell scenario are always local). Dotted (full) lines correspond to NPA upper bounds where the parties bin (keep) their no-click outcomes. The β€˜\(+\)’ values correspond to lower bounds where the parties keep their no-click outcomes. Lower-bounds in the case of binning are at most \(1\%\) from the upper-bounds, hence not depicted. 
The upper bounds were obtained at level \(`3+AAAA+B_{\sf{B}}B_{\sf{B}}B_{\sf{B}}+B_{\sf{L}}B_{\sf{L}}B_{\sf{L}}B_{\sf{L} }+AAB_{\sf{B}}B_{\sf{S}}+AAB_{\sf{L}}B_{\sf{L}}+B_{\sf{S}}B_{\sf{B}}B_{\sf{L} }B_{\sf{L}}\)’ of the NPA hierarchy, see Appendix E for details. We plot all curves until the point \(\eta_{\sf{L}}=\eta_{\sf{S}}\). these enhancements also imply improved detection efficiency requirements for nonlocality. With a combination of analytical and numerical tools, we showed that this is indeed the case, albeit the improvement is significantly lower than suggested by CVP's analysis. In particular, we showed that the efficiencies in routed Bell experiments are still subject to the same universal bounds valid in standard Bell experiments [6, 35]. When both parties have two measurement inputs, these bounds are tight and can already be saturated in standard Bell experiments. As such, the best critical efficiency for routed experiments cannot be lower than the best (i.e., over all quantum strategies) critical efficiency for standard Bell experiments. Whether this is also the case for experiments with more inputs remains an open question. Routed Bell experiments nevertheless do offer significant advantages. Indeed, for specific correlations, the introduction of a switch can be used to increase the robustness to losses and noise. For example, strategies exploiting anti-commuting Pauli measurements on a singlet state have single-sided critical efficiency that is at best \(1/\sqrt{2}\sim 0.71\) in standard Bell experiments, but which is lowered to \(1/2\) in routed experiments. This is the same single-sided critical efficiency as in the Eberhard scheme [4], but with a much simpler and more noise robust experimental setup. Furthermore, this bound can also be reached using the BB84 strategy, which involves both distant parties measuring \(Z,X\) on the singlet. Interestingly, the BB84 correlations are always _local_ in the absence of the switch, i.e. the switch can be used to 'activate' nonlocality. The idea that performing additional tests in a Bell experiment can enlarge the set of correlations that can be certified in a device-independent way is similar in spirit to the results of [38] based on quantum networks, although routed Bell experiments do not require, contrarily to [38], additional sources of entanglement or performing joint measurements. The present work opens up several interesting directions for future research. First, it is natural to consider the symmetric extension of routed Bell experiments, where an additional \(SP\) test is placed on both sides of the source. Our methods can be straightforwardly adapted to this situation. Clearly, this enables a further increase of the total distance over which nonlocality can be certified, but the extent of this increase will require further investigation. Second, it would be interesting to explore the practical applications of routed Bell experiments to cryptography and, in particular, DIQKD. An improvement in the key rate and loss resistance with respect to conventional DIQKD schemes is expected. Note though that the resulting performance cannot be better than analogous prepare-and-measure semi-DI schemes with a trusted preparation device. This is because at best the addition of the measurement device \(B_{\mathsf{S}}\) may lead to an exact certification of the entangled source and of Alice's device \(A\), effectively turning them in a trusted remote preparation device. 
It follows that simple fully DI schemes based on CHSH Figure 11: Upper bounds on the critical efficiency \(\eta_{\mathsf{L}}\) for the distant device \(B_{\mathsf{L}}\) as a function of the efficiency \(\eta_{\mathsf{R}}\) for the devices \(A\) and \(B_{\mathsf{S}}\) closer to the source for different local visibilities \(v\), when all parties keep their no-click outcomes. These bounds were obtained at the same NPA level as Figure 10. For CHSH correlations the critical efficiencies for standard Bell tests (without the switch) are plotted in red. or BB84 correlations will have key rates necessarily lower or equal than CHSH or BB84 one-sided DI prepare-and-measure protocols [39, 40]. In upcoming work, we provide a detailed analysis of device-independent protocols based on routed Bell experiments [41]. Finally, it would be interesting to investigate how the idea of routed Bell experiments generalize to the more general setting of no-signalling, but supra-quantum, correlations. The analogues of CVP correlations in this context correspond to partially deterministic correlations introduced in [42, 43, 44]. It would be interesting to see if it is possible and how to define the analogues of short-range quantum correlations, and whether'short-path enhancements' of long-path tests are also possible in the no-signalling setting. ## Acknowledgements We thank Anubhav Chaturvedi, Giuseppe Viola, Marcin Pawlowski, Nicolas Gisin and Antonio Acin for useful discussions. We acknowledge funding from the QuantERA II Programme that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 101017733 and the F.R.S-FNRS Pint-Multi programme under Grant Agreement R.8014.21, from the European Union's Horizon Europe research and innovation programme under the project "Quantum Security Networks Partnership"(QSNP, grant agreement No 101114043), from the F.R.S-FNRS through the PDR T.0171.22, from the FWO and F.R.S.-FNRS under the Excellence of Science (EOS) programme project 40007526, from the FWO through the BeQuNet SBO project S008323N, from the Belgian Federal Science Policy through the contract RT/22/BE-QCI and the EU "BE-QCI" program. S.P. is a Research Director of the Fonds de la Recherche Scientifique - FNRS. J.P. is a FRIA grantee of the F.R.S.-FNRS. Funded by the European Union. Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union. The European Union cannot be held responsible for them. An early version of this work was presented at the _Causal Inference & Quantum Foundations Workshop_[https://pirsa.org/23040123](https://pirsa.org/23040123). The code used in this work is available on GitHub at [https://github.com/eplobo/RoutedBell](https://github.com/eplobo/RoutedBell). ## Appendix A Local and quantum bounds on \(\mathcal{J}^{\theta}\) In the absense of a switch, \(\mathcal{J}^{\theta}\) is a regular Bell expression and we can derive its local and quantum bounds using standard methods. The bounds for \(\theta\in[0,\pi/4]\) are given by (27) and (28), i.e., \[\mathcal{J}^{\theta}=t_{\theta}\langle A_{0}B_{0}\rangle+\langle A _{0}B_{1}\rangle+\langle A_{1}B_{0}\rangle-t_{\theta}\langle A_{1}B_{1}\rangle \leq 2\qquad\text{(local bound)}\, \tag{67}\] \[\leq 2/c_{\theta}\qquad\text{(quantum bound)}\,. 
\tag{68}\] The local bound can be proved quite easily by explicitly checking the value of \(\mathcal{J}^{\theta}\) for all possible local deterministic strategies, which corresponds to checking all possible assignments of \(\pm 1\) to the observables \(A_{x}\) and \(B_{y}\). It is saturated by assigning \(A_{x}=B_{y}=+1\). To prove the quantum bound, on the other hand, takes a little more work. Let us denote the Bell operator corresponding to \(\mathcal{J}^{\theta}\) by \(\mathcal{J}^{\theta}\) \[\mathcal{J}^{\theta}=t_{\theta}A_{0}B_{0}+A_{0}B_{1}+A_{1}B_{0}-t_{\theta}A_{ 1}B_{1} \tag{69}\] and assume without loss of generality (since we do not fix the dimension of the Hilbert space) that all the measurements are projective. Then, using that the observables \(A_{x},B_{y}\) are unitary operators which square to the identity, it is easily verified that \[\left(c_{\theta}\mathcal{J}^{\theta}\right)^{2}=2\mathbf{1}+A_{1}A_{0}(c_{ \theta}^{2}B_{0}B_{1}-s_{\theta}^{2}B_{1}B_{0})+A_{0}A_{1}(c_{\theta}^{2}B_{1} B_{0}-s_{\theta}^{2}B_{0}B_{1}). \tag{70}\] Denoting the the spectral norm by \(\left\|\cdot\right\|\), and using \(\left\|(\cdot)^{2}\right\|=\left\|\cdot\right\|^{2}\), we obtain \[c_{\theta}^{2}\big{\|}\mathcal{I}^{\theta}\big{\|}^{2}=\left\| \left(c_{\theta}\mathcal{I}^{\theta}\right)^{2}\right\| \leq 2+\big{\|}A_{1}A_{0}(c_{\theta}^{2}B_{0}B_{1}-s_{\theta}^{2}B_ {1}B_{0})\big{\|}+\big{\|}A_{0}A_{1}(c_{\theta}^{2}B_{1}B_{0}-s_{\theta}^{2}B_ {0}B_{1})\big{\|},\] \[\leq 2+\|A_{1}A_{0}\|\cdot\big{\|}c_{\theta}^{2}B_{0}B_{1}-s_{ \theta}^{2}B_{1}B_{0}\big{\|}+\|A_{0}A_{1}\|\cdot\big{\|}c_{\theta}^{2}B_{1}B_ {0}-s_{\theta}^{2}B_{0}B_{1}\big{\|}, \tag{71}\] where we have used that fact that for any two operators \(P,Q\), \(\left\|P+Q\right\|\leq\left\|P\right\|+\left\|Q\right\|\), and \(\left\|PQ\right\|\leq\left\|P\right\|\left\|Q\right\|\). Further, since the observables are dichotomic, we have \(\|A_{0}A_{1}\|\leq\|A_{0}|\|\|A_{1}\|\leq 1\), with similar bounds being valid for other pairs of observables. Therefore, \[c_{\theta}^{2}\big{\|}\mathcal{I}^{\theta}\big{\|}^{2} \leq 2+\big{\|}c_{\theta}^{2}B_{0}B_{1}\big{\|}+\big{\|}s_{\theta}^ {2}B_{1}B_{0}\big{\|}+\big{\|}c_{\theta}^{2}B_{1}B_{0}\big{\|}+\big{\|}s_{ \theta}^{2}B_{0}B_{1}\big{\|}\] \[\leq 2+c_{\theta}^{2}+s_{\theta}^{2}+c_{\theta}^{2}+s_{\theta}^{2 }=4. \tag{72}\] Dividing by \(c_{\theta}^{2}\) and taking the square root implies (68). ## Appendix B SRQ bounds through joint-measurability inequalities In this section we prove tight SRQ bounds on various _LP_ expressions of Sections 4 and 5. These bounds are valid when the _SP_ CHSH test is maximal and are then equivalent to joint-measurability (JM) inequalities. ### Bound on \(\mathcal{J}_{\text{L}}^{\theta}\) The bound (30) for \(\mathcal{J}_{\text{L}}^{\theta}\) follows, as shown in Proposition 1, from the following JM inequality \[\frac{1}{2}\left[\text{Tr}(XB_{\text{1L}})+\text{Tr}(ZB_{\text{0L}})\right] \leq\sqrt{2}\,, \tag{73}\] valid for all qubit observables \(B_{\text{0L}},B_{\text{1L}}\) that are jointly-measurable. This JM inequality was already introduced in [18]. We prove it below for completeness. Proof.: Restricting \(B_{\text{L}}\) to be jointly-measurable means that \(B_{\text{0L}}\) and \(B_{\text{1L}}\) can be written as the marginals of a single parent POVM \(\{N_{\beta_{0}\beta_{1}}\}\), as shown in (25). 
i.e., \[B_{\text{0L}}=M_{0|0}-M_{1|0}=N_{00}+N_{01}-N_{10}-N_{11}\,\] \[B_{\text{1L}}=M_{0|1}-M_{1|1}=N_{00}-N_{01}+N_{10}-N_{11}\, \tag{74}\] where the outcome set \(\{\pm 1\}\) has been relabeled \(\{0,1\}\) for convenience. By rewriting (73) in terms of \(N_{\beta_{0}\beta_{1}}\), we get \[\frac{1}{2}\sum_{\beta_{0}\beta_{1}}\text{Tr}(C_{\beta_{0}\beta_{1}}N_{\beta_{0}\beta_{1}})\leq\sqrt{2}, \tag{75}\] where, \[C_{\beta_{0}\beta_{1}}=(-1)^{\beta_{0}}Z+(-1)^{\beta_{1}}X. \tag{76}\] Proving (73) is now equivalent to proving (75), with the restriction that the operators \(N_{\beta_{0}\beta_{1}}\) must be a valid POVM, i.e., \(N_{\beta_{0}\beta_{1}}\succeq 0\) and \(\sum_{\beta_{0}\beta_{1}}N_{\beta_{0}\beta_{1}}=\mathbf{1}\). Maximizing the LHS of (75) under these constraints is then an instance of semidefinite programming (SDP). The dual formulation of the SDP provides a certificate for the maximum value, which can be used to construct an analytic proof of (75). We provide such an analytic proof below, which we constructed from the dual SDP. Note first the following matrix inequalities, which can be easily verified by explicitly solving for the eigenvalues: \[C_{\beta_{0}\beta_{1}}\preceq\sqrt{2}\mathbf{1}\qquad\forall\beta_{0},\beta_{1}. \tag{77}\] We then get \[\frac{1}{2}\sum_{\beta_{0}\beta_{1}}\operatorname{Tr}(C_{\beta_{0}\beta_{1}}N_{\beta_{0}\beta_{1}}) \leq\frac{\sqrt{2}}{2}\sum_{\beta_{0}\beta_{1}}\operatorname{Tr}(N_{\beta_{0}\beta_{1}})\] \[=\frac{\sqrt{2}}{2}\operatorname{Tr}(\mathbf{1})=\sqrt{2}\,, \tag{78}\] where in the first line we used (77) and \(N_{\beta_{0}\beta_{1}}\succeq 0\), and to get the second line we used that \(\sum_{\beta_{0}\beta_{1}}N_{\beta_{0}\beta_{1}}=\mathbf{1}\). A joint-measurement strategy that saturates the inequality (73) is given by \(B_{0\mathtt{L}}=B_{1\mathtt{L}}=(Z+X)/\sqrt{2}\), which shows that the bound in (73) is tight. If \(B_{\mathtt{L}}\) is not restricted to be jointly-measurable, then the bound in (73) can be improved to \(2\) (which is the algebraic maximal value) using \(B_{0\mathtt{L}}=Z\), \(B_{1\mathtt{L}}=X\).

### Bounds on \(\mathcal{J}_{\mathtt{L}}^{\theta_{+},\theta_{-}}\)

We now derive the _LP_ inequality (58) for \(J_{\mathtt{L}}^{\theta_{+},\theta_{-}}\). Let us begin by considering the following joint-measurability inequality \[\frac{1}{2}\left[\operatorname{Tr}(ZB_{0\mathtt{L}})+\operatorname{Tr}(ZB_{1\mathtt{L}})-s_{\theta_{-}}\operatorname{Tr}(XB_{0\mathtt{L}})+s_{\theta_{-}}\operatorname{Tr}(XB_{1\mathtt{L}})+c_{\theta_{-}}\operatorname{Tr}(B_{0\mathtt{L}}+B_{1\mathtt{L}})\right]\leq 2, \tag{79}\] valid when \(B_{0\mathtt{L}}\) and \(B_{1\mathtt{L}}\) are jointly-measurable and \(\theta_{-}\in]0,\pi/2[\).

Proof of (79).: The proof is almost identical to that of Appendix B.1. We write (79) as \[\sum_{\beta_{0}\beta_{1}}\operatorname{Tr}(C_{\beta_{0}\beta_{1}}(\theta_{-})N_{\beta_{0}\beta_{1}})\leq 2, \tag{80}\] where, \[C_{\beta_{0}\beta_{1}}(\theta_{-})=\delta_{\beta_{0}\beta_{1}}(-1)^{\beta_{0}}(Z+c_{\theta_{-}}\mathbf{1})+(1-\delta_{\beta_{0}\beta_{1}})(-1)^{\beta_{1}}s_{\theta_{-}}X. \tag{81}\] This expression plays a role analogous to (75) and, as in Appendix B.1, the bound now follows from the following matrix inequalities, which can easily be verified: \[C_{\beta_{0}\beta_{1}}(\theta_{-})\preceq\mathbf{1}+c_{\theta_{-}}Z\qquad\forall\ \beta_{0},\beta_{1},\theta_{-}\in]0,\frac{\pi}{2}[. \tag{82}\] We can saturate the inequality (79) by choosing \(B_{0\mathtt{L}}=B_{1\mathtt{L}}=Z\).
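The primal SDP mentioned below (75) can also be solved directly with off-the-shelf tools, which gives a quick numerical cross-check of these joint-measurability bounds. The sketch below is our own illustration (variable names are ours), written with CVXPY as in Appendix E and relying on whatever SDP-capable solver ships with it; it maximizes the left-hand side of (75) over parent POVMs and returns \(\sqrt{2}\). The same template applies to (80) by swapping in the operators (81).

```python
import numpy as np
import cvxpy as cp

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Parent POVM elements N_{beta0 beta1}; real symmetric variables suffice here,
# since the optimum is attained by real operators.
N = {(b0, b1): cp.Variable((2, 2), symmetric=True) for b0 in (0, 1) for b1 in (0, 1)}

constraints = [Nk >> 0 for Nk in N.values()]       # positivity
constraints += [sum(N.values()) == np.eye(2)]      # completeness

# Objective: (1/2) sum_{beta0 beta1} Tr(C_{beta0 beta1} N_{beta0 beta1}), with C as in (76)
objective = 0
for (b0, b1), Nk in N.items():
    C = (-1) ** b0 * Z + (-1) ** b1 * X
    objective = objective + 0.5 * cp.trace(C @ Nk)

prob = cp.Problem(cp.Maximize(objective), constraints)
prob.solve()
print(prob.value)  # ~1.41421, i.e. the bound sqrt(2) of (75)
```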
It is now easy to prove the _LP_ inequality (58). First note that we can view (79) as part of a broader family of joint-measurability inequalities \[\frac{1}{2}[(c_{\theta_{+}}+s_{\theta_{-}}s_{\theta_{+}}) \operatorname{Tr}(ZB_{0\mathtt{L}})+(c_{\theta_{+}}-s_{\theta_{-}}s_{\theta_{ +}})\operatorname{Tr}(ZB_{1\mathtt{L}})+(s_{\theta_{+}}-s_{\theta_{-}}c_{ \theta_{+}})\operatorname{Tr}(XB_{0\mathtt{L}})+\\ (s_{\theta_{+}}+s_{\theta_{-}}c_{\theta_{+}})\operatorname{Tr}(XB _{1\mathtt{L}})+c_{\theta_{-}}\operatorname{Tr}(B_{0\mathtt{L}}+B_{1\mathtt{ L}})]\leq 2, \tag{83}\] that are obtained by rotating the \(X\) - \(Z\) plane by an angle \(\theta_{+}\) around the \(Y\) axis in the Bloch sphere. Such rotations correspond to a change of basis and do not change the bound. Using the equivalence shown in (31) and valid when \(\mathcal{C}_{\mathtt{S}}=2\sqrt{2}\), we can interpret the family of joint-measurability inequalities above as the _LP_ inequalities (58) Note that if the \(B_{y\mathtt{L}}\) observables are not restricted to be jointly-measurable, then the bound in (79) can be improved to \(2\sqrt{1+s_{\theta_{-}}^{2}}\), i.e., \[\frac{1}{2}\left[\operatorname{Tr}(ZB_{0\mathtt{L}})+\operatorname{Tr}(ZB_{1 \mathtt{L}})-s_{\theta_{-}}\operatorname{Tr}(XB_{0\mathtt{L}})+s_{\theta_{-}} \operatorname{Tr}(XB_{1\mathtt{L}})+c_{\theta_{-}}\operatorname{Tr}(B_{0 \mathtt{L}}+B_{1\mathtt{L}})\right]\leq 2\sqrt{1+s_{\theta_{-}}^{2}}. \tag{84}\] Proof.: To prove this, we follow a similar approach as before, but without the restriction to joint measurements. Let us re-express the above inequality as \[\frac{1}{2}\sum_{b,y}\operatorname{Tr}\bigl{(}C_{b,y}(\theta_{-})M_{b|y\text{L} }\bigr{)}\leq\ 2\sqrt{1+s_{\theta_{-}}^{2}}, \tag{85}\] where, \[C_{b,y}(\theta_{-})=(-1)^{b}(Z+c_{\theta_{-}}\mathbf{1})+(-1)^{b+y+1}s_{ \theta_{-}}X. \tag{86}\] Now, consider the following matrix inequalities, which can be easily verified \[C_{b,0}(\theta_{-}) \preceq\lambda_{0}\mathbf{1}+\lambda_{1}X+\lambda_{3}Z\,,\quad \forall b,\] \[C_{b,1}(\theta_{-}) \preceq\lambda_{0}\mathbf{1}-\lambda_{1}X+\lambda_{3}Z\,,\quad \forall b,\] (87a) where, \[\lambda_{0}=\sqrt{1+s_{\theta_{-}}^{2}},\quad\lambda_{1}=-s_{\theta_{-}}+s_{ \theta_{-}}\sqrt{\frac{2-2c_{\theta_{-}}\sqrt{1+s_{\theta_{-}}^{2}}}{1+s_{ \theta_{-}}^{2}}},\quad\lambda_{3}=\frac{c_{\theta_{-}}}{\sqrt{1+s_{\theta_{ -}}^{2}}}. \tag{88}\] These inequalities imply \[\frac{1}{2}\sum_{b,y}\operatorname{Tr}\bigl{(}C_{b,y}(\theta_{-} )M_{b|y\text{L}}\bigr{)}\leq\frac{1}{2}\sum_{b,y}\operatorname{Tr}\bigl{(}( \lambda_{0}\mathbf{1}+(-1)^{y}\lambda_{1}X+\lambda_{3}Z)M_{b|y\text{L}}\bigr{)}\] \[\qquad\qquad\qquad=\frac{1}{2}\sum_{y}\operatorname{Tr}((\lambda _{0}\mathbf{1}+(-1)^{y}\lambda_{1}X+\lambda_{3}Z))=2\lambda_{0}=2\sqrt{1+s_{ \theta_{-}}^{2}} \tag{89}\] A strategy which saturates the inequality (84) is given by, \(B_{0\text{L}}+B_{1\text{L}}=\left(2/\sqrt{1+s_{\theta_{-}}^{2}}\right)Z\), \(B_{1\text{L}}-B_{0\text{L}}=\left(2s_{\theta_{-}}/\sqrt{1+s_{\theta_{-}}^{2}} \right)X\). Finally, the local bound for \(J^{\theta_{+},\theta_{-}}\) in a standard Bell scenario (without the switch) is given by, \[J^{\theta_{+},\theta_{-}}\leq 2(c_{\theta_{+}}+s_{\theta_{+}}+c_{\theta_{-}}) \text{ (local bound)}, \tag{90}\] where we have restricted \(\theta_{-}\) to the interval \(]0,\pi/2[\), and \(\theta_{+}\) to the interval \([0,\pi/4]\). It can be saturated by \(A\) and \(B\) giving the output \(+1\) irrespective of their inputs. 
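As a sanity check on the non-jointly-measurable bound (84), the saturating observables quoted after its proof can be verified numerically. The snippet below is our own plain-NumPy check: it evaluates the left-hand side of (84) for those observables and confirms that it equals \(2\sqrt{1+s_{\theta_{-}}^{2}}\) for every \(\theta_{-}\in]0,\pi/2[\) sampled.

```python
import numpy as np

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def lhs_84(B0, B1, theta_minus):
    s, c = np.sin(theta_minus), np.cos(theta_minus)
    return 0.5 * (np.trace(Z @ B0) + np.trace(Z @ B1)
                  - s * np.trace(X @ B0) + s * np.trace(X @ B1)
                  + c * np.trace(B0 + B1))

for theta_minus in np.linspace(0.1, np.pi / 2 - 0.1, 5):
    s = np.sin(theta_minus)
    norm = np.sqrt(1 + s ** 2)
    # Observables solving B_0L + B_1L = (2/norm) Z and B_1L - B_0L = (2 s/norm) X
    B0 = (Z - s * X) / norm
    B1 = (Z + s * X) / norm
    print(np.isclose(lhs_84(B0, B1, theta_minus), 2 * norm))  # True: (84) is saturated
```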
When \(s_{\theta_{+}}-s_{\theta_{-}}c_{\theta_{+}}\geq 0\), the value in (90) is also the algebraic maximum of \(J^{\theta_{+},\theta_{-}}\), so it cannot be used in a standard Bell scenario to certify nonlocality. ### Bounds on \(\tilde{\mathcal{J}}_{\text{L}}^{\theta}\) We now prove the \(LP\) inequality (63) for \(\tilde{\mathcal{J}}_{\text{L}}^{\theta}\), which is slightly more general than the ones above since there are three outcomes for \(B_{\text{L}}\). Let us begin by considering the following joint-measurability inequality \[\frac{1}{2}\operatorname{Tr}\left[ZB_{0\text{L}}+XB_{1\text{L}}-\frac{(T_{0 \text{L}}+T_{1\text{L}})}{2}\right]\leq\frac{1}{2}, \tag{91}\] where, \(B_{y\text{L}}=M_{b=+1|y\text{L}}-M_{b=-1|y\text{L}}\), \(T_{y\text{L}}=M_{b=+1|y\text{L}}+M_{b=-1|y\text{L}}\). Proof.: The proof is similar to Sections B.1 and B.2 except that there are three outcomes \(\{+1,-1,\emptyset\}\) for \(B_{\text{L}}\) in this scenario, which we relabel \(\{0,1,2\}\) for convenience. Hence, the joint-measurement \(\{N_{\partial_{\emptyset}\bar{\jmath}_{1}}\}\) of \(B_{\text{L}}\) has, in general, nine outcomes. In terms of this joint-measurement, \(B_{\text{L}}\)'s observables are given by \[B_{0\text{L}} =M_{0|0}-M_{1|0}=N_{00}+N_{01}+N_{02}-N_{10}-N_{11}-N_{12},\] \[B_{1\text{L}} =M_{0|1}-M_{1|1}=N_{00}+N_{10}+N_{20}-N_{01}-N_{11}-N_{21},\] \[T_{0\text{L}} =M_{0|0}+M_{1|0}=N_{00}+N_{01}+N_{02}+N_{10}+N_{11}+N_{12},\] \[T_{1\text{L}} =M_{0|1}+M_{1|1}=N_{00}+N_{10}+N_{20}+N_{01}+N_{11}+N_{21}, \tag{92}\] which lets us rewrite (91) in terms of \(N_{\beta_{0}\beta_{1}}\) as \[\frac{1}{2}\sum_{\beta_{0}\beta_{1}}\operatorname{Tr}(C_{\beta_{0}\beta_{1}}N_{ \beta_{0}\beta_{1}})\leq\frac{1}{2}, \tag{93}\] where, \[C_{\beta_{0}\beta_{1}}=(1-\delta_{\beta_{0},2})(-1)^{\beta_{0}}Z+(1-\delta_{ \beta_{1},2})(-1)^{\beta_{1}}X-(2-\delta_{\beta_{0},2}-\delta_{\beta_{1},2}) \frac{1}{2}. \tag{94}\] It is now straightforward to verify the following matrix inequalities \[C_{\beta_{0}\beta_{1}}\preceq\frac{1}{2}\quad\forall\ \beta_{0},\beta_{1}\,, \tag{95}\] using which (91) can be proved as before. A joint-measurement strategy that saturates the inequality (91) is given by \(M_{0|0}=\frac{1}{4}(\mathbf{1}+Z),\ M_{1|0}=\frac{1}{4}(\mathbf{1}-Z),\ M_{0|1}= \frac{1}{4}(\mathbf{1}+X),\ M_{1|1}=\frac{1}{4}(\mathbf{1}-X)\), which can be shown to be jointly-measurable. It is now straightforward to prove the SRQ bound (63). As before, we obtain a family of joint-measurability inequalities from (91) by rotating the \(X-Z\) plane by an angle \(\theta\) along the \(Y\) axis in the Bloch sphere, given by \[\frac{1}{2}\operatorname{Tr}\left[c_{\theta}ZB_{0\mathbb{L}}-s_{\theta}ZB_{1 \mathbb{L}}+s_{\theta}XB_{0\mathbb{L}}+c_{\theta}XB_{1\mathbb{L}}-\frac{(T_{0 \mathbb{L}}+T_{1\mathbb{L}})}{2}\right]\leq\frac{1}{2}, \tag{96}\] which, using the equivalence given in (31) and valid when \(\mathcal{C}_{\mathsf{S}}=2\sqrt{2}\), gives the \(LP\) inequality (63). If \(B_{\mathbb{L}}\) is not restricted to be jointly-measurable, but can perform a general quantum measurement, then the bound in (91) can be improved to \(1\): \[\frac{1}{2}\operatorname{Tr}\left[ZB_{0\mathbb{L}}+XB_{1\mathbb{L}}-\frac{(T_ {0\mathbb{L}}+T_{1\mathbb{L}})}{2}\right]\leq 1\,, \tag{97}\] Proof.: Let us write (97) as \[\frac{1}{2}\sum_{b,y\in\{0,1\}}\operatorname{Tr}\bigl{(}C_{b,y}M_{b|y \mathbb{L}}\bigr{)}\leq 1, \tag{98}\] where, \[C_{b,y}=(-1)^{b}(\delta_{0,y}Z+\delta_{1,y}X)-\frac{1}{2}. 
\tag{99}\] Then the bound (97) follows from the following matrix inequalities \[C_{b,y}\preceq\frac{1}{2}\quad\forall\ b,y\,. \tag{100}\] Choosing \(B_{0\mathbb{L}}=Z,\ B_{1\mathbb{L}}=X\) saturates the inequality (97). The local and quantum bounds on \(\tilde{\mathcal{J}}^{\theta}\) in a standard Bell scenario (without the switch) are given by \[\tilde{\mathcal{J}}^{\theta} \leq\begin{cases}2c_{\theta}-1&\text{ if }\theta\in\left[0,\frac{1}{2}\sin^{-1}(\frac{3}{4})\right]\\ c_{\theta}+s_{\theta}-\frac{1}{2}&\text{ if }\theta\in\left[\frac{1}{2}\sin^{-1}( \frac{3}{4}),\frac{\pi}{4}\right]\end{cases}\qquad\text{(local bound)}, \tag{101}\] \[\leq 1\qquad\text{(quantum bound)}. \tag{102}\] Proof.: Let us first prove the local bound. Since \(\tilde{\mathcal{J}}^{\theta}\) is linear in the expectation values, it must be maximized over a deterministic local strategy indexed by a local hidden variable, say, \(\lambda\). Therefore, \[\tilde{\mathcal{J}}^{\theta}\leq\tilde{\mathcal{J}}^{\theta}_{\lambda}=\langle A _{1}\rangle_{\lambda}(c_{\theta}\langle B_{0}\rangle_{\lambda}-s_{\theta} \langle B_{1}\rangle_{\lambda})+\langle A_{0}\rangle_{\lambda}(s_{\theta} \langle B_{0}\rangle_{\lambda}+c_{\theta}\langle B_{1}\rangle_{\lambda})- \frac{\langle T_{0\mathbb{L}}\rangle_{\lambda}+\langle T_{1\mathbb{L}} \rangle_{\lambda}}{2}, \tag{103}\] where \[\langle A_{x}\rangle_{\lambda} =\sum_{a\in\{\pm 1\}}a\ p(a|x,\lambda)\ \,\forall x\in\{0,1\},\] \[\langle B_{y}\rangle_{\lambda} =\sum_{b\in\{\pm 1\}}b\ p(b|y,\lambda)\ \,\forall y\in\{0,1\},\] \[\langle T_{y}\rangle_{\lambda} =\sum_{b\in\{\pm 1\}}p(b|y,\lambda)\ \,\forall y\in\{0,1\}.\] (104a) From ( 104 ) it is clear that \[|\langle B_{y}\rangle_{\lambda}|\leq\langle T_{y}\rangle_{\lambda}\text{ and }|\langle A_{x}\rangle_{\lambda}|\leq 1\text{. Therefore,}\] \[\tilde{\mathcal{J}}^{\theta} \leq\langle A_{1}\rangle_{\lambda}(c_{\theta}\langle B_{0} \rangle_{\lambda}-s_{\theta}\langle B_{1}\rangle_{\lambda})+\langle A_{0} \rangle_{\lambda}(s_{\theta}\langle B_{0}\rangle_{\lambda}+c_{\theta}\langle B _{1}\rangle_{\lambda})-\frac{|\langle B_{0\text{L}}\rangle_{\lambda}|+| \langle B_{1\text{L}}\rangle_{\lambda}|}{2},\] \[\leq|(c_{\theta}\langle B_{0}\rangle_{\lambda}-s_{\theta}\langle B _{1}\rangle_{\lambda})|+|(s_{\theta}\langle B_{0}\rangle_{\lambda}+c_{\theta} \langle B_{1}\rangle_{\lambda})|-\frac{|\langle B_{0}\rangle_{\lambda}|+| \langle B_{1}\rangle_{\lambda}|}{2}. \tag{105}\] The maximum of the right-hand side above occurs when \(\langle B_{0}\rangle_{\lambda}=\langle B_{1}\rangle_{\lambda}=1\text{ if }\theta\in[0,(\sin^{-1}(3/4))/2]\), and when \(\langle B_{0}\rangle_{\lambda}=1,\ \langle B_{1}\rangle_{\lambda}=0\) if \(\theta\in[(\sin^{-1}(3/4))/2,\pi/4]\). Substituting these values in the above inequality we get the desired bound, which can be saturated using the follwing local strategy: When \(\theta\in[0,(\sin^{-1}(3/4))/2]\), \(A\) and \(B\) always output \(+1\). When \(\theta\in[(\sin^{-1}(3/4))/2,\pi/4]\), \(A\) always outputs \(+1\) as before, but \(B\) outputs \(+1\) if \(y=0\), and \(\emptyset\) if \(y=1\). The quantum bound was determined numerically using level-1 of NPA hierarchy. It is tight, as it can be saturated by the quantum strategy given in (53) and (54), with \(B_{\text{L}}\) playing the role of \(B\). ## Appendix C Techniques for obtaining SoS decomposition In this section we describe the techniques we used to obtain the SoS decomposition for the family of shifted Bell operators \(\mathtt{I}_{u}\) (46). 
These techniques were first used in [34], and we refer the interested reader to this work for more details. The steps are as follows. We seek to write \(\mathtt{I}_{u}=\sum_{i}K_{i}^{\dagger}K_{i}\) for some operators \(K_{i}\). For this, expand \(K_{i}\) in a basis of monomials \(K_{i}=\sum_{ij}r_{ij}V_{j}\), where \(V_{j}\in\{\mathbf{1}\}\bigcup\{A_{0},\ A_{1}\}\otimes\{B_{0\mathtt{S}},\ B_{1 \mathtt{S}},\ B_{0\mathtt{L}},\ B_{1\mathtt{L}}\}\). Therefore, \(\mathtt{I}_{u}=\sum_{jk}V_{j}^{\dagger}\left(\sum_{i}r_{ij}^{*}r_{ik}\right)V_ {k}=\sum_{jk}V_{j}^{\dagger}M_{jk}V_{k}\). Clearly, \(M\succeq 0\). Conversely, any positive operator \(M\) that satisfies \(\mathtt{I}_{u}=\sum_{jk}V_{j}^{\dagger}M_{jk}V_{k}\) is an SoS for \(\mathtt{I}_{u}\). The aim is to find such a (non-unique) matrix \(M\). We can simplify the problem by utilizing known quantum strategies that saturate the bound on (40). The existence of such strategies also establish that the bound (40) is tight. For such strategies we must have, \(\langle\mathtt{I}_{u}\rangle_{\psi}=0\), and consequently, \(\sum_{j}r_{ij}V_{j}\left|\psi\right\rangle=0\ \forall\ i\). Here \(\left|\psi\right\rangle\) and \(V_{j}\) are the known quantum state and measurement operators that yield \(\langle\mathtt{I}_{u}\rangle_{\psi}=0\). We made use of two such strategies which use the maximally entangled state \(\left|\phi_{+}\right\rangle\) and the measurements 1. \(A_{0}=c_{u}Z+s_{u}X,\ A_{1}=c_{u}Z-s_{u}X,\ B_{0\mathtt{S}}=Z,\ B_{1\mathtt{S}} =X,\ B_{0\mathtt{L}}=B_{1\mathtt{L}}=Z,\) 2. \(A_{0}=c_{u}X+s_{u}Z,\ A_{1}=-c_{u}X+s_{u}Z,\ B_{0\mathtt{S}}=Z,\ B_{1\mathtt{S}} =X,\ B_{0\mathtt{L}}=-X,\ B_{1\mathtt{L}}=X.\) (106) This imposes the following linear constraints on the coefficients \(r_{ij}\): \[r_{i\mathbf{1}}+c(r_{iA_{0}B_{0\mathtt{S}}}+r_{iA_{1}B_{0\mathtt{ S}}}+r_{iA_{0}B_{\mathtt{L}}}+r_{iA_{0}B_{0\mathtt{S}}}+r_{iA_{1}B_{1\mathtt{L}}}+r_{iA_{1}B_{0 \mathtt{L}}})+s(r_{iA_{0}B_{1\mathtt{S}}}-r_{iA_{1}B_{1\mathtt{S}}})=0,\] \[\ s(-r_{iA_{0}B_{0\mathtt{S}}}+r_{iA_{1}B_{0\mathtt{S}}}-r_{iA_{0}B _{1\mathtt{L}}}-r_{iA_{0}B_{0\mathtt{S}}}+r_{iA_{1}B_{1\mathtt{L}}}+r_{iA_{1}B_{ 0\mathtt{L}}})+c(r_{iA_{0}B_{1\mathtt{S}}}+r_{iA_{1}B_{1\mathtt{S}}})=0,\] \[r_{i\mathbf{1}}+c(r_{iA_{0}B_{1\mathtt{S}}}-r_{iA_{1}B_{1\mathtt{ S}}}+r_{iA_{0}B_{1\mathtt{L}}}-r_{iA_{0}B_{0\mathtt{S}}}-r_{iA_{1}B_{1\mathtt{L}}}+r_{iA_{1}B_{ 0\mathtt{S}}})+s(r_{iA_{0}B_{0\mathtt{S}}}+r_{iA_{1}B_{0\mathtt{S}}})=0,\] \[\ s(r_{iA_{0}B_{1\mathtt{S}}}+r_{iA_{1}B_{1\mathtt{S}}}+r_{iA_{0}B _{1\mathtt{L}}}-r_{iA_{0}B_{0\mathtt{S}}}+r_{iA_{1}B_{1\mathtt{L}}}-r_{iA_{1}B_{ 0\mathtt{S}}})+c(r_{iA_{1}B_{0\mathtt{S}}}-r_{iA_{0}B_{0\mathtt{S}}})=0.\] (107a) Imposing these constraints, we find a new 5-dimensional monomial basis which must span \[K_{i}\]. The elements in this basis are \[\{\mathtt{I}_{u},P_{1}(u),P_{2}(u),P_{3}(u),P_{4}(u)\}\] (see ( 45 )). A further simplification occurs by symmetry considerations. We note that under the transformation \(\sigma_{1}=\{A_{0}\leftrightarrow A_{1},\ B_{1\mathtt{S}}\rightarrow-B_{1\mathtt{S }},\ B_{0\mathtt{L}}\leftrightarrow B_{1\mathtt{L}}\}\), \(\{\mathtt{I}_{u},\ P_{1},\ P_{2}\}\) are invariant, whereas \(\{P_{3},\ P_{4}\}\) flip sign. 
Applying this transformation to both sides of \(\mathtt{I}_{u}=\sum_{jk}P_{j}^{\dagger}M_{jk}P_{k}\), we get \[\mathtt{I}_{u} =\sum_{jk}\sigma_{1}(P_{j}^{\dagger})M_{jk}\sigma_{1}(P_{k}),\] \[=\sum_{jk}P_{j}^{\dagger}N_{jk}P_{k}, \tag{108}\] where \(N\) is a new SOS decomposition for \(\mathtt{I}_{u}\) by construction. The convex combination of SOS decompositions is also a valid SOS decomposition. A convenient choice is given by \(G:=(M+N)/2\) since it is block diagonal. In particular, \(G\) is made up of 2 blocks of sizes \(3\times 3\) and \(2\times 2\). Similarly, we make use of another transformation \(\sigma_{2}=\{A_{1}\rightarrow-A_{1},\ B_{0\mathtt{S}}\leftrightarrow B_{1 \mathtt{S}},\ B_{1\mathtt{L}}\rightarrow-B_{1\mathtt{L}}\}\), under which \(\{\mathtt{I}_{u},\ P_{1},\ P_{3}\}\) are invariant and \(\{P_{2},\ P_{4}\}\) flip sign. Imposing this symmetry to \(G\), we find that we can choose to work with a matrix \(G\) that is block diagonal with 1 block of size \(2\times 2\) and 3 blocks of size \(1\times 1\). Finally, we equate the coefficients on both sides of \(\mathtt{I}_{u}=\sum_{jk}P_{j}^{\dagger}G_{jk}P_{k}\), and solve for the non-zero elements of \(G\), which leads to the SoS decomposition for \(\mathtt{I}_{u}\) (46). Appendix D SRQ strategies that reproduce the correlations in (55) with sufficiently small detection efficiency Here we construct an explicity SRQ strategy for the quantum implementations in (55) that reproduces the statistics when the detection efficiency of \(B_{\mathtt{L}}\) is \(1/(1+c_{\theta_{-}})\), where \(\theta_{-}\in]0,\pi/2[\). The task is to construct a joint measurement \(N_{\beta_{0}\beta_{1}}\), where \(N_{\beta_{0}\beta_{1}}\succeq 0\) and \(\sum_{\beta_{0}\beta_{1}}N_{\beta_{0}\beta_{1}}=\mathbf{1}\) for \(\theta_{-}\in]0,\pi/2[\), so that, \[B_{0\mathtt{L}}^{\eta} =\eta B_{0\mathtt{L}}+(1-\eta)\mathbf{1}=N_{00}+N_{01}-N_{10}-N_{ 11},\] \[B_{1\mathtt{L}}^{\eta} =\eta B_{1\mathtt{L}}+(1-\eta)\mathbf{1}=N_{00}-N_{01}+N_{10}-N_{ 11}, \tag{109}\] where \(\eta=1/(1+c_{\theta_{-}})\). For \(\theta_{+}=0\) the joint-measurement is given by \[N_{00} =\left(\frac{c_{\theta_{-}}}{1+c_{\theta_{-}}}\right)(\mathbf{1} +Z),\] \[N_{01} =\frac{1}{2}\left[-\left(\frac{s_{\theta_{-}}}{1+c_{\theta_{-}}} \right)X+\left(\frac{1-c_{\theta_{-}}}{1+c_{\theta_{-}}}\right)\frac{\mathbf{ 1}+Z}{2}+\frac{\mathbf{1}-Z}{2}\right],\] \[N_{10} =\frac{1}{2}\left[\left(\frac{s_{\theta_{-}}}{1+c_{\theta_{-}}} \right)X+\left(\frac{1-c_{\theta_{-}}}{1+c_{\theta_{-}}}\right)\frac{\mathbf{ 1}+Z}{2}+\frac{\mathbf{1}-Z}{2}\right],\] \[N_{11} =0,\] (110a) For other values of \[\theta_{+}\], replace \[Z\] and \[X\] in ( 110 ) with \[c_{\theta_{+}}Z+s_{\theta_{+}}X\] and \[c_{\theta_{+}}X-s_{\theta_{+}}Z\], respectively. ## Appendix E SDP used in Fig. 10 Here, we provide additional details about the implementation of the SDP problem (65), which we solved to obtain the bounds on the critical detection efficiency \(\eta_{\mathtt{L}}\) in Fig. 10. The SDPs were solved using the CVXPY interface to the MOSEK solver. The code is available at [https://github.com/eplobo/RoutedBell](https://github.com/eplobo/RoutedBell). We again focus on the CHSH and BB84 target correlations ((54) with \(\theta=\pi/4\) and \(\theta=0\) respectively). For the binning case, a basis of monomial operators is given by products of the hermitian observables \(\{\mathbf{1},A_{x},B_{yz}\}\) which satisfy \(A_{x}^{2}=\mathbf{1}\), \(B_{yz}^{2}=\mathbf{1}\), \([A_{x},B_{yz}]=0\) and \([B_{0\mathtt{L}},B_{1\mathtt{L}}]=0\). 
The full set of probabilities \(p^{\overline{i}}\) is in one-to-one correspondence with the expectation values \(\langle A_{x}\rangle\) \(\langle B_{yz}\rangle\), and \(\langle A_{x}B_{yz}\rangle\), which are given by \[\langle A_{x}\rangle =1-\eta_{\text{\tiny B}},\] \[\langle B_{yz}\rangle =1-\eta_{z},\] \[\langle A_{x}B_{y\text{\tiny B}}\rangle =\eta_{\text{\tiny S}}^{2}(-1)^{x\cdot y}\frac{1}{\sqrt{2}}+(1- \eta_{\text{\tiny B}})^{2},\] \[\langle A_{x}B_{y\text{\tiny L}}\rangle =\begin{cases}\eta_{\text{\tiny B}}\eta_{\text{\tiny L}}(-1)^{x \cdot y}\frac{1}{\sqrt{2}}+(1-\eta_{\text{\tiny B}})(1-\eta_{\text{\tiny L}})& \text{for CHSH correlations},\\ \eta_{\text{\tiny B}}\eta_{\text{\tiny L}}\delta_{x,y}+(1-\eta_{\text{\tiny B }})(1-\eta_{\text{\tiny L}})&\text{for BB84 correlations}.\end{cases} \tag{111}\] We solved the SDP problem (65) by imposing the above constraints on the expectations and considering level '\(3+AAAA+B_{\text{\tiny S}}B_{\text{\tiny S}}B_{\text{\tiny S}}B_{\text{\tiny S }}B_{\text{\tiny S}}+B_{\text{\tiny L}}B_{\text{\tiny L}}B_{\text{\tiny L}}B_{ \text{\tiny L}}B_{\text{\tiny L}}+AAB_{\text{\tiny S}}B_{\text{\tiny S}}+AAB_ {\text{\tiny L}}B_{\text{\tiny L}}B_{\text{\tiny L}}+B_{\text{\tiny S}}B_{ \text{\tiny S}}B_{\text{\tiny L}}B_{\text{\tiny L}}\)' of the hierarchy. When the no-click outcomes are not binned, the operators \(A_{x}=M_{0|x}-M_{1|x}\) and \(B_{yz}=M_{0|yz}-M_{1|yz}\) do not square to the identity, but we have the relations \(A_{x}^{2}=M_{0|x}+M_{1|x}\) and \(A_{x}^{3}=M_{0|x}-M_{1|x}=A_{x}\) and similarly for \(B_{yz}\). We thus need to replace the polynomial constraints \(A_{x}^{2}=\mathbf{1}\) and \(B_{yz}^{2}=\mathbf{1}\) from the binning case by the constraints \(A_{x}^{3}=A_{x}\) and \(B_{yz}^{3}=B_{yz}\). The full set of non-binned probabilities \(p^{\text{\tiny ff}}\) are in one-to-one correspondence with the expectation values \(\langle A_{x}\rangle\), \(\langle A_{x}^{2}\rangle\), \(\langle B_{yz}\rangle\), \(\langle B_{yz}^{2}\rangle\), \(\langle A_{x}B_{yz}\rangle\), \(\langle A_{x}B_{yz}^{2}\rangle\), \(\langle A_{x}^{2}B_{yz}\rangle\), \(\langle A_{x}^{2}B_{yz}\rangle\), and \(\langle A_{x}^{2}B_{yz}^{2}\rangle\), which, in our specific problem are given by \[\langle A_{x}\rangle =0,\quad\langle A_{x}^{2}\rangle=\eta_{\text{\tiny B}},\] \[\langle B_{yz}\rangle =0,\quad\langle B_{yz}^{2}\rangle=\eta_{z},\] \[\langle A_{x}B_{yz}^{2}\rangle =\langle A_{x}^{2}B_{yz}\rangle=0,\quad\langle A_{x}^{2}B_{yz}^{2} \rangle=\eta_{\text{\tiny S}}\eta_{z}\] \[\langle A_{x}B_{y\text{\tiny S}}\rangle =\eta_{\text{\tiny S}}^{2}(-1)^{x\cdot y}\frac{1}{\sqrt{2}}\] \[\langle A_{x}B_{y\text{\tiny L}}\rangle =\begin{cases}\eta_{\text{\tiny B}}\eta_{\text{\tiny L}}(-1)^{x \cdot y}\frac{1}{\sqrt{2}}&\text{for CHSH correlations},\\ \eta_{\text{\tiny B}}\eta_{\text{\tiny L}}\delta_{x,y}&\text{for BB84 correlations}.\end{cases}\] (112a) As before, we solved the corresponding SDP problem (65) at level '\(3+AAAA+B_{\text{\tiny S}}B_{\text{\tiny S}}B_{\text{\tiny S}}B_{\text{\tiny S }}+B_{\text{\tiny L}}B_{\text{\tiny L}}B_{\text{\tiny L}}B_{\text{\tiny L}}B_{ \text{\tiny L}}+AAB_{\text{\tiny S}}B_{\text{\tiny S}}B_{\text{\tiny S}}+AAB_ {\text{\tiny L}}B_{\text{\tiny L}}B_{\text{\tiny L}}+B_{\text{\tiny S}}B_{ \text{\tiny S}}B_{\text{\tiny L}}B_{\text{\tiny L}}\)'.
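For readers who want a minimal, self-contained picture of the moment-matrix relaxations used here, the toy script below is our own stripped-down illustration: it builds the level-1 NPA moment matrix for a standard (non-routed) CHSH test with CVXPY and recovers Tsirelson's bound \(2\sqrt{2}\). It is not the level '\(3+AAAA+\ldots\)' relaxation of the routed scenario solved for (65); the complete implementation for that problem is available in the repository linked above.

```python
import numpy as np
import cvxpy as cp

# Level-1 NPA moment matrix over the monomials (1, A0, A1, B0, B1).
labels = ["1", "A0", "A1", "B0", "B1"]
G = cp.Variable((5, 5), symmetric=True)
i = {m: k for k, m in enumerate(labels)}

constraints = [G >> 0]                           # moment matrix is positive semidefinite
constraints += [G[k, k] == 1 for k in range(5)]  # <1> = <A_x^2> = <B_y^2> = 1

# CHSH functional read off from the entries <A_x B_y> of the moment matrix
chsh = (G[i["A0"], i["B0"]] + G[i["A0"], i["B1"]]
        + G[i["A1"], i["B0"]] - G[i["A1"], i["B1"]])

prob = cp.Problem(cp.Maximize(chsh), constraints)
prob.solve()
print(prob.value)  # ~2.8284 = 2*sqrt(2), Tsirelson's bound
```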
2307.02133
On multivariate orderings of some general ordered random vectors
Ordered random vectors are frequently encountered in many problems. The generalized order statistics (GOS) and sequential order statistics (SOS) are two general models for ordered random vectors. However, these two models do not capture the dependency structures that are present in the underlying random variables. In this paper, we study the developed sequential order statistics (DSOS) and developed generalized order statistics (DGOS) models that describe the dependency structures of ordered random vectors. We then study various univariate and multivariate ordering properties of DSOS and DGOS models under Archimedean copula. We consider both one-sample and two-sample scenarios and develop corresponding results.
Tanmay sahoo, Nil Kamal Hazra, Narayanaswamy Balakrishnan
2023-07-05T09:23:37Z
http://arxiv.org/abs/2307.02133v1
# On multivariate orderings of some general ###### Abstract Ordered random vectors are frequently encountered in many problems. The generalized order statistics (GOS) and sequential order statistics (SOS) are two general models for ordered random vectors. However, these two models do not capture the dependency structures that are present in the underlying random variables. In this paper, we study the developed sequential order statistics (DSOS) and developed generalized order statistics (DGOS) models that describe the dependency structures of ordered random vectors. We then study various univariate and multivariate ordering properties of DSOS and DGOS models under Archimedean copula. We consider both one-sample and two-sample scenarios and develop corresponding results. **Keywords:** Archimedean copula, generalized order statistics, record values, sequential order statistics, stochastic orders **2010 Mathematics Subject Classification:** Primary 90B25 Secondary 60E15; 60K10 ## 1 Introduction Order statistics (OS) and record values arise naturally in several statistical modeling and inferential problems (see [1, 2, 21, 23, 25]). As a more general framework in which both these models are incorporated, the notion of generalized order statistics (GOS) was introduced. In addition, this GOS model contains several other models of ordered random variables, such as, order statistics with non-integral sample size, \(k\)-record values, Pfeifer's records, \(k_{n}\)-records from non-identical distributions, ordered random variables from truncated distributions, progressively type-II censored order statistics, and so on. Thus, the GOS model provides a unified class of models, with a variety of interesting and practical characteristics, which can be used to describe and study many real-world problems. On the other hand, the sequential order statistics (SOS), an extension of ordinary order statistics (OS), are used to represent the lifetimes of systems. In the SOS model, the failure of any component has an impact on the remaining surviving components and so the distributions of the lifetimes of remaining components are assumed to differ from the original ones. In reliability theory, there is a one-to-one relation between SOS and the lifetimes of sequential \(k\)-out-of-\(n\) systems (see the definition in [17]). In fact, the lifetime of a sequential \(k\)-out-of-\(n\) system is the same as the \((n-k+1)\)-th sequential order statistic of the lifetimes of components of the system. One may note that the GOS model is closely related to the SOS model. In particular, a specific choice of distribution functions (i.e., under the proportional hazard rate (PHR) model) in the SOS model leads to the GOS model. Thus, the SOS model can be viewed as a more generalized model that contains almost all existing models of ordered random variables. In the literature, numerous studies have been carried out concerning univariate and multivariate stochastic comparisons of ordinary order statistics (see [4, 6, 9, 11, 20, 27, 30, 32, 33, 37, 39] and the references therein). In the same vein, stochastic comparisons of generalized order statistics as well as stochastic comparisons of sequential order statistics have been discussed in the literature. Belzunce et al. [10] developed several results concerning multivariate and univariate stochastic comparisons of generalized order statistics with respect to the usual stochastic order, dispersive order, hazard rate order and likelihood ratio order. 
Hu and Zhuang [22] subsequently added some more results on univariate stochastic comparisons of generalized order statistics. Chen and Hu [16] studied ordering properties of generalized order statistics with respect to the multivariate dispersive order. Xie and Hu [44] subsequently discussed stochastic comparisons of multivariate marginals of generalized order statistics with respect to multivariate dispersive order. Balakrishnan et al. [3] derived some results for stochastic comparisons of generalized order statistics with respect to increasing convex order. Some more works on generalized order statistics can be found in [42, 12], and the references therein. Additionally, the study of various univariate orderings and ageing properties of sequential order statistics has been carried out by [13, 14, 31, 43]. Zhuang and Hu [45] studied multivariate stochastic comparisons of sequential order statistics with respect to multivariate likelihood ratio order, multivariate hazard rate order and multivariate usual stochastic order. One may note that all the studies listed above, for GOS and SOS models, have been carried out under the assumption that the underlying random variables are independent. The SOS model is defined based on the assumption that the lifetimes of the set of remaining components in each step (i.e., after each failure) are independent. This is indeed a very stringent assumption in many real-life scenarios. For example, consider the oil transmission pipeline station with five pumps. Suppose the station functions effectively as long as three out of the five pumps are operational. Here, the lifetimes of the five pumps are indeed dependent, and the failure of a pump increases the load on the remaining pumps because a proper transmission requires a certain level of oil pressure (i.e., load-sharing effect). This is an example of a sequential 3-out-of-5 system with dependent component lifetimes (see [7]). To overcome the aforementioned drawback of the SOS model, Baratnia and Doostparast [7] recently introduced the notion of developed sequential order statistics (DSOS), which is an extended SOS model. The DSOS model captures the dependency structure between components of a system in each step. Recently, Sahoo and Hazra [38] have studied various univariate stochastic comparison results for DSOS wherein the dependency structure has been described by an Archimedean copula. However, no study has been carried out for multivariate stochastic comparisons of DSOS. Thus, one of our main goals in this paper is to study various univariate and multivariate stochastic comparisons of DSOS governed by an Archimedean copula. In analogy to DSOS model, we introduce the notion of developed generalized order statistics (DGOS), which is a GOS model involving dependent random variables. In particular, what we study in this paper are the following: * Various multivariate stochastic orderings (namely, multivariate usual stochastic order, dynamic multivariate hazard rate order, and multivariate dispersive order) and univariate stochastic orderings (namely, usual stochastic order, hazard rate order, reverse hazard rate order, dispersive order, and increasing convex order) properties of developed sequential order statistics (DSOS) and developed generalized order statistics (DGOS) in both one and two-sample situations. 
It is worthwhile to mention that the results established here generalize many known results on sequential order statistics, generalized order statistics, record values, progressively type-II censored order statistics, order statistics from truncated distributions, and usual order statistics. The novelty in this work is mainly in considering the DGOS and DSOS models based on Archimedean copula. The rest of this paper is organized as follows. In Section 2, we present some preliminaries. In Section 3, we discuss the notion of some ordered random vectors. In Section 4, we establish some stochastic comparison results for random vectors from DSOS model with identical components. In Section 5, we establish some stochastic comparison results for DGOS model with identical components. Finally, some concluding remarks are made in Section 7. ## 2 Preliminaries Unless otherwise stated, we use the following notation throughout the paper. For an absolutely continuous random variable Z, we denote the cumulative distribution function (CDF) by \(F_{Z}(\cdot)\), the reliability function (RF) by \(\bar{F}_{Z}(\cdot)\), the probability density function (PDF) by \(f_{Z}(\cdot)\), and the cumulative hazard rate function by \(\Delta_{Z}(\cdot)\), where \(\bar{F}_{Z}(\cdot)\equiv 1-F_{Z}(\cdot)\) and \(\Delta_{Z}(\cdot)\equiv-\ln\bar{F}_{Z}(\cdot)\). We denote the set of natural numbers and the set of real numbers by \(\mathcal{N}\) and \(\mathcal{R}\), respectively. We write \(a\stackrel{{ d}}{{=}}b\) to mean that \(a\) and \(b\) have the same distribution. Copulas are very useful in describing the dependence structure between random variables. A wide range of copulas have been discussed in the literature and some of the well-known copulas are Farlie-Gumbel-Morgenstern (FGM) copula, extreme-value copula, Archimedean copulas, and Clayton-Oakes (CO) copula. The family of Archimedean copulas have received considerable attention due to their tractability and ability to capture a wide range of dependence. A comprehensive description of this topic can be found in the book by Nelsen [34]. Below, we give the definition of an Archimedean copula (see [29]). **Definition 2.1**: _Let \(\phi:[0,+\infty]\longrightarrow[0,1]\) be a decreasing continuous function with \(\phi(0)=1\) and \(\phi(+\infty)=0\), and \(\psi\equiv\phi^{-1}\) be the pseudo-inverse of \(\phi\). Then,_ \[C(u_{1},\ldots,u_{n})=\phi\left(\psi(u_{1})+\cdots+\psi(u_{n})\right),\quad \text{for }(u_{1},\ldots,u_{n})\in[0,1]^{n}, \tag{2.1}\] _is called an Archimedean copula with generator \(\phi\) if \((-1)^{k}\phi^{(k)}(x)\geq 0\), for \(k=0,1,\ldots,n-2\), and \((-1)^{n-2}\phi^{(n-2)}(x)\) is decreasing and convex in \(x\geq 0\), where \(\phi^{(k)}(\cdot)\) represents the \(k\)-th derivative of \(\phi\). \(\Box\)_ We now introduce some key notation that will be used in the sequel. For an Archimedean copula with generator \(\phi\), we denote \[H(u)=\frac{u\phi^{\prime}(u)}{1-\phi(u)},\ \ R(u)=\frac{u\phi^{\prime}(u)}{ \phi(u)}\ \text{and}\ G(u)=\frac{u\phi^{\prime\prime}(u)}{\phi^{\prime}(u)},\quad u>0.\] Note that \(H(\cdot)\), \(R(\cdot)\) and \(G(\cdot)\) are all negative-valued functions since \(\phi(\cdot)\) is a decreasing convex function. Before proceeding further, we introduce the following notation. 
For cumulative distribution functions \(\bar{F_{i}}\), \(i=1,2,\ldots,n\), we denote the corresponding probability density functions, quantile functions, survival functions, hazard rate functions, reversed hazard rate functions, and cumulative hazard rate functions by \(f_{i}\), \(F_{i}^{-1}\), \(\bar{F_{i}}\), \(r_{i}\), \(\tilde{r}_{i}\) and \(D_{i}\), respectively, where \(r_{i}\equiv f_{i}/\bar{F_{i}}\), \(\tilde{r}_{i}\equiv f_{i}/F_{i}\) and \(D_{i}\left(\cdot\right)\equiv-\ln\bar{F_{i}}\left(\cdot\right)\). Similarly, for cumulative distribution functions \(G_{i}\), \(i=1,\ldots,n\), we denote the corresponding probability density functions, quantile functions, survival functions, hazard rate functions, reversed hazard rate functions and cumulative hazard rate functions by \(g_{i}\), \(G_{i}^{-1}\), \(\bar{G_{i}}\), \(h_{i}\), \(\tilde{h}_{i}\) and \(B_{i}\), respectively, where \(h_{i}\equiv g_{i}/\bar{G_{i}}\), \(\tilde{h}_{i}\equiv g_{i}/G_{i}\) and \(B_{i}\left(\cdot\right)\equiv-\ln\bar{G_{i}}\left(\cdot\right)\). The proportional hazard rate (PHR) model is one of the commonly used semi-parametric models in survival analysis and reliability theory. A set of random variables \(\{Z_{1},\ldots,Z_{n}\}\) is said to follow the PHR model if, for \(i=1,\ldots,n\), \[\bar{F}_{Z_{i}}(t)=(\bar{F}(t))^{\alpha_{i}},\mbox{ for some }\alpha_{i}>0\, \mbox{ and for all }t>0,\] where \(\bar{F}\) is the baseline survival function. We shall denote this by \(F_{Z_{i}}\sim\mbox{PHR}(F;\alpha_{i})\), for \(i=1,\ldots,n\). Stochastic orders are very effective tools for comparing two or more random variables/vectors. Below, we give the definitions of some stochastic orders (see [41]) that are most pertinent to the subsequent discussion. **Definition 2.2**: _Let \(X\) and \(Y\) be two absolutely continuous random variables with non-negative supports. Then, \(X\) is said to be smaller than \(Y\) in the_ * _usual stochastic order, denoted by_ \(X\leq_{st}Y\) _or_ \(F_{X}\leq_{st}F_{Y}\)_, if_ \(\bar{F}_{X}(x)\leq\bar{F}_{Y}(x)\) _for all_ \(x\)__\(\in\)__\([0,\infty);\)__ * _hazard rate order, denoted by_ \(X\leq_{hr}Y\) _or_ \(F_{X}\leq_{hr}F_{Y}\)_, if_ \(\bar{F}_{Y}(x)/\bar{F}_{X}(x)\) _is increasing in_ \(x\in[0,\infty);\)__ * _reversed hazard rate order, denoted by_ \(X\leq_{rh}Y\) _or_ \(F_{X}\leq_{rh}F_{Y}\)_, if_ \(F_{Y}(x)/F_{X}(x)\) _is increasing in_ \(x\in[0,\infty);\)__ * _likelihood ratio order, denoted by_ \(X\leq_{lr}Y\) _or_ \(F_{X}\leq_{lr}F_{Y}\)_, if_ \(f_{Y}(x)/f_{X}(x)\) _is increasing in_ \(x\in(0,\infty);\)__ * _dispersive order, denoted by_ \(X\leq_{disp}Y\) _or_ \(F_{X}\leq_{disp}F_{Y}\)_, if_ \(G^{-1}(u)-F^{-1}(u)\) _is increasing in_ \(u\in(0,1)\)_;_ * _increasing convex order, denoted by_ \(X\leq_{icx}Y\) _or_ \(F_{X}\leq_{icx}F_{Y}\)_, if_ \(E(\phi(X))\leq E(\phi(Y))\)_, for all increasing convex functions_ \(\phi\)_;_ * _mean residual life order, denoted by_ \(X\leq_{mrl}Y\) _or_ \(F_{X}\leq_{mrl}F_{Y}\)_, if_ \(\int_{x}^{\infty}\bar{F}_{Y}(u)du/\int_{x}^{\infty}\bar{F}_{X}(u)du\) _is increasing in_ \(x\) _over_ \(\{x:\int_{x}^{\infty}\bar{F}_{X}(u)du>0\};\)__ * _ageing faster order in terms of hazard rate, denoted by_ \(X\leq_{c}Y\) _or_ \(F_{X}\leq_{c}F_{Y}\)_, if_ \(\Delta_{X}\circ\Delta_{Y}^{-1}\) _is convex on_ \([0,\infty)\)_, or equivalently,_ \(r_{X}/r_{Y}\) _is increasing on_ \([0,\infty)\)_._ \(\Box\)__ We now introduce the following notation. 
Let \(\mathbf{X}=(X_{1},\ldots,X_{m})\) be a nonnegative random vector with an absolutely continuous distribution function. Consider a typical history of \(X\) at time \(t\geq 0\), which is of the form \[h_{t}=\{\mathbf{X}_{I}=\mathbf{t}_{I},\mathbf{X}_ {I}>t\mathbf{e}\},0\mathbf{e}\leq\mathbf{t}_{I} \leq t\mathbf{e},I\subset\{1,\ldots,m\};\] here, \(\mathbf{t}_{I}=(t_{i_{1}},\ldots,t_{i_{k}})\), \(\bar{I}\) is the complement of \(I=(i_{1},\ldots,i_{k})\) in \(\{1,\ldots,m\}\) and \(\mathbf{e}=(1,\ldots,1)\). Given the history \(h_{t}\), let \(i\in\bar{I}\) be a component that is still alive at time \(t\). Its multivariate conditional hazard rate, at time \(t\), is defined as follows: \[\lambda_{i|I}\left(t|\mathbf{t}_{I}\right)=\lim_{\Delta t\to 0^{+}} \frac{1}{\Delta t}P\left(t<T_{i}\leq t+\Delta t|\mathbf{T}_{I}=\mathbf{t}_{I},\mathbf{T}_{\bar{I}}>t\mathbf{e}\right),\] where, \(0\mathbf{e}\leq\mathbf{t}_{I}\leq t\mathbf{e}\), and \(I\subset\{1,\ldots,m\}\) (see [41]). Further, let \(\bar{F}_{1}\) be the marginal distribution function of \(X_{1}\), and \(F_{i|1,\ldots,i-1}\left(\cdot|x_{1},\ldots,x_{i-1}\right)\) be the conditional distribution function of \(X_{i}\), given \(X_{1}=x_{1},\ldots,X_{i-1}=x_{i-1}\), for \(i=2,\ldots,n\). For each \(\mathbf{u}=(u_{1},\ldots,u_{n})\in(0,1)^{n}\), define \[x_{1}\left(\mathbf{u}\right)=F_{1}^{-1}\left(u_{1}\right)\] and sequentially \[x_{i}\left(\mathbf{u}\right)=F_{i|1,\ldots,i-1}^{-1}\left(u_{i}|x_{1 },\ldots,x_{i-1}\right),\quad i=2,\ldots,n.\] Next, we present the definitions of some multivariate stochastic orders that are used in the subsequent sections. **Definition 2.3**: _Let \(\mathbf{X}\) and \(\mathbf{Y}\) be two \(n\)-dimensional random vectors with non-negative supports. Further, let the multivariate probability density functions and the multivariate conditional hazard rate functions of \(\mathbf{X}\) and \(\mathbf{Y}\) be given by \(f(\cdot)\) and \(g(\cdot)\), and \(\eta_{\cdot|\cdot}\left(\cdot|\cdot\right)\) and \(\lambda_{\cdot|\cdot}\left(\cdot|\cdot\right)\), respectively. Then, \(\mathbf{X}\) is said to be smaller than \(\mathbf{Y}\) in the_ 1. _usual multivariate stochastic order, denoted by_ \(\mathbf{X}\leq_{\text{st}}\mathbf{Y}\)_, if_ \(E\left(\phi\left(\mathbf{X}\right)\right)\leq E\left(\phi\left(\mathbf{Y}\right)\right)\)_, for all increasing functions_ \(\phi\)_;_ 2. _dynamic multivariate hazard rate order, denoted by_ \(\mathbf{X}\leq_{\text{dyn}-hr}\mathbf{Y}\)_, if_ \[\eta_{k|I\cup J}\left(u|\mathbf{s}_{I\cup J}\right) \geq \lambda_{k|I}\left(u|\mathbf{t}_{I}\right),\text{ for all }k\in\overline{I\cup J},\] _where_ \(I\cap J=\emptyset\)_,_ \(\mathbf{s}_{I}\leq\mathbf{t}_{I}\leq u\mathbf{e}\) _and_ \(\mathbf{s}_{J}\leq u\mathbf{e}\)_;_ 3. _multivariate likelihood ratio order, denoted by_ \(\mathbf{X}\leq_{\text{lr}}\mathbf{Y}\)_, if_ \(f\left(\mathbf{x}\right)g\left(\mathbf{y}\right)\leq f\left(\mathbf{x}\wedge\mathbf{y}\right) g\left(\mathbf{x}\vee\mathbf{y}\right)\)_, for all_ \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}\)_;_ 4. _multivariate dispersive order, denoted by_ \(\mathbf{X}\leq_{\text{disp}}\mathbf{Y}\)_, if_ \(y_{i}\left(\mathbf{u}\right)-x_{i}\left(\mathbf{u}\right)\) _is increasing in_ \((u_{1},\ldots,u_{i})\in(0,1)^{i}\) _for_ \(i=1,\ldots,n\)_._ \(\Box\)__ Like stochastic orders, majorization orders are also quite useful for establishing various inequalities. 
Different majorization orders have been discussed in the literature, and we give below the definitions of some majorization orders that are used in this work. **Definition 2.4**: _Let \(I^{n}\) denote an \(n\)-dimensional Euclidean space, where \(I\subseteq\mathcal{R}\). Further, let \(\mathbf{x}=(x_{1},\ldots,x_{n})\in I^{n}\) and \(\mathbf{y}=(y_{1},\ldots,y_{n})\in I^{n}\) be any two vectors, and \(x_{(1)}\leq\cdots\leq x_{(n)}\) and \(y_{(1)}\leq\cdots\leq y_{(n)}\) be the increasing arrangements of the components of \(\mathbf{x}\) and \(\mathbf{y}\), respectively._ 1. _The vector_ \(\mathbf{x}\) _is said to weakly supermajorize the vector_ \(\mathbf{y}\) _(written as_ \(\mathbf{y}\overset{w}{\preceq}\mathbf{x}\)_) if_ \[\sum_{i=1}^{j}x_{(i)}\leq\sum_{i=1}^{j}y_{(i)},\quad\text{for }j=1,2,\ldots,n;\] 2. _The vector_ \(\mathbf{x}\) _is said to_ \(p\)_-larger than the vector_ \(\mathbf{y}\) _(written as_ \(\mathbf{y}\overset{p}{\preceq}\mathbf{x}\)_) if_ \[\prod_{i=1}^{j}x_{(i)}\leq\prod_{i=1}^{j}y_{(i)},\quad\text{for }j=1,2,\ldots,n;\] 3. _The vector_ \(\mathbf{x}\) _is said to reciprocally majorize the vector_ \(\mathbf{y}\) _(written as_ \(\mathbf{y}\overset{rm}{\preceq}\mathbf{x}\)_) if_ \[\sum_{i=1}^{j}\frac{1}{x_{(i)}}\geq\sum_{i=1}^{j}\frac{1}{y_{(i)}},\quad\text {for }j=1,2,\ldots,n.\] \(\Box\)__ Stochastic ageing concepts are very useful tools for describing how a system ages over time. In the literature, different ageing classes (such as IFR, DFR, DLR, and so on) have been introduced to characterize different ageing properties of a system (see [6]). Below, we give the definitions of some ageing classes that are most pertinent to the ensuing discussions. **Definition 2.5**: _Let \(X\) be an absolutely continuous random variable with nonnegative support. Then, \(X\) is said to have_ 1. _increasing likelihood ratio (ILR) (resp. decreasing likelihood ratio (DLR)) property if_ \(f_{X}^{\prime}(x)/f_{X}(x)\) _is decreasing (resp. increasing) in_ \(x\geq 0;\)__ 2. _increasing failure rate (IFR) (resp. decreasing failure rate (DFR)) property if_ \(r_{X}(x)\) _is increasing (resp. decreasing) in_ \(x\geq 0;\)__ 3. _decreasing reversed failure rate (DRFR) property if_ \(\tilde{r}_{X}(x)\) _is decreasing in_ \(x\geq 0;\)__ ## 3 Ordered random vectors In this section, we give the definition of DSOS and discuss its important special cases. As an extension of the sequential order statistics (SOS), Baratnia and Doostparast [7] introduced the developed sequential order statistics (DSOS), which are useful for modelling the lifetime of a system with dependent components. The definition of DSOS is as follows (see [7, 31]). **Definition 3.1**: _Let \(F_{1},\ldots,F_{n}\) be \(n\) absolutely continuous cumulative distribution functions with \(F_{1}^{-1}(1)\leq\cdots\leq F_{n}^{-1}(1)\). Consider a system of \(n\) components installed at time \(t=0\). Assume that all components of the system are functioning at the starting time. Let \(X_{1}^{(1)},\ldots,X_{n}^{(1)}\) be \(n\) dependent and identical (DID) random variables, with distribution functions \(F_{1}\), representing the lifetimes of \(n\) components. Assume that the dependence structure between these random variables is described by the Archimedean copula with generator \(\phi\). 
Then, the first component failure time is given by_ \[X_{1:n}^{\star}=\min\left\{X_{1}^{(1)},\ldots,X_{n}^{(1)}\right\}.\] _Given \(X_{1:n}^{\star}=t_{1}\), the residual lifetimes of the remaining \((n-1)\) components are equal in distribution to the residual lifetimes of \((n-1)\) DID components with age \(t_{1}\) and with cumulative distribution function \(F_{2}\), (instead of \(F_{1}\)) with the same dependence structure; here, \(F_{2}\) is assumed in place of \(F_{1}\) as the failure of the first component would have an impact on the performance of other components. Let the lifetimes of these DID components be represented by \(X_{1}^{(2)},\ldots,X_{n-1}^{(2)}\). Then, for \(j=1,\ldots,n-1\), \(X_{j}^{(2)}\sim F_{2}(\cdot|t_{1})\), where \(\bar{F}_{2}(x|t_{1})=\bar{F}_{2}(x)/\bar{F}_{2}(t_{1})\), for \(x\geq t_{1}\). Moreover, \(X_{j}^{(2)}\geq t_{1}\), for \(j=1,\ldots,n-1\). Next, the second component failure time is given by_ \[X_{2:n}^{\star}=\min\left\{X_{1}^{(2)},\ldots,X_{n-1}^{(2)}\right\}.\] _By proceeding in this manner, we assume that the \(i\)-th failure occurs at time \(t_{i}\)\((>t_{i-1})\), i.e., \(X_{i:n}^{\star}=t_{i}\). Then, the residual lifetimes of the remaining \((n-i)\) components are equal in distribution to the residual lifetimes of \((n-i)\) DID components with age \(t_{i}\) and with distribution functions \(F_{i+1}\) with the same dependence structure. Let the lifetimes of these DID components be represented by \(X_{1}^{(i+1)},\ldots,X_{n-i}^{(i+1)}\). Then, for \(j=1,\ldots n-i\), \(X_{j}^{(i+1)}\sim F_{i+1}(\cdot|t_{i})\), where \(\bar{F}_{i+1}(x|t_{i})=\bar{F}_{i+1}(x)/\bar{F}_{i+1}(t_{i})\), for \(x\geq t_{i}\). Moreover, note that \(X_{j}^{(i+1)}\geq t_{i}\), for \(j=1,\ldots,n-i\). Then, the \((i+1)\)-th component failure time is given by_ \[X_{i+1:n}^{\star}=\min\left\{X_{1}^{(i+1)},\ldots,X_{n-i}^{(i+1)}\right\}.\] _Finally, if the \((n-1)\)-th component failure occurs at time \(t_{n-1}=X_{n-1:n}^{\star}\), then the last component failure time is given by \(X_{n:n}^{\star}\) with reliability function \(\bar{F}_{n}(x|t_{n-1})=\bar{F}_{n}(x)/\bar{F}_{n}(t_{n-1})\), for \(x\geq t_{n-1}\). Then, the random variables \(X_{1:n}^{\star}\leq\cdots\leq X_{n:n}^{\star}\) are called developed sequential order statistics (DSOS) based on \(F_{1},\ldots,F_{n}\), where the dependence structure is described by the Archimedean copula with generator \(\phi\). In short, we denote them by \((X_{1:n}^{\star},\ldots,X_{n:n}^{\star})\sim\mbox{DSOS($F_{1},\ldots,F_{n}$;$ $ \phi$)}\)._ **Remark 3.1**: _One may note that, if \((X_{1:n}^{\star},\ldots,X_{n:n}^{\star})\sim\mbox{DSOS($F_{1},\ldots,F_{n}$;$ \phi$)}\), then \(\{X_{1:n}^{\star},\ldots,\)\(X_{n:n}^{\star}\}\) forms a Markov chain with transition probabilities_ \[P\left(X_{r:n}^{\star}>t|X_{r-1:n}^{\star}=x\right) = \phi\left(\left(n-r+1\right)\psi\left(\frac{\bar{F}_{r}(t)}{\bar{F }_{r}(x)}\right)\right),\quad t\geq x,\ \bar{F}(x)>0, \tag{3.1}\] _where \(\psi\equiv\phi^{-1}\). \(\Box\)_ Generalized order statistics (GOS), a unified notion of ordered random variables, contain many popular models as particular cases, including sequential order statistics (SOS) under PHR model, order statistics with non-integral sample size, \(k\)-record values, Pfeifer's record values, \(k_{n}\)-records from non-identical distributions, and ordered random variables from truncated distributions. We now give the definition of developed generalized order statistics (DGOS), which is a generalization of GOS (see [18, 22, 24]). 
**Definition 3.2**: _Let \(n\in\mathcal{N}\), \(\gamma_{n,n}=\alpha_{n}=k>0\), \(m_{1},\ldots,m_{n-1}\in\mathcal{R}\), \(M_{i}=\sum_{j=i}^{n-1}m_{j}\), \(1\leq i\leq n-1\), \(\gamma_{i,n}=k+n-i+M_{i}=\left(n-i+1\right)\alpha_{i}>0\), for all \(i=1,\ldots,n-1\), and let \(\tilde{m}=\left(m_{1},\ldots,m_{n-1}\right)\), \(n=2,\ldots,n-1\). The random variables \(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k\right)\) are said to be developed generalized order statistics (DGOS) from an absolutely continuous distribution function \(F\) with probability density function \(f\) and dependence structure described by the Archimedean copula with generator \(\phi\), denoted by \(\left(X\left(1,n,\tilde{m}_{n},k\right),\ \ldots,X\left(n,n,\tilde{m}_{n},k \right)\right)\sim DGOS(F,\gamma_{1,n},\ldots,\gamma_{n,n};\phi)\), if their joint probability density function is given by_ \[f_{X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m} _{n},k\right)}\left(x_{1},\ldots,x_{n}\right) = \prod_{j=1}^{n}\left\{\phi^{\prime}\left(\left(n-j+1\right)\psi \left(\frac{\bar{F}^{\alpha_{j}}\left(x_{j}\right)}{\bar{F}^{\alpha_{j}} \left(x_{j-1}\right)}\right)\right)\right.\] \[\left.\left(n-j+1\right)\alpha_{j}\psi^{\prime}\left(\frac{\bar{F }^{\alpha_{j}}\left(x_{j}\right)}{\bar{F}^{\alpha_{j}}\left(x_{j-1}\right)} \right)\frac{\bar{F}^{\alpha_{j}-1}\left(x_{j}\right)f\left(x_{j}\right)}{ \bar{F}^{\alpha_{j}}\left(x_{j-1}\right)}\right\},\] _where \(0=x_{0}<\cdots<x_{n}\). \(\Box\)_ Like GOS, DGOS also contains many popular models of ordered random variables with dependence structure described by the Archimedean copula, as listed in Table 1. Below, we give a list of models containing DSOS and its particular cases in Table 2. In subsequent sections, we discuss various results for these models. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{4}{|c|}{\(\boldsymbol{\gamma_{r,n}(1\leq r\leq n-1)}\)} & \multicolumn{1}{l|}{\(\boldsymbol{\gamma_{n,n}}\)} & \multicolumn{1}{l|}{ \begin{tabular}{l} **Dependance** \\ **structure (\(\phi\))** \\ \end{tabular} } & \multicolumn{1}{l|}{**DGOS Model**} \\ \hline \(\boldsymbol{n-r+1}\) & \(\boldsymbol{\alpha_{r}}\) & \(\boldsymbol{k}\) & & \\ \hline \(n-r+1\) & \(1\) & \(1\) & \(\phi\) & Ordinary order statistics (OS) \\ \hline \(a-r+1\,:\,a\,\in(0,\infty)\) & \(1\) & \(a-n+1\) & \(\phi\) & OS with non integral sample size [36] \\ \hline \(n-r+1\) & \(\alpha_{r}\) & \(\alpha_{n}\) & \(\phi\) & DSOS under PHR model \\ \hline \(n-r+1\) & \(\alpha_{r}\) & \(k\) & \(\phi(u)=e^{-u}\) & Generalized order statistics (GOS) \\ \hline \(1\) & \(1\) & \(1\) & Not applicable & Record value [15] \\ \hline \(k\) & \(1\) & \(k\) & \(\phi\) & \(k\)-th record value [19] \\ \hline \(1\) & \(\alpha_{r}\) & \(\alpha_{n}\) & Not applicable & Pfeifer’s record value [35] \\ \hline \(k_{r}\) & \(\alpha_{r}\) & \(\alpha_{n}k_{n}\) & \(\phi\) & Ordering via truncation [24] \\ \hline \(\nu-r+1\), if \(1\leq r\leq r_{1}\), \(\nu-n_{1}-r+1\), if \(r_{1}<r\leq n\) & \(1\) & \(\nu-n_{1}-n+1\) & \(\phi\) & Progressively type-II censored order statistics [5] \\ \hline \end{tabular} \end{table} Table 1: Models of ordered random variables and their relations with DGOS (see, [17, 18]). We first present some lemmas that are essential for proving the main results of this paper. 
**Lemma 3.1**: _Let \(\left(X_{1:n}^{\star},\ldots,X_{n:n}^{\star}\right)\sim\) DSOS(\(F_{1},\,\ldots,F_{n};\phi\)), and \(D_{i}\left(\cdot\right)\equiv-\ln\bar{F}_{i}\left(\cdot\right)\) be the cumulative hazard rate function of \(F_{i}\), for \(i=1,\ldots,n\). Then,_ \[X_{1:n}^{\star} =D_{1}^{-1}\left(W^{\left(1\right)}\right), \tag{3.2}\] \[X_{i:n}^{\star} =D_{i}^{-1}\left(W^{\left(i\right)}+D_{i}\left(X_{i-1:n}^{\star} \right)\right),\quad\text{ for }i=2,\ldots,n, \tag{3.3}\] _where_ \[W^{\left(i\right)}=-\ln\left(V^{\left(i\right)}\right)=\min\left\{-\ln\left(1 -U_{1}^{\left(i\right)}\right),\ldots,-\ln\left(1-U_{n-i+1}^{\left(i\right)} \right)\right\},\quad i=1,\ldots,n,\] _and \(U_{j}^{i}\sim Unif(0,1)\), for \(i=1,\ldots,n\), and \(j=1,\ldots,n-i+1\); here, for each \(i\in\{1,\ldots,n\}\), \(U_{j}^{i}\)'s are dependent random variables governed by the Archimedean copula with generator \(\phi\). Moreover, \(\{W^{\left(i\right)},\;i=1,\ldots,n\}\) are independent with survival functions_ \[\bar{F}_{W^{\left(i\right)}}\left(t\right) = \phi\left(\left(n-i+1\right)\psi\left(e^{-t}\right)\right),\quad t >0,i=1,\ldots,n,\;\psi\equiv\phi^{-1}. \tag{3.4}\] The following lemma follows from Remark 3.1 and Lemma 3.1 (see also [18, 22]). **Lemma 3.2**: _Let \(\left(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k \right)\right)\sim\text{DGOS}(F,\gamma_{1,n},\ldots,\gamma_{n,n};\phi\)), and \(D\left(\cdot\right)\equiv-\ln\bar{F}\left(\cdot\right)\) be the cumulative hazard rate function of \(F\). Then,_ \[\left(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k \right)\right)\overset{d}{=}\left(D^{-1}\left(B_{1,n}\right),\ldots,D^{-1} \left(\sum_{j=1}^{n}B_{j,n}\right)\right), \tag{3.5}\] _where_ \[B_{j,n}=\min\left\{-\frac{1}{\alpha_{j}}\ln\left(1-U_{1}^{\left(j\right)} \right),\ldots,-\frac{1}{\alpha_{j}}\ln\left(1-U_{n-j+1}^{\left(j\right)} \right)\right\}=\frac{1}{\alpha_{j}}W^{\left(j\right)},\quad j=1,\ldots,n,\] _and \(U_{j}^{i}\)'s are as given in Lemma 3.1. Moreover, the survival function of \(B_{j,n}\) is_ \[\bar{F}_{B_{j,n}}\left(t\right) = \phi\left(\left(n-j+1\right)\psi\left(e^{-\alpha_{j}t}\right) \right),\quad t>0,j=1,\ldots,n,\;\psi\equiv\phi^{-1}. 
\tag{3.6}\] \begin{table} \begin{tabular}{|l|l|l|} \hline **Condition** & **Notation** & **Model specification** \\ \hline NULL & \(\left(X_{1:n}^{\star},\ldots,X_{n:n}^{\star}\right)\) & \(\sim\) DSOS(\(F_{1},F_{2}\) & DSOS \\ \(\ldots,F_{n};\phi\)) & & \\ \hline \(\phi(u)=e^{-u},\;u>0\) & \(\left(X_{1:n}^{\star},\ldots,X_{n:n}^{\star}\right)\sim\) SOS(\(F_{1},F_{2}\,\ldots,F_{n}\)) & SOS \\ \hline \(F_{i}\sim\)PHR(\(F,\alpha_{i}\)), for \(i=1,\ldots,n\) & \(\left(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k \right)\right)\sim\) DGOS(\(F,\gamma_{1,n},\ldots,\gamma_{n,n};\phi\)) & GOS with dependent components \\ \hline \(F_{i}\sim\)PHR(\(F,\alpha_{i}\)), for \(i=1,\ldots,n\), and \(\phi(u)=e^{-u},\;u>0\) & \(\left(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k \right)\right)\sim\) GOS(\(F,\gamma_{1,n},\ldots,\gamma_{n,n}\)) & GOS with independent components \\ \hline \(F_{i}=F\), for all \(i=1,\ldots,n\) & \(\left(X_{1:n}^{\star},\ldots,X_{n:n}^{\star}\right)\sim\) OS(\(F;\phi\)) & OS with DID components \\ \hline \end{tabular} \end{table} Table 2: Models of ordered random variables obtained from DSOS Comparing random vectors from DSOS model with identical components In this section, we establish some stochastic comparison results for random vectors with DSOS model in both one-and two-sample situations. ### One-sample situation In the following three theorems, we compare two random vectors from DSOS model with respect to the usual multivariate stochastic and dynamic multivariate hazard rate orders. We provide the results in light of the assumptions made on the underlying distribution functions upon which the DSOS models are built. We only give the proof of Theorem 4.3 and the proofs of Theorems 4.1 and 4.2 are omitted for the sake of brevity. **Theorem 4.1**: _Let \((X_{1:n}^{\star},\ldots,X_{n:n}^{\star})\sim\mbox{DSOS($F_{1},\ldots,F_{n}$;$ $\phi$) and $(X_{1:n+1}^{\star},\ldots,X_{n+1:n+1}^{\star})\sim\mbox{DSOS($F_{1},\ldots,F_{ n+1}$;$ $\phi$)}\). Then, the following results hold true:_ * \(\left(X_{1:n+1}^{\star},\ldots,X_{n:n+1}^{\star}\right)\ \leq_{st}\ (X_{1:n}^{\star},\ldots,X_{n:n}^{\star})\)_;_ * _Suppose_ \(uR^{\prime}(u)/R(u)\) _is positive and increasing in_ \(u>0\)_. Then,_ \((X_{1:n+1}^{\star},\ldots,X_{n:n+1}^{\star})\)__\(\leq_{dyn-hr}\ (X_{1:n}^{\star},\ldots,X_{n:n}^{\star})\)_._ **Theorem 4.2**: _Let \((X_{1:n}^{\star},\ldots,X_{n:n}^{\star})\sim\mbox{DSOS($F_{1},\ldots,F_{n}$;$ $\phi$)}\). Then, the following results hold true:_ * _If_ \(F_{2}\leq_{hr}\cdots\leq_{hr}F_{n}\)_, then_ \(\left(X_{1:n}^{\star},\ldots,X_{n-1:n}^{\star}\right)\ \leq_{st}\ (X_{2:n}^{\star},X_{3:n}^{\star},\ldots,X_{n:n}^{\star})\)_;_ * _Suppose_ \(uR^{\prime}(u)/R(u)\) _is positive and increasing in_ \(u>0\)_. If_ \(F_{1}\leq_{hr}\cdots\leq_{hr}F_{n}\)_, then_ \(\left(X_{1:n}^{\star},\ldots,X_{n-1:n}^{\star}\right)\ \leq_{dyn-hr}\ (X_{2:n}^{ \star},\ldots,X_{n:n}^{\star})\)_._ **Theorem 4.3**: _Let \((X_{1:n}^{\star},\ldots,X_{n:n}^{\star})\sim\mbox{DSOS($F_{1},\ldots,F_{n}$;$ $\phi$) and $(X_{1:n}^{\star},\ldots,X_{n+1:n+1}^{\star})\sim\mbox{DSOS($F_{1}$, $\ldots,F_{n+1}$;$ $\phi$)}$}\). Then, the following results hold true:_ * _If_ \(F_{1}\leq_{st}F_{2}\leq_{hr}\cdots\leq_{hr}F_{n+1}\)_, then_ \((X_{1:n}^{\star},\ldots,X_{n:n}^{\star})\ \leq_{st}\ \left(X_{2:n+1}^{\star},\ldots,X_{n+1:n+1}^{ \star}\right)\)_;_ * _Suppose_ \(uR^{\prime}(u)/R(u)\) _is increasing in_ \(u>0\)_. 
If_ \(F_{1}\leq_{hr}\cdots\leq_{hr}F_{n+1}\)_, then_ \((X_{1:n}^{\star},\ldots,X_{n:n}^{\star})\)__\(\leq_{dyn-hr}\ \left(X_{2:n+1}^{\star},\ldots,X_{n+1:n+1}^{\star}\right)\)_._ ### Two-sample situation In the following theorem, we compare two random vectors with DSOS model that are formed from two different samples, with respect to the usual multivariate stochastic, dynamic multivariate hazard rate and multivariate dispersive orders. The proof of the second part follows along the same lines as those of the first part and is therefore omitted. **Theorem 4.4**: _Let \((X_{1:n}^{\star},\ldots,X_{n:n}^{\star})\sim\mbox{DSOS($F_{1},\ldots,F_{n}$;$ $\phi$) and $(Z_{1:n}^{\star},\ldots,Z_{n:n}^{\star})\sim\mbox{DSOS($G_{1}$, $ \ldots,G_{n}$;$ $\phi$)}$}\). Then, the following results hold true:_ * _Suppose_ \(uR^{\prime}(u)/R(u)\) _is increasing in_ \(u>0\)_. If_ \(F_{1}\leq_{st}G_{1}\) _and_ \(F_{i}\leq_{hr}G_{i}\)_,_ \(i=2,\ldots,n\)_, then_ \((X_{1:n}^{\star},\ldots,X_{n:n}^{\star})\ \leq_{st}\ (Z_{1:n}^{\star},\ldots,Z_{n:n}^{\star})\)_;_ * _Suppose_ \(uR^{\prime}(u)/R(u)\) _is increasing in_ \(u>0\)_. If_ \(F_{i}\leq_{hr}G_{i}\)_,_ \(i=1,\ldots,n\)_, then_ \((X_{1:n}^{\star},\ldots,X_{n:n}^{\star})\)__\(\leq_{dyn-hr}\ (Z_{1:n}^{\star},\ldots,Z_{n:n}^{\star})\)_;_ * _Let_ \(F_{1}\stackrel{{ d}}{{=}}\ldots\stackrel{{ d}}{{=}}F_{n} \stackrel{{ d}}{{=}}F\) _and_ \(G_{1}\stackrel{{ d}}{{=}}\ldots\stackrel{{ d}}{{=}}G_{n} \stackrel{{ d}}{{=}}G\)_. If_ \(F\leq_{\mbox{disp}}G\)_, then_ \((X_{1:n}^{\star},\ldots,X_{n:n}^{\star})\ \leq_{disp}(Z_{1:n}^{\star},\ldots,Z_{n:n}^{\star})\)_._ Comparing random vectors from DGOS model with identical components In this section, we establish some stochastic comparisons of random vectors with DGOS model when the underlying components are identical. ### One-sample situation In the following theorem, we compare random vectors with DGOS model with respect to the multivariate dispersive order. **Theorem 5.1**: _Let \(\left(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k \right)\right)\sim\) DGOS(\(F,\gamma_{1,n},\ldots,\)\(\gamma_{n,n};\phi\)) be such that \(F\) is DFR, \(\tilde{m}_{n+1}=\left(\tilde{m}_{n},m_{n}\right)\), for \(n\in\mathcal{N}\), and \(m_{i}+1\geq 0\) for each \(i\). Then, the following results hold true:_ * _Suppose_ \(R(u)\) _is decreasing in_ \(u>0\)_. If_ \(m_{n}\leq\min\left\{m_{1},\ldots,m_{n-1}\right\}\)_, then_ \(\left(0,X\left(1,n,\tilde{m}_{n},k\right),\right.\)__\(\ldots,X\left(n-1,n,\tilde{m}_{n},k\right)\)__\(\leq_{\text{disp}}\)__\(\left(X(1,n,\tilde{m}_{n},k),\ldots,\)__\(X(n,n,\tilde{m}_{n},k)\right)\)_;_ * _Suppose_ \(R(u)\) _is decreasing in_ \(u>0\)_. Then,_ \(\left(X\left(1,n+1,\tilde{m}_{n+1},k\right),\ldots,\)__\(X\left(n,n+1,\tilde{m}_{n+1},k\right)\right)\)__\(\leq_{\text{disp}}\)__\(\left(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k\right)\right)\)_;_ * _If_ \(m_{n}\leq\min\left\{m_{1},\ldots,m_{n-1}\right\}\)_, then_ \(\left(0,X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k \right)\right)\leq_{\text{disp}}\)__\(\left(X\left(1,n+1,\tilde{m}_{n},k\right),\ldots,X\left(n+1,n+1,\tilde{m}_{n+1},k \right)\right)\)_._ \(\Box\)__ In the following theorem, we discuss multivariate dispersive ordering for multivariate marginals from the DGOS model. 
**Theorem 5.2**: _Let \(\left(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k \right)\right)\sim\) DGOS(\(F,\gamma_{1,n},\)\(\ldots,\gamma_{n,n};\phi\)) be such that \(F\) is DFR, \(\tilde{m}_{n+1}=\left(\tilde{m}_{n},m_{n}\right)\), for \(n\in\mathcal{N}\), and \(m_{i}+1\geq 0\) for each \(i\). Further, let \(1\leq p_{1}<\cdots<p_{i}\leq n\), and \(1\leq q_{1}<\cdots<q_{i}\leq n\) be such that \(q_{i}-p_{i}\geq\cdots\geq q_{1}-p_{1}\geq 0\). Suppose \(G(nu)/R(u)-G(u)/R(u)\) is positive and increasing in \(u>0\). Then, the following results hold true:_ * _If_ \(R(u)\) _is decreasing in_ \(u>0\) _and_ \(m_{n}\leq\min\left\{m_{1},\ldots,m_{n-1}\right\}\)_, then_ \(\left(X(p_{1},n,\tilde{m}_{n},k),\ldots,\)__\(X(p_{i},n,\tilde{m}_{n},k)\right)\)__\(\leq_{\text{disp}}\)__\(\left(X(q_{1},n,\tilde{m}_{n},k),\ldots,X(q_{i},n,\tilde{m}_{n},k)\right)\)_;_ * _If_ \(R(u)\) _is decreasing in_ \(u>0\) _and_ \(m_{n}\leq\min\left\{m_{1},\ldots,m_{n-1}\right\}\)_, then_ \(\left(X\left(p_{1},n+1,\tilde{m}_{n+1},k\right),\right.\)__\(\ldots,X\left(p_{i},n+1,\tilde{m}_{n+1},k\right)\)__\(\left.\leq_{\text{disp}}\)__\(\left(X\left(q_{1},n,\tilde{m}_{n},k\right),\ldots,X\left(q_{i},n,\tilde{m}_{n},k\right)\right)\)_;_ * _If_ \(m_{n}\leq\min\left\{m_{1},\ldots,m_{n-1}\right\}\)_, then_ \(\left(X\left(p_{1},n,\tilde{m}_{n},k\right),\ldots,X\left(p_{i},n,\tilde{m}_{n}, k\right)\right)\)__\(\leq_{\text{disp}}\)__\(\left(X\left(q_{1},n+1,\tilde{m}_{n+1},k\right),\ldots,X\left(q_{i},n+1,\tilde{m}_{n+1},k \right)\right)\)_._ \(\Box\)__ In the following theorems, we present some univariate results. We compare two DGOS models with respect to the usual stochastic, hazard rate, reverse hazard rate, likelihood ratio and dispersive orders. Theorem 5.3(a) is trivially true, while parts (b) and (c) of Theorem 5.3 follow from Theorem 4.1(a) and Theorem 4.3(a), respectively. For the sake of brevity, the proofs of Theorems 5.4(a) and (c), 5.5(a) and (b), 5.6(a) and (c), and 5.7 (a) and (c) are omitted. **Theorem 5.3**: _Let \(\left(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k\right) \right)\sim\) DGOS(\(F,\gamma_{1,n},\ldots,\)\(\gamma_{n,n};\phi\)), \(\tilde{m}_{n+1}=\left(\tilde{m}_{n},m_{n}\right)\), for \(n\in\mathcal{N}\), and \(m_{i}+1\geq 0\) for each \(i\). Then, the following results hold true:_ * \(X\left(i,n,\tilde{m}_{n},k\right)\)__\(\leq_{st}\)__\(X\left(i+1,n,\tilde{m}_{n},k\right)\)_, for_ \(i=1,\ldots,n-1\)_;_ * \(X\left(i,n+1,\tilde{m}_{n+1},k\right)\)__\(\leq_{st}\)__\(X\left(i,n,\tilde{m}_{n},k\right)\)_, for_ \(i=1,\ldots,n\)_;_ * _If_ \(m_{n}\leq\min\left\{m_{1},\ldots,m_{n-1}\right\}\)_, then_ \(X\left(i,n,\tilde{m}_{n},k\right)\)__\(\leq_{st}\)__\(X\left(i+1,n+1,\tilde{m}_{n+1},k\right)\)_, for_ \(i=1,\ldots,n\)_._ **Theorem 5.4**: _Let \(\left(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k\right) \right)\sim\) DGOS(\(F,\gamma_{1,n},\ldots,\)\(\gamma_{n,n};\phi\)), \(\tilde{m}_{n+1}=\left(\tilde{m}_{n},m_{n}\right)\), for \(n\in\mathcal{N}\), and \(m_{i}+1\geq 0\) for each \(i\). Then, the following results hold true:_ * _Suppose_ \(uR^{\prime}(u)/R(u)\) _is increasing in_ \(u>0\)_. Then,_ \(X\left(i,n,\tilde{m}_{n},k\right)\ \leq_{hr}\ X\left(i+1,n,\tilde{m}_{n},k\right)\)_, for_ \(i=1,\ldots,n-1\)_;_ * _Suppose_ \(uR^{\prime}(u)/R(u)\) _is increasing and positive in_ \(u>0\)_. 
Then,_ \(X\left(i,n+1,\tilde{m}_{n+1},k\right)\ \leq_{hr}\ X\left(i,n,\tilde{m}_{n},k\right)\)_, for_ \(i=1,\ldots,n\)_;_ * _Suppose_ \(uR^{\prime}(u)/R(u)\) _is increasing in_ \(u>0\)_. If_ \(m_{n}\leq\min\left\{m_{1},\ldots,m_{n-1}\right\}\)_, then_ \(X\left(i,n,\tilde{m}_{n},k\right)\)__\(\leq_{hr}\ X\left(i+1,n+1,\tilde{m}_{n+1},k\right)\)_, for_ \(i=1,\ldots,n\)_._ **Theorem 5.5**: _Let \(\left(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k \right)\right)\sim\) DGOS(\(F,\gamma_{1,n},\ldots,\)\(\gamma_{n,n};\phi\)) and \(\left(\tilde{m}_{n},m_{n}\right)\), for \(n\in\mathcal{N}\), and \(m_{i}+1\geq 0\) for each \(i\). Then, the following results hold true:_ * _Suppose_ \(uH^{\prime}(u)/H(u)\) _is decreasing in_ \(u>0\)_. Then,_ \(X\left(i,n,\tilde{m}_{n},k\right)\ \leq_{rh}\ X\left(i+1,n,\tilde{m}_{n},k\right)\)_, for_ \(i=1,\ldots,n-1\)_;_ * _Suppose_ \(uH^{\prime}(u)/H(u)\) _is decreasing and negative in_ \(u>0\)_. Then,_ \(X\left(i,n+1,\tilde{m}_{n+1},k\right)\)__\(\leq_{rh}\ X\left(i,n,\tilde{m}_{n},k\right)\)_, for_ \(i=1,\ldots,n\)_;_ * _Suppose_ \(uH^{\prime}(u)/H(u)\) _is decreasing in_ \(u>0\)_. If_ \(m_{n}\leq\min\left\{m_{1},\ldots,m_{n-1}\right\}\)_, then_ \(X\left(i,n,\tilde{m}_{n},k\right)\)__\(\leq_{rh}\ X\left(i+1,n+1,\tilde{m}_{n+1},k\right)\)_, for_ \(i=1,\ldots,n\)_._ **Theorem 5.6**: _Let \(\left(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k \right)\right)\sim\) DGOS(\(F,\gamma_{1,n},\ldots,\)\(\gamma_{n,n};\phi\)), \(\tilde{m}_{n+1}=\left(\tilde{m}_{n},m_{n}\right)\), for \(n\in\mathcal{N}\), and \(m_{i}+1\geq 0\) for each \(i\). Suppose \(G(nu)/R(u)-G(u)/R(u)\) is positive and increasing in \(u>0.\)Then, the following results hold true:_ * \(X\left(i,n,\tilde{m}_{n},k\right)\ \leq_{lr}\ X\left(i+1,n,\tilde{m}_{n},k\right)\)_, for_ \(i=1,\ldots,n-1\)_;_ * \(X\left(i,n+1,\tilde{m}_{n+1},k\right)\ \leq_{lr}\ X\left(i,n,\tilde{m}_{n},k\right)\)_, for_ \(i=1,\ldots,n\)_;_ * _If_ \(m_{n}\leq\min\left\{m_{1},\ldots,m_{n-1}\right\}\)_, then_ \(X\left(i,n,\tilde{m}_{n},k\right)\ \leq_{lr}\ X\left(i+1,n+1,\tilde{m}_{n+1},k\right)\)_, for_ \(i=1,\ldots,n\)_._ **Theorem 5.7**: _Let \(\left(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k \right)\right)\sim\) DGOS(\(F,\gamma_{1,n},\ldots,\)\(\gamma_{n,n};\phi\)), \(\tilde{m}_{n+1}=\left(\tilde{m}_{n},m_{n}\right)\), for \(n\in\mathcal{N}\), and \(m_{i}+1\geq 0\) for each \(i\). Suppose \(G(nu)/R(u)-G(u)/R(u)\) is positive and increasing in \(u>0.\)Then, the following results hold true:_ * \(X\left(i,n,\tilde{m}_{n},k\right)\ \leq_{disp}\ X\left(i+1,n,\tilde{m}_{n},k\right)\)_, for_ \(i=1,\ldots,n-1\)_;_ * _If_ \(R(u)\) _is decreasing in_ \(u>0\)_, then_ \(X\left(i,n+1,\tilde{m}_{n+1},k\right)\ \leq_{disp}\ X\left(i,n,\tilde{m}_{n},k\right)\)_, for_ \(i=1,\ldots,n\)_;_ * _If_ \(m_{n}\leq\min\left\{m_{1},\ldots,m_{n-1}\right\}\)_, then_ \(X\left(i,n,\tilde{m}_{n},k\right)\ \leq_{disp}\ X\left(i+1,n+1,\tilde{m}_{n+1},k\right)\)_, for_ \(i=1,\ldots,n\)_._ ### Two-sample situation We first need the following lemma for proving the main results here. **Lemma 5.1**: _Let \(\left(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k \right)\right)\sim\) DGOS(\(F,\gamma_{1,n},\ldots,\)\(\gamma_{n,n};\phi\)) and \(\left(Y\left(1,n,\tilde{m}_{n},k\right),\ldots,Y\left(n,n,\tilde{m}_{n},k \right)\right)\sim\) DGOS(\(G,\gamma_{1,n},\ldots,\gamma_{n,n};\phi\)), where \(\gamma_{i}=\left(n-i+1\right)\alpha_{i}\). 
Similarly, let \(\left(X\left(1,n^{\prime},\tilde{m}^{\prime}_{n^{\prime}},k\right),\ldots,X \left(n^{\prime},n^{\prime},\tilde{m}^{\prime}_{n^{\prime}},k\right)\right)\sim\) DGOS(\(F,\gamma^{\prime}_{1,n},\ldots,\gamma^{\prime}_{n,n};\phi\)) and \(\left(Y\left(1,n^{\prime},\tilde{m}^{\prime}_{n^{\prime\prime}},k\right),\ldots,Y \left(n^{\prime},n^{\prime},\tilde{m}^{\prime}_{n^{\prime}},k\right)\right)\sim\) DGOS(\(G,\gamma^{\prime}_{1,n},\ldots,\gamma^{\prime}_{n,n};\phi\)), where \(\gamma^{\prime}_{i}=\left(n-i+1\right)\beta_{i}\). Suppose \(G(nu)/R(u)-G(u)/R(u)\) is positive and increasing in \(u>0\). If \(\gamma^{\prime}_{1}\leq\gamma_{1}\) and \(X\left(1,n,\tilde{m}_{n},k\right)\leq_{icx}Y\left(1,n,\tilde{m}_{n},k\right)\), then \(X\left(1,n^{\prime},\tilde{m}^{\prime}_{n^{\prime}},k\right)\leq_{icx}Y\left(1,n ^{\prime},\tilde{m}^{\prime}_{n^{\prime}},k\right)\). \(\Box\)_ In the following theorem, two DGOS models are compared with respect to increasing convex order. By using the above lemma, the proof of this theorem can be presented along the same lines as those in Theorem 3.11 of Balakrishnan et al. [3] and is therefore omitted. **Theorem 5.8**: _Let \(\left(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k\right) \right)\sim\) DGOS(\(F,\gamma_{1,n},\ldots,\)\(\gamma_{n,n};\phi\)) and \(\left(Y\left(1,n,\tilde{m}_{n},k\right),\ldots,Y\left(n,n,\tilde{m}_{n},k \right)\right)\sim\) DGOS(\(G,\gamma_{1,n},\ldots,\)\(\gamma_{n,n};\phi\)) with \(m_{i}+1\geq 0\), for each \(i\). Suppose \(G(nu)/R(u)-G(u)/R(u)\) is positive and increasing in \(u>0\). If \(X\left(1,n,\tilde{m}_{n},k\right)\leq_{icx}Y\left(1,n,\tilde{m}_{n},k\right)\), then \(X\left(i,n,\tilde{m}_{n},k\right)\leq_{icx}Y\left(i,n,\tilde{m}_{n},k\right)\), \(i=2,\ldots,n\). \(\Box\)_ In the following theorem, we compare two GOS model with respect to hazard rate, likelihood ratio, dispersive, mean residual life and increasing convex orders. We only give the proof of part (e) while proofs of other parts can be done in the same way. **Theorem 5.9**: _Let \(\left(X\left(1,n,\tilde{m}_{n},k\right),\ldots,X\left(n,n,\tilde{m}_{n},k \right)\right)\sim\) GOS(\(F,\gamma_{1,n},\ldots,\gamma_{n,n}\)) and \(\left(X\left(1,n^{\prime},\tilde{m}^{\prime}_{n^{\prime}},k^{\prime}\right), \ldots,X\left(n^{\prime},n^{\prime},\tilde{m}^{\prime}_{n^{\prime}},k^{\prime }\right)\right)\sim\) GOS(\(F,\gamma^{\prime}_{1,n},\ldots,\gamma^{\prime}_{n,n}\)). Let \(i\in\{1,2,\ldots,n\}\). 
Then, the following results hold true:_ * _If_ \(\left(\gamma_{1},\ldots,\gamma_{i}\right)\stackrel{{ p}}{{\preceq}} \left(\gamma^{\prime}_{1},\ldots,\gamma^{\prime}_{i}\right)\)_, then_ \(X\left(i,n,\tilde{m}_{n},k\right)\leq_{hr}X\left(i,n,\tilde{m}^{\prime}_{n^{ \prime}},k^{\prime}\right)\)_;_ * _If_ \(\left(\gamma_{1},\ldots,\gamma_{i}\right)\stackrel{{ w}}{{\preceq}} \left(\gamma^{\prime}_{1},\ldots,\gamma^{\prime}_{i}\right)\)_, then_ \(X\left(i,n,\tilde{m}_{n},k\right)\leq_{lr}X\left(i,n,\tilde{m}^{\prime}_{n^{ \prime}},k^{\prime}\right)\)_;_ * _If_ \(F\) _is DFR and_ \(\left(\gamma_{1},\ldots,\gamma_{i}\right)\stackrel{{ pr}}{{\preceq}} \left(\gamma^{\prime}_{1},\ldots,\gamma^{\prime}_{i}\right)\)_, then_ \(X\left(i,n,\tilde{m}_{n},k\right)\leq_{mrl}X\left(i,n,\tilde{m}^{\prime}_{n^{ \prime}},k^{\prime}\right)\)_;_ * _If_ \(F\) _is DFR and_ \(\left(\gamma_{1},\ldots,\gamma_{i}\right)\stackrel{{ rm}}{{\preceq}} \left(\gamma^{\prime}_{1},\ldots,\gamma^{\prime}_{i}\right)\)_, then_ \(X\left(i,n,\tilde{m}_{n},k\right)\leq_{icx}X\left(i,n,\tilde{m}^{\prime}_{n^{ \prime}},k^{\prime}\right)\)_._ ## 6 Examples In this section, we discuss some examples to demonstrate the sufficient conditions given in theorems of the previous sections. Note that these sufficient conditions are satisfied by many popular Archimedean copulas (with specific choices of parameters) that capture both positive and negative dependence structures. For the sake of completeness, below we give three examples. Some more examples can be found in [37, 38]. The following example demonstrates the condition " \(uR^{\prime}(u)/R(u)\) is positive and increasing in \(u>0\)" **Example 6.1**: _Consider the Archimedean copula with generator_ \[\phi(u) = e^{\frac{1}{\theta_{1}}\left(1-e^{u}\right)},\quad\theta_{1} \in\left(0,1\right],\;u>0,\] _which gives_ \[\frac{uR^{\prime}(u)}{R(u)} = 1+u,\quad u>0.\] _It can be easily shown that \(uR^{\prime}(u)/R(u)\) is positive and increasing in \(u>0\). Thus, the required condition is satisfied. \(\Box\)_ Below we give an example that illustrates the condition " \(uH^{\prime}(u)/H(u)\) is negative and decreasing in \(u>0\)". **Example 6.2**: _Consider the Archimedean copula with generator_ \[\phi(u) = 1-\left(1-e^{-u}\right)^{\frac{1}{\theta_{2}}},\quad\theta_{2} \in\left[1,\infty\right)\;u>0,\] _which gives_ \[\frac{uH^{\prime}(u)}{H(u)} = -\frac{1+e^{u}(u-1)}{e^{u}-1},\quad u>0.\] _It can be easily shown that \(uH^{\prime}(u)/H(u)\) is negative and decreasing in \(u>0\). Thus, the required condition is satisfied. \(\Box\)_ The following example illustrates the condition "\(G(nu)/R(u)-G(u)/R(u)\) is positive and increasing in \(u>0\)". **Example 6.3**: _Consider the Archimedean copula with generator_ \[\phi(u) = e^{1-(1+u)^{\frac{1}{\theta_{3}}}},\quad\theta_{3}\in(0,\infty) \;\,u>0,\] _which gives_ \[G(u)=-\frac{1}{\theta_{3}}u\left(1+u\right)^{\frac{1}{\theta_{3} }-1}+u\left(1+u\right)^{-1}\left(\frac{1}{\theta_{3}}-1\right),\quad u>0,\] \[R(u)=-\frac{1}{\theta_{3}}u\left(1+u\right)^{\frac{1}{\theta_{3} }-1},\quad u>0,\] _and_ \[\frac{G(u)}{R(u)} = 1-\frac{1-\theta_{3}}{(1+u)^{\frac{1}{\theta_{3}}}},\quad u>0.\] _Let us fix \(\theta_{3}=0.4,0.5\) and \(0.6\). It can be easily shown that \(uG^{\prime}(u)/G(u)\) and \(G(u)/R(u)\) are increasing in \(u>0\). 
Consequently, from Remark 3.1(a) of Sahoo and Hazra [37], we have that \(G(nu)/R(u)-G(u)/R(u)\) is positive and increasing in \(u>0\)._

## 7 Concluding remarks

There are several models of ordered random variables/vectors that arise naturally in practice, such as ordinary order statistics, order statistics with non-integral sample size, \(k\)-record values, Pfeifer's records, \(k_{n}\)-records from non-identical distributions, ordered random variables from truncated distributions, progressively type-II censored order statistics, and so on. The generalized order statistics (GOS) and sequential order statistics (SOS) are two general models that contain all the aforementioned models as particular cases. However, these two models are defined under the assumption that the underlying random variables are independent. As a generalization, we consider here the developed generalized order statistics (DGOS) and developed sequential order statistics (DSOS) models, which capture the dependence structure between the underlying random variables. We then establish various univariate and multivariate ordering properties of DSOS and DGOS, wherein the dependence structure between the underlying random variables is described by an Archimedean copula. The results established here generalize many results known in the literature for the GOS and SOS models with identical components. The main focus of this paper is on models of ordered random vectors whose dependence structure is described by an Archimedean copula. The family of Archimedean copulas is popular due to its flexibility and its ability to describe a wide range of dependence. Hence, the study of DSOS and DGOS models governed by an Archimedean copula, for the purpose of comparing the ordered random variables involved, is naturally of great interest.

## Acknowledgments

The first author sincerely acknowledges the financial support received from UGC, Govt. of India, while the work of the second author was supported by IIT Jodhpur, India.
2305.14128
Dr.ICL: Demonstration-Retrieved In-context Learning
In-context learning (ICL), teaching a large language model (LLM) to perform a task with few-shot demonstrations rather than adjusting the model parameters, has emerged as a strong paradigm for using LLMs. While early studies primarily used a fixed or random set of demonstrations for all test queries, recent research suggests that retrieving semantically similar demonstrations to the input from a pool of available demonstrations results in better performance. This work expands the applicability of retrieval-based ICL approaches by demonstrating that even simple word-overlap similarity measures such as BM25 outperform randomly selected demonstrations. Furthermore, we extend the success of retrieval-based ICL to instruction-finetuned LLMs as well as Chain-of-Thought (CoT) prompting. For instruction-finetuned LLMs, we find that although a model has already seen the training data at training time, retrieving demonstrations from the training data at test time yields better results compared to using no demonstrations or random demonstrations. Last but not least, we train a task-specific demonstration retriever that outperforms off-the-shelf retrievers.
Man Luo, Xin Xu, Zhuyun Dai, Panupong Pasupat, Mehran Kazemi, Chitta Baral, Vaiva Imbrasaite, Vincent Y Zhao
2023-05-23T14:55:25Z
http://arxiv.org/abs/2305.14128v1
# Dr.ICL: Demonstration-Retrieved In-context Learning ###### Abstract In-context learning (ICL), teaching a large language model (LLM) to perform a task with few-shot demonstrations rather than adjusting the model parameters, has emerged as a strong paradigm for using LLMs. While early studies primarily used a fixed or random set of demonstrations for all test queries, recent research suggests that retrieving semantically similar demonstrations to the input from a pool of available demonstrations results in better performance. This work expands the applicability of retrieval-based ICL approaches by demonstrating that even simple word-overlap similarity measures such as BM25 outperform randomly selected demonstrations. Furthermore, we extend the success of retrieval-based ICL to instruction-finetuned LLMs as well as Chain-of-Thought (CoT) prompting. For instruction-finetuned LLMs, we find that although a model has already seen the training data at training time, retrieving demonstrations from the training data at test time yields better results compared to using no demonstrations or random demonstrations. Last but not least, we train a task-specific demonstration retriever that outperforms off-the-shelf retrievers. ## 1 Introduction Language models are now the foundation models for many natural language processing tasks across a wide range of domains (Bommasani et al., 2021). One of the most exciting emergent abilities (Wei et al., 2022) of large language models (LLMs) is in-context learning (ICL) (Brown et al., 2020; Mishra et al., 2022). With ICL, instructions and a few demonstrative examples are augmented to the inputs to LLMs, allowing them to perform well on new tasks without the need for fine-tuning. Typically, ICL approaches utilize random or hand-crafted demonstrations that are applied across various queries. This may, however, not always be optimal. Recent research has revealed that using demonstrations semantically similar to the input query can enhance performance (Liu et al., 2022). In this work, we investigate two off-the-shelf retrievers, BM25 (Robertson et al., 2009) and GTR (Ni et al., 2021), where BM25 is a sparse retriever that finds demonstrations with the highest (weighted) word overlap with the query, while GTR is a dense retriever that seeks demonstrations semantically closest to the query. Then, we utilize them to obtain query-specific demonstrations, and study demonstration-retrieved ICL (Dr. ICL) with a general and an instruction-finetuned LLM. Beyond previous work, several interesting findings are discovered through our experiments as shown in Figure 1. Firstly, we establish that both BM25 and GTR can find more effective demonstrations than random demonstrations in both one-shot and few-shot ICL settings. Such off-the-shelf retrievers make Dr. ICL an appealing paradigm for real-world applications. Secondly, our results with an instruction-finetuned LLM, i.e., FlanPaLM (Chung et al., 2022), indicate that training data can be useful not only for training models but for accompanying a retriever for testing, suggesting a more efficient way to utilizing training data which are expensive to collect. Lastly, by combining with an advanced prompting technique, Chain-of Figure 1: The average performance of PaLM and FlanPaLM on five datasets, with one and few-shot ICL. Retrieved demonstrations given by either BM25 or GTR yield better performance than random demonstrations. 
Thought (CoT) (Wang et al., 2022), demonstration-retrieved proves to be more effective than relying solely on CoT. This suggests that Dr. ICL can boost the performance of powerful prompt engineering techniques. Next, we aim to go beyond off-the-shelf retrievers which are often geared towards question answering or information retrieval tasks thus the retrieved demonstrations might capture query-specific knowledge required to answer the query. However, the retrieved demonstrations given by the off-the-shelf retrievers might not represent the nature of the task and how the task should be solved in general. Consider, for example, the query "In a barn are chickens and rabbits with 35 heads and 94 legs total. How many chickens and rabbits are there?". Off-the-shelf retrievers may mostly provide information about the animals in the question and their properties such as number of heads and legs (i.e. query-specific knowledge), but may not provide enough similar linear algebra questions (i.e. information about the nature of the task). Therefore, we develop a demonstration retriever that is tailored to retrieving representative demonstrations. Figure 2 showcases the process of training the demonstration retriever: we first create a demonstration retrieval training set using signals from a language model. Concretely, we use an off-the-shelf retriever to find demonstration candidates for a given input question, prepend them to the question, and then obtain probabilities from the language model to re-rank the candidates. We then use the top-\(n\) and bottom-\(n\) candidates as positive and hard-negative examples, respectively, to construct a training set and train the retriever to identify the best demonstration example for a given query. Experimental results show that the demonstration retriever outperforms off-the-shelf retrievers, with more noticeable improvement in one-shot ICL. This encouraging result indicates that the trained retriever could offer an effective substitute for off-the-shelf models. ## 2 Related Work ### Few-shot In-context Learning Few-shot in-context learning (ICL) is a technique that allows language models, such as GPT-3 (Brown et al., 2020) and PaLM (Chowdhery et al., 2022), to generalize to new tasks based on a small number of examples. ICL offers several advantages over the traditional training approach of language models, which involves pre-training followed by fine-tuning. One key benefit is that fine-tuning may not always be feasible due to restricted access to the LLM or inadequate computational resources (Brown et al., 2020). Additionally, ICL avoids the issues commonly associated with fine-tuning, such as overfitting or shocks (Ying, 2019; Kazemi et al., 2023), as it does not modify the model's parameters, allowing it to remain general. However, the effectiveness of ICL varies depending on various factors, such as the order of the demonstrations (Kumar and Talukdar, 2021), the distribution of the demonstrations (Min et al., 2022), and the complexity and quality of the prompts themselves (Zhao et al., 2021; Arora et al., 2022). Some research has shown that lower perplexity prompts (Gonen et al., 2022) and open-ended question-answer formats (Arora et al., 2022) tend to lead to better performance, while others have found that intermediate reasoning steps (Wei et al., 2022) and higher complexity prompts (Fu et al., 2022) can also improve results on certain tasks (Suzgun et al., 2022; Wang et al., 2022). 
In an effort to understand how ICL works, studies have suggested that ICL may involve implicit Bayesian inference (Xie et al., 2021) and a symbiotic relationship between text and patterns (Madaan and Yazdanbakhsh, 2022), and can behave similarly to explicit fine-tuning (Dai et al., 2022). Our work focuses on the effect of demonstrations for ICL with large language models.

### Retrieval Augmented Demonstrations

As summarized in Table 1, several previous works have explored retrieval techniques for identifying more informative demonstrations to boost in-context learning. KATE (Liu et al., 2022) discovers that semantically closer demonstrations outperform random ones for GPT-3 in-context learning. They employ language models trained on tasks like natural language inference and sentence textual similarity as semantic representations and utilize the kNN algorithm to search for demonstrations. EPR (Rubin et al., 2022) develops a retriever based on language model signals to find superior demonstrations compared to off-the-shelf retrievers. Instead of using a separate retriever for each task, UPRISE (Cheng et al., 2023) merges multiple training datasets into a retrieval corpus and trains a universal retriever for cross-domain tasks. PARC (Nie et al., 2022) employs a multilingual retrieval strategy to find demonstrations from high-resource tasks, thereby enhancing the performance of low-resource domain tasks. CEIL (Ye et al., 2023), instead of retrieving few-shot demonstrations independently, introduces an iterative retrieval method to identify both diverse and similar few-shot examples. While the aforementioned methods retrieve demonstrations from training data, Madaan et al. (2022); Dalvi et al. (2022) incorporate human feedback to create demonstrations and maintain a dynamic retrieval corpus. Z-ICL (Lyu et al., 2022) generates pseudo demonstrations to enhance zero-shot in-context performance. In contrast to the methods that retrieve explicit demonstrations, RETROPROMPT (Chen et al., 2022) transforms explicit demonstrations into implicit neural demonstrations represented by vectors. Rather than using a retriever, Ram et al. (2023) apply a cross-attention reranker to re-rank documents retrieved by BM25.

## 3 Demonstration-Retrieved In-Context Learning (Dr. ICL)

We start by describing ICL for general tasks (including classification or generation tasks). For a task \(T\), given an input text \(x_{q}\), an LLM is used to predict the answer \(y_{q}\) conditioned on a set of _demonstrations_ of the task, \(Demo=\{d_{1},d_{2},\cdots,d_{n}\}\), where \(d_{i}=(x_{i},y_{i})\) is a pair of input and ground-truth answer. Typically, \(d_{i}\) is linearized as a string (e.g., "question: \(x_{i}\) \(\backslash n\) answer: \(y_{i}\)") and then provided to the LM. Recently, the Chain-of-Thought prompting technique (Wei et al., 2022) has demonstrated its effectiveness in handling complex reasoning tasks. The primary idea is to include intermediate reasoning steps in each demonstration, so that a demonstration consists of not only the input and ground-truth answer but also the step-by-step reasoning process. There are multiple strategies for choosing the set of demonstrations. For instance, one could randomly or manually select a fixed set \(Demo\) to be applied to all queries of task \(T\).
Alternatively, a retriever can be used to find query-specific demonstrations from the training set \(D_{train}\): \[Demo_{x_{q}}=Retriever(x_{q},D_{train},n), \tag{1}\] where \(Demo_{x_{q}}\) are the top-\(n\) demonstrations that the retriever considers most suitable for the input \(x_{q}\). In this work, we consider two off-the-shelf retrievers, BM25 and GTR (Section 3.1), and then propose a method to train a retriever tailored to the target task \(T\) (Section 3.2). ### Off-the-shelf Retrievers BM25 Robertson et al. (2009) is a bag-of-words model that calculates relevance scores using term frequency, inverse document frequency, and document length normalization. It has proven effective and efficient, making it easily deployable in large-scale, real-world applications. However, BM25 heavily relies on keyword matching and lacks context understanding, which may result in less accurate outcomes. In contrast, GTR Ni et al. (2021) is a dual-encoder neural retriever (based on T5) trained on the MS Marco dataset Nguyen et al. (2016). GTR excels in semantic and context comprehension and is easily transferable to downstream tasks or specific domains. However, it has lower memory and computational efficiency, and lacks interpretability. \begin{table} \begin{tabular}{l l l l l l l} \hline \hline **Paper** & **LLMs** & **Retrieval Method** & **Retrieval Corpus** & **Evaluation Tasks** & **\# of Shots in Prompts** & \(\text{CoT}\) \\ \hline KATE 2022 & GPT-3 & RoBERT+4NN & In-Domain TD & SA, T2T & Few-shots & No \\ \hline EPR 2022 & GPT-3, & SheBERT, BM25, FT Retiever & In-Domain TD & SRM & Few-shots & No \\ & GPOEX, & GPOEX, & GPOEX, & GPOEX, & GPOEX, & GPOEX, & GPOEX, \\ & GPOEX & GPOEX, & GPOEX, & GPOEX, & GPOEX, & GPOEX, & GPOEX, \\ \hline CEIL 2023 & GPT-Noc, & BM25, BERT, DPR, FT Retriever & In-Domain TD & SA, PD, NLI, CSR, QA, & Few shots & No \\ & GPOEX, & GPOEX, & GPOEX, & GPOEX, & GPOEX, & GPOEX, & GPOEX, \\ \hline UPRISE 2023 & GPT-Noc, & FT Retriever & Cross Tasks TD & RC, QA, NLI, SA, CSR, & Few shots & No \\ & BLOOM, & GPOEX, & GPOEX, & GPOEX, & GPOEX, & GPOEX, & GPOEX, \\ & GPT GPT-3 & & GPOEX, & GPOEX, & GPOEX, & GPOEX, & GPOEX, \\ \hline Ours & PaLM, Flan-PalM & BM25, GTR, FT Retriever & In-Domain TD & QA, NLI, MathR, BC & One-shot, Few-shots & Yes \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with Related Work. TD: training data, QA: question answering, RC: reading comprehension, NLI: natural language inference, SA: sentiment analysis, CSR: commonsense reasoning, CR: Coreference Resolution, MathR: mathematical reasoning, PD: paraphrase detection, SP:semantic parsing, CodeG: code generation, SRM: Sentence representation mapping, T2T: Table to Text generation, Question Answering, ### Demonstration Retriever Training Demonstration retrieval aims to find the most representative demonstrations for each input query. Ideally, the demonstrations should capture both (a) the query-specific knowledge required to answer the query, and (b) the nature of the task and how the task should be solved in general. Off-the-shelf retrievers such as BM25 and GTR were designed for information retrieval and question answering. As such, they mostly retrieve demonstrations of type (a) but not (b). To fill this gap, we propose to train a demonstration retriever by leveraging the feedback from a language model. As demonstrated in Figure 2, the process involves two steps: obtaining the training data and training a retriever on the data. 
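Before turning to retriever training, the snippet below sketches the plain Dr. ICL inference loop of Eq. (1) with an off-the-shelf retriever. It is a minimal illustration using the open-source `rank_bm25` package and whitespace tokenization; the experiments in this paper instead use BM25 over uncased BERT wordpieces with \((k_{1},b)=(1.5,0.75)\), so the details below are simplifying assumptions rather than the exact setup.

```python
from rank_bm25 import BM25Okapi


def tokenize(text):
    # Simplified whitespace tokenization; the paper's BM25 index uses
    # uncased BERT wordpiece tokens instead.
    return text.lower().split()


def retrieve_demonstrations(train_pool, x_q, n=4):
    # Eq. (1): Demo_{x_q} = Retriever(x_q, D_train, n).
    bm25 = BM25Okapi([tokenize(q) for q, _ in train_pool])
    scores = bm25.get_scores(tokenize(x_q))
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:n]
    return [train_pool[i] for i in top]


def build_prompt(demos, x_q):
    # Linearize each demonstration as "question: ... / answer: ..." and
    # append the test query for the LLM to complete.
    blocks = [f"question: {q}\nanswer: {a}" for q, a in demos]
    blocks.append(f"question: {x_q}\nanswer:")
    return "\n\n".join(blocks)


# Toy usage: a two-example pool and a single test query.
pool = [("who wrote hamlet", "William Shakespeare"),
        ("what is the capital of france", "Paris")]
demos = retrieve_demonstrations(pool, "who wrote macbeth", n=1)
print(build_prompt(demos, "who wrote macbeth"))
```

In practice the BM25 index is built once over the training pool and reused for every test query, which is what makes BM25-based Dr. ICL cheap to deploy.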
**Obtain the training data.** We want to teach the retriever model to locate examples that lead to the most accurate predictions. We propose to mine a set of demonstrations for each input query \(x_{q}\) in the training data as follows. First, given a question-answer pair \((x_{q},y_{q})\in D_{train}\), we use an off-the-shelf retriever to find a demonstration candidate set \(D\) for \(x_{q}\), where \(x_{q}\) is excluded from \(D\). Second, we test each demonstration \(d\in D\) on how much it helps on the target task. The LM probability \(p_{\text{LM}}(y_{q}\mid d,x_{q})\) of the gold answer \(y_{q}\) is used as the score for the demonstration. Finally, we keep the top-\(n\) demonstrations as the positive demonstrations, and the bottom-\(n\) as the hard negative demonstrations.

**Training procedure.** Our retriever is a dual encoder, which defines the score of any query-document pair \((q,d)\) as \(s(q,d)=v_{q}^{\top}v_{d}\), where \(v_{q}\) and \(v_{d}\) are the embeddings of \(q\) and \(d\). We initialize our retriever with GTR, and then fine-tune it on the training data via a contrastive loss with both in-batch and hard negatives: \[\mathcal{L}_{con}=-\log\frac{e^{s(q,d^{+})}}{e^{s(q,d^{+})}+\sum_{j}e^{s(q,d_{j}^{-})}}, \tag{2}\] where \(d^{+}\) and \(d_{j}^{-}\) are the positive and negative demonstrations. The negative demonstrations include the positive demonstrations of the other input queries in the same batch and one randomly chosen hard negative demonstration.

## 4 Experiments

**Datasets and evaluation metrics.** We study various tasks across 5 datasets: free-form question answering (NQ), natural language inference (ANLI-r3), mathematical reasoning (GSM8k and AQuA), and boolean question answering (StrategyQA). For the last three datasets, we apply CoT. All tasks are evaluated by exact-match accuracy.

**Language models.** PaLM-540B (Chowdhery et al., 2022) and Flan-PaLM (540B) (Chung et al., 2022) are used as the primary LLMs. Both models have the same architecture, but Flan-PaLM has been further trained on thousands of tasks for instruction learning (including all five datasets studied in this paper) and shows superior generalization performance compared to PaLM. At inference time, we use a temperature of 0.0 and a maximum decoding length of 10 for tasks without CoT and 256 for tasks involving CoT.

Figure 2: Pipeline for training the demonstration retriever and for inference (R denotes a neural retriever). The left panel shows the procedure for obtaining data to train a demonstration retriever: an off-the-shelf retriever takes an input query \(x_{q}\) and retrieves the top-\(k\) (e.g., 100) demonstration candidates from the training corpus; an LLM then outputs the score of the ground truth \(y_{q}\) given each retrieved demonstration and \(x_{q}\). The right panel shows the inference pipeline for in-context learning with the trained demonstration retriever.

**Retrievers.** As explained in §3, we explore using BM25 and GTR as off-the-shelf retrievers, as well as training our own retriever for each task. For BM25, we use uncased BERT wordpiece tokenization and parameters \((k_{1},b)=(1.5,0.75)\). For GTR, we use the pretrained GTR-Base model. When mining data for training our retriever, we use the pretrained GTR to retrieve 100 demonstration candidates, and then use PaLM-62B to score each candidate. (We used the smaller PaLM-62B instead of 540B for efficiency.) Then we select the top-5 reranked demonstrations as the positive candidates to fine-tune GTR.
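The mining step above can be condensed into a short sketch. Here `lm_log_prob` is a hypothetical callable standing in for the scoring LM (PaLM-62B in our setup): it is assumed to return \(\log p_{\text{LM}}(y_{q}\mid d,x_{q})\) for the gold answer given a prompt with one prepended demonstration, and any API exposing target log-probabilities could play this role.

```python
def mine_retriever_training_example(x_q, y_q, candidates, lm_log_prob,
                                    n_pos=5, n_neg=5):
    """Rank demonstration candidates by how much they help the LM produce
    the gold answer, keeping the best as positives and the worst as hard
    negatives.

    candidates: list of (question, answer) pairs returned by an
        off-the-shelf retriever (e.g., the top-100 from GTR).
    lm_log_prob: hypothetical callable(prompt, target) -> float.
    """
    scored = []
    for dq, da in candidates:
        prompt = f"question: {dq}\nanswer: {da}\n\nquestion: {x_q}\nanswer:"
        scored.append(((dq, da), lm_log_prob(prompt, y_q)))
    scored.sort(key=lambda item: item[1], reverse=True)
    positives = [d for d, _ in scored[:n_pos]]
    hard_negatives = [d for d, _ in scored[-n_neg:]]
    return positives, hard_negatives
```

Each resulting (query, positive, hard negative) triple then feeds the contrastive loss in Eq. (2), with the positives of the other queries in the same batch acting as in-batch negatives.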
**Retrieval corpus.** We create a separate retrieval corpus for each task using the associated training data. For tasks with CoT, each entry in the corpus is composed of the question, the CoT, and the answer, while entries for the other tasks omit the CoT.

### Results

**Off-the-shelf retriever performance.** Figures 3 and 4 show the performance of PaLM and FlanPaLM under one-shot and few-shot ICL settings, with and without retrievers. We make the following observations.

**Observation 1: Off-the-shelf retrievers are capable of finding more effective demonstrations than random ones.** Figure 3 shows that the demonstrations retrieved by BM25 or GTR are better than random ones under both one-shot and few-shot scenarios for the PaLM model. It is worth mentioning that BM25 is more efficient in terms of indexing memory and retrieval latency compared to semantic retrievers like GTR or other sentence encoders (Liu et al., 2022), which makes it easier to deploy.

Figure 3: PaLM: one-shot and few-shot inference with three types of demonstrations: random, BM25, and GTR.

**Observation 2: Dr. ICL improves instruction-finetuned LLMs.** Previous research has primarily focused on investigating demonstration-retrieved ICL with general LLMs (such as GPT-3) rather than instruction-finetuned LLMs, possibly because it did not consider reusing the training data. In our study, we examine Dr. ICL with Flan-PaLM, an instruction-finetuned LLM, and present the results in Figure 4. Overall, the retrieved demonstrations outperform no demonstrations or random demonstrations. This implies that the training data should be reused during inference, as they can be retrieved and enhance performance even if the model has already seen such data. We conjecture that the retrieved demonstrations may enhance knowledge localization for ICL, which could explain the observed improvement.

Figure 4: Flan-PaLM: one-shot and few-shot inference with three types of demonstrations: random, BM25, and GTR.

**Observation 3: Dr. ICL can further improve the advanced Chain-of-Thought prompting technique.** In our experiments on GSM8k, StrategyQA, and AQuA, using Dr. ICL in conjunction with CoT results in improved performance under both one-shot and few-shot ICL scenarios. This finding suggests that Dr. ICL has the potential to enhance the performance of powerful prompting techniques.

The observations above hold significant value for real-world applications. Incorporating ICL with a simple BM25 demonstration retriever, which is highly scalable in terms of latency and indexing memory, improves the performance of the LLM, including when instruction finetuning or Chain-of-Thought is used. Examples of demonstrations retrieved by the off-the-shelf retrievers are given in Table 4 in the Appendix.

**Trained demonstration retriever performance.** We evaluate our trained demonstration retriever with PaLM. Table 2 displays both one-shot and few-shot performance and shows that the trained demonstration retriever is better than off-the-shelf GTR in almost all cases, leading to better overall performance. Notably, the improvements were most significant in the one-shot ICL scenario, which requires less inference latency and computing resources than few-shot ICL. These promising results suggest that the trained retriever could provide an effective alternative to off-the-shelf models.
## 5 Analysis

To rule out the possibility that retrieved demonstrations are more advantageous than random ones simply because, in the benchmark datasets, their answers are identical to the correct ones, we assess the overlap percentage between the demonstration answers and the target. In the few-shot scenario, we aggregate the answers from the demonstrations via majority voting. From Table 3, it is evident that for the first four datasets the overlap ratio is roughly equal to or less than the uniform distribution, suggesting that the benefits of the retrieved demonstrations are not due to label identification. In the case of NQ, we notice a considerable overlap between demonstration answers and the ground truth. We then randomly select 100 instances out of the 433 overlapped cases from GTR-retrieved demonstrations (one-shot) and manually examine them. We find that, indeed, for the majority of the 100 instances, the input questions are semantically equal to the demonstration questions.

## 6 Discussion and Conclusion

In this work, we first leverage two off-the-shelf retrievers to enhance ICL by searching for query-oriented demonstrations. Our experiments demonstrated that off-the-shelf retrievers are more effective than random demonstrations, with GTR generally retrieving more representative demonstrations than BM25. More importantly, our results with Flan-PaLM indicated that training data can be useful not only for training a model but also for improving the performance of a fine-tuned LLM at test time via ICL. Our experiments with CoT also suggest that integrating Dr. ICL with advanced prompting techniques can further improve the model's performance. Additionally, we trained a demonstration retriever that further improved on the overall performance of off-the-shelf retrievers, with the most significant improvements observed in the one-shot scenario. One interesting future research challenge is retrieving demonstrations across tasks in situations where training data is not available.

\begin{table} \begin{tabular}{c c c c} \hline \hline **Task** & **Method** & **One Shot** & **Few Shots** \\ \hline \multirow{2}{*}{NQ} & GTR & 37.8 & 43.9 \\ & Demo-GTR (ours) & **39.2 (+1.4)** & 43.9 \\ \hline \multirow{2}{*}{ANLI (r3)} & GTR & 54.0 & 59.0 \\ & Demo-GTR (ours) & **54.8 (+0.8)** & 59.0 \\ \hline \multirow{2}{*}{GSM8k} & GTR & 57.7 & 61.0 \\ & Demo-GTR (ours) & **59.3 (+1.6)** & **61.5 (+0.5)** \\ \hline \multirow{2}{*}{Avg.} & GTR & 49.8 & 54.6 \\ & Demo-GTR (ours) & **51.1 (+1.3)** & **54.8 (+0.2)** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of PaLM using GTR- and Demo-GTR-retrieved demonstrations. Demo-GTR consistently achieves better performance than GTR in the one-shot case.
\begin{table} \begin{tabular}{c c c c c} \hline \hline **Task** & **Random** & **Retriever** & **One-shot** & **Few-shot** \\ \hline \multirow{2}{*}{ANLI3} & \multirow{2}{*}{33.33} & BM25 & 33.33 & 31.42 \\ & & GTR & 34.75 & 32.25 \\ \hline \multirow{2}{*}{StrategyQA} & \multirow{2}{*}{50.0} & BM25 & 48.79 & 47.34 \\ & & GTR & 47.83 & 48.31 \\ \hline \multirow{2}{*}{AQUA} & \multirow{2}{*}{20.0} & BM25 & 22.83 & 25.98 \\ & & GTR & 24.02 & 22.05 \\ \hline \multirow{2}{*}{GSM8K} & \multirow{2}{*}{0.0} & BM25 & 1.36 & 1.82 \\ & & GTR & 0.99 & 1.14 \\ \hline \multirow{2}{*}{NQ} & \multirow{2}{*}{0.0} & BM25 & 8.95 & 8.70 \\ & & GTR & 11.99 & 11.08 \\ \hline \hline \end{tabular} \end{table} Table 3: Overlapped Ratio of Demonstrations Answers with Targets: **Random** represents the probability of selecting the correct label if we select randomly from the space of possible labels.
2306.15766
Large Language Models as Annotators: Enhancing Generalization of NLP Models at Minimal Cost
State-of-the-art supervised NLP models achieve high accuracy but are also susceptible to failures on inputs from low-data regimes, such as domains that are not represented in training data. As an approximation to collecting ground-truth labels for the specific domain, we study the use of large language models (LLMs) for annotating inputs and improving the generalization of NLP models. Specifically, given a budget for LLM annotations, we present an algorithm for sampling the most informative inputs to annotate and retrain the NLP model. We find that popular active learning strategies such as uncertainty-based sampling do not work well. Instead, we propose a sampling strategy based on the difference in prediction scores between the base model and the finetuned NLP model, utilizing the fact that most NLP models are finetuned from a base model. Experiments with classification (semantic similarity) and ranking (semantic search) tasks show that our sampling strategy leads to significant gains in accuracy for both the training and target domains.
Parikshit Bansal, Amit Sharma
2023-06-27T19:29:55Z
http://arxiv.org/abs/2306.15766v1
# Large Language Models as Annotators: Enhancing Generalization of NLP Models at Minimal Cost ###### Abstract State-of-the-art supervised NLP models achieve high accuracy but are also susceptible to failures on inputs from low-data regimes, such as domains that are not represented in training data. As an approximation to collecting ground-truth labels for the specific domain, we study the use of large language models (LLMs) for annotating inputs and improving the generalization of NLP models. Specifically, given a budget for LLM annotations, we present an algorithm for sampling the most _informative_ inputs to annotate and retrain the NLP model. We find that popular active learning strategies such as uncertainty-based sampling do not work well. Instead, we propose a sampling strategy based on the difference in prediction scores between the base model and the finetuned NLP model, utilizing the fact that most NLP models are finetuned from a base model. Experiments with classification (semantic similarity) and ranking (semantic search) tasks show that our sampling strategy leads to significant gains in accuracy for both the training and target domains. ## 1 Introduction A common limitation of supervised NLP models is that they fail to generalize in _low data_ regimes, corresponding to inputs from subgroups or domains that have limited labelled data in the training set. These generalisation errors occur due to distribution shifts between the new inputs and training data, that render some of the correlations learnt by the model as invalid Wang et al. (2022). For instance, models may learn spurious correlations with sensitive attributes like gender Sun et al. (2019) or may over-emphasize lexical patterns Gururangan et al. (2018); or in some cases, inputs may exhibit a new concept that has not been seen in the training data Gama et al. (2014). As a motivating example, consider the task of determining _semantic similarity_ between a pair of sentences Reimers and Gurevych (2019). This task forms the basis of information retrieval and recommendation systems such as similar question recommendation on online forums Wang et al. (2018) or product recommendation on e-commerce websites He and McAuley (2016). In such systems, it is common to encounter new unseen domains during deployment. For instance, introduction of a new item category or users from a new demographic may cause failures for a deployed model due to a shift in distribution of inputs in the system compared to the training data. Unlabelled data is readily available for such distribution shift (i.e., new questions posted by users or items from a new category), but labelling the data requires considerable human effort. In other cases, failures may occur due to hard-to-learn semantic patterns found in a small minority of the training data (see the example pair containing lexically similar questions on oxygen and glucose in Figure 1). A common solution in all these cases is to collect more labelled data distinct from the distribution of training data, but labelling (or _annotating_) data is an expensive and manual process. To address this issue, prior work suggests using large language models (LLMs, Ouyang et al. (2022); Brown et al. (2020)) to annotate data. LLMs like GPT-3 obtain promising accuracy for annotating data for a variety of NLP tasks including sentiment classification Ding et al. (2022), keyword relevance Choi et al. (2023); Gilardi et al. (2023) and question answering Gilardi et al. (2023). 
However, LLM-based annotations can be noisy, and due to efficiency reasons we cannot deploy LLMs directly. In this paper, we take the natural next step and ask whether annotations from LLMs can be used to enhance the generalization of existing NLP models. Given a corpus of unlabelled data, we find that a naive application of LLMs (annotating inputs at random) provides only marginal gains in total accuracy and, in some cases, can worsen accuracy for low-data groups. To optimize the sampling, we formulate the problem of sampling inputs for annotation as an _active learning_ problem (Zhang et al., 2022). However, we find that the popular sampling strategy based on model uncertainty (Lewis, 1995) is also not optimal for LLM-based annotation. Using experiments on classification (semantic similarity) and ranking (semantic search) tasks, we propose an alternative strategy for sampling inputs. For cost-efficient sampling of new unlabeled inputs for LLM annotations, an intuitive solution is to annotate only those inputs that the NLP model is expected to be incorrect on, i.e., inputs where the NLP model's prediction and the ground truth label would differ. In the absence of GT labels for new inputs, we propose a metric, _Conditional Informativeness_, that approximates this intuition. We utilize the fact that state-of-the-art supervised NLP models are often finetuned from a _base model_ such as BERT (Vaswani et al., 2017) that provides an initial embedding for the input. For a given input and an NLP task, Conditional Informativeness measures the deviation between the prediction score from the base model and the score from the NLP model finetuned using the available labelled data for the task. We argue that the inputs with the maximum deviation between the two scores are the ones likely to be incorrectly predicted by the finetuned model and hence the most informative ones for finetuning over the base model. Our sampling metric provides a practical way to improve the generalization of NLP models for a task (see Figure 1 for an illustration). Given a budget for LLM annotation (i.e., a number of queries), we select the inputs having the maximum Conditional Informativeness for LLM annotation and then retrain the NLP model using this additional training data. Our algorithm shows significant improvements in target-domain and total accuracy on the Quora dataset for the semantic similarity task, and on the Amazon and Wikipedia datasets for the semantic search task. Our algorithm also provides higher gains than uncertainty-based sampling from the active learning literature. This may be because of the error distribution of LLM annotations: only for inputs with high deviation can the LLM-based annotations be expected to be more accurate than the base model. To summarize, we make the following contributions. 1) The Conditional Informativeness metric for sampling inputs for LLM-based annotation, which outperforms commonly used active learning approaches. 2) Experiments on semantic similarity and search tasks that show LLM annotations can significantly improve both in-domain and target-domain accuracy.

## 2 Related Work

**LLMs for data augmentation.** A popular framework for improving an NLP model's generalization has been to generate new data using LLMs and test the model's output using a human-in-the-loop, i.e., LLMs are used in partnership with human participants for data generation and for testing/debugging models (Ribeiro and Lundberg, 2022; Wang et al., 2021). In recent work, He et al.
(2023) utilise the same strategy for training an NLP model: they use GPT-3 for data generation over under-represented groups, which are then annotated by users before including in training set. However, with more capable LLMs like Chat-GPT, LLMs are now capable of not just generating data, but also annotating it (while faithfully following annotation instructions). Recent work Gilardi et al. (2023); He et al. (2023); Ding et al. (2022) has looked at the annotation accuracy for LLMs and found them to be at par with crowd-worker annotators. Combining generation and annotation, parallel to us, Whitehouse et al. (2023) explore the utility of both input and labels generated from LLMs for crosslingual common sense reasoning tasks. Similarly, for the task of building a sentence embedding using contrastive learning, Cheng et al. (2023) use LLMs to both generate novel input pairs and then score their similarity. Motivated by real-world applications from information retrieval, we focus our attention on the _unsupervised domain adaptation_(Ramponi and Plank, 2020) (UDA) setting where unlabelled inputs are easily available. UDA methods assume a source labeled domain and a target unlabeled domain with the goal of adapting to the target domain (while also performing well on the source domain). For instance, Saad-Falcon et al. (2023) motivate the passage reranking task where a large number of unlabelled passages are available. They use LLMs to generate synthetic queries for a given passage and then use such augmented data to train a downstream model. Given the potential of LLM-annotated data for training downstream classifiers and the associated costs of querying them, we study how to _efficiently_ utilise these annotations to train a more generalizable NLP model; specifically, which inputs to annotate for maximum benefit? **Semantic similarity with limited labeled data.**[3] present a comprehensive survey of data augmentation techniques for limited label data settings in NLP. AugSBERT [16] present an augmentation strategy that uses a bigger (oracle) cross-encoder model for generating (pseudo-)labels for unlabeled inputs. These inputs are then utilised to train a smaller and efficient NLP model. Such an oracle, however, is limited by the training data whereas LLMs are known to have zero-shot capabilities that generalize to new domains[15]. In addition to augmentation, unsupervised domain adaptation methods have also been proposed. Apart from the main task learning loss, [14] propose an additional loss which minimizes the divergence between source and target domain representations. Recent work UDApter [15] combines UDA methods with adapters for efficient domain adaptation. However, domain matching techniques work only under a restrictive set of assumptions [11]. Instead, we aim to approximate the ground-truth labels through LLMs, thereby converting the unsupervised problem into a simpler, supervised learning problem. [13] investigates the failure modes of Open-domain question answering when faced with distribution shifts. In addition they propose a few-shot data augmentation method for improving generalisation of these models. The augmentations uses LLMs to generate question for a given passage. **Active Learning.**Choosing which inputs to annotate has been classically studied as an active learning problem [17].In active learning setup, we are given a small set of \(L\) labeled inputs, along with a large pool of \(U\) unlabeled inputs. 
We are also specified a budget \(B\), which denotes the number of inputs from the unlabeled data that can be annotated by an oracle/human. Active learning explores how to best sample \(B\) inputs from the unlabeled pool to maximize the generalization accuracy of the final model that is trained on the original \(L\) + (annotated) \(B\) samples. Active Learning uses two primary criterion for sample selection : Informativeness and Representativeness [14]. The most popular informativeness technique is uncertainty sampling [18, 19] and for representativeness is diversity/density. As an appli Figure 1: Enhanced Generalization using LLM Annotations. Illustration of our algorithm using the duplicate question detection task. We propose a sampling strategy based on deviation of an NLP model’s similarity score from the base model, called _(base model)_-conditional informativeness. Inputs are sampled using this strategy (Step 2), annotated using an LLM (Step 3) and then added to the training set of the NLP model. Our sampling strategy performs significantly better than random or active learning-based strategies. cation, recent work (Margatina et al., 2023) uses active learning in an in-context learning setting for LLMs and shows that similarity based sampling (instead of uncertainty and diversity) are most effective for in-context learning. In this paper, we focus on LLM-based annotations and evaluate the uncertainty-based informativeness sampling technique. Based on our experiments, we also propose a new informativeness criterion. ## 3 Conditional informativeness criterion for sampling LLM annotations ### Background: Building NLP classifiers using base models Given a domain of sentences, \(\mathcal{X}\), and a task \(\mathcal{T}:\mathcal{X}\rightarrow\{0,1\}\) we consider learning a classifier function \(f:\mathcal{X}\rightarrow\{0,1\}\) which follows the task i.e. \(f(x)=\mathcal{T}(x)\ \forall\ x\in\mathcal{X}\). The function aims to learn features which are predictive of the output label and their mapping to the output label. A subset of the domain \(\mathcal{X}\) is denoted by \(X=\{x_{0},x_{1},x_{2},\ldots,x_{|X|}\}\subseteq\mathcal{X}\). The output label of \(x_{i}\) is \(\mathcal{T}(x_{i})\) and is denoted by \(t_{i}\). A set of examples can be represented as \[D=\{(x_{i},t_{i}):i\in[|X|]\} \tag{1}\] Unlabeled examples lack the task label \(t_{i}\). Semantic Similarity.As an example, consider the semantic similarity task (Cer et al., 2017). Inputs for semantic similarity come from \(\mathcal{X}\times\mathcal{Y}\) where \(\mathcal{X}\) and \(\mathcal{Y}\) are a pair of domain of sentences. The domains can be the same or different for symmetric and asymmetric similarity respectively. For a given input \((x_{i},y_{i})\), the task output is 1 if a pair are semantically similar, and 0 if they are not. The classifier for semantic similarity is hence defined as \(f:\mathcal{X}\times\mathcal{Y}\rightarrow\{0,1\}\). We denote a training set as : \[D=\{((x_{i},y_{i}),t_{i}):i\in[|X|]\} \tag{2}\] Further details on semantic similarity are in Supp. E. Finetuning on Base model.NLP models are usually finetuned on top of some pretrained text models (e.g., we use MSMARCO-DistilBERT-v4 for semantic similarity) called _Base_ model. The base model adheres to an approximation of the task based on the pretraining dataset and provides initial embedding for the input. We call these features defined by the base model as _pretrained features_. ### A domain adaptation case study: Which inputs to annotate? 
To evaluate different input sampling techniques for LLM annotations, we consider the semantic similarity task of duplicate question detection. We train bi-encoders (SBERT (Reimers and Gurevych, 2019)) on the Quora Questions Pair dataset (Wang et al., 2018),using MSMARCO-DistilBERT-v4 as the base model. To simulate a challenging target domain, we remove 60% of "extreme" examples from the training dataset. These are examples where the base model either obtains the lowest mean squared error w.r.t. the ground truth labels or obtains the highest mean squared error. That is, half of the examples (30%) are the _easy_ examples the base model is (most) correct on and the remaining half are the _hard_ examples where the base model is (most) incorrect on. Further, we remove labels from the target domain. Hence from the original data we have 40% "source" labeled examples and 60% "target" unlabeled examples. For accuracy evaluation on both source and target domains, we create analogous domains over the test set too. We consider an active learning setup where selected inputs from target domain can be annotated by an LLM and augmented in the training set. After augmentation, the model is trained on the source domain + augmented dataset. We consider two popular active-sampling approaches in literature: Random and Uncertainty-based sampling. Apart from these, we include two additional sampling techniques based on our knowledge of the target domain: _base-consistent-sample_ and _base-inconsistent-sample_. These are designed to capture the _easy_ and _hard_ examples that constitute the target domain. Given labeled data L, unlabeled inputs U and a budget for annotation as B we have : * **random-sampling.** We randomly select \(B\) inputs out of the \(U\) unlabeled inputs for annotations. * **uncertainty-sampling.** We first finetune the base model on the given labeled data \(L\) and then select the \(B\) (budget) most uncertain (according to the finetuned model) unlabeled inputs (out of \(U\)). * **base-consistent-sampling.** We choose top \(B\) examples having lowest (MSE) error on base model predictions with GT labels. * **base-inconsistent-sampling**. We choose top \(B\) examples having highest (MSE) error on base model predictions with GT labels. These \(B\) inputs are then annotated and included for final training on \(L+B\). AUC under different sampling strategies.Using gpt-3.5-turbo as the annotater LLM, we report AUC (area-under-(ro)curve) in Table 1. We set a budget \(B\) of 10% of the dataset for annotation. For details on prompts used, see Sec 4.1. Looking at the AUC metric for the complete target domain, we observe that random-sampling and uncertain-sampling lead to similar improvements compared to the training set. Compared to these active learning techniques, base-inconsistent-sampling leads to almost twice the AUC improvement. That is, annotations with LLM are best under base-inconsistent-sampling. Remarkably, with only 10% of the examples annotated, AUC with base-inconsistent-sampling is even higher than the setting where we augment the _full_ target domain (100% of examples). In contrast, base-consistent-sampling hurts generalization. Even though base-consistent-sampling was designed to sample examples with low base model error, it obtains worse AUC than base-inconsistent-sampling on test examples with low base model error. Results on using Ground Truth (GT) labels for annotations (instead of LLM annotations) are in Supp. Table 11. 
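For concreteness, the four sampling strategies compared above can be summarized in a short sketch. The array and function names below are illustrative and not taken from the paper's code; note that the two base-error strategies are oracle baselines in that they peek at the ground-truth labels of the target domain.

```python
# Illustrative sketch of the four sampling strategies compared in Table 1.
# `base_scores` and `finetuned_scores` are similarity scores in [0, 1] produced by
# the base and L-finetuned bi-encoders for each unlabeled pair; `gt` holds the
# ground-truth labels that only the two oracle strategies are allowed to use.
import numpy as np

def sample_indices(strategy, base_scores, finetuned_scores, gt, budget, seed=0):
    rng = np.random.default_rng(seed)
    n = len(base_scores)
    if strategy == "random":
        return rng.choice(n, size=budget, replace=False)
    if strategy == "uncertainty":
        # pairs whose finetuned score is closest to the 0.5 decision boundary
        uncertainty = -np.abs(finetuned_scores - 0.5)
        return np.argsort(uncertainty)[-budget:]
    base_error = (base_scores - gt) ** 2           # MSE of base model vs. ground truth
    if strategy == "base-consistent":
        return np.argsort(base_error)[:budget]     # lowest base error
    if strategy == "base-inconsistent":
        return np.argsort(base_error)[-budget:]    # highest base error
    raise ValueError(strategy)
```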
Implications.The above results indicate that for LLM annotations, uncertainty-sampling may not be the best technique. To understand these results, note that the original model finetuned on training set (first row in Table 1, with no augmentation from target domain) has high generalization (AUC) for low base error inputs while generalizing poorly for high base error inputs. Annotating with base-consistent-sampling is thus a waste of budget as the base and simple finetuned model are already good on the low base error inputs. Moreover since LLM-annotations are not perfect, augmenting with base-consistent-sampling introduces noise into the model, when the model already has a high accuracy. On the other hand, high base error examples, which are targeted by base-inconsistent-sampling, do have substantial room for AUC improvement when considering the original finetuned model. This indicates that LLM annotations should focus only on base-inconsistent-sampling inputs, as such annotations may be the most _informative_. ### Conditional Informativeness metric Based on the experiments above, we find that when annotated with LLMs, high base error, or _base-inconsistent_ examples are the most informative for training. But base-inconsistent-sampling, as described above, is not practical since it requires knowledge of the ground truth labels of inputs. Hence in this section, we develop an approximate metric for quantifying the degree of base-inconsistency of unlabeled inputs. We use a metric which measures deviation of the finetuned NLP model from the base model, and call it Conditional Informativeness, since it depends on the base model in addition to the finetuned model. For a input \(x_{i}\) we define it as \[z_{i}(f,f_{0})=Dev(f(x_{i}),f_{0}(x_{i})) \tag{3}\] where \(f_{0}\) is the base model, \(f\) is the finetuned model and Dev is a measure of deviation. We use simple squared error in our work.The intuition is that during the finetuning process with the goal of minimizing error, a model is more likely to deviate from the base model's score on an input if the base model has high error on that input. Here we assume that the finetuned model's score deviation captures this notion of base error, which can be generalized to the unlabelled inputs. We present qualitative examples from our metric on Quora dataset in Table 2. These inputs were selected by our Conditional Informativeness metric as having high deviation. While for the first pair of examples the lexical similarity (base semantic) is of the pair is low, their semantic meaning (_duplicate question_ semantics) is the same, while for the second pair, while the lexical similarity is high, their semantic similarity is low. When doing LLM annotations, inputs like these would be the most informative for training. The formulation above defines Conditional Informativeness based on deviation of individual input semantic similarity scores. But we can also define Conditional Informativeness using deviation at a domain level. For example, for a multi-domain dataset with domain information for each input, the metric can be averaged over the entire domain to find the most suitable domains for LLM annotations. ## 4 EAGLE: Enhanced Generalization using LLM Annotations Based on the Conditional Informativeness metric, we now present the _EAGLE_ algorithm for enhancing generalization of NLP models using LLM annotations. 
As in Section 3, we consider an active learning setup where we are given some labeled examples \(L\) and a pool of unlabeled inputs \(U\) along with a budget \(B\) of annotating unlabeled inputs (using LLMs). In addition to standard classification tasks, our algorithm can also work for other tasks such as ranking. We first present the general algorithm and then present instantiations of it for a classification task (semantic similarity) and a ranking task (semantic search). ### The EAGLE Algorithm Step 1: Computing Conditional InformativenessAs the first step, we finetune our base model on the labeled data \(L\) to get a finetuned model \(f\) i.e., \[f=\operatorname*{argmin}_{f}\mathbb{E}_{(x_{i},t_{i})\in L}[\mathcal{L}(f(x_{i} ),t_{i})] \tag{4}\] Using \(f\), we compute the Conditional Informativeness score \(z_{i}(f,f_{0})\) where \(f_{0}\) is the base model, for each unlabeled input \(x_{i}\in U\) i.e. \[z=\{z_{i}(f,f_{0}):x_{i}\in U\} \tag{5}\] Step 2: Sampling inputs using Conditional InformativenessThe next step involves sampling appropriate inputs for LLM annotations. We either choose to do an input-wise Conditional Informativeness sampling, or if the data is domain annotated, we can do domain level annotations. For input-wise sampling we select the top \(B\) samples i.e. \[U_{sampled}=\{x_{i}:z_{i}\in\operatorname{top}(z,B)\} \tag{6}\] For domain level annotations, we can obtain the domain-level Conditional Informativeness metric (by averaging the metric over inputs belonging to the domain). In this case, the budget B is uniformly distributed over inputs in selected domains. Step 3: Annotating sampled inputs using LLMGiven a sampled set of unlabelled input \(U_{sampled}\), we use LLM annotations for these inputs to get an annotated set as \(L^{\prime}_{sampled}\). We denote LLM annotations function by \(\mathcal{T}^{\prime}:\mathcal{X}\rightarrow\{0,1\}\), and hence the LLM annotation for input \(x_{i}\) as \(t^{\prime}_{ij}\in\{0,1\}\). The augmented dataset made from \(U\) is hence \(L^{\prime}\) \[L^{\prime}_{sampled}=\{(x_{i},t^{\prime}_{i}):x_{i}\in U\} \tag{7}\] Step 4: Finetuning classifier on augmented labelled dataFinally we finetune the base model on the augmented dataset \(L+L^{\prime}_{sampled}\) using Eq 4. ### Application: Semantic Similarity We present how our algorithm can be used for the semantic similarity task described in Section 3. Step 1 follows from the main algorithm. 
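To make the four steps concrete, the following is a minimal sketch of the generic EAGLE loop (before specializing to the pairwise inputs used for semantic similarity). The helpers `finetune` and `llm_annotate` are placeholders for model training and for prompting the LLM; they are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of the four EAGLE steps in the generic, single-input formulation.
# `f0` is the frozen base model; `L` is a list of (x, t) pairs; `U` is a list of
# unlabeled inputs; `finetune` and `llm_annotate` are placeholder callables.
import numpy as np

def eagle(f0, L, U, budget, finetune, llm_annotate):
    # Step 1: finetune on the labeled data and score deviations from the base model.
    f = finetune(f0, L)                                      # Eq. (4)
    z = np.array([(f(x) - f0(x)) ** 2 for x in U])           # Eq. (3)/(5), squared deviation
    # Step 2: keep the top-B inputs by Conditional Informativeness.
    U_sampled = [U[i] for i in np.argsort(z)[-budget:]]      # Eq. (6)
    # Step 3: annotate the sampled inputs with the LLM.
    L_sampled = [(x, llm_annotate(x)) for x in U_sampled]    # Eq. (7)
    # Step 4: finetune again on the augmented labeled set.
    return finetune(f0, L + L_sampled)
```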
The Conditional Informativeness computation follows Eq 3, with the only caveat being that the classifier function now takes two inputs : \[z_{i}(f,f_{0})=Dev(f(x_{i},y_{i}),f_{0}(x_{i},y_{i})) \tag{8}\] \begin{table} \begin{tabular}{|l|c c c|} \hline Data & Complete Test & High Base Error & Low Base Error \\ \hline Initial Train Set & 86.824 \(\pm\) 0.038 & 59.335 \(\pm\) 0.139 & **99.068 \(\pm\) 0.048** \\ \hline + 100\% (complete target domain) & 87.544 \(\pm\) 0.035 & **65.785 \(\pm\) 0.121** & 98.164 \(\pm\) 0.044 \\ \hline + Random-sampling 10\% & 87.052 \(\pm\) 0.151 & 60.551 \(\pm\) 0.701 & 98.805 \(\pm\) 0.058 \\ + Uncertain-sampling 10\% & 87.620 \(\pm\) 0.029 & 61.594 \(\pm\) 0.433 & 99.081 \(\pm\) 0.024 \\ + Base-consistent-sampling 10\% & 86.763 \(\pm\) 0.149 & 59.986 \(\pm\) 0.340 & 98.833 \(\pm\) 0.024 \\ + Base-inconsistent-sampling 10\% & **88.108 \(\pm\) 0.062** & **65.538 \(\pm\) 0.175** & 98.861 \(\pm\) 0.046 \\ \hline \end{tabular} \end{table} Table 1: AUC for Quora duplicate questions task, before and after including LLM-based annotations using four different sampling techniques: random, uncertainty, base-consistent and base-inconsistent. AUC is evaluated on the full test set, the test subset with high base model error and the test subset with low base model error. Sampling just 10% of the data for annotation using base-inconsistent-sampling is better than annotating with the complete (100%) target dataset. \begin{table} \begin{tabular}{|l|c c|} \hline Pair & \multicolumn{2}{c|}{Similarity} \\ & Base & Finetuned \\ \hline What is a good finet plan for a comment that wants to gain weight? & Low & High \\ What food should lead to gain weight? & Low & High \\ How can you determine the structure for glucose? & High & Low \\ \hline \end{tabular} \end{table} Table 2: Quora test examples having high Conditional Informativeness, i.e. finetuned predictions are different from base model predictions. Base model captures lexical similarity while finetuned captures target semantics. Sampling is done in the same way with the algorithm selecting top \(B\) inputs having highest \(z_{i}\). LLM Annotation DetailsConsider a set of unlabeled examples \(U\) consisting of pairs \((x_{i},y_{i})\) to be annotated by the LLM. We construct a prompt which consists of set of pairs \((x_{i},y_{i})\) of sentences. For cost-efficiency we consider 10 pairs in each prompt for our experiments. The prompt asks the LLM to output all the pairs which are semantically similiar (with semantics defined appropriately inside the prompt). All pairs outputted by LLM as similar are considered similar while rest are not. See Table 3 for example annotation outputs on Quora dataset. ### Application: Semantic Search While semantic similarity is a fundamental task, real world applications often rely on _semantic search_. In such applications, the \(X\) is called set of all _queries_ denoted as \(X=\{x_{0},x_{1},x_{2},\ldots,x_{|X|}\}\), while \(Y\) is the set of _labels_ denoted as \(Y=\{y_{0},y_{1},y_{2},\ldots,y_{|Y|}\}\). These search for an optimal semantic match for a sentence \(x\in X\) from the set \(Y\), i.e. \[g(x,\mathcal{T})=\mathit{argmax}_{y_{i}\in Y}\mathcal{T}(x,y_{i}) \tag{9}\] In practice since we don't have the true semantics \(\mathcal{T}\) (e.g., relevance to query), we use some approximation of semantics for argmax. 
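With the embedding-based models used later in Section 5, this argmax reduces to a nearest-neighbour search over label embeddings. A minimal sketch is given below, assuming the sentence-transformers package; the checkpoint name is an illustrative stand-in for the base model, not a detail taken from the paper.

```python
# Sketch of the argmax in Eq. (9) with an embedding (bi-encoder) model.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("msmarco-distilbert-base-v4")  # illustrative checkpoint

def best_label(query, labels):
    # Embed the query and candidate labels into a shared unit-norm space,
    # then return the label with the highest dot-product similarity.
    q = model.encode([query], normalize_embeddings=True)
    Y = model.encode(labels, normalize_embeddings=True)
    scores = (q @ Y.T).ravel()
    return labels[int(np.argmax(scores))], float(scores.max())
```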
We denote a set of examples by: \[D_{\mathit{search}}=\{((x_{i},Y),T_{i}):i\in[|X|]\} \tag{10}\] where \[T_{i}=\{t_{ij}:j\in[|Y|]\} \tag{11}\] Unlabeled samples lack \(T_{i}\) information. Following Eqn. 3, Conditional Informativeness on set \(X\) is defined as \[\begin{split} y=& g(x_{i},f_{0})\\ z_{i}(f,f_{0})=&\mathit{Dev}(f(x_{i},y),f_{0}(x_{i},y))\end{split} \tag{12}\] where \(g(.)\) finds the nearest \(y_{j}\in Y\) for \(x_{i}\) according to base embedding function \(f_{0}\) (Eqn 9). LLM Annotation DetailsThe unlabeled set \(U\) now consisting of pairs \((x_{i},Y)\) to be annotated by LLMs. Querying semantic similarity for each query, label pair \(\{(x_{i},y_{j}):y_{j}\in Y\}\) is very expensive. Hence we first create a filtered set of labels from a semantic similarity model (in our case fine-tuned model) \(f\). With slight abuse of notation, we consider an extension of the function \(g\) in Eq 9 as \(g(x,f,K)\) where \(g\) now outputs a set of top \(K\) labels for each query. Our filtered set is hence \(Y^{\prime}=g(x,f,K)\) where \(f\) is the finetuned model. The set \(Y^{\prime}\) hence consists of top \(K\) ranked labels for a query according to the finetuned model \(f\). The rest of the labels (which weren't in the top \(K\) ranking of the finetuned model) i.e. \(Y/Y^{\prime}\) have their semantic similarity set to 0. We query the LLM for semantic similarity of labels in the filtered set \(Y^{\prime}\), where \(|Y^{\prime}|=K\). Hence this helps us reduce the complexity of searching through the whole label space by restricting the search space using the finetuned model \(f\). We take \(K\) = 10 for all experiments. For each pair \(\{(x_{i},y_{j}):y_{j}\in Y^{\prime}\}\) we can then query LLM similar to semantic similarity setup above. \[L^{\prime}=\{((x_{i},Y),T^{\prime}_{i}):(x_{i},Y)\in U\} \tag{13}\] We empirically observe that it is better to provide one prompt for each query along with its top \(K\) filtered labels. The labels should be ordered by their semantic similarity score according to the model \(f\) in the prompt. Example prompts used in our experiments can be found in Supp. A. The filtering step in semantic search can use any good similarity model. In our experiments, we utilise our finetuned model \(f\) for the filtering step. ## 5 Experiments We evaluate EAGLE algorithm on two tasks: **1)**_semantic similarity_, a fundamental task; **2)**_semantic search_, a real-world task motivated by information retrieval applications. We assume that in addition to some labeled examples, we are also given a large pool of unlabeled inputs. For semantic similarity we consider generalisation in limited labeled data setting, while for semantic search we evaluate generalisation to unlabeled target domains. In limited labeled data setting, both the labeled and unlabeled inputs follow the same distribution while when \begin{table} \begin{tabular}{l|c c c c} \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{Pair}} & Devailation & GT & LLM \\ \cline{2-4} \cline{6-6} & Why does Cube increase the presence of Sentuations Boy Natural Base? & Low & 0 & 1 \\ \hline What are the best hallucinations in Mexico? & Low & 1 & 0 \\ \hline What is third picking model? & High & 0 & 0 \\ \hline What is a single model? & & High & 0 & 0 \\ \hline How was training performed in Academic India? & \multirow{2}{*}{High} & \multirow{2}{*}{1} & \multirow{2}{*}{1} \\ \cline{2-4} is those proof of ancient Indian working process? & & & \\ \cline{2-4} thus what did they translate in and with what countries? 
& & & \\ \hline \hline \end{tabular} \end{table} Table 3: LLM (gpt-3.5-turbo) annotations for some low and high deviation examples. LLM is able to correctly guess the high deviation ones while is incorrect on the low deviation ones. LLM annotation accuracy is agnostic of the deviation. See Supp A for prompts used. adapting to unlabeled target domain (the unlabeled input), there is a distribution shift in inputs of the labeled and unlabeled examples. We show how our Conditional Informativeness based sampling inputs helps to improve generalization in both of these settings. We do our experiments on embedding-based semantic similarity as defined below. **Embedding-based Semantic Similarity/Search** The search argmax operation (Eq 9) over the complete query set \(|X|\) is quadratic (i.e. \(|X|\times|Y|\)). For efficient computation, we use embeddings based semantic similarity (SBERT Reimers and Gurevych (2019)), where both queries and labels are seperately embedded into a \(N\) dimensional unit norm space. The dot product between the embedded representations of sentences gives the semantic similarity. Hence the goal is to learn a embedding function \(h:\mathcal{X}\cup\mathcal{Y}\rightarrow\mathbb{R}^{N}\) s.t. \(h(x_{i})^{\intercal}h(y_{i})\) gives the semantic similarity between the functions. **Sampling methods.** In addition to Conditional Informativeness, we consider random-sampling and an active learning uncertainty-based sampling algorithm. As an oracle, we also consider ground-truth labels for the same inputs sampled by each of these sampling algorithms. **Implementation details.** We consider the base model as MSMARC0-DistilBERT-v4 for both tasks. For LLM-based annotations we use GPT-3.5-Turbo. See Supp. A for prompts used in the experiments. We tried open source models such as TogetherComputer/RedPajama-INCITE-7B-Base (along with the Chat and Instruct version) or MosaicML/mpt-7b-chat but did not obtain good annotation accuracy. All results are reported for 3 seeds. Other training details are in Supp. C. ### Semantic Similarity **Setup** We conduct experiments on the _Quora Question Pairs_Wang et al. (2018) dataset, which consists of pairs of questions. The task is to label each pair as a duplicate or not, i.e., whether the questions have the same intent or not. We subsample 38400 training pairs from the train set. We consider a setup where 10% of the Quora dataset is labeled by ground truth, while rest of the 90% forms the unlabeled pool of data. We present test AUC (Area-Under-ROC) numbers as the evaluation metric. \begin{table} \begin{tabular}{|l|c|} \hline Data & Test AUC \\ \hline Initial Train set & 85.780 \(\pm\) 0.002 \\ \hline + Random LLM & 86.001 \(\pm\) 0.139 \\ + Uncertainty LLM & 86.058 \(\pm\) 0.089 \\ + Conditional Informativeness LLM & **86.432 \(\pm\) 0.106** \\ \hline + Random GT & 86.677 \(\pm\) 0.068 \\ + Uncertainty GT & 87.125 \(\pm\) 0.135 \\ + Conditional Informativeness GT & **87.445 \(\pm\) 0.091** \\ \hline \end{tabular} \end{table} Table 4: AUC for different sampling techniques for Quora semantic similarity task. We sample 10% of unlabeled data. For LLM-based annotations, the best method is to sample using our Conditional Informativeness sampling while for GT based annotation, both uncertain and Conditional Informativeness based sampling are good. Figure 2: Gain in AUC on including LLM-annotated and GT-based augmentations on Quora dataset. The orange line is gain in AUC with uncertainty based sampling (shaded region shows std err.). 
We divide the unlabeled data into 20 quantiles, based on the Conditional Informativeness metric. Conditional Informativeness increases from left to right. For LLMs, uncertainty is not a good method for sampling and Conditional Informativeness based sampling is better, while for GT-based augmentations uncertainty based sampling is better. Comparison with Random and Uncertainty SamplingWe follow the Algorithm from 4.1 for semantic similarity. Using the model finetuned on labeled data, we sample 10% of unlabeled data for annotation, according to various sampling strategies (namely random, uncertainty and Conditional Informativeness). For details on how LLM annotations are done see Sec 4.2. We also present results on annotations with ground truth labels i.e. \(t^{\prime}_{i}=t_{i}\) (Sec 4.2). In Table 4, we show that for LLM-based annotations, Conditional Informativeness-based sampling achieves significantly better test AUC than random and uncertainty sampling. In comparison, for annotating with GT labels, both uncertainty and Conditional Informativeness- based sampling yield high AUC. Evaluating Conditional Informativeness-based QuantilesTo find out why uncertainty-based sampling did not work for LLM annotations, we divide the data into 20 quantiles, each having 5% of unlabeled data based on Conditional Informativeness metric. Figure 2 shows the gain in AUC on including these samples (LLM or GT annotated) with the training data. As a comparison, the orange line in the plot signifies accuracy on sampling 5% from uncertainty metric (shaded portion is std error). For LLM annotations, we observe that uncertainty is not a good technique for sampling and Conditional Informativeness-based sampling is better, while for GT-based augmentations uncertainty-based sampling provides better gains than Conditional Informativeness-based sampling. ### Semantic Search Next, we evaluate the utility of Conditional Informativeness sampling for generalisation to unlabeled target domains in semantic search tasks. DatasetsWe consider two recommendation datasets for semantic search : 1) LF-WikiSeeAlsoTiles-320K(Bhatia et al., 2016) (i.e., Wikipedia) considers a recommendation/retrieval setting. The train set consists of Wikipedia page titles (queries \(X\)) along with a large set of page titles (labels \(Y\)). For Wikipedia, a label \(y_{j}\) is semantically similar to a query \(x_{i}\) if the label is likely to occur in the _SeeAlso_ section of the query article's wiki-page. As described in Section 4.3 for the semantic search task, the set of labels remains fixed to \(Y\). The task is to learn embeddings which follow the semantics above. For each article \(X\) we also parse its category information, which we use as it domain label. If for an article \(x_{i}\), it's categorical information contains "USA" or "America" it belongs to the domain USA, otherwise not. 2) LF-AmazonTiles-131K(Bhatia et al., 2016) (i.e., Amazon) considers recommendations in e-commerce _AlsoBought_ product setting. Given a query product (\(X\)) the labels correspond to possible products a user might buy (\(Y\)). Here too we consider categorical information for all query products \(X\). We construct two domains in Amazon. All products in "Books" category are in the Books domain, while all products in the "Kitchen and Dining" category form the Kitchen domain. SetupFor Wikipedia we consider the USA domain as our target unlabeled domain, and the rest of the dataset as our labeled data. 
Similarly for Amazon we construct two versions of the dataset, one where we Books domain as the target unlabeled domain and another where we consider Kitchen as the target unlabeled domain. We use _Precision@1_ (P@1) metric for evaluation, i.e. the fraction of queries whose top ranked label is semantically sim \begin{table} \begin{tabular}{l|c c|c c} \hline & \multicolumn{2}{c|}{Wikipedia} & \multicolumn{2}{c}{Amazon} \\ & USA & Total & Books & Total \\ \hline Initial Train set & 12.530 \(\pm\) 0.034 & 19.048 \(\pm\) 0.019 & 17.226 \(\pm\) 0.008 & 24.904 \(\pm\) 0.076 \\ \hline + Target LLM Random 40\% & 13.188 \(\pm\) 0.073 & 19.232 \(\pm\) 0.024 & 17.959 \(\pm\) 0.075 & 25.065 \(\pm\) 0.049 \\ + Target LLM Conditional Informativeness (bottom 40\%) & 13.089 \(\pm\) 0.079 & 19.209 \(\pm\) 0.034 & 18.021 \(\pm\) 0.033 & 25.110 \(\pm\) 0.038 \\ + Target LLM Conditional Informativeness (middle 40\%) & 13.166 \(\pm\) 0.021 & 19.228 \(\pm\) 0.022 & 18.123 \(\pm\) 0.060 & 25.216 \(\pm\) 0.012 \\ + Target LLM Conditional Informativeness (top 40\%) & **13.372 \(\pm\) 0.058** & **19.363 \(\pm\) 0.023** & **18.351 \(\pm\) 0.028** & **25.271 \(\pm\) 0.030** \\ \hline + Target GT Random 40\% & 13.893 \(\pm\) 0.047 & **19.430 \(\pm\) 0.052** & 18.375 \(\pm\) 0.068 & **25.329 \(\pm\) 0.051** \\ + Target GT Conditional Informativeness (bottom 40\%) & 13.911 \(\pm\) 0.032 & 19.395 \(\pm\) 0.013 & 18.455 \(\pm\) 0.054 & 25.271 \(\pm\) 0.020 \\ + Target GT Conditional Informativeness (middle 40\%) & **13.973 \(\pm\) 0.070** & 19.327 \(\pm\) 0.023 & 18.400 \(\pm\) 0.027 & 25.213 \(\pm\) 0.075 \\ + Target GT Conditional Informativeness (top 40\%) & 13.878 \(\pm\) 0.015 & 19.414 \(\pm\) 0.015 & **18.613 \(\pm\) 0.043** & 25.285 \(\pm\) 0.057 \\ \hline \end{tabular} \end{table} Table 5: P@1 for test target domain (USA in Wikipedia and Books in Amazon) and complete test set. For LLM-based annotation, top 40% samples according to our Conditional Informativeness are optimal for total accuracy (while also being optimal for target domains accuarcy). For GT based annotations, Random Sampling is best for total accuracy. Best target domain accuracy method for GT is inconclusive. ilar (or _relevant_) to the query, i.e, \[Precision@1=\mathbb{E}_{x_{i}\in X}[\mathcal{T}(x_{i},g(x_{i},f))]\] For the GT annotation oracle, we annotate the top \(K\) sampled labels (using the finetuned model \(f\)) with ground truth information i.e. for a query \(x_{i}\), \(t^{\prime}_{ij}=t_{ij}\ \forall\ y_{j}\in Y^{\prime}\) and \(t^{\prime}_{ij}=0\ \forall\ y_{j}\notin Y^{\prime}\). See Sec 4.3 for notation (Eq 11,13). Note that for all labels which are not ranked in top \(K\) by the finetuned model have their semantic similarity set to 0, even if they were relevant in GT. For other details refer to Supp. C. ResultsWe present test P@1 for the target domains (_USA_ domain in Wikipedia and _Books_ domain in Amazon) and the complete source + target domain test sets in Table 5. We find that when augmenting with LLM based annotations, selecting inputs which are in the top 40% inputs according to our Conditional Informativeness are optimal for total accuracy (while also being optimal for target domains accuarcy). For GT based annotations, Random Sampling is best for total accuracy, though results are not significant. Using Domain-knowledge for Qualitative measure of Conditional InformativenessOn the Amazon recommendation task, consider domain adaptation to Books or Kitchen domains. 
For Book recommendations using only book titles (e.g., say The Kite Runner for A Thousand Splendid Suns) the Conditional Informativeness would be high for encoder based models (assuming that encoder doesn't have the necessary domain knowledge for book recommendations, i.e., the two books share the same author). That is, it would require more world knowledge than for domains like Kicthen, (e.g., Kaiser Bakeware Muffin Pan for Nordic Ware Brownie Pan) which are more likely to be consistent with the base model's semantics (in this case lexical similarity). For the domain Kitchen, we can see in Table 6 that including LLM-based annotations for domain Kitchen does not provide any gains compared to the base model. In comparison, for other domains like Books, LLM annotations lead to better generalisation than both base and training set finetuned models. Refer to Supp. B for a plot showing how LLMs are not better than finetuned/base model for Amazon(Kitchen) domain, whereas for Wiki(USA) and Amazon(Books) LLMs are significantly better (Fig 3). For accuracy improvements on Kitchen domain, techniques utilising regularisation to base model may be suitable and LLMs may not be needed. ## 6 Conclusion We showed how LLMs can be used for annotations and how sampling of inputs plays an important role in improving an NLP model's generalization. To this end, we presented a novel sampling algorithm for input selection that performs better than the popular technique of uncertainty-based sampling. As future work, we would like to test whether the Conditional Informativeness metric applies to other NLP tasks beyond semantic similarity. For the semantic search setting, given the generative capabilities of LLMs, an interesting future direction is to use LLMs to generate labels for queries while restricting the generated label set to our target label set.
2306.01713
Fast estimation of the look-elsewhere effect using Gaussian random fields
We discuss the use of Gaussian random fields to estimate the look-elsewhere effect correction. We show that Gaussian random fields can be used to model the null-hypothesis significance maps from a large set of statistical problems commonly encountered in physics, such as template matching and likelihood ratio tests. Some specific examples are searches for dark matter using pixel arrays, searches for astronomical transients, and searches for fast-radio bursts. Gaussian random fields can be sampled efficiently in the frequency domain, and the excursion probability can be fitted with these samples to extend any estimation of the look-elsewhere effect to lower $p$-values. We demonstrate this using two example template matching problems. Finally, we apply this to estimate the trial factor of a $4^3$ accelerometer array for the detection of dark matter tracks in the Windchime project. When a global significance of $3\sigma$ is required, the estimated trial factor for such an accelerometer array is $10^{14}$ for a one-second search, and $10^{22}$ for a one-year search.
Juehang Qin, Rafael F. Lang
2023-06-02T17:35:28Z
http://arxiv.org/abs/2306.01713v2
# Fast estimation of the look-elsewhere effect using Gaussian random fields ###### Abstract We discuss the use of Gaussian random fields to estimate the look-elsewhere effect correction. We show that Gaussian random fields can be used to model the null-hypothesis significance maps from a large set of statistical problems commonly encountered in physics, such as template matching and likelihood ratio tests. Some specific examples are searches for dark matter using pixel arrays, searches for astronomical transients, and searches for fast-radio bursts. Gaussian random fields can be sampled efficiently in the frequency domain, and the excursion probability can be fitted with these samples to extend any estimation of the look-elsewhere effect to lower \(p\)-values. We demonstrate this using two example template matching problems. Finally, we apply this to estimate the trial factor of a \(4^{3}\) accelerometer array for the detection of dark matter tracks in the Windchime project. When a global significance of \(3\sigma\) is required, the estimated trial factor for such an accelerometer array is \(10^{14}\) for a one-second search, and \(10^{22}\) for a one-year search. ## I Introduction In hypothesis testing problems with composite hypotheses, the correct frequentist \(p\)-value might not be the same as the \(p\)-value one would compute for fixed values of the composite hypothesis parameters [1]. This is referred to as the look-elsewhere effect. The correct \(p\)-value given composite hypotheses is often termed the global \(p\)-value, whereas the \(p\)-value computed with fixed parameters is termed the local \(p\)-value. The look-elsewhere effect correction is often parameterized by a trial factor, which is the ratio between the local and global \(p\)-values [2]. A simple approach to finding the trial factor would be to run a large number of null-hypothesis Monte-Carlo simulations, often using simplified or "toy" models that retain the relevant statistical properties. An example of this approach can be found in [3]. However, data-analysis and inference of modern experiments can be extremely computationally-intensive, requiring dedicated computational infrastructure. Even then, with complex and high-dimensional problems, the use of toy Monte Carlo simulations can be computationally too expensive. This makes estimation of trial factors for the purposes of sensitivity projections for future experiments difficult. Thus, the use of Gaussian random fields to directly generate null-hypothesis significance maps (termed 'null significance maps') and estimate the trial factor can be very useful. Gaussian random fields are random functions over a domain, where the values of every finite collection of points on the domain are described by a multivariate Gaussian distribution. In parameter estimation problems, the domain would be the parameter space as defined by the relevant parameters, such as a finite two-dimensional Euclidean space for a search for a transient in an image. Such fields can be viewed as a higher-dimensional generalization of Gaussian processes, commonly used in Gaussian process regression [4]. Gaussian random fields are used for the estimation of the look-elsewhere effect in neuroimaging [5], and for the modelling of the matter distribution in the universe [6]. Work exists regarding the use of Gaussian random fields for the estimation of the look-elsewhere effect in physics [7; 8]. 
The use of Gaussian random fields to estimate look-elsewhere effect corrections relies on computation of the excursion probability, which is the probability for samples of a Gaussian random field to exceed a given significance level. In this paper, we detail techniques to use Gaussian random fields for the estimation of look-elsewhere effect corrections, with a focus on problems with underlying Gaussian random variables. In section II, we discuss the classes of problems that can be modelled by Gaussian random fields, overview the spectral method for efficient sampling of Gaussian random fields, and introduce an analytic approximation for the excursion probability. We then demonstrate these methods using a 2-dimensional template-matching problem with a Gaussian kernel, and a 1-dimensional template-matching problem with a non-Gaussian kernel, in section III and section IV respectively. Finally, in section V, we use these methods to estimate the trial factor when searching for dark matter tracks using an array of accelerometers. ## II Statistical underpinnings of method This section is split into 3 parts. First, we explore why Gaussian random fields correctly model the significance maps of a large set of problems in section II.1. Second, we discuss how to sample Gaussian random fields efficiently in section II.2. Finally, in section II.3 we describe a fitting procedure that can be used to extend this method to small \(p\)-values using an analytic approximation of the excursion probability of a Gaussian random field, and a way to directly estimate the excursion probability using the Euler characteristic when certain conditions that will be elaborated upon are met. ### Why Gaussian random fields can model a large set of problems Let us consider a statistical problem where one searches for a fluctuation over a finely-spaced set of Gaussian random variables distributed in a parameter space. One example of this could be a template matching search for a transient over a regular grid of CCD pixels with Gaussian noise, as shown in Fig. 1. Such a setup might be encountered in searches for astronomical transients [9; 10]. While a 2D grid with a simple template that is symmetric and does not vary with position is depicted in Fig. 1, this is for ease of illustration, and these assumptions are not made in the following argument except where noted. We can see that at each possible template position, the resultant signal strength recovered is a weighted sum of Gaussian random variables, where the weights correspond to the template amplitude at a given random variable. The significance map formed using such a template thus corresponds to the formal definition of a random field [11] where each point is Gaussian-distributed, and every collection of points represent a multivariate Gaussian distribution. In addition, given independent underlying random variables, the covariance between two points can be computed from the template directly. Consider two points in the parameter space, \((\mathbf{x}_{0},\mathbf{x}_{1})\). At each point, the random variable in the case of a signal-free dataset is given by: \[Y_{i}=\sum_{j}\alpha_{i,j}X_{j} \tag{1}\] where \(X_{j}\) are the underlying finely spaced Gaussian random variables such as CCD pixels, and \(\alpha_{i,j}\) refers to the template value at each underlying random variable for template \(i\). the expected value of \(X_{j}\), \(\mathbb{E}(X_{j})=0\), is taken without loss of generality, as the mean value can be subtracted. 
As such, the covariance would be given by: \[\begin{split} K(\mathbf{x_{0}},\mathbf{x_{1}})&= \mathrm{cov}\left(\sum_{j}\alpha_{0,j}X_{j},\sum_{k}\alpha_{1,k}X_{k}\right) \\ &=\mathbb{E}\left(\left(\sum_{j}\alpha_{0,j}X_{j}\right)\left( \sum_{k}\alpha_{1,k}X_{k}\right)\right)\end{split} \tag{2}\] This can be further simplified if the underlying Gaussian random variables are independent, as then \(\mathbb{E}(X_{i}X_{j})=0\) for \(i\neq j\). The covariance can then be computed: \[\begin{split} K(\mathbf{x_{0}},\mathbf{x_{1}})&= \mathbb{E}\left(\left(\sum_{j}\alpha_{0,j}X_{j}\right)\left(\sum_{k}\alpha_{1, k}X_{k}\right)\right)\\ &=\mathbb{E}\left(\sum_{j}\alpha_{0,j}\alpha_{1,j}X_{j}^{2}\right) \\ &=\sum_{j}\alpha_{0,j}\alpha_{1,j}\sigma_{j}^{2}\end{split} \tag{3}\] We can thus see that in the case of template matching with underlying Gaussian random variables, the significance map is modelled by a Gaussian random field and the covariance function can be directly calculated based on the template as well as the measured properties of the underlying random variables. In addition, Gaussian random fields can also be used to model significance maps with underlying Gaussian random variables generated from likelihood ratio tests. This is because the log likelihood-ratio is the sum of squared residuals, normalized by the standard deviation, as shown in Equation (4). \[\begin{split}\Lambda&=\log\left(\frac{\hat{\mathcal{ L}}}{\hat{\mathcal{L}}_{r}}\right)\\ &=\sum_{i}\frac{-\left(X_{i}-\mu_{i}\right)^{2}}{2\sigma}-\sum_{i} \frac{-\left(X_{i}\right)^{2}}{2\sigma}\\ &=\sum_{i}\frac{2\mu_{i}X_{i}-\mu_{i}^{2}}{2\sigma}\end{split} \tag{4}\] Some other problems that do not use underlying Gaussian random variables can still be represented approximately by Gaussian random fields in some circumstances. For example, if a likelihood ratio test is used, the distribution of the test statistic asymptotically approaches a \(\chi^{2}\) distribution due to Wilks' theorem [12] when the relevant conditions, detailed in [12], are satisfied. In such a situation, a signal-free significance map over a parameter space would represent a \(\chi^{2}\) random field [7]. This differs from a Gaussian random field. However, because a \(\chi^{2}\) random variable is defined as the square of a Gaussian random variable, a \(\chi^{2}\) random field can be sampled by sampling a Gaussian random field with the correct covariance, then squaring it. The excursion probability of Figure 1: Diagram depicting a template matching search for an excess over a grid of random variables. The grid of random variables is indicated by the blue points, and the template used to search for an excess is shown as the colored wire-frame distribution. such a random field is thus double the one-sided excursion probability described in section II.3. The fact that a \(\chi^{2}\) random field can be represented using Gaussian random field was noted by Ananiev and Read in [8]. Due to Lindeberg and Lyapunov's central limit theorems, [13; 14], this even holds for non-Gaussian underlying random variables such as Poisson noise, as long as Lindenberg's condition or Lyapunov's condition are satisfied [14]. More generally, our results are applicable whenever the underlying random variables are independent and sufficiently finely distributed to give a large sample size, as long as the criteria for the central limit theorems are met. Template matching problems can then be approximated by Gaussian random fields even if the underlying random variables are non-Gaussian. 
While a detailed discussion of these conditions is beyond the scope of this paper, verifying the Gaussianity of a given significance map numerically or constraining it using the Berry-Esseen theorem [15] might be sufficient for practical purposes. Taken together, the methods we discuss in this paper are applicable to a large variety of statistical problems commonly encountered in experimental physics. ### Efficient spectral sampling of stationary Gaussian random fields While for sufficiently complex problems, Gaussian random fields might be easier to sample than toy Monte Carlo-based methods, even this might still be too computationally intensive. For example, if we consider template-matching search in a flat 2-dimensional parameter space with \(10^{2}\) bins per dimension, there would be \(10^{4}\) points that need to be correlated with each other, resulting in a covariance matrix with \(10^{8}\) entries. We can see that with higher dimensional problems, populating such a covariance matrix (which is needed for naive sampling of Gaussian random fields) quickly becomes intractable. A review of efficient methods for the sampling of Gaussian random fields can be found in [16]. In sections III and V, we use the spectral method as described in [16] to efficiently sample from Gaussian random fields. This method of generating samples from a Gaussian random field requires the field to be weakly stationary, such that a covariance function can be described by a function of the displacement between two points. In that case, \[K(\mathbf{x_{0}},\mathbf{x_{1}})\equiv K_{s}(\mathbf{x_{0}}-\mathbf{x_{1}}) \tag{5}\] where \(K_{s}(\mathbf{s})\) is the autocorrelation function for a random field of zero mean. For a field with unity variance, which is typical for a significance map that is scaled to represent the signal-to-noise ratio, \(K_{s}(\mathbf{s})\) can be scaled to have a maximum value of one. This case is considered without loss of generality, as one can scale any stationary Gaussian random field to have a variance of unity. The Fourier transform of the autocorrelation function of a weakly stationary process is the power spectral density (PSD) due to the Wiener-Khinchin theorem [17; 18]. The spectral method for sampling Gaussian random fields makes use of this Fourier transform pair. Due to the Wiener-Khinchin theorem, instead of sampling a stationary Gaussian random field with a given autocorrelation function directly, it can be sampled in the frequency domain with the correct PSD as given by the Fourier transform of the autocorrelation function. This can be done by multiplying the Fourier transform of white noise by the square root of the PSD, then performing an inverse Fourier transform to return the sample to the relevant parameter space. We can thus sample stationary Gaussian random fields in high-dimensional parameter space without having to populate extremely large covariance matrices. For the remainder of text, we will refer to this method for sampling stationary Gaussian random fields as the spectral method. ### Analytic approximation of excursion probability Even when sampling Gaussian random fields with the spectral method, in high-dimensional parameter spaces, sampling can still be prohibitively expensive due to the curse of dimensionality. In this scenario, the look-elsewhere effect correction can be extended to lower \(p\)-values than otherwise feasible due to computational requirements using an analytic approximation of the excursion probability. 
For a random field \(\left\{f(t):t\in M\right\}\), the excursion probability over a confidence level \(u\) is defined as: \[p_{\text{excur}}=\mathbb{P}\left\{\sup_{t\in M}f(t)\geq u\right\} \tag{6}\] It can be seen that the excursion probability represents the probability that any point on a random field exceeds \(u\). The excursion probability for a smooth Gaussian random field on a locally convex space is given by [11]: \[\begin{split} p_{\text{excur}}=& C_{0}\Phi\left( \frac{u}{\sigma}\right)+u^{N}e^{-u^{2}/2\sigma^{2}}\sum_{j=1}^{N}C_{j}u^{-j}+\\ &\mathcal{O}\left(e^{-\alpha u^{2}/2\sigma^{2}}\right)\end{split} \tag{7}\] where \(\Phi(u)\) is the Gaussian tail distribution, \(N=\text{dim}(M)\), \(\sigma\) is the standard deviation of the Gaussian random field, and \(\alpha>1\) is a constant describing the exponential suppression of the error term. We can see that Equation (7) contains a number of constants (\(C_{i}\)). While these constants can be computed directly for some cases, this is sometimes non-trivial [11]. In these cases, Equation (7) can be used to fit the excursion probability directly using a set of samples of signal-free significance maps. This, combined with the spectral method of sampling Gaussian random fields shown in section II.2, greatly reduces the computational cost of computing the look-elsewhere effect correction. It should be noted that Equation (7) is based on approximating the excursion probability using the mean Euler characteristic of an excursion set [11]. Thus, for particularly computationally challenging problems, methods from [7] for estimating the Euler characteristic can further reduce the number of samples that are needed to derive the look-elsewhere effect correction. The Euler characteristic \(\varphi\) of an excursion set \(A_{u}\) can be interpreted as the number of isolated regions in an excursion set where the Gaussian random field \(f(t)\) exceeds some significance level \(u\) (\(f(t)>u\)), minus the number of holes in these isolated regions [5]; a precise mathematical definition can be found in [11]. If the Gaussian random field is on a rectangular Euclidean space \(M=\prod_{i}^{N}=1[0,M_{i}]\), the mean Euler characteristic can be expressed in terms of the dimensions of the Euclidean space and the derivatives of the random field [11]: \[\begin{split}\mathbb{E}\left(\varphi(A_{u})\right)=& e^{-u^{2}/2\sigma^{2}}+\\ &\sum_{k=1}^{N}\sum_{J\in\mathcal{O}_{k}}\frac{|J||\Lambda_{J}|^ {1/2}}{(2\pi)^{(k+1)/2}\sigma^{k}}H_{k-1}\left(\frac{u}{\sigma}\right)+\\ &\Phi\left(\frac{u}{\sigma}\right),\end{split} \tag{8}\] where \(\mathcal{O}_{k}\) refers to the set of \(k\)-dimensional faces of \(M\) that contain the origin, and \(|J|\) refers to the volume of the faces. Iterating over the \(k\)-dimensional faces that contain the origin can also be interpreted as iterating over \(k\)-dimensional slices of the space \(M\). \(\Lambda_{J}\) is a \(k\times k\) matrix containing the spectral moments of the Gaussian random field [11]. These can be computed explicitly using the second derivative of the covariance function [11]: \[\lambda_{ij}=-\left.\frac{\partial^{2}}{\partial d^{i}\partial d^{j}}K_{s}( \mathbf{s})\right|_{\mathbf{s}=\vec{0}} \tag{9}\] where \(\lambda_{ij}\) refers to the element of \(\Lambda_{J}\) in the \(i^{\text{th}}\) row and the \(j^{\text{th}}\) column, \(d^{i}\) refers to the \(i^{\text{th}}\) element of the \(\mathbf{s}\) vector, and \(K_{s}(\mathbf{s})\) is the covariance function as defined in Equation (5). 
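If the covariance function is only available numerically, the second derivatives in Equation (9) can be estimated by finite differences. The sketch below is one way to do this; the callable `cov` and the step size `h` are assumptions chosen for illustration.

```python
# Numerical estimate of the spectral moments lambda_ij of Eq. (9), given a
# stationary, unit-variance covariance function K_s(s) as a callable `cov`.
import numpy as np

def spectral_moments(cov, dim, h=1e-3):
    lam = np.zeros((dim, dim))
    for i in range(dim):
        for j in range(dim):
            e_i, e_j = np.zeros(dim), np.zeros(dim)
            e_i[i], e_j[j] = h, h
            # central-difference estimate of the mixed second derivative at s = 0
            d2 = (cov(e_i + e_j) - cov(e_i - e_j)
                  - cov(-e_i + e_j) + cov(-e_i - e_j)) / (4.0 * h * h)
            lam[i, j] = -d2          # Eq. (9)
    return lam

# Consistency check: for K_s(s) = exp(-0.5 s^T A s) this returns approximately A;
# with A = (2 Sigma)^{-1} it reproduces Lambda_J = Sigma^{-1}/2 for the
# Gaussian-kernel case quoted below.
A = np.diag([4.0, 1.0])
print(spectral_moments(lambda s: np.exp(-0.5 * s @ A @ s), dim=2))
```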
For a problem where the point response function is known, this can be computed directly using the following integral [5]: \[\Lambda_{J}=\left.\int\frac{\partial\alpha_{s}(\mathbf{s})}{\partial\mathbf{s }}\frac{\partial\alpha_{s}(\mathbf{s})}{\partial\mathbf{s}^{T}}d\mathbf{s} \right/\int\alpha_{s}(\mathbf{s})^{2}d\mathbf{s} \tag{10}\] where \(\alpha_{s}(\mathbf{s})\) is the point response function, and \(\mathbf{s}^{T}\) denotes the transpose of the \(\mathbf{s}\) vector. This is equivalent to the template function used in Equation (11) where the field is stationary, such that \(\alpha_{s}(\mathbf{x}-\vec{x})=\alpha(\mathbf{x},\vec{x})\). For the case of a Gaussian kernel with covariance matrix \(\Sigma\), we obtain \(\lambda_{J}=\Sigma^{-1}/2\)[5]. For problems with stationary fields and Euclidean parameter spaces where Equation (8) applies, this equation can be interpreted as an approximation of \(p_{\text{excur}}\). Thus, Equation (8) can be combined with either Equation (9) or Equation (10) to produce an estimate of \(p_{\text{excur}}\). ## III Demonstration with a 2D toy problem The ideas introduced in section II can be first demonstrated using a search in a 2D parameter space. Here, we will model a 2D template matching search using a 2D Gaussian random field. 2D Gaussian random fields can be used to model the look-elsewhere correction for various experiments, such as searches for dark matter using pixel detectors [19; 20] and searches for astronomical transients [9; 10]. The expected signal shape is chosen to be a Gaussian kernel for this problem; hence, the template matching kernel can also be modelled using a Gaussian kernel. In such a situation, the normalized covariance function (the correlation function) is also a Gaussian kernel with double the covariance matrix and \(\sqrt{2}\) the linear dimensions, as shown in Figure 2. This can be seen by expanding Equation (3) in the case of a position-independent kernel: \[K(\mathbf{x_{0}},\mathbf{x_{1}})=\sum_{j}\alpha(\mathbf{x_{0}},\vec{x}_{j}) \alpha(\mathbf{x_{1}},\vec{x}_{j})\sigma_{j}^{2} \tag{11}\] where \(\alpha(\mathbf{x_{0}},\vec{x}_{j})\) is the template weight for a sample at \(\vec{x}_{j}\), and a template at \(\mathbf{x_{0}}\). For the case of a stationary field, \(\sigma_{j}\) is the same for all \(j\), so we call this simply \(\sigma\). Then, taking the template function to be a Gaussian with covariance matrix \(\Sigma\), we can apply the continuum approximation if each data samples takes up a volume of \(V_{\mathrm{s}}\). Thus, we Figure 2: 2-dimensional toy problem template matching kernel and covariance function. The ellipse corresponding to the full width at half maximum (FWHM) of the covariance function is \(\sqrt{2}\) bigger than that of the kernel in linear dimension. arrive at: \[K(\mathbf{x_{0}},\mathbf{x_{1}})= \sigma^{2}\sum_{j}\alpha(\mathbf{x_{0}},\vec{x}_{j})\alpha(\mathbf{ x_{1}},\vec{x}_{j})\] \[\approx \frac{\sigma^{2}}{V_{s}}\int\alpha(\mathbf{x_{0}},\vec{x})\alpha( \mathbf{x_{1}},\vec{x})d\vec{x}\] \[= \frac{\sigma^{2}}{V_{s}}\int e^{(\mathbf{x_{0}}-\vec{x})^{T}\Sigma ^{-1}(\mathbf{x_{0}}-\vec{x})}e^{(\mathbf{x_{1}}-\vec{x})^{T}\Sigma^{-1}( \mathbf{x_{1}}-\vec{x})}d\vec{x} \tag{12}\] We can see that this is linearly proportional to the convolution of two Gaussian distributions. It is a well known result that the convolution of Gaussian distributions is Gaussian, with a covariance matrix that is the sum of the covariance matrix of the individual Gaussian random variables[18]. 
Thus, if the templates in a search are Gaussian, the resulting Gaussian field from the template search has the covariance function: \[K(\mathbf{x_{0}},\mathbf{x_{1}})=K_{s}(\mathbf{x_{0}}-\mathbf{x_{1}})\propto e ^{(\mathbf{x_{0}}-\mathbf{x_{1}})^{T}\Sigma^{\prime-1}(\mathbf{x_{0}}-\mathbf{ x_{1}})} \tag{13}\] where \(\Sigma^{\prime}=2\Sigma\). One can then compute the normalization analytically using the convolution integral Equation (12). In our case, we simply discard the normalization. This is not an issue as the goal is to produce a null significance map for the purpose of calibrating the look-elsewhere effect, and such significance maps are normalized to have unity variance so that the local significance is directly given by the significance map. A single sample from the Gaussian random field described by Equation (13) is shown in Figure 3. The parameter space is divided into 60 bins in each dimension when generating this sample. We can now compare samples generated using a traditional toy Monte Carlo and Gaussian field samples. This is shown in Figure 4. The Gaussian random field is sampled both using the spectral method described in section II.2 and a naive method, where we directly sample a large covariance matrix describing the covariance between every pair of points as a multivariate Gaussian. As expected, the excursion probability obtained from toy Monte Carlo samples agree with those obtained by sampling Gaussian random fields. This demonstrates how Gaussian random fields can be sampled to produce large numbers of null significance map samples without a toy Monte Carlo whereby mock data is generated and used to produce significance maps via template matching. Finally, we can test the use of Equation (7) to fit the excursion probability. As the error term in Equation (7) is exponentially suppressed at small excursion probabilities, the fit only uses data points from after \(u^{2}=10\), where the excursion probability is approximately 0.1. An estimate using the covariance of the Gaussian kernel and Equation (8) is also performed; these are shown in Figure 5. Even though only \(10^{3}\) samples are used to fit the excursion probability, the fit matches the excursion probability expected from the toy MC samples. This demonstrates that fitting a limited set of samples using Equation (7) does indeed allow one to estimate the look-elsewhere effect correction with greatly reduced computational expense, in this case by a factor of \(\sim 100\). These results are summarized in Table 1, where it can be seen that the various methods all agree within expected uncertainties. ## IV Demonstration with a 1D Template Matching Problem We showed in section III that in the case of a Gaussian search kernel and uniformly distributed underlying random variables, the covariance function can be easily Figure 3: Example of a single null random sample. The FWHM of the covariance function is overlaid as a black dashed ellipse for comparison. Figure 4: The fraction of \(10^{5}\) null random samples showing false positives as a function of the significance threshold in units of \(\sigma^{2}\). We can see that the different methods to generate random fields produce global \(p\)-values that are in agreement. computed. Indeed, while the derivation focused on Gaussian kernels, the result should hold in general for kernels that are closed under convolution, such as kernels that represent stable distributions [21]. In many cases, however, the covariance function might be easier to compute numerically. 
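For this Gaussian-kernel case, the generation of null significance maps with the spectral method of section II.2 can be sketched in a few lines. The grid size and correlation length below are illustrative choices, and the periodic (circulant) treatment of the boundary is a simplification of the analysis above rather than the exact procedure used for Figures 3 to 5.

```python
# Sketch of the spectral method for the 2D toy problem: the stationary correlation
# function of Eq. (13) is Gaussian, its FFT gives the power spectral density (PSD),
# and filtered white noise yields null significance maps.
import numpy as np

n, sigma_corr = 60, 3.0                            # 60x60 grid, correlation length in bins
lag = np.minimum(np.arange(n), n - np.arange(n))   # periodic lags, peak at index 0
s2 = lag[:, None] ** 2 + lag[None, :] ** 2
corr = np.exp(-0.5 * s2 / sigma_corr ** 2)         # K_s(s), normalized so K_s(0) = 1
psd = np.abs(np.fft.fft2(corr))                    # Wiener-Khinchin: PSD = FFT of K_s

def sample_field(rng):
    white = rng.standard_normal((n, n))
    field = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)).real
    return field / field.std()                     # enforce unit sample variance

rng = np.random.default_rng(1)
maxima = np.array([sample_field(rng).max() for _ in range(1000)])
u = 3.0                                            # local significance threshold
p_excursion = np.mean(maxima > u)                  # empirical global p-value at u
```

The resulting empirical excursion probabilities can then be fitted with Equation (7) above threshold, as done for Figure 5, to extrapolate to lower global \(p\)-values.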
To demonstrate such an example, we consider a 1-dimensional search with a non-Gaussian kernel, for an excess in time-series data due to a particle interacting via a long-range force passing by an accelerometer. This toy problem is inspired by the Windchime project [22], where the direct detection of dark matter particles with masses of around the Planck mass will be attempted. While technically challenging, it has been suggested that this might be possible with large accelerometer arrays [23]. In the case of a dark matter particle passing by an accelerometer, the force as a function of time is given by Equation14, where \(G\) is the gravitational constant, \(m_{\chi}\) is the mass of a dark matter particle, \(m_{s}\) is the test mass of the sensor, \(b\) is the impact parameter of a dark matter track, and \(v\) is the velocity of the dark matter track [23]. \[F(t)=\frac{Gm_{\chi}m_{s}b}{\left(b^{2}+v^{2}t^{2}\right)^{3/2}} \tag{14}\] For template matching purposes, a normalized template with the same shape as Equation14 can be used. This is shown in equation15. \[f(t)=\frac{vb^{2}}{2\left(b^{2}+v^{2}t^{2}\right)^{3/2}} \tag{15}\] In this demonstration, values of \(b=$3\,\mathrm{mm}$\) and \(v=$3\times 10^{5}\,\mathrm{m}\mathrm{/}\mathrm{s}$\) are used, and the sampling rate is \(10^{9}\,\mathrm{Hz}\). With this information, we can generate the template for template matching, and then compute the covariance function using Equation3. As this system is also described by a stationary random field, this is done by computing the autocorrelation of the template. These are shown together with MC data samples in Figure6. Figure 5: The fraction of null random samples showing false positives as a function of the significance threshold in units of \(\sigma^{2}\). A fit using Equation7 is shown (dotted purple), and we can see that a fit with only \(10^{3}\) samples agrees well with the excursion probability derived from \(10^{5}\) toy MC samples. We can see that the Euler characteristic computed using Equation8 (dashed cyan) also agrees well with both the toy MC samples and the excursion probability fit. Figure 6: Top left: The template function of the 1D toy problem. Top right: The correlation function derived as the autocorrelation of the template. Bottom left: One random sample containing a true signal, with the signal truth expectation shown in dashed orange. Bottom right: Two significance maps generated using the toy MC procedure. The blue line contains a true signal, whereas the black dashed line does not. \begin{table} \begin{tabular}{c|c|c} Method & \(p_{4\sigma}\) & \(p_{5\sigma}\) \\ \hline Toy MC & \(\left(6.27^{+0.25}_{-0.24}\right)\times 10^{-3}\) & \(\left(8.0^{+3.4}_{-2.4}\right)\times 10^{-5}\) \\ Gaussian random field & \(\left(6.17^{+0.25}_{-0.24}\right)\times 10^{-3}\) & \(\left(6.0^{+3.0}_{-2.0}\right)\times 10^{-5}\) \\ Gaussian random field, spectral method & \(\left(5.98^{+0.25}_{-0.24}\right)\times 10^{-3}\) & \(\left(5.0^{+2.8}_{-1.8}\right)\times 10^{-5}\) \\ Best fit & 6.36\(\times 10^{-3}\) & 8.8\(\times 10^{-5}\) \\ Euler characteristic estimate & 6.40\(\times 10^{-3}\) & 8.6\(\times 10^{-5}\) \\ \end{tabular} \end{table} Table 1: Global \(p\)-values at \(4\sigma\) and \(5\sigma\) local significance for the 2D toy problem. It can be seen that the \(p\)-values are consistent within stated binomial errors, and both the best fit value produced using a \(1\%\) sample size and the estimate produced with Equation8 reproduce the simulated values well. 
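A minimal sketch of this numerical route is given below: it builds the single-sensor template of Equation (15) with the quoted values of \(b\), \(v\), and sampling rate, and takes its (normalized) autocorrelation as the covariance function. The window length and unit-norm normalization are illustrative choices.

```python
# Sketch of the 1D template of Eq. (15) and its autocorrelation, which via Eq. (3)
# gives the covariance function of the null significance map (up to normalization).
import numpy as np

b, v, fs = 3e-3, 3e5, 1e9                    # impact parameter [m], speed [m/s], sampling rate [Hz]
t = np.arange(-5000, 5000) / fs              # time stamps around closest approach (illustrative window)
template = v * b**2 / (2.0 * (b**2 + (v * t) ** 2) ** 1.5)    # Eq. (15)
template /= np.linalg.norm(template)         # unit-norm template -> unit-variance significance map

corr = np.correlate(template, template, mode="full")          # correlation function, K_s(0) = 1
lags = (np.arange(corr.size) - (template.size - 1)) / fs      # lag axis in seconds
```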
As in section III, the global \(p\)-values at \(4\sigma\) and \(5\sigma\) local significance are computed using \(10^{5}\) signal-free samples each for a toy MC, for direct sampling of the Gaussian random field using the covariance function, and for sampling of the Gaussian random field in frequency space, together with a best fit to \(10^{3}\) samples using Equation (7). The best fit only uses data points beyond \(u^{2}=10\), where the excursion probability is approximately 0.1. These results are shown in Table 2. As we expect, the different values agree within the computed uncertainties, demonstrating how the methods outlined in this paper can be used to estimate the look-elsewhere effect. While the example here uses a template that is relevant to the Windchime project, this procedure can be used in general to calibrate the look-elsewhere effect correction for problems involving template matching or matched filtering of time-series data, including sonar [24] and fast radio burst detection [25]. It should be noted that for cases involving multiple templates, correlations between templates would need to be computed as well to avoid underestimation of the significance of a signal.

## V Application to Windchime

These methods can now be applied to estimate the look-elsewhere effect correction needed for a dark matter direct detection experiment based on the Windchime concept [22]. In this section, we consider the detection of dark matter interacting via a long-range force using a \(0.6\,\mathrm{m}\) array of \(4^{3}\) accelerometers, with a sampling rate of \(10^{7}\,\mathrm{Hz}\). The force on a single sensor is given in Equation (14). However, for a particle passing through a sensor array, the impact parameter \(b\) would be different for each sensor, and additionally, the time of closest approach differs between sensors. Thus, instead of using the template for a single sensor, a template for the entire array is considered. As each template represents a track, we have to consider the parameterization of a track through the sensor array. We accomplish this using a bounding sphere that is larger than the accelerometer array, so that each track through the array intersects the bounding sphere twice and hence can be parameterized by two points on the bounding sphere. Any given template can then be parameterized by 6 parameters: velocity (\(v\)), entry time, the spherical coordinates of the entry point \((\cos(\theta_{0}),\phi_{0})\), and the spherical coordinates of the exit point \((\cos(\theta_{1}),\phi_{1})\). The cosine of the \(\theta\) angles is used because evenly spaced bins in \(\cos(\theta)\) correspond to equal-sized areas on a sphere. Here, we consider a bounding sphere with a diameter of \(1\,\mathrm{m}\), so that it encloses the entire array. For each set of the 6 parameters, a template is generated by considering the force on each sensor over a series of timesteps. At every timestep, the distance between the particle and every sensor is calculated, and the template for each sensor is computed using the inverse-square law. The equation for the template at the \(i^{\text{th}}\) timestep and the \(j^{\text{th}}\) sensor is thus: \[\mathbf{f}_{ij}=\frac{\mathbf{r}_{ij}}{r_{ij}^{3}} \tag{16}\] After the computation of the entire template over a set of timesteps and all sensors, the template is divided by its sum to normalize it to unity. Finally, the covariance between two sets of parameters can be computed by summing across all sensors and timesteps using Equation (3); a minimal sketch of this template construction and covariance computation is given below.
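The sketch below illustrates, under stated assumptions, how such an array template and the covariance between two parameter points could be computed. The sensor grid spacing, the common time window, the use of \(|\mathbf{f}_{ij}|\) in the normalizing sum, and the element-wise-product reading of Equation (3) are our assumptions for illustration; they are not details taken from the paper.

```python
import numpy as np

# Minimal sketch of the array-template construction described above, for a 4^3 sensor
# array inside a 1 m diameter bounding sphere. Grid spacing, time window, and the
# covariance formula used here are illustrative assumptions.
sensors = np.stack(
    np.meshgrid(*[np.linspace(-0.3, 0.3, 4)] * 3, indexing="ij"), axis=-1
).reshape(-1, 3)

def track_template(cos_th0, phi0, cos_th1, phi1, v, t_entry,
                   n_steps=256, fs=1e7, r_sphere=0.5):
    """Normalized array template f_ij of Equation (16) on a common time grid."""
    def sphere_point(cos_th, phi):
        sin_th = np.sqrt(1.0 - cos_th**2)
        return r_sphere * np.array([sin_th * np.cos(phi), sin_th * np.sin(phi), cos_th])

    p0, p1 = sphere_point(cos_th0, phi0), sphere_point(cos_th1, phi1)
    direction = (p1 - p0) / np.linalg.norm(p1 - p0)
    times = np.arange(n_steps) / fs                        # common time axis for every template
    pos = p0 + v * (times - t_entry)[:, None] * direction  # particle position at each timestep
    r = pos[:, None, :] - sensors[None, :, :]              # r_ij, shape (timestep, sensor, xyz)
    dist = np.maximum(np.linalg.norm(r, axis=-1, keepdims=True), 1e-9)
    f = r / dist**3                                        # Equation (16)
    return f / np.abs(f).sum()                             # normalize the template to unity

def covariance(params_a, params_b):
    """Covariance between two parameter points: sum over all sensors and timesteps."""
    return float(np.sum(track_template(*params_a) * track_template(*params_b)))

# Example: two nearby tracks; parameters are (cos_th0, phi0, cos_th1, phi1, v [m/s], entry time [s]).
p_a = (0.2, 0.30, -0.4, 2.0, 3e5, 5e-6)
p_b = (0.2, 0.35, -0.4, 2.0, 3e5, 5e-6)
print(covariance(p_a, p_a), covariance(p_a, p_b))
```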
This allows for the covariance function to be mapped out between one chosen template and other templates in the parameter space.

\begin{table} \begin{tabular}{c|c|c} Method & \(p_{4\sigma}\) & \(p_{5\sigma}\) \\ \hline Toy MC & \((5.82^{+0.25}_{-0.24})\times 10^{-3}\) & \((3.0^{+2.3}_{-1.3})\times 10^{-5}\) \\ Gaussian random field & \((6.13^{+0.25}_{-0.24})\times 10^{-3}\) & \((10^{+4}_{-3})\times 10^{-5}\) \\ Gaussian random field, spectral method & \((6.29^{+0.25}_{-0.25})\times 10^{-3}\) & \((10^{+4}_{-3})\times 10^{-5}\) \\ Best fit & 5.42\(\times 10^{-3}\) & 4.9\(\times 10^{-5}\) \\ Euler characteristic estimate & 7.56\(\times 10^{-3}\) & 8.4\(\times 10^{-5}\) \\ \end{tabular} \end{table} Table 2: Global \(p\)-values at \(4\sigma\) and \(5\sigma\) local significance for the 1D template matching problem. It can be seen that the \(p\)-values are consistent within stated binomial errors, and both the best fit value produced using a 1% sample size and the estimate produced with Equation (8) reproduce the simulated values well.

Figure 7: 2D slices of the 6D covariance function computed using accelerometer array templates.

Some 2D slices of the covariance function are shown in Figure 7. It can be seen from Figure 7 that the Gaussian random field representing this problem is not stationary. For a stationary field, the covariance function depends only on the displacement between points, as described in Equation (5). Since any covariance function satisfies \(K(\mathbf{x},\mathbf{x}-\mathbf{s})=K(\mathbf{x}-\mathbf{s},\mathbf{x})\), stationarity (shifting both arguments by \(\mathbf{s}\)) further gives \(K(\mathbf{x}-\mathbf{s},\mathbf{x})=K(\mathbf{x},\mathbf{x}+\mathbf{s})\). Together, these imply \(K_{s}(\mathbf{s})=K_{s}(-\mathbf{s})\): the covariance function of a stationary process is symmetric under \(\mathbf{s}\rightarrow-\mathbf{s}\), which is visibly not the case in Figure 7. Unfortunately, this means that the Wiener-Khinchin theorem [17, 18] does not apply, and spectral sampling of this covariance is not possible. To get an estimate of the look-elsewhere correction, we can approximate the covariance function using a symmetric functional form. Here, for the purposes of an order-of-magnitude estimate, we use a Gaussian kernel to approximate the covariance function. Similar 2D slices of the approximate covariance function are shown in Figure 8. A random sample from the Gaussian random field represented by Figure 8, sampled using the spectral method, is shown in Figure 9. Random samples are generated by approximating the parameter space with a Euclidean parameter space of equal volume. This is conservative: correlations across edges of the parameter space that should be identified with each other, such as \(\phi_{0}=-\pi\) and \(\phi_{0}=\pi\), are neglected, and including them would only lower the trial factor. Thus, the trial factor inferred with this approximation is higher than it would otherwise be. The excursion probability can now be fitted to samples such as Figure 9. The excursion probability estimated with 2000 such samples is shown in Figure 10. We can now compute the trial factor using the fit in Figure 10. First, we need to compute the signal-to-noise ratio threshold needed for a search with confidence level \(1-\alpha\) over time \(t\). Here, we use \(\alpha=0.0027\), corresponding to a significance level of \(3\sigma\).
The signal-to-noise ratio threshold is then found by solving Equation (17) for \(u\): \[\Psi(u)\frac{V^{\prime}}{V}\frac{T^{\prime}}{T}-\alpha=0 \tag{17}\] where \(\Psi(u)\) is the fitted excursion probability function, and the factors \(\frac{V^{\prime}}{V}\) and \(\frac{T^{\prime}}{T}\) account, respectively, for the fraction of the parameter space covered by the sampled Gaussian field and for the search time covered by the random field relative to the desired search time. This procedure tells us that we need a signal-to-noise ratio threshold of at least 8.4 for a 1 s search time and 10.4 for a 1 yr search time. The trial factor, \(N_{\text{trials}}\), is given by \[N_{\text{trials}}=\frac{\alpha}{\Phi(u)}. \tag{18}\]

Figure 8: Gaussian kernel approximation of the slices of the 6D correlation function shown in Fig. 7.

Figure 9: 2D slice of one null sample, generated using the covariance function shown in Fig. 8.

Figure 10: The fraction of null random samples showing false positives as a function of the significance threshold in units of \(\sigma^{2}\), with a fit using Equation (7) (dotted purple) and the Euler characteristic (Equation (8)) (dashed cyan). 2000 samples generated using the spectral method are shown here.

This results in estimated trial factors of \(\sim 10^{14}\) for a \(1\,\mathrm{s}\) search, and \(\sim 10^{22}\) for a \(1\,\mathrm{yr}\) search. We can see that, due to the high-dimensional search space, the Windchime experiment suffers from a rather high trial factor. Thus, thresholds much higher than the \(5\sigma\) level customary in particle physics [1, 26, 27] are needed for a rare event search with an accelerometer array.

## VI Conclusions

In this paper, we described and demonstrated the use of Gaussian random fields in the estimation of the look-elsewhere effect. The presented methods can be used to greatly reduce the computational requirements for the estimation of the look-elsewhere effect. This is particularly useful for high-dimensional and otherwise computationally complex problems. Our methods can also be helpful for sensitivity projections of future experiments, where the computational infrastructure needed for the data analysis of such an experiment does not yet exist. We have shown that Gaussian random fields can be used to model a large set of statistical problems commonly encountered in physics. When it has been ascertained that a given significance map can be modelled by a Gaussian random field, three techniques can be used to reduce the computational cost of estimating the look-elsewhere effect correction for the local significance, as demonstrated in this paper. First, various methods exist for the efficient sampling of Gaussian random fields, such as the spectral method, where samples are generated in frequency space. A review of such methods can be found in [16]. This allows Gaussian random fields to be sampled more efficiently than by directly sampling from a large covariance matrix. Second, an analytic approximation of the excursion probability, from [11], can be used to fit a small set of null significance map samples. Finally, given a stationary Gaussian field and a Euclidean parameter space, we have demonstrated that it is possible to directly compute the excursion probability based on the covariance function or on the template used for template matching [5, 11]. These methods can be combined to further reduce the computational cost of estimating the look-elsewhere effect correction at low \(p\)-values. We then demonstrate these techniques on 2D and 1D toy problems.
The 2D toy problem represents, for example, searches for dark matter using pixel detectors [19, 20] and searches for astronomical transients [9, 10]. The 1D toy problem represents searches in a 1D parameter space, such as searches for dark matter using accelerometers [22], sonar [24], and fast radio burst detection [25]. Using \(10^{5}\) samples generated with each method, we show that the look-elsewhere effect corrections derived using toy MC significance maps agree with those derived from Gaussian random field samples, both when the covariance function is sampled directly and when the fields are sampled using the spectral method. Finally, a much smaller sample of \(10^{3}\) null significance maps is used to fit the excursion probability. This analytic fit also agrees with the other approaches, allowing for a greater reduction in computational cost. We also demonstrate that the analytic fit matches the Euler characteristic computed directly using Equation (8). Finally, we have applied these techniques to a \(4^{3}\) accelerometer array based on the Windchime concept. We find that, when we require a global significance of \(3\sigma\), the estimated trial factor for such an accelerometer array is \(10^{14}\) for a \(1\,\mathrm{s}\) search, and \(10^{22}\) for a \(1\,\mathrm{yr}\) search. Taken together, the methods we introduce can help speed up the computation of trial factors in high-dimensional statistical problems, such as that encountered in track finding for Windchime.

###### Acknowledgements.

We thank Uzu Lim (Oxford University) for extremely helpful discussions about random field theory, especially with regard to the computation of the spectral moments of a Gaussian random field. This work was supported by the U.S. DOE Office of Science, Office of High Energy Physics, QuantISED program (under FWP ERKAP63).
2302.02935
Thermophysical properties of FLiBe using moment tensor potentials
Fluoride salts are prospective materials for applications in some next generation nuclear reactors and their thermophysical properties at various conditions are of interest. Experimental measurement of the properties of these salts is often difficult and, in some cases, unfeasible due to challenges from high temperatures, impurity control, and corrosivity. Therefore, accurate theoretical methods are needed for fluoride salt property prediction. In this work, we used moment tensor potentials (MTP) to approximate the potential energy surface of eutectic FLiBe (0.66 LiF 0.33 BeF2) predicted by the ab initio (DFT D3) method. We then used the developed potential and molecular dynamics to obtain several thermophysical properties of FLiBe, including radial distribution functions, density, self-diffusion coefficients, thermal expansion, specific heat capacity, bulk modulus, viscosity, and thermal conductivity. Our results show that the MTP potential approximates the potential energy surface accurately and the overall approach yields very good agreement with experimental values. The converged fitting can be obtained with less than 600 configurations generated from DFT calculations, which data can be generated in just 1200 core hours on today's typical processors. The MTP potential is faster than many machine learning potentials and about one order of magnitude slower than widely used empirical molten salt potentials such as Tosi Fumi.
Siamak Attarian, Dane Morgan, Izabela Szlufarska
2023-02-06T17:10:43Z
http://arxiv.org/abs/2302.02935v1
## Thermophysical properties of FLiBe using moment tensor potentials ## Abstract: Fluoride salts are prospective materials for applications in some next-generation nuclear reactors and their thermophysical properties at various conditions are of interest. Experimental measurement of the properties of these salts is often difficult and, in some cases, unfeasible due to challenges from high temperatures, impurity control, and corrosivity. Therefore, accurate theoretical methods are needed for fluoride salt property prediction. In this work, we used moment tensor potentials (MTP) to approximate the potential energy surface of eutectic FLiBe (66.6% LiF - 33.3% BeF\({}_{2}\)) predicted by the _ab initio_ (DFT-D3) method. We then used the developed potential and molecular dynamics to obtain several thermophysical properties of FLiBe, including radial distribution functions, density, self-diffusion coefficients, thermal expansion, specific heat capacity, bulk modulus, viscosity, and thermal conductivity. Our results show that the MTP potential approximates the potential energy surface accurately and the overall approach yields very good agreement with experimental values. The converged fitting can be obtained with less than 600 configurations generated from DFT calculations, which data can be generated in just 1200 core hours on today's typical processors. The MTP potential is faster than many machine learning potentials and about one order of magnitude slower than widely used empirical molten salt potentials such as Tosi/Fumi. ## 1 Introduction: Molten salts have garnered significant attention due to their applications in molten salt reactor systems both as coolants and fuel salts [1], concentrated solar power plants [2], and molten salt batteries [3]. In particular, FLiBe has been used in the molten salt reactor experiment in the 1960s [4] and it is one of the prospective salts to be used in generation IV reactors [5]. FLiBe has a low neutron absorption cross-section, high volumetric heat capacity [6], and is liquid in the temperature range 732 K - 1703 K [7]. Although many experimental measurements of FLiBe are available in the literature, the reported thermophysical properties are scattered with uncertainties up to 20% [8]. This scatter in measured properties has been suggested to be due to such issues as the presence of impurities and deviation from the 2:1 ratio between LiF and BeF\({}_{2}\) in the experiment [8]. As an alternative, accurate theoretical methods can be utilized to calculate the thermophysical properties of FLiBe in well-controlled conditions. Several works based on _ab initio_ molecular dynamics (AIMD) have been conducted to study various aspects of FLiBe [9, 10, 11, 12, 13, 14]. While AIMD simulation is a highly accurate method based on quantum mechanics, a number of properties relevant for applications, such as viscosity, thermal conductivity, melting temperature, etc. are impractical to study with AIMD due to the limitations in length- and time-scales of such simulations. An alternative approach involves using interatomic potential molecular dynamics (IPMD) simulations [15, 16, 17, 18] where the interactions between atoms are fit to experimental and quantum mechanical calculations. IPMD, also often called classical MD, allows simulations of larger supercells for longer times than AIMD, which times and sizes are sufficient for predicting complex thermodynamic and transport properties. 
Properties from classical MD simulations can generally be determined with enough numerical precision that their errors are dominated by the accuracy of the underlying interatomic potentials. In the past decade, significant progress has been made in the development of so-called machine learning interatomic potentials (MLIP), where the functional form of the potential does not have a physical meaning as in the case of traditional potentials and the potential parameters are fitted using ML-based techniques. MLIPs have shown promise for conducting MD simulations with near _ab initio_ accuracy but on time- and length-scales comparable to traditional interatomic potentials. Various forms of MLIPs are currently in use in the materials science community, including neural networks interatomic potentials (NNIP) [19], Gaussian approximation potentials (GAP) [20], deep potentials (DPMD) [21], spectral neighbor analysis potentials (SNAP) [22], moment tensor potentials (MTP) [23], etc. MLIPs have also found applications in molten salt modeling. For example, Liang _et al._[24] used DPMD to model MgCl\({}_{2}\)-KCl and found that the Mg\({}^{2+}\) ions in this system have a distorted tetrahedral local geometry. Their reported thermal expansion coefficient and viscosity were in good agreement with the experiments. Sivaraman _et al._[25] used the GAP potential to model molten LiCl. Their calculated density and self-diffusion coefficients also agreed well with experiments. In another example, Feng _et al._[26] used DPMD [27] to study the structure of molten LaCl\({}_{3}\). They found that molten LaCl\({}_{3}\) mainly consists of sevenfold and eightfold coordinated structures. In a recent article, Rodriguez _et al._[28] used the DPMD framework to develop a potential for LiF and FLiBe. Their results were in good agreement with AIMD simulations, however, as the authors themselves noted, due to the lack of van der Waals dispersion interactions in their _ab initio_ simulations, the simulation results deviated from experimental values. Most of the published works on MLIPs for molten salts have used NNIP or DPMD. The number of data points that are typically used to train such potentials is in the tens of thousands [29, 30, 31, 32, 26] and in some cases hundreds of thousands [33, 34, 28, 35]. Here, one data point consists of a single energy and a set of force and stress vectors on atoms from one periodic unit cell configuration (the unit cell typically contains about 100 atoms). Recent work [36] has shown that MTP can be trained with much smaller datasets and is much faster compared to other MLIPs for MD simulations. Up until now, the applicability of MTP potentials to molten salts has not been demonstrated. Here, we use MTP to approximate the potential energy surface of FLiBe, predicted by the DFT-D3 [37] method, which considers dispersion corrections. We then use the developed potential to simulate FLiBe and to predict its properties. There are two main results of this paper: 1) We assess the ability of MTP potentials to model FLiBe, which is essentially an ionic compound, and we consider such factors as the accuracy of fitting, training data requirements, and resulting MLIP speed; 2) We assess the ability of DFT with dispersion corrections combined with MLIP to accurately predict thermophysical properties of FLiBe by comparing our model results with available experimental values. **2. Methodology:** **2.1. 
MTP potential** The total energy of a system in MTP potential is calculated by the sum of the energies of individual atoms: \[E=\sum_{i=1}^{N_{tot}}E_{i}\] \[E_{i}\] is the energy of the \(i^{\rm th}\) atom, which is calculated as \[E_{i}=\sum_{\alpha}\xi_{\alpha}B_{\alpha}(n_{i})\] Here, \(n_{i}\) is determined based on the atomic environment around atom \(i\), \(B_{\alpha}\)'s are basis functions that are defined by contraction of moment tensors, \(\xi_{\alpha}\)'s are fitting parameters, and \(\alpha\) is the number of basis functions. The moment tensors are defined as \[M_{\mu,v}(n_{i})=\sum_{j}f_{\mu}(|r_{ij}|,z_{i},z_{j})\mathbf{r}_{ij} \otimes...\otimes\mathbf{r}_{ij}\] In the above equation, the summation is over \(j\) neighboring atoms that fall within the cutoff radius centered at atom \(i\). \(z_{i}\) and \(z_{j}\) are the types (atomic species) of atoms \(i\) and \(j\), \(r_{ij}\) is the relative position of atom \(j\) with respect to atom \(i\) (\(r_{ij}\)=\(r_{j}\)-\(r_{i}\) ), and the outer product of vector \(r_{ij}\) is done \(v\) times. The \(f_{\mu}\) function is the radial part of the moment tensor and it is calculated based on a series of fitting parameters \(c_{\mu}\) and Chebyshev polynomials. Interested readers are referred to Refs. [23, 38] for the complete description of MTP. In this work, we used the MLIP package [38] to fit the parameters of MTP and we used its library developed for the LAMMPS package [39] to perform MD simulations. The MLIP package contains a series of MTP potentials with preset hyperparameters named MTP level 2 (MTP02), level 4 (MTP04), etc., that we tested for potential fitting. By increasing each MTP level, the complexity of the potential increases, which means that more fitting parameters are introduced to the potential. We assessed the performance of the MTP potential in three stages. In the first stage, we used a large training set to compare the computational cost vs accuracy of each MTP level. After comparing between different MTP levels we chose the most efficient MTP level and used that for the subsequent stages. In the second stage, we tested the effect of the training set size on the predicted energies and forces. In the final stage, we used an active learning scheme based on the concept of D-optimality [38] implemented in the MLIP package to create a smaller training data set that would provide the desired accuracy. To combine the errors in energies, forces, and stresses during the fitting procedure, the fitting weights of 1, 0.01, and 0.001 were used, respectively, which are the default weights in the MLIP code and have shown to work well in previous studies [38, 40]. ### Data generation and fitting procedure All the atomic configurations used in this work as training and testing data were obtained from AIMD simulations or single-point energy calculations of supercells containing 98 atoms (28 Li, 14 Be, and 56 F). DFT calculations were performed using the VASP 5.4.4 package [41] in the canonical (NVT) ensemble using the Nose thermostat [42] and by considering spin polarization. PBE-GGA approximation [43] was used for the exchange-correlation functional and the D3 method of Grimme [37] was used to account for dispersion forces. PAW-PBE potentials which were used in this study are Li_sv (1s\({}^{2}\)2s\({}^{1}\)), Be (2s\({}^{2}\)), and F (2s\({}^{2}\)2p\({}^{5}\)). An energy cutoff of 600 eV was used for the plane-wave basis set and a single gamma point was used to sample the Brillouin zone. 
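As an illustration of the moment-tensor construction of Section 2.1, the sketch below evaluates \(M_{\mu,v}(n_{i})\) for a single atomic neighborhood and contracts it into a simple scalar of the kind used to build basis functions \(B_{\alpha}\). This is only a sketch: the smooth polynomial radial function, the cutoff value, and the random neighborhood are illustrative assumptions, whereas the actual MTP uses Chebyshev-based radial functions \(f_{\mu}\) with fitted parameters \(c_{\mu}\), as described above.

```python
import numpy as np

def moment_tensor(r_neighbors, nu, rcut=5.0):
    """
    M_{mu,nu}(n_i) = sum_j f_mu(|r_ij|) r_ij (x) ... (x) r_ij  (nu outer products),
    for one atomic neighborhood given by the relative positions r_ij (shape (n_j, 3)).
    A smooth polynomial cutoff stands in for the Chebyshev-based radial functions.
    """
    d = np.linalg.norm(r_neighbors, axis=1)
    f = (1.0 - d / rcut) ** 2 * (d < rcut)         # illustrative radial function f_mu
    M = np.zeros((3,) * nu) if nu > 0 else 0.0
    for w, r in zip(f, r_neighbors):
        outer = np.array(1.0)
        for _ in range(nu):
            outer = np.multiply.outer(outer, r)    # build the rank-nu outer product of r_ij
        M = M + w * outer
    return M

# Example: rank-2 moment tensor of a random 8-atom neighborhood, and a simple
# contraction (the trace) of the kind that enters the basis functions B_alpha.
rng = np.random.default_rng(0)
r_ij = rng.uniform(-3.0, 3.0, size=(8, 3))
M2 = moment_tensor(r_ij, nu=2)
b_alpha = np.einsum("aa->", M2)
```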
In the first stage of our assessment of the MTP potential, we compared the different MTP potential levels. Initially, the Packmol package [44] was used to generate a random atomic configuration based on the experimental density (2.01 g/cm\({}^{3}\) at 823 K [45]). To save time in reaching the equilibrium structure of FLiBe, we used this configuration to initialize classical MD simulations with an existing potential in the NVT ensemble at temperatures 823 K, 973 K, 1223 K, and 1423 K, for 10 ps at each of these temperatures. At each temperature, the system was found to reach equilibrium after about 2 ps of the simulation as after 2 ps the pressure and the temperature of the system showed minor oscillations around the average values. The final configurations from each of the above NVT MD simulations were used as the starting points for AIMD simulations. The interatomic potential parameters for FLiBe were obtained from [15] by assuming a Tang-Toennies dispersion damping of 1.0 and using the parameters in the Tosi/Fumi potential. This step of using a classical MD potential to generate input for AIMD simulations is not strictly necessary for fitting the ML potential and it is used mainly to accelerate the equilibration of atomic configuration used in AIMD. Several AIMD simulations were performed at four temperatures: 823 K, 973 K, 1223 K, and 1423 K, and at different densities. At the equilibrium density for each temperature, we ran an AIMD simulation in the NVT ensemble for 1.5 ps with a timestep of 1 fs. Separate AIMD simulations in the NVT ensemble at densities \(\pm\)3% and \(\pm\)6% higher/lower than the experimental density were also performed at each temperature for 0.5 ps to add more diversity to the training data. We collected the atomic configurations at each time step and overall, 14,000 atomic configurations were obtained. Out of these, we picked the atomic configurations at every 10th timestep to include in the training set (1,400 configurations) and the remaining configurations were used as an initial testing set called Test1 (12,600 configurations). We used the training set to fit MTP potentials with different levels and compared the accuracies and computation times of the fitted potentials. We also generated a more demanding test data set with less correlation to the training data, which we call Test2. To generate Test2 we obtained a random atomic configuration from Packmol and conducted an ionic relaxation using VASP. Then we started from the relaxed system and ran two AIMD simulations one at 600 K at a density 10% lower than the experimental density and one at 1600 K at a density 10% higher than the experimental density, each for 5 ps. Due to the different temperatures and densities of these simulations compared to the simulations used to generate the training data, these runs were more likely to have atomic configurations much different than what was used in the training data, which makes this a more demanding data set for assessing the potential. Overall, 10,000 configurations were generated for this Test2 data set. In the second stage of our assessment, we examined the effect of the training set size (the number of training data in the training set) on the accuracy of the MTP potential. Toward this end, we made many random subsets of the original 1,400 training set with various set sizes, fit the potential with these subsets, and assessed them with the Test1 and Test2 sets. In the final stage of our assessment, we developed a potential based on the active learning scheme. 
To do that, we conducted a new AIMD simulation and a new set of single-point energy calculations based on the actively selected configurations from MD simulations using the D-optimality criterion [46]. The active learning method is discussed in more detail in section 3.3. This final potential was then assessed with the Test1 and Test2 sets. Since FLiBe is an ionic compound long-range interactions are important in calculating the energies. We have tested the convergence of the energies and forces with respect to the cutoff radius of the interactions and found that there are no improvements in the energy and force errors beyond the cutoff radius of 7 A. However, there are still some structural features detectable in the radial distribution functions of the salt (shown in Figure A.1 in the supplementary data) up to the cutoff of 10 A. We have therefore used 10 A as a cutoff in our final energy calculations. We note that this conservative choice was practical in this work, but others might want to consider a smaller cutoff as the MD simulations with the cutoff of 7 A are 2-3 times faster than for 10 A. This distance is long enough to include multiple neighbor shells in the liquid, and we expect that electrostatic interactions are negligible beyond this distance for near-equilibrium configurations of FLiBe. Subtracting off the long-range electrostatic interactions in advance to assure their convergence could potentially reduce the range needed or increase accuracy, but we have not pursued this strategy here. ### Molecular dynamics All the MD simulations in this work were performed using the LAMMPS package [39]. The initial atomic positions of each MD simulation were generated randomly using Packmol. We needed to make sure that when we start to calculate a property of FLiBe, the system is in equilibrium. To this end, we started each MD simulation by assigning random velocities from the Maxwell - Boltzmann distribution at 1600 K and let the system cool down for 10 ps to the temperature and the pressure relevant to that simulation. For example, if we needed to calculate a property of FLiBe at 800 K and 1 kbar, first we performed a controlled pressure-controlled temperature (NPT) simulation that took the system from 1600 K and 0 bar to 800 K and 1 kbar during 10 ps, and then continued the simulation at 800 K and 1 kbar to calculate the desired property. For the remainder of the paper, we will refer to this stage of each MD simulation as the initial equilibration. In order to calculate radial distribution function (RDF), volume, density, diffusivity, enthalpy, thermal expansion coefficient, and specific heat capacity, we used a simulation cell consisting of 6272 atoms and a simulation time step of 1 fs. After the initial equilibration, we kept the system in the NPT ensemble at T\({}_{\mathrm{d}}\) and 0 bar for 100 ps where T\({}_{\mathrm{d}}\) is the desired temperature in each simulation. The quantity of interest was obtained by averaging over the full production run time of 100 ps. The radial distribution function (RDF) was calculated at just 973 K. For the other properties we performed simulations at temperatures that ranged from 600 K to 1,600 K with 50 K increments, which resulted in a total of 21 simulations. 
The self-diffusion coefficient was calculated from the slope of the mean squared displacement (MSD) using Einstein's relation [47] \[D=\frac{1}{6}\lim_{t\rightarrow\infty}\frac{d}{dt}\left[\frac{1}{N}\frac{1}{n_ {t}}\sum_{i=1}^{N}\sum_{j=1}^{n_{t}}\left(r_{i}\big{(}t_{j}+dt\big{)}-r_{i} \big{(}t_{j}\big{)}\right)^{2}\right]\] where \(D\) is the self-diffusion coefficient, \(N\) is the number of atoms, and \(n_{t}\) is the number of time origins. The thermal expansion coefficient was calculated using the following equation \[\alpha=\frac{1}{V}\left(\frac{dV}{dT}\right)|_{p}=-\frac{1}{\rho}\left(\frac {d\rho}{dT}\right)|_{p}\] where \(V\) and \(\rho\) are the equilibrium volume and density at each temperature, respectively. The specific heat was calculated using the following equation \[c_{p}=\frac{\partial h}{\partial T}\] where \(h\) is the enthalpy. The bulk modulus was calculated using a simulation setup the same as for the RDF and other properties described above, except for different temperatures and the use of a grid of pressures. Bulk modulus was calculated for 6 temperatures between 800 K and 1300 K at every 100 K. For each temperature (T\({}_{d}\)), 7 different simulations were performed at pressures (P\({}_{d}\)) ranging from -0.6 to 0.6 GPa chosen with 0.2 GPa intervals. After the initial equilibration, we kept the system in the NPT ensemble at constant temperature T\({}_{d}\) and constant pressure P\({}_{d}\) for 100 ps. In each of these simulations, the volume and the pressure were calculated as the average value over 100 ps. For each temperature, the bulk modulus was obtained by fitting the 3\({}^{rd}\) order Birch-Murnaghan equation of state [48, 49] using volumes and pressures calculated from the 7 simulations performed with different P\({}_{d}\). Viscosity was calculated using the Green-Kubo relation [50, 51] \[\eta=\frac{V}{k_{B}T}\int_{0}^{\infty}<P_{\alpha\beta}(t).\,P_{\alpha\beta}(0) >dt\] Here, \(\eta\) is the viscosity, \(k_{B}\) is the Boltzmann constant, and \(P_{\alpha\beta}\) are the off-diagonal elements of the stress tensor. For viscosity calculations, we used a simulation cell consisting of 1,540 atoms and the simulation time step was 1 fs. An autocorrelation time of 20 ps was chosen for the viscosity calculations at 700 K and 750 K and 10 ps in the temperature range 800 K - 1200 K. In our simulations, the selected time intervals were shown to be sufficient to allow the decay of the autocorrelation function of the diagonal stress components to zero. At each temperature, after the initial equilibration, the simulation was carried out for 5 ns using the micro-canonical (NVE) ensemble. In all the simulations, the running integral of the stress autocorrelation function remained stable after about 4 ns. Previous simulation studies on the thermal conductivity of molten salts have shown that it is a challenging quantity to calculate [15, 52, 53]. Here, we determined the thermal conductivity using the Muller-Plathe nonequilibrium method [54]. We follow the approach outlined by Pan _et al._[55], where thermal conductivity calculated using the Muller-Plathe method was found to be in better agreement with experimental values than results based on the Green-Kubo method. In our calculations, we used a time step of 0.5 fs, a supercell containing 31,360 atoms (with dimensions \(4.2\) nm \(\times\) 4.2 nm \(\times\) 21 nm), and a kinetic energy swap rate of 1 in every 1,000 steps. 
For each temperature, after the initial equilibration in the NPT ensemble for 10 ps, the simulations ran for 2 ns in the NVE ensemble. ## 3 Results and discussion: ### MTP potentials with increasing levels of complexity Due to the nonlinearity of the MTP formulation for multicomponent systems, the fitting procedure is carried out using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method. The optimized parameters of the MTP potential obtained by this method depend on how the parameters have been initialized. This dependency implies that for the same training data, every optimization session can result in different optimized parameters, and therefore, different errors for forces and energies. Here, for each potential level, we carried out 3 optimization sessions and chose the one with the lowest combined errors (the potential levels and error weights are discussed in Section 2.1). Table 1 shows the root mean square errors (RMSE) of the energies and forces for the trained MTP potentials with different levels of complexity. **Table 1.**_Root mean squared errors (RMSE) of energies and forces for different levels of the MTP potential. The numbers after MTP are the complexity levels of the potential as described in Section 2.1. The values are in the units of meV/atom for energies, and meV/A for forces. The IPMD is based on the MD simulations with the Tosi/Fumi potential and is provided to compare the computational cost. Computational cost is shown in the units of (core seconds)/(atom.timestep), where atom.timestep is determined by dividing the total core seconds by the product of the number of atoms and the number of timesteps. Data is for an Intel Xeon Gold 5218R processor._ \begin{tabular}{l c c c c c c} \hline \hline & MTP06 & MTP08 & MTP10 & MTP12 & MTP14 & IPMD \\ \hline Total number of fitting parameters & 149 & 153 & 232 & 245 & 340 & - \\ Energy RMSE (meV/atom) & & & & & & \\ Training set & 5.4 & 3.6 & 1.8 & 1.8 & 1.3 & - \\ Testing set (Test1) & 5.4 & 3.6 & 1.7 & 1.8 & 1.3 & - \\ Testing set (Test2 at 600 K) & 3.3 & 3.2 & 2.9 & 2.8 & 2.3 & - \\ Testing set (Test2 at 1600 K) & 5.4 & 3.8 & 2.5 & 2.1 & 2.5 & - \\ Force RMSE (meV/A) & & & & & & \\ Training set & 173 & 111 & 59 & 59 & 41 & - \\ Testing set (Test1) & 173 & 111 & 59 & 59 & 41 & - \\ Testing set (Test2 at 600 K) & 159 & 95 & 53 & 58 & 39 & - \\ Testing set (Test2 at 1600 K) & 193 & 128 & 69 & 77 & 50 & - \\ Computational cost (core seconds)/(atom.timestep) & 8.7\(\times\)10\({}^{5}\) & 1.2\(\times\)10\({}^{4}\) & 2.2\(\times\)10\({}^{4}\) & 3.6\(\times\)10\({}^{4}\) & 6\(\times\)10\({}^{4}\) & 1.9\(\times\)10\({}^{5}\) \\ \hline \hline \end{tabular} The first noticeable result is that the errors of training and Test1 testing sets are almost the same for all the potentials. This suggests an excellent interpolating power of the MTP potentials which is the main expectation of the machine learning potentials. However, due to the correlation of the training and Test1 testing sets, one cannot be sure about the performance of the potential in relevant but unseen atomic environments. For this reason, we conducted 2 separate AIMD simulations at temperatures and densities different from the AIMD simulations that generated the training data to create the Test2 set as discussed in section 2.2. Between the two Test2 sets, the one at 1,600 K (Test2_1600) has higher errors compared to the errors of the training set, likely due to the faster ionic motion leading to the formation of more varied atomic environments within the supercells. 
Earlier studies [26, 53, 56] have shown that energy errors smaller than 5 meV/atom and force errors smaller than 100 meV/Γ… are generally sufficient for predicting such properties as density, RDF, diffusion coefficient, and viscosity. As can be seen in Table 1, MTP10, MTP12, and MTP14 provide sufficient accuracies by this measure for all the training and testing sets. Figure 1 shows the errors vs MTP level for the training set and the Test2_1600 testing set. To better understand the role of the MTP potential complexity, we consider its impact on the energy and force errors in Figure 1.

Figure 1: Comparison of the root mean square errors (RMSE) for (a) energies and (b) forces of the training and testing sets predicted by different MTP levels.

Figure 1 (a) shows that from MTP06 to MTP10 the energy errors decrease considerably for both the training set and the Test2_1600 set. At MTP12 the error does not change for the training set, but there is a slight decrease in the error of the Test2_1600 set. At MTP14 the error of the training set slightly decreases while the error of the Test2_1600 set increases. Figure 1 (b) shows that from MTP06 to MTP10 the force errors decrease considerably for both the training and Test2_1600 sets. At MTP12 the error does not change for the training set, but the error of the Test2_1600 set increases. At MTP14 the errors of both curves slightly decrease. Figure 2 compares the energy errors of the training set vs computational cost for different MTP levels. Among the different levels of MTP potentials that showed acceptable errors, MTP10 was found to be the most efficient for our system of 3 species (Li, Be, F). MTP14 has slightly smaller errors, but it is almost 3 times slower than MTP10. As a result, we chose MTP10 for assessing the training set size effect and active learning in Sections 3.2 and 3.3.

Figure 2: Root mean square errors (RMSE) of the energies of the training set vs computational cost. The computational cost is reported for 1 core of the Intel Xeon Gold 5218R processor.

It is interesting to consider the computational cost of MD simulations with MTP for FLiBe, which is in the range of \(10^{-4}\) - \(10^{-3}\) core seconds/(atom.timestep). This cost is lower than that of the NNIP potential developed for FLiBe in Ref. [31], which is in the range of \(10^{-3}\) - \(10^{-2}\) core seconds/(atom.timestep), or of the GAP potential fitted for molten HfO\({}_{2}\) [57], which is in the range \(10^{-3}\) - \(10^{-2}\) core seconds/(atom.timestep). We note that the potential development that was reported in Refs. [31] and [57] did not involve hyperparameter optimization. If such optimization was included (as in our work), the computational cost reported in those studies could potentially be reduced. However, in a study that was focused on the comparison between MLIPs [36], MTP was shown to be the fastest among the tested MLIP formulations (including NNIP and GAP). Nonetheless, our developed MTP potential is about an order of magnitude slower than the traditional IPMD with the Tosi/Fumi potential, as shown in Table 1.

### Training size effect

The results of Section 3.1 showed that MTP potentials trained by a training set containing 10% of the configurations obtained from AIMD simulations can predict the energies and forces of the other 90% (testing set Test1) with essentially the same error as for the training set itself. It is interesting to know the minimum training set size that would yield the same results. To that end, we fitted MTP10 potentials with various training set sizes.
For each set size, we made 5 random samplings from the original training set (that contains 1400 atomic configurations), fitted a potential, calculated the errors, and averaged the results. Figure 3 shows the Learning curves for MTP10s with growing training set sizes. The insets in Figure 3 show the same data for the Test2_1600 set as in the main plot but starting from a training set size of 10. The fluctuations in the errors may partly be due to the uncertainties that arise from the fitting procedure (i.e., some fits may trap the parameters in a local optimum with a relatively higher error) and partly due to the quality of randomly sampled training data (i.e., the number of atomic environments that are represented by the data). The curves in the insets show that after about 40 training data the lowest values of the energy and force errors are relatively close to 2.5 meV/atom and 69 meV/A, respectively, which are the errors of the MTP10 potential trained by the entire 1,400 training data. This trend suggests that it is possible to find a training set with as low as 40 samples and fit a potential with low prediction errors. Figure 3: Learning curves of MTP10 with growing training set sizes for (a) energies and (b) forces. The energy error of the test set stops decreasing and oscillates about 3 meV/atom after the training set size reaches the value of 20. The energy errors of the Test2_1600 set shown in the inset of (a) are magnified for the training set sizes between 10 and 100. In the case of the force errors (b), the decrease ceases around the training set size of 40 and starts oscillating about 85 meV/Γ…, as shown in the inset of (b). To test the stability of the MTP10 potentials fitted in this section we chose several of them that were fitted with training set sizes between 40 to 100 and had small testing errors. Using each of the selected potentials we conducted two MD simulations in the NPT ensemble one at 600 K and 0 bar and one at 1,600 K and 0 bar with a supercell containing 6,272 atoms. The MD simulations were run for 100 ps with a time step of 1 fs. Most of these MD simulations failed either due to the lost atoms error in LAMMPS or due to an extreme supercell expansion during the simulation. Only a few of the tested potentials successfully finished the MD simulations. It should be noted that the MPT10 potential that was fitted with the entire training set (1,400 training data) in Section 3.1 also passed this test. Based on these results we were not able to conclusively suggest a minimum number of randomly selected training data that would fit a stable MTP potential. Furthermore, this result suggests that low energy and force errors, even for a relatively independent test set like the Test2 set used here, do not imply that one has a stable potential that can be used robustly for practical molecular dynamics simulations even under the same temperature conditions as the test data. ### Active selection of training samples In the previous section, we showed that we can get a small testing error from a potential fitted with a few randomly sampled training data, however, such fitting does not necessarily result in a stable potential. An alternative method of selecting training data is through the process of active learning (active sampling) [58]. Active learning generally means selecting a set of training data that have enough diversity in the feature space for the purposes of fitting the potential. 
Several active learning schemes are in use for machine learning potential development [57, 59, 60]. Here, we used an active sampling method based on the D-optimality criterion as implemented in the MLIP package. In this method, a set of \(n\) configurations is initially generated and provided to the code as the starting point. Each configuration is converted to a set of \(m\) features and placed in a matrix form. The algorithm then selects \(m\) configurations out of \(n\) that have the maximum modulus of the determinant (\(|\)det(A)\(|\)). \(m\) is the number of the basis functions used in the MTP potential. These \(m\) configurations are called the active set. Later, when a new configuration is introduced to the algorithm, it is added to the training set, if it could increase \(|\)det(A)\(|\) by replacing one of the configurations already existing in the active set. [46]. This means that during the active learning procedure, we provide the algorithm with a set of configurations. Some of these configurations will be chosen to be added to the training set and the rest are discarded. Every time the training set is updated, the active set is also updated accordingly. To fit a potential by active learning we generated an initial training set by running an AIMD simulation in the NVT ensemble at 1,100 K for 1 ps with a timestep of 1 fs and selecting 10% of the generated configuration (every 10th timestep). The details of the DFT calculations in this section (the number of atoms, energy cutoff, etc.) are the same as in Section 2.2. This data set was used both to fit an initial MTP10 potential and as the starting point of the active learning procedure. Using the initial potential, we then ran three MD simulations with a small supercell (98 atoms) in the NPT ensemble at temperatures 1,000 K, 1,100 K, and 1,200 K, each for 20 ps with a time step of 1 fs, and collected 10% of the configurations (every 10th timestep for each simulation). The active learning algorithm selected 315 out of the 6,000 collected configurations to be added to the training set using the procedure discussed in the preceding paragraph. We conducted single-point DFT calculations with the selected configurations and added the results to the training set. We then fitted the potential with the updated training set, which contained 415 configurations. This second potential was then used to conduct additional MD simulations at temperatures 600 K, 800 K, 1,400 K, and 1,600 K, again each for 20 ps with a time step of 1 fs. Using these simulations we collected 8,000 configurations in total, from which the active learning algorithm selected 142 configurations to be added to the training set. After conducting DFT calculations with these 142 configurations, the results were added to training set. At this stage, the training set contains 557 training data. We then trained another MTP10 potential. We did another round of data generation in the temperature range 600 K - 1600 K using the new potential, but we found that in this step, the algorithm did not sample any additional training data and therefore we stopped the training procedure. When developing a potential for applications in a certain temperature range, it is a good practice to include training data that belong slightly outside the intended range of application. In the case of FLiBe, the experimental melting point is 732 K, and most of the experimentally available thermophysical properties are in the range 732 K - 1400 K. 
To make sure that our calculated properties at both ends of this temperature range are reliable, we used a training data that are in the range 600 K - 1600 K. The data at 600 K are for the amorphous solid phase of FLiBe and not the crystalline phase. It is worth mentioning that the choice of sampling the data from NPT or NVT simulations is arbitrary. The important thing to consider is that the data has sufficient diversity in terms of the supercell size and temperature. Another point to consider is that in the active learning procedure we used a mixture of data obtained from AIMD and from single-point energy calculations by DFT. Specifically, the single-point energy calculations are performed on the atomic configurations that were extracted from MTP-MD simulations. Since the energy and forces depend only on interatomic distances, these properties will not depend on whether the configurations were generated from AIMD or from single-point energy minimization. Figure 4 shows the parity plots of the energies and forces for the training and Test3 testing set, as predicted by the final potential. The Test3 testing set used in Figure 4 consists of both Test1 and Test2 of Section 3.1 which are uncorrelated to the training data generated through active learning in this section. Table 2 compares the predicted errors of the potential trained by 1400 samples in section 3.1 (MTP10\({}_{1400}\)) and our final potential in this section (MTP10\({}_{557}\)). The predicted errors of both potentials are comparable. This shows that the active learning procedure is effective both in terms of reducing the number of DFT data required to be generated for training and in terms of the training time of MTP, where both were reduced by more than 50%. For the remainder of this work, we use MTP10\({}_{557}\) to calculate the thermophysical properties of FLiBe. It is worth mentioning that the overall process of the potential development in this section by active learning, including the initial AIMD simulation for 1 ps, active sampling from the MD Figure 4: Parity plots of the energies of the (a) training and (b) Test3 sets and forces of the (c) training and (d) Test3 sets for the MTP10 trained by active learning (MTP10\({}_{557}\)). The diagonal solid line in each figure shows the perfect fit. The Test3 set is the combination of Test1 and Test2 sets consisting of 22600 configurations. trajectories, 457 single point energy calculations using DFT, and 3 sets of MTP potential training were performed in less than 48 hours using a single node with 40 cores based on Intel Xeon Gold 5218R processors. ### Radial distribution function To assess how well the potential developed in this study can predict local structural features, we compared the RDF obtained from MTP MD and our DFT AIMD for Be\({}^{2+}\)-F-, Li\({}^{+}\)-F- and F-F at 973 K (see Figure 5). The curves for MTP and DFT almost overlap each other in the range between 0 and 6 A. RDF for Be\({}^{2+}\)-F- shows a sharp peak at 1.54 A and then decays to zero. This suggests a strong bonding between Be\({}^{2+}\) and F- within FLiBe. On the other hand, the RDF for Li\({}^{+}\)-F has a wider first peak that does not decay to zero. This result suggests that the first nearest neighbor shell for Li\({}^{+}\) and F is more diffuse and not as well defined as for Be\({}^{2+}\) and F, which further indicated that the bonding between Li\({}^{+}\) and F in FLiBe is not as strong as between Be\({}^{2+}\) and F. 
Table 3 compares the position of the first RDF peak (which corresponds to the average bond length) and the coordination number of the aforementioned ionic pairs, as obtained from MTP, DFT, IPMD [16], and from experiment [61]. The values of the first RDF peak obtained from MTP are in good agreement with the results obtained from DFT, IPMD, and the experiment. The coordination numbers of Li\({}^{+}\)-F\({}^{-}\) and F\({}^{-}\)-F\({}^{-}\) obtained from MTP and DFT, however, do not seem to agree with the experiment. This disagreement likely results from the fact that the location of the first minimum after the peak cannot be confidently identified, especially for Li\({}^{+}\)-F\({}^{-}\) and F\({}^{-}\)-F\({}^{-}\).

_Figure 5. Comparison of the radial distribution functions (RDF) of FLiBe obtained from MTP and DFT at 973 K._

_Table 3. Position of the first peak in RDF shown in Figure 5 and the coordination number. MTP and DFT values are determined in this work, IPMD is from MD simulation with the PIM potential [16], and experimental results are from Ref. [61]._

\begin{tabular}{l c c c c c c c c} \hline & \multicolumn{4}{c}{Position of first RDF peak (Γ…)} & \multicolumn{4}{c}{Coordination number} \\ \hline & MTP & DFT & IPMD & Experiment & MTP & DFT & IPMD & Experiment \\ \hline Be\({}^{2+}\)-F\({}^{-}\) & 1.54 & 1.54 & 1.58 & 1.58 & 3.92 & 3.98 & 4.0 & 4.0(3) \\ Li\({}^{+}\)-F\({}^{-}\) & 1.84 & 1.87 & 1.81 & 1.85 & 4.84 & 4.45 & 4.0 & 4 (1) \\ F\({}^{-}\)-F\({}^{-}\) & 2.59 & 2.59 & 2.61 & 2.56 & 11.98 & 11.96 & 11.3 & 8(2) \\ \hline \end{tabular}

In the remainder of the paper, we provide the results of the calculations of the thermophysical properties of FLiBe. For the calculations of density, diffusivity, thermal expansion coefficient, and specific heat capacity, we conducted three separate sets of simulations. For the bulk modulus, viscosity, and thermal conductivity, due to the computational cost of the simulations, we conducted two sets of simulations, each starting from an uncorrelated supercell. Since the errors in the calculated results are smaller than the size of the data points shown in the figures, we provided the data with their errors in separate tables that can be found in the supplementary data. It should be noted that, due to the high computational cost of conducting the MD simulations multiple times, the reported sampling errors are based on only a few data points. Given this limited set of independent runs, the sampling error estimates cannot be considered converged and merely serve as a guide on the qualitative scale of the sampling errors and to show that the results are reproducible. Please see the supplementary data for more discussion of the uncertainties.

### Density

In Figure 6 we compare the temperature-dependent density calculated from MD simulations with MTP to several results obtained experimentally [45, 62, 63, 64] and theoretically [10, 15, 28]. The densities calculated in the current study fall within the range of experimental results. The densities obtained using the van der Waals density functional (vdw-DF) by Nam _et al._ [10] overlap with our results between 800 K and 1,150 K.
The densities calculated using IPMD [15] are close to the lower bound of the experimental results. DPMD results are taken from the work of Rodriguez _et al._ [28], where the potential was fitted to DFT data without considering dispersion corrections. The density predicted by the authors is about 10% lower than the lower bound of the experimental results. We fitted a linear function to our results and obtained the following expression (shown as a black dashed line in Figure 6): \[\rho=\left(2.4422-4.70\times 10^{-4}\,T\right)\ \frac{\mathrm{g}}{\mathrm{cm}^{3}},\quad 600\ \mathrm{K}\leq T\leq 1600\ \mathrm{K}\]

Figure 6: Temperature-dependent density of FLiBe predicted by MTP compared to values obtained in other experimental (Exp1-Ignat'ev et al. [62], Exp2-Janz [45], Exp3-Humrickhouse and Merrill [63], Exp4-Chen et al. [64]) and theoretical (vdw-DF-Nam et al. [10], IPMD-Smith et al. [15], DPMD-Rodriguez et al. [28]) studies. The experimental results and theoretical predictions are shown with solid and dashed lines, respectively.

### Diffusion

Figure 7 shows the plots of the self-diffusion coefficients of Li, Be, and F versus temperature on a logarithmic scale obtained from MTP (this work), IPMD [16], vdw-DF [10], DPMD [28], and experiment [65]. It can be seen that, for all three atomic species, the results obtained using MTP are closer to the experimental results [65] than the results obtained using DPMD [28], which could be due to the lack of accounting for dispersion forces in the development of DPMD. Our calculations are generally in agreement with the experimental results, especially for fluorine (see Fig. 7(c)).

**Figure 7.**_Temperature-dependent diffusion coefficient of (a) Li, (b) Be, and (c) F calculated from MTP compared to the results using DPMD-Rodriguez et al. [28], IPMD-Salanne et al. [16], vdw-DF-Nam et al. [10] and experiment-Mei et al. [65]. The fitted lines are only shown for MTP and experimental results and are based on Equation 9 with the parameters provided in Table 4._

Diffusion coefficients were determined by fitting the Arrhenius equation \[D=D_{0}e^{\left(-\frac{E_{A}}{RT}\right)}\] to data obtained either from MTP or from experiment [65], and the results are shown in Table 4. Due to the few data points reported in the theoretical studies (IPMD [16], DPMD [28], and vdw-DF [10]), the fit would be inaccurate, and therefore we did not fit the Arrhenius equation to those data. In the above equation, \(D\) is the self (tracer) diffusion coefficient, \(D_{0}\) is the diffusion prefactor, \(E_{A}\) is the activation energy, \(R\) is the universal gas constant, and \(T\) is the temperature. The activation energy (\(E_{A}\)) corresponds to the slope of the fitted line in Figures 7 (a)-(c), and the diffusion prefactor (\(D_{0}\)) corresponds to the intersection of the fitted line with the diffusion axis (vertical axis) when \(T\rightarrow\infty\) or \(\frac{1000}{T}\to 0\).

_Table 4. Parameters of the Arrhenius equation for diffusion. The value of the activation energy (\(E_{A}\)) is in the units of kJ/mol, and the prefactor (\(D_{0}\)) is in the units of 10\({}^{-6}\) cm\({}^{2}\)/s. For each method, we provided the temperature range of the reported diffusivity data that were used to fit the parameters._

The activation energies obtained from MTP and from the experiment [65], shown in Table 4, are in good agreement. The prefactors, on the other hand, show a noticeable difference, which is likely due to the large sensitivity of this value to small differences in the data. A minimal numerical sketch of such an Arrhenius fit is given below.
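The sketch below illustrates the Arrhenius fit \(D=D_{0}e^{-E_{A}/(RT)}\) of the kind used for Table 4. The (T, D) values are placeholder numbers chosen only to show the mechanics, not the simulated or experimental diffusivities reported in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of an Arrhenius fit D = D0 * exp(-E_A / (R T)).
# The (T, D) values below are placeholders for illustration only.
R = 8.314  # J/(mol K)
T = np.array([800.0, 900.0, 1000.0, 1100.0, 1200.0])        # K
D = np.array([0.6e-5, 1.1e-5, 1.8e-5, 2.7e-5, 3.8e-5])      # cm^2/s (placeholder)

# A line in ln(D) versus 1/T gives the activation energy and prefactor directly.
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
E_A = -slope * R                    # activation energy [J/mol]
D0 = np.exp(intercept)              # prefactor [cm^2/s]
print(f"E_A = {E_A / 1000:.1f} kJ/mol, D0 = {D0:.2e} cm^2/s")

# Nonlinear fit as a cross-check, seeded with the linear estimate.
def arrhenius(T, D0, E_A):
    return D0 * np.exp(-E_A / (R * T))

(D0_nl, E_A_nl), _ = curve_fit(arrhenius, T, D, p0=(D0, E_A))
```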
This prefactor sensitivity can be understood by looking at Figure 7 and considering that even a slight deviation between the fitted lines of MTP and the experiment causes the two lines to intersect the diffusion axis at quite different points when extrapolated to \(T\rightarrow\infty\) (\(\frac{1000}{T}\to 0\)).

### Thermal expansion coefficient and specific heat capacity

Figure 8 shows the thermal expansion coefficient (TEC) obtained in our work compared to available experimental [45, 62, 63, 64] and theoretical values [15]. We calculated TECs based on the temperature-dependent density data of each reference, except for the IPMD of Smith _et al._ [15], in which case we used the function the authors provided for the TEC. Our results fall within the experimental range. The relationship between TEC and temperature calculated in our work is given by the simple linear fit below:

\[TEC=1.8319\times 10^{-4}+5.55\times 10^{-8}\;T\;\frac{1}{K},\quad 600\ K\leq T\leq 1600\ K\]

The specific heat capacity (C\({}_{\rm p}\)) calculated in our work is compared to the experimental works in Table 5. Our result is in good agreement with the experiments.

Figure 8: Temperature-dependent thermal expansion coefficient (TEC) of FLiBe predicted by MTP compared to other experimental (Exp1-Janz [45], Exp2-Humrickhouse and Merrill [63], Exp3-Vaslow and Narten [64], Exp4-Ignat’ev et al. [62]) and theoretical (IPMD-Smith et al. [15]) studies. The experimental results and theoretical predictions are shown with solid and dashed lines, respectively.

\begin{table} \begin{tabular}{c c c} \hline \hline Source & Temperature range & C\({}_{\rm p}\) (J/(kg\(\cdot\)K)) \\ \hline MTP (this work) & 600 K - 1600 K & 2245 \\ Douglas and Payne [66] & 745 K - 900 K & 2347 \\ Benes and Konings [67] & not reported & 2390 \\ Gierszewski _et al._ [68] & 600 K - 1200 K & 2380 \(\pm\) 20\% \\ \hline \hline \end{tabular} \end{table} Table 5: Specific heat capacity of FLiBe in J/(kg\(\cdot\)K)

### Bulk modulus

Figure 9 shows the temperature-dependent bulk modulus calculated in our work, the vdw-DF simulations of Nam _et al._ [10], and the experimental work of Cantor _et al._ [69]. The experimental results are based on the compressibility data provided in Ref. [69], with an uncertainty of a factor of 3. Our results fall within the uncertainties of the experiment. Comparing MTP to vdw-DF, the predicted bulk moduli are close at 800 K and begin to deviate as the temperature increases. Considering Figure 6, where the densities of MTP and vdw-DF (calculated at pressure \(P\)=0) overlap between 800 K and 1150 K, one might expect the bulk moduli to follow the same trend. It should be noted, however, that the bulk modulus depends on the full relation between pressure and density: although the densities predicted at \(P\)=0 and \(T\)=\(T_{1}\) may coincide between MTP (fitted to DFT-D3) and vdw-DF, there is no guarantee that their predictions coincide at \(P\)\(>\)0 or \(P\)\(<\)0 at the same \(T\)=\(T_{1}\). In addition, as shown in Figure 6, the density versus temperature calculated from vdw-DF starts to deviate from its initial linear trend above 1150 K, whereas the density calculated from MTP (fitted to DFT-D3) remains linear in temperature over the entire range considered in this study. It is possible that, had the vdw-DF density followed its initial linear trend over the entire temperature range, the bulk modulus calculated from vdw-DF would show a temperature trend similar to that of the bulk modulus calculated from MTP.
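As a small consistency check (not part of the original analysis) connecting the density and TEC fits quoted above, note that the volumetric thermal expansion coefficient follows directly from the density via \(TEC=-\frac{1}{\rho}\frac{d\rho}{dT}\). A few lines of Python, using only the fitted coefficients reported in this section, show that the two fits agree to within a few percent over 600-1600 K:

```python
import numpy as np

# Linear density fit from the Density subsection (g/cm^3, T in K).
a, b = 2.4422, -4.70e-4
rho = lambda T: a + b * T

# TEC implied by the density fit: -(1/rho) * d(rho)/dT.
tec_from_density = lambda T: -b / rho(T)

# Linear TEC fit quoted in this subsection (1/K).
tec_fit = lambda T: 1.8319e-4 + 5.55e-8 * T

for T in np.array([600.0, 1000.0, 1600.0]):
    print(f"T = {T:6.0f} K: {tec_from_density(T):.3e} vs {tec_fit(T):.3e} 1/K")
# The two expressions differ by at most ~2% across the fitted range.
```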
### Viscosity

Figure 10 shows the temperature-dependent viscosity of FLiBe calculated in this work compared to other experimental [68, 70, 71, 72, 73] and theoretical [15, 28] works. Our results are in excellent agreement with the experimental results of Abe _et al._ [72] and Blanke _et al._ [73]. The predicted results of Smith _et al._ [15] using IPMD are closer to the experimental work of Janz _et al._ [70]. DPMD underestimates the viscosity of FLiBe at lower temperatures. The relationship between viscosity and temperature calculated in our work is as follows:

\[\mu=0.0638\times\exp\left(\frac{4125}{T}\right)\qquad(\mathrm{mPa\cdot s})\]

Figure 10: Temperature-dependent viscosity of FLiBe predicted by MTP compared to other experimental (Exp1-Janz et al. [70], Exp2-Gierszewski et al. [68], Exp3-Cohen and Jones [71], Exp4-Abe et al. [72], Exp5-Blanke et al. [73]) and theoretical (IPMD-Smith et al. [15], DPMD-Rodriguez et al. [28]) studies. The experimental results and theoretical predictions are shown with solid and dashed lines, respectively.

### Thermal conductivity

Figure 11 shows the thermal conductivity of FLiBe calculated in this work and in other experimental [68, 74] and theoretical [15, 28] works. As can be seen, the measured thermal conductivity in each experiment is constant in the temperature range \(800\) K \(<\) T \(<\) 1200 K. According to the line fitted to our MTP calculations, the thermal conductivity increases very slightly from 1.28 to 1.32 W/(m\(\cdot\)K) over the temperature range \(800\) K \(<\) T \(<\) 1200 K and is about 20% higher than the measurement of McDuffie _et al._ [74] (shown as Exp1 in Figure 11). The theoretical work of Smith _et al._ [15] using IPMD also overestimates the thermal conductivity. In their work, the authors argued that in fully dissociated ionic systems the calculated thermal conductivity deviates from experimental results, especially for lighter alkali cations such as Li. Based on this argument, they obtained the thermal conductivity of LiF using IPMD, calculated the difference between their prediction and the experimental results for LiF, and used this difference to correct their calculation of the thermal conductivity of FLiBe. By doing so they lowered their initially predicted thermal conductivity of FLiBe by 16%.

Figure 11: Temperature-dependent thermal conductivity (\(\kappa\)) of FLiBe predicted by MTP compared to other experimental (Exp1-McDuffie et al. [74], Exp2-Gierszewski et al. [68]) and theoretical (IPMD-Smith et al. [15], DPMD-Rodriguez et al. [28]) studies. The experimental results and theoretical predictions are shown with solid and dashed lines, respectively.

The predictions of DPMD underestimate the thermal conductivity at lower temperatures and show an increasing trend with increasing temperature. The difference between the thermal conductivity of molten salts calculated from MTP and that determined from experiments is similar to the differences reported in other studies of molten salts with MLIPs [53, 75, 76]. One possible explanation for these differences lies in the inherent approximations of DFT calculations. For example, in a recent work Tisi _et al._ [77] developed a DPMD potential for water trained on DFT-PBE. They calculated the thermal conductivity of water by both AIMD and DPMD simulation and showed that the results are within 5% of each other in the temperature range 400 K - 520 K, but about 60% higher than the experimental values.
The authors then trained another DPMD potential based on the more accurate DFT-SCAN functional, and this time the predicted thermal conductivities were around 30% higher than the experimental results. Such approximations inherent to the exchange-correlation functionals carry over into machine-learning-based interatomic potentials and affect the properties calculated by MD simulations.

## 4 Conclusion

In this work, we developed a machine learning potential based on the MTP framework for FLiBe and assessed the performance of MTP potentials in modeling molten salts. The results showed that training MTP with as few as 600 samples yields errors below 3 meV/atom for energies and below 60 meV/Å for forces. The entire process of potential development, including the data generation and the training, can be completed in less than 2 days on a single node with 40 cores. The computational cost of MD simulations with MTP in our work is lower than the values reported for equivalent molten salts where other MLIPs such as GAP and NNIP are used. We calculated several thermophysical properties of FLiBe, including radial distribution functions, density, self-diffusion coefficients, thermal expansion, specific heat capacity, bulk modulus, viscosity, and thermal conductivity, and compared the results to the available experimental data. Our predicted properties are generally in very good agreement with experiments, and our results suggest that accounting for van der Waals dispersion during the generation of training data improves the predictions of the developed potentials. Our results demonstrate that the MTP framework is viable for modeling the thermophysical properties of molten salts.

## Acknowledgments

We gratefully acknowledge support from the Department of Energy (DOE) Office of Nuclear Energy's (NE) Nuclear Energy University Programs (NEUP) under award \(\#\) 21-24582.
2307.06355
Moving NS Punctures on Super Spheres
One of the subtleties that has made superstring perturbation theory intricate at high string loop order is the fact that as shown by Donagi and Witten, supermoduli space is not holomorphically projected, nor is it holomorphically split. In recent years, Sen (further refined by Sen and Witten) has introduced the notion of vertical integration in moduli space. This enables one to build BRST-invariant and well-defined amplitudes by adding certain correction terms to the contributions associated to the traditional "delta function" gauge fixing for the worldsheet gravitino on local patches. The Sen and Witten approach is made possible due to there being no obstruction to a smooth splitting of supermoduli space, but it may not necessarily be the most convenient or natural solution to the problem. In particular, this approach does not determine what these corrections terms actually are from the outset. Instead, it shows that such correction terms in principle exist, and when included make all perturbative amplitudes well-defined. There may be situations however where one would like to instead have a well-defined and fully determined path integral at arbitrary string loop order from the outset. In this paper, I initiate an alternative (differential-geometric) approach that implements the fact that a smooth gauge slice for supermoduli space always exists. As a warmup, I focus specifically on super Riemann surfaces with the topology of a sphere in heterotic string theory, incorporating the corresponding super curvature locally, and introduce a new well-defined smooth gauge fixing that leads to a globally defined path integral measure that translates arbitrary fixed ($-1$) picture NS vertex operators (or handle operators) (that may or may not be offshell) to integrated (0) picture. I also provide some comments on the extension to arbitrary super Riemann surfaces.
Dimitri P. Skliros
2023-07-12T18:00:01Z
http://arxiv.org/abs/2307.06355v3
# Moving NS Punctures on Super Spheres ###### Abstract One of the subtleties that has made superstring perturbation theory intricate at high string loop order is the fact that, as shown by Donagi and Witten, supermoduli space is not holomorphically projected, nor is it holomorphically split. In recent years, Sen introduced the notion of vertical integration in moduli space (further refined by Sen and Witten). This enables one to use the traditional (only locally-defined) gauge fixing for the worldsheet gravitino in local patches, allowing one to formulate the theory on the moduli space of ordinary Riemann surfaces, and then prescribes certain correction terms to account for the incorrect gauge fixing to restore BRST invariance. This approach makes use of the fact that there is no obstruction to a smooth splitting of supermoduli space. It may, however, not necessarily be the most convenient or natural solution to the problem. There may be situations where one would like to have a well-defined path integral at arbitrary string loop order from the outset. In this paper I initiate an alternative approach that implements the fact that a smooth gauge slice for supermoduli space always exists. As a warmup, I focus specifically on super Riemann surfaces with the topology of a 2-sphere in heterotic string theory, incorporating the corresponding super curvature locally, and introduce a new well-defined smooth gauge fixing that leads to a globally defined path integral measure that translates fixed \((-1)\) picture vertex operators (or handle operators) (that may or may not be offshell) to integrated (0) picture. I also provide some comments on the extension to arbitrary super Riemann surfaces. ## 1 Introduction One of the most basic starting points for superstring perturbation theory is the notion of a vibrating loop of string, suitably formulated so as to naturally incorporate the elementary principles of quantum mechanics and relativity. A loop of string, in turn, has left- and right-moving degrees of freedom which turn out to be largely independent [1, 2]. Famously, in the heterotic string [3, 4] this distinction between chiral and anti-chiral halves is so stark that (in one formulation) we can even regard the two halves as living in a different number of spacetime dimensions [2, 5]. At tree level, this "chiral factorisation" leads to a whole host of interesting developments, such as the celebrated Kawai-Lewellen-Tye (KLT) relations [6], which have in turn led to a whole range of progress (see [7] and references therein), such as BCJ duality, double copy constructions, colour-kinematics dualities, ambitwistor strings, and various other recent incarnations; see also [8] and references therein. Although this has also led to vast simplifications, enabling, e.g., one to carry out computations at high loop orders in the context of supergravity, the string theory understanding thereof at high loop order is much less understood or developed. In the full string theory context, at loop level, a chiral factorisation (at least at the level of integrands) that is reminiscent of the KLT relations is also present under certain assumptions. This in turn led to the D'Hoker and Phong chiral splitting theorem [9] (elaborated on in detail and for general constant backgrounds in bosonic string theory in [10]). This theorem is used in a number of recent applications [11, 12], including the celebrated 2-loop calculations [13]. 
Briefly, this theorem states that (super)string integrands chirally factorise when we hold the loop momenta (and Dirac zero modes when present) fixed. It however relies on the Belavin-Knizhnik theorem [14] which is a statement about the chiral factorisation of the ghost or superghost contributions to the superstring path integral measure when the total central charge vanishes. And this in turn is based on the assumption that supermoduli space is holomorphically split, an assumption that has been suspected to be incorrect for decades [15, 16, 17], and is now known to break down at sufficiently high genus as shown by Donagi and Witten in [18]. So it turns out that superstring amplitudes do not chirally factorise beyond a sufficiently high number of string loops. The global obstruction is associated to the fact [18] that supermoduli space is not holomorphically split, nor is it holomorphically projected (see also [16] and especially [17] for some early work on this). To elaborate a little, in the RNS approach [2, 5, 16, 19, 20, 21] to superstring perturbation theory one usually begins by considering embeddings of super Riemann surfaces of a fixed genus into spacetime (and/or a more abstract target space associated to the superconformal field theory of interest). After integrating over all such embeddings, we are to integrate over the corresponding supermoduli space, before finally summing over string loops [2, 5]. To every point in supermoduli space there is a corresponding equivalence class of super Riemann surfaces (super Riemann surfaces related by a superconformal transformation are deemed equivalent). Briefly, the problem is associated to the non-vanishing of certain cohomology classes [17]. In practice, this manifests as follows. Supermoduli space, \(\mathfrak{M}\), in general requires several coordinate charts, \((\mathcal{U}_{m},\widetilde{\tau}^{\tilde{\ell}}_{m};\tau^{\ell}_{m}|\chi^{ \alpha}_{m})\) (with a collection of open sets, \(\{\mathcal{U}_{m}\}\), covering \(\mathfrak{M}\), and \(\widetilde{\tau}^{\tilde{\ell}}_{m},\tau^{\ell}_{m}\) corresponding to even moduli and \(\chi^{\alpha}_{m}\) odd local coordinates), and it is not possible in general to find holomorphic transition functions on patch overlaps, \(\mathcal{U}_{m}\cap\mathcal{U}_{n}\), that preserve the \(\mathbf{Z}\) grading [17] as in: \[\begin{array}{l}\widetilde{\tau}^{\tilde{\ell}}_{m}=\widetilde{f}^{\tilde{ \ell}}_{mn}(\widetilde{\tau}^{1}_{n},\widetilde{\tau}^{2}_{n},\dots)\\ \tau^{\ell}_{m}=f^{\ell}_{mn}(\tau^{1}_{n},\tau^{2}_{n},\dots)\\ \chi^{\alpha}_{m}=\sum_{\beta}\chi^{\beta}_{n}h^{\alpha\beta}_{mn}(\tau^{1}_{ n},\tau^{2}_{n},\dots)\end{array}\qquad\text{ (holomorphic splitting)}\] which would correspond to having found a holomorphic splitting, which in turn does not exist in general. The Grassmann-even quantities \(\widetilde{f}_{mn},f_{mn}\) and \(h_{mn}\) are transition functions defined on patch overlaps, \(\mathcal{U}_{m}\cap\mathcal{U}_{n}\), depending on the even coordinates, \(\widetilde{\tau}^{\tilde{\ell}}_{n},\tau^{\ell}_{n}\), of the \((\mathcal{U}_{n},\widetilde{\tau}^{\tilde{\ell}}_{n};\tau^{\ell}_{n}|\chi^{ \alpha}_{n})\) chart as indicated. 
It is also not possible to find an atlas that preserves the \(\mathbf{Z}\) grading of the even coordinates only, as in: \[\begin{array}{l}\widetilde{\tau}^{\tilde{\ell}}_{m}=\widetilde{f}^{\tilde{ \ell}}_{mn}(\widetilde{\tau}^{1}_{n},\widetilde{\tau}^{2}_{n},\dots)\\ \tau^{\ell}_{m}=f^{\ell}_{mn}(\tau^{1}_{n},\tau^{2}_{n},\dots)\\ \chi^{\alpha}_{m}=g^{\alpha}_{mn}(\tau^{1}_{n},\tau^{2}_{n},\dots|\chi^{1}_{n},\chi^{2}_{n},\dots)\end{array}\qquad\text{ (holomorphic projection)}\] where the odd transition function, \(g^{\alpha}_{mn}\), only preserves the \(\mathbf{Z}\) grading of the odd supermoduli, \(\chi^{\alpha}_{m}\), mod 2. This corresponds to having found a holomorphic projection, which is also known to not exist in general [18]. What does exist instead is an atlas whose charts are glued together with both even and odd transition functions that only preserve the \(\mathbf{Z}\) grading mod 2: \[\begin{array}{l}\widetilde{\tau}^{\tilde{\ell}}_{m}=\widetilde{f}^{\tilde{ \ell}}_{mn}(\widetilde{\tau}^{1}_{n},\widetilde{\tau}^{2}_{n},\dots;\tau^{1}_ {n},\tau^{2}_{n},\dots|\chi^{1}_{n},\chi^{2}_{n},\dots)\\ \tau^{\ell}_{m}=f^{\ell}_{mn}(\widetilde{\tau}^{1}_{n},\widetilde{\tau}^{2}_{n},\dots;\tau^{1}_{n},\tau^{2}_{n},\dots|\chi^{1}_{n},\chi^{2}_{n},\dots)\end{array} \qquad\text{(general)}\] \[\begin{array}{l}\chi^{\alpha}_{m}=g^{\alpha}_{mn}(\widetilde{\tau}^{1}_{n}, \widetilde{\tau}^{2}_{n},\dots;\tau^{1}_{n},\tau^{2}_{n},\dots|\chi^{1}_{n}, \chi^{2}_{n},\dots)\end{array}\] so that \(\widetilde{f}^{\tilde{\ell}}_{mn},f^{\ell}_{mn}\) are even parity smooth functions of their arguments, whereas \(g^{\alpha}_{mn}\) are odd parity smooth functions of their arguments. The range of the various indices, \(\widetilde{\ell}=1,\dots,\widetilde{\mathfrak{m}}\), \(\ell=1,\dots,\mathfrak{m}\) and \(\alpha=1,\dots,\nu\), correspond to the relevant dimension, even\(|\)odd \(=\widetilde{\mathfrak{m}}+\mathfrak{m}|\nu\), of supermoduli space, whereas the labels \(m,n\) label the charts. (In all cases there are also the obvious compatibility requirements or cocycle relations associated to triple and higher patch overlaps.) In other words, there is no obstruction3 to a smooth splitting of supermoduli space. In fact, all obstructions to a splitting vanish on a smooth supermanifold [17], in that we can always interpolate between one sort of behaviour near the boundary of a patch and another on the interior [17]. There is some very important work that implements this observation, initiated by Sen [22] and further refined by Sen and Witten [23], with further clarifications by Wang and Yin in [24]. There is also a related more algebraic approach carried out by Erler in [25]. These developments are important because we did not have an explicit prescription in the RNS formalism to compute higher loop superstring amplitudes prior to it. Although one should be careful, because this does not mean that it is necessarily the best way to proceed, nor that it is particularly natural. The idea is to work in the picture-changing operator (PCO) formalism, where one has picked a local (e.g. delta function) gauge slice for the worldsheet gravitino that may or may not _a priori_ be globally well-defined. The odd moduli are then integrated out, leading to PCO insertions on an ordinary Riemann surface which is partitioned into regions. The locations of PCOs are chosen such that spurious poles (associated to the incorrect gauge fixing) in each region are avoided. 
After carrying out this procedure for every region, if one tries to simply add the contributions from every region one finds that the resulting quantity is not well-defined. This manifests in a number of ways, e.g. amplitudes are not gauge invariant (BRST-exact vertex operators do not decouple) [22]. The prescription, dubbed "vertical integration" [22, 23], is to nevertheless go ahead and add the contributions from each region, and then to correct for the incorrect gauge fixing by including correction terms associated to the interface between regions. This effectively connects the aforementioned locally-defined sections along a fibre (corresponding to a coordinate choice) over a fixed point in moduli space. In this manner gauge invariance is restored. Vertical integration, effectively, makes use of the fact that a smooth splitting of supermoduli space always exists. But the situation is not entirely satisfactory yet [26]. The question I would like to ask in this note therefore is how to construct a smooth gauge slice for the integral over supermoduli in the context of heterotic string theory that is well-defined from the outset. In fact, I will consider the perhaps simplest example of such a smooth gauge slice, namely I will derive how to translate a Neveu-Schwarz (NS) puncture across a super Riemann surface with the topology of a 2-sphere. The gauge slice will be defined by making use of a super Riemann surface analogue of a metric, and I will choose a specific superconformal frame that is globally well-defined while keeping local super curvature manifest. In other words, I will introduce a metric on [a subset of] supermoduli space and (after a suitable gauge fixing of a sU(1) invariance and superconformal symmetry) use it to induce a smooth dependence of superconformal transition functions (defining the super Riemann surface) on the supermoduli associated to the location of an NS puncture. The gauge fixing will generalise Polchinski's "as flat as possible" gauge fixing in the bosonic string theory context [27] -reviewed in detail in [28]- suitably generalised to heterotic super Riemann surfaces. So in the current note I initiate a differential-geometric approach to supermoduli space, that is well-defined from the outset, that does not rely on existence of a holomorphic splitting or projection. A central role will be played by the local super curvature, which in a sense localises the "Wu-Yang" type contributions from patch overlaps, so that total derivatives in supermoduli space really do correspond to total derivatives in supermoduli space (as opposed to integrals of total derivatives that receive contributions from fictitious boundaries associated to patch overlaps [16]). (The mechanism is analogous to a baby version of the integral reviewed in Sec. 5.6 in [28] in the context of bosonic string theory.) Note that, because we are allowing for non-trivial super curvature, the point at infinity on the super plane is in a sense trivialised. So that (in a practical calculation) we only really need one coordinate chart for the entire sphere. This is to be contrasted with the approach of Nelson [29] and La and Nelson [30] (see also Mark Doyle's thesis4[31] for a very nice overview of this approach), where the local super curvature is instead hidden in the transition functions on an equatorial band of the super sphere (or in transition functions more generally), making it essential to consider more than a single chart. Footnote 4: I am grateful to Mark Doyle for sharing his PhD thesis with me. 
Although smoothly translating an NS puncture across a super sphere is perhaps the simplest non-trivial example, this could nevertheless be expected to have wide-ranging implications. Because in addition to providing the relevant map from fixed picture \(-1\) NS vertex operators to integrated \(0\) picture, this also (partially) provides an explicit expression for the measure associated to translating handle operators across a curved surface. Modulo the inclusion of the Ramond sector (that I will not discuss here), the latter provides the basic building block of arbitrarily higher-genus superstring amplitudes, and (due to the underlying smooth gauge slice [17]) there will be no obstruction at any string loop order. So that amplitudes at all loop orders can be treated on equal footing, which in turn one can hope might also be useful beyond perturbation theory. In Sec. 2 we begin by discussing a simple parametrisation of a 2-sphere using both a holomorphic and a smooth viewpoint. In Sec. 3 we define a new set of frame coordinates that we call'superconformal normal coordinates'. These coordinates (by analogy to Riemann normal coordinates in ordinary Riemannian geometry) will enable us to map (or pullback) our superframe (on which we use standard superconformal field theory techniques to define local operators and states) to an underlying curved super Riemann surface. In Sec. 4 we introduce the notion of super curvature that we will adopt throughout the article. In Sec. 5 we will derive the precise expression for the path integral measure that implements the aforementioned gauge slice. In Sec. 6 we show that the path integral measure contributions associated to the gauge slice of interest leads to the expected decoupling of BRST-exact insertions into the path integral. In the Discussion (Sec. 7) we provide some further context and generalisation to arbitrary super Riemann surfaces with an arbitrary number of handles while also highlighting a puzzle that seems to arise in this case. We also mention some future directions. ## 2 Super Riemann Surfaces Let us primarily construct a super Riemann surface, \(\Sigma\), with the topology of a 2-sphere. We can, for instance, glue two copies of the super plane (see also Sec. 5.2.1 in [20]) that are in turn parametrised by the superconformal charts \((\mathcal{U}_{u},u|\theta_{u})\) and \((\mathcal{U}_{w},w|\psi)\). We take these to be centred at \(\mathrm{q}_{u}\in S^{2}\) and \(\mathrm{q}_{w}\in S^{2}\) respectively so that \((u|\theta_{u})(\mathrm{q}_{u})=0\) and \((w|\psi)(\mathrm{q}_{w})=0\). We might think of these points as corresponding to north and south poles of the reduced 2-sphere. We then glue on \(\mathcal{U}_{u}\cap\mathcal{U}_{w}\) (which loosely speaking can be thought of as spanning an equatorial band) with the superconformal transition function, \(uw=1\). Demanding consistency with the superconformal condition, \(D_{\psi}u=\theta_{u}D_{\psi}\theta_{u}\), in turn determines the remaining transition function, \(\theta_{u}(w|\psi)\) (up to an immaterial sign since both the superconformal condition and \(uw=1\) are invariant under \(u|\theta_{u}\to u|-\theta_{u}\) and \(w|\psi\to w|-\psi\)). 
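To spell out the last step (this short check is added here for clarity and is not in the original text), write the odd transition function as \(\theta_{u}=g(w)\,\psi\), which is the most general odd function available in the absence of other odd parameters. With \(u=1/w\) one has \(D_{\psi}u=-\psi/w^{2}\) and \(D_{\psi}\theta_{u}=g(w)\), so the superconformal condition gives

\[D_{\psi}u=\theta_{u}D_{\psi}\theta_{u}\quad\Longrightarrow\quad-\frac{\psi}{w^{2}}=g(w)^{2}\,\psi\quad\Longrightarrow\quad g(w)=\pm\frac{i}{w},\]

reproducing the transition function \(\theta_{u}=\pm i\psi/w\) quoted in (2.1) below, with precisely the immaterial sign ambiguity mentioned above.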
Proceeding in a similar manner for the anti-chiral half, the full set of transition functions to cover the entire sphere is then [5, 21], \[\widetilde{u}(\widetilde{w})=\frac{1}{\widetilde{w}},\qquad u(w|\psi)=\frac{ 1}{w}\quad\text{and}\quad\theta_{u}(w|\psi)=i\frac{\psi}{w}, \tag{2.1}\] where we arbitrarily picked one of the two signs (the alternative choice is effectively equivalent to replacing \(i\to 1/i\)). In solving the superconformal condition one makes use of the fact that it holds for all \(\psi\). To transition to a smooth description, it is convenient to use the above charts, \((\mathcal{U}_{u},u|\theta_{u})\) and \((\mathcal{U}_{w},w|\psi)\), to construct a super Riemann surface version of a metric on the sphere. The super analogue of a Riemannian metric has been provided in Sec. 3.6.3 in [21], see also [16, 32] for some early work along these lines. As discussed in [21] (and elaborated on very briefly in Sec. 4 here where we also introduce a notion of super curvature), the appropriate structure, locally (where we pick local coordinates \(\widetilde{z}\);\(z|\theta\)) is the following. We can regard a metric as a nonzero section, \(\widetilde{E}=e^{\bar{\varphi}}{\rm d}\widetilde{z}\), of \(T_{L}^{*}\Sigma\) (of rank \(1|0\)), and a nonzero section, \(E=e^{\varphi}\varpi\), (with \(\varpi={\rm d}z-{\rm d}\theta\theta\)) of a rank \(1|0\) subbundle, \({\cal D}^{-2}\subset T_{R}^{*}\Sigma\) (with \(T_{R}^{*}\Sigma\) of rank \(1|1\)). We can then introduce a connection, \(\omega\), on \({\cal D}^{-2}\), and a corresponding gauge invariance, \(E\to e^{u}E\), \(\widetilde{E}\to e^{-u}\widetilde{E}\) and \(\omega\to\omega+{\rm d}u\). The combination \(g^{(z)}=\widetilde{E}\otimes E\), in particular, is then gauge invariant and globally-defined. We will call the quantity \(g^{(z)}\) a metric, but we will not be using this quantity to define areas or distances5, which is what metrics are usually good for. Instead, what is important for our purposes is that \(g^{(z)}\) is globally defined, so that it can be used to specify a gauge slice to translate frames (and hence also NS punctures) across super Riemann surfaces in a well-defined and smooth manner. Footnote 5: Indeed, it is well-known (see e.g. [33]) that using the notion of a metric to define β€œarea” and β€œdistance” on a super Riemann surface is problematic. I am also grateful to Branislav Jurco for some correspondence on this. For concreteness, we can actually proceed by direct analogy to the conformally-flat expression for the metric on an ordinary 2-sphere, \({\rm d}\tilde{w}{\rm d}w/(1+\tilde{w}w)^{2}\) (where we chose the radius of the 2-sphere, \(r=1/2\)). Taking the aforementioned comments into account, consider the specific local expression for the "metric": \[g^{(w)}=\frac{{\rm d}\tilde{w}({\rm d}w-{\rm d}\psi\psi)}{(1+\tilde{w}w)^{2}}, \qquad\mbox{with}\qquad e^{\tilde{\varphi}(\tilde{w};w|\psi)}=\frac{1}{(1+ \tilde{w}w)^{2}}. \tag{2.2}\] Note primarily that \(g^{(w)}\) is not only gauge invariant (see Sec. 4 for some elaboration on this), it is also globally-defined in the sense that it is invariant under the superconformal transformation (2.1), \(g^{(u)}=g^{(w)}\). One can check this by noting that under a general superconformal transformation, \(\tilde{u}\);\(u|\theta_{u}\to\tilde{w}\);\(w|\psi\), we have \({\rm d}\tilde{w}={\rm d}\widetilde{u}(\partial_{\tilde{u}}\tilde{w})\) and \(\varpi_{w}=\varpi_{u}(D_{\theta_{u}}\psi)^{2}\). 
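For the reader's convenience, here is the explicit two-line check (added here; it is not spelled out in the original text). Using \(u=1/w\) and \(\theta_{u}=i\psi/w\) from (2.1),

\[\mathrm{d}\widetilde{u}=-\frac{\mathrm{d}\widetilde{w}}{\widetilde{w}^{2}},\qquad\varpi_{u}=\mathrm{d}u-\mathrm{d}\theta_{u}\,\theta_{u}=-\frac{\mathrm{d}w}{w^{2}}+\frac{\mathrm{d}\psi\,\psi}{w^{2}}=-\frac{\varpi_{w}}{w^{2}},\qquad 1+\widetilde{u}u=\frac{1+\widetilde{w}w}{\widetilde{w}w},\]

so that

\[g^{(u)}=\frac{\mathrm{d}\widetilde{u}\,\varpi_{u}}{(1+\widetilde{u}u)^{2}}=\frac{\big(-\mathrm{d}\widetilde{w}/\widetilde{w}^{2}\big)\big(-\varpi_{w}/w^{2}\big)\,(\widetilde{w}w)^{2}}{(1+\widetilde{w}w)^{2}}=\frac{\mathrm{d}\widetilde{w}\,\varpi_{w}}{(1+\widetilde{w}w)^{2}}=g^{(w)},\]

confirming that the local expression (2.2), written in the \(u\) chart, is indeed invariant under (2.1).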
One may wonder to what extent such an expression for \(g^{(w)}\) in (2.2) always exists on a super Riemann surface with the topology of a 2-sphere. In fact, one can always arrive at this expression starting from any globally-defined (but otherwise arbitrary) metric, \(g^{(w^{\prime})}\), of the correct topology, which may in turn involve any combination of odd and even variables (subject to the fact that \(e^{\varphi^{\prime}(\tilde{w}^{\prime};w^{\prime}|\psi^{\prime})}\) is even), by a unique (up to a phase) superconformal transformation, \(\tilde{w}^{\prime}\);\(w^{\prime}|\psi^{\prime}\to\tilde{w}\);\(w|\psi\). A sketch of a proof that one can always map to a metric of the form (2.2) starting from an arbitrary metric is as follows: one can show this explicitly by building a Taylor series expansion for \(\tilde{w}(\tilde{w}^{\prime})\);\(w(w^{\prime}|\psi^{\prime})|\psi(w^{\prime}|\psi^{\prime})\) in terms of \(\varphi^{\prime}(\tilde{w}^{\prime};w^{\prime}|\psi^{\prime})\) and its derivatives (evaluated at some base point of our choice). This Taylor series is guaranteed to be convergent since it is superconformal. The coefficients of this series expansion will, of course, not be superconformal in general (since they are constructed out of \(\varphi^{\prime}\) and its derivatives). (This calculation is similar to that elaborated on below, see (3.5) and the associated discussion.) Let us pause momentarily to make a brief remark on notation before we embark on the construction of the smooth gauge slice of interest. We denote the coordinate, \(\tilde{w}\);\(w|\psi\), concisely by the superscript \({}^{(w)}\). A superscript \({}^{(z)}\) will similarly refer to the superconformal coordinate, \(\tilde{z}\);\(z|\theta\), and we reserve this notation for a special frame called a "superconformal normal coordinate" frame (that we will define momentarily). This will have the property that the origin, \(\tilde{z}\);\(z|\theta=0\);\(0|0\), is identified with a supermodulus, \(\widetilde{v}\);\(v|\chi\), in the \({}^{(w)}\) coordinate system, which will in turn allow us to insert NS punctures at \(\tilde{z}\);\(z|\theta=0\);\(0|0\) and then translate them to integrated picture by mapping to the \({}^{(w)}\) coordinate system. We can then move the NS puncture across the super Riemann surface by allowing \(\widetilde{v}\);\(v|\chi\) to vary, or, more precisely, by associating this quantity with a supermodulus. To simplify the notation, in the superconformal chart with coordinates \(\tilde{z}\);\(z|\theta\) we will sometimes write \(\varphi^{(z)}\) as \(\varphi\), and in the superconformally-related chart \(\tilde{w}\);\(w|\psi\) we will occasionally write \(\varphi^{(w)}\) as \(\hat{\varphi}\), as in (2.2). We also write \(\hat{\varpi}={\rm d}w-{\rm d}\psi\psi\) and \(\varpi={\rm d}z-{\rm d}\theta\theta\), and also \(D_{\theta}=\partial_{\theta}+\theta\partial_{z}\).

## 3 Superconformal Normal Coordinates

We now want to insert an NS puncture at some point on the super Riemann surface that, in the \({}^{(w)}\) coordinate with metric defined in (2.2), will correspond to the coordinate: \(\widetilde{w}\);\(w|\psi=\widetilde{v}\);\(v|\chi\). So the \(2|1\) parameters \(\widetilde{v}\);\(v|\chi\) will be identified with the even\(|\)odd moduli associated to this puncture. E.g., we might like to insert an NS vertex operator at this point.
We would like to use the standard operator/state correspondence of superconformal field theory on super Riemann surfaces, in which case it is natural to initially take this vertex operator to be in the \(-1\) picture [21, 34], defined using radial quantisation on the flat super plane with a chart \(\tilde{z}\);\(z|\theta\), inserted at a point with coordinate value \(0\);\(0|0\). So we start off with an NS vertex operator on a flat superplane in the \(-1\) picture inserted at a point \(\widetilde{z}\);\(z|\theta=0\);\(0|0\). To transition to a global picture and associate this NS puncture location with a supermodulus we need to translate it to integrated picture. But we wish to do so in such a way that local super curvature is stored locally. So we are generalising Polchinski's bosonic string construction [27] to the superstring. This is to be contrasted with the usual situation encountered in string field theory [35, 36] (originally pioneered by Nelson [29]), where the information that the sphere is curved is instead stored in the transition functions on patch overlaps, see also [30]. It will be much easier to transition to a globally well-defined construction if super curvature is stored locally, which will become possible if we allow the dependence on the supermoduli to be smooth. Notice that we do not want to assume any holomorphic splitting (or a holomorphic projection) of supermoduli space, which as discussed already does not exist in general [18]. A smooth splitting, however, always exists. After we have translated this vertex operator to integrated picture we can then integrate over \(\widetilde{v}\);\(v|\chi\) in the corresponding path integral. The claim I would like to put forward now is the following. There exists a superconformal change of variables, \(\widetilde{w}\);\(w|\psi\rightarrow\widetilde{z}\);\(z|\theta\), that preserves the metric up to a superconformal factor, \[\frac{1}{(1+\widetilde{w}w)^{2}}{\rm d}\widetilde{w}({\rm d}w-{\rm d}\psi\psi)=e^{\varphi(\widetilde{z};z|\theta)}{\rm d}\widetilde{z}({\rm d}z-{\rm d}\theta\theta), \tag{3.3}\] such that at the location of the puncture, where \(\widetilde{z}\);\(z|\theta=0\);\(0|0\) (which maps to \(\widetilde{w}\);\(w|\psi=\widetilde{v}\);\(v|\chi\)), the new, \(g^{(z)}\), "metric" is "as flat as possible", \[\framebox{$\varphi(0;0|0)=0,\quad{\rm and}\quad D^{n}_{\theta}\varphi(0;0|0)=\partial^{n}_{\tilde{z}}\varphi(0;0|0)=0,\qquad n\geq 1$} \tag{3.4}\] The presence of super curvature (which in the \({}^{(z)}\) frame reads \({\cal R}_{\tilde{z}\theta}=-\partial_{\tilde{z}}D_{\theta}\varphi\)), see Sec. 4, means that mixed derivatives cannot be set to zero by a superconformal transformation, but there is no obstruction to setting purely holomorphic or purely anti-holomorphic derivatives of \(\varphi\) equal to zero at a point as done in (3.4). Again, this is to be contrasted with the approach pioneered by Nelson [29] (and which is also adopted in string field theory [35, 36]), which instead implicitly stores super curvature in the transition functions on patch overlaps. I would like to suggest that (3.4) is in fact the appropriate generalisation to the heterotic string of the gauge slice constructed in the context of bosonic string theory by Polchinski in [27].
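As an illustration (added here; it is simply the bosonic reduction of the construction that follows), set the odd variables to zero, so that the sphere metric reduces to \(\mathrm{d}\tilde{w}\,\mathrm{d}w/(1+\tilde{w}w)^{2}\), and consider the Möbius map centred on the puncture,

\[w=\frac{z+v}{1-\widetilde{v}z},\qquad\widetilde{w}=\frac{\widetilde{z}+\widetilde{v}}{1-v\widetilde{z}}.\]

A short computation gives

\[\frac{\partial w}{\partial z}=\frac{1+\widetilde{v}v}{(1-\widetilde{v}z)^{2}},\qquad\frac{\partial\widetilde{w}}{\partial\widetilde{z}}=\frac{1+\widetilde{v}v}{(1-v\widetilde{z})^{2}},\qquad 1+\widetilde{w}w=\frac{(1+\widetilde{v}v)(1+\widetilde{z}z)}{(1-v\widetilde{z})(1-\widetilde{v}z)},\]

so that

\[\frac{\mathrm{d}\widetilde{w}\,\mathrm{d}w}{(1+\widetilde{w}w)^{2}}=\frac{\mathrm{d}\widetilde{z}\,\mathrm{d}z}{(1+\widetilde{z}z)^{2}}\equiv e^{\varphi(\widetilde{z},z)}\,\mathrm{d}\widetilde{z}\,\mathrm{d}z,\qquad\varphi=-2\ln(1+\widetilde{z}z)=-2\widetilde{z}z+\dots,\]

which indeed satisfies \(\varphi(0)=0\) and \(\partial^{n}_{z}\varphi(0)=\partial^{n}_{\widetilde{z}}\varphi(0)=0\) for all \(n\geq 1\), with only mixed derivatives (i.e. the curvature) surviving at the puncture. The superconformal transformation (3.16) derived below reduces to precisely this map when the odd variables \(\theta,\psi,\chi\) are set to zero.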
The advantage of this gauge slice will be that the dependence on the supermoduli \(\widetilde{v}\);\(v|\chi\) will automatically be smooth and globally well-defined, without sacrificing the ability to distinguish left- from right-moving modes (which is in turn vital in order to be able to even define the heterotic string using a worldsheet description). Loosely speaking, this becomes possible by storing (anti-)chiral terms in the super frame coordinates (where we glue with superconformal transition functions that preserve the notion of "left"- and "right"-moving modes), whereas the transition functions of the base (or Einstein) coordinates of \(\Sigma\) (which parametrise the location of the frame on \(\Sigma\)) will instead be smooth. There is, in particular, no obstruction (other than a phase) to parametrising our super Riemann surfaces in this manner (and in fact this procedure generalises to arbitrary string loop order but this will be discussed elsewhere). Secondly, the gauge slice defined by (3.4) is such that, effectively, we can work with only a single patch throughout the entire sphere. This becomes possible by distributing super curvature throughout the surface (as opposed to storing it in transition functions on a patch overlap across an equatorial band of the sphere, or instead storing it at "infinity"). So our approach is quite close to Polchinski's original bosonic string theory calculation [27], but the implementation here is better suited to the heterotic string. We will prove that this gauge slice (3.4) exists and in the process we will also derive the precise superconformal transformation that implements it. It will turn out to be a specific OSp\((2,1)\) transformation (referred to as SPL\((2,\mathbb{C})\) in [37]) with parameters that depend smoothly on \(\widetilde{v}\);\(v|\chi\). We will next build this superconformal transformation explicitly using a Taylor series expansion for \(\widetilde{z}(\widetilde{w})\), \(\theta(w|\psi)\) and \(z(w|\psi)\). A Taylor series for, say, \(\theta(w|\psi)\) about \(\widetilde{w}\);\(w|\psi=\widetilde{v}\);\(v|\chi\) (taking into account that \(\theta(v|\chi)=0\)) takes the general form (see Sec. 2.7 in [38]): \[\theta(w|\psi)=\hat{\psi}D_{\psi}\theta(v|\chi)+\sum_{n=1}^{\infty}\frac{1}{n! }\hat{w}^{n}\Big{(}\partial_{w}^{n}\theta(v|\chi)+\hat{\psi}\,\partial_{w}^{n} D_{\psi}\theta(v|\chi)\Big{)}, \tag{3.5}\] with \(\hat{\psi}=\psi-\chi\) and \(\hat{w}=w-v-\psi\chi\). What distinguishes one superconformal transformation from another are the coefficients \(\partial_{w}^{n}D_{\psi}\theta\) and \(\partial_{w}^{n}\theta\). In particular, we need to derive explicit expressions for \(\partial_{w}^{n}D_{\psi}\theta\) and \(\partial_{w}^{n}\theta\) for \(n=0,1,\dots\) subject to the gauge slice conditions (3.4). Using the superconformal chain rule (according to which, \(D_{\psi}=(D_{\psi}\theta)D_{\theta}\), etc.), it is immediate to see that if (3.4) is satisfied at \(\widetilde{z}\);\(z|\theta=0\);\(0|0\) then so will \(D_{\psi}^{n}\varphi=\partial_{\widetilde{w}}^{n}\varphi=0\) be satisfied at \(\widetilde{z}\);\(z|\theta=0\);\(0|0\), and vice versa. Since furthermore we require the "metric" to be globally-defined, under superconformal transformations we wish to set \(g^{(z)}=g^{(w)}\), as indicated in (3.3). 
This implies the superconformal factor transforms as: \[\varphi^{(w)}=\varphi^{(z)}-\ln(D_{\theta}\psi)^{2}-\ln\partial_{\widetilde {z}}\tilde{w}, \tag{3.6}\] with \(\hat{\varphi}\equiv\varphi^{(w)}\) in the case of interest given by the sphere metric (2.2). We then hit (3.6) with an appropriate number of derivatives, \(\partial_{w}\), \(D_{\psi}\), and evaluate the resulting relations at \(\widetilde{z}\);\(z|\theta=0\);\(0|0\) (equivalently, \(\widetilde{w}\);\(w|\psi=\widetilde{v}\);\(v|\chi\)). After some elementary manipulations we find, \[\begin{split}\partial_{w}^{n}\theta&=D_{\psi} \Big{(}B_{n-1}(\tfrac{1}{2}\partial_{w}^{s}\widehat{\varphi})D_{\psi}\theta \Big{)}\qquad(n>1)\\ \partial_{w}\theta&=\frac{1}{2}D_{\psi}\widehat{ \varphi}\,D_{\psi}\theta\\ \partial_{w}^{n}D_{\psi}\theta&=B_{n}(\tfrac{1}{2} \partial_{w}^{s}\widehat{\varphi})D_{\psi}\theta\qquad(n>0)\end{split} \tag{3.7}\] which are evaluated at the location of the puncture. The quantities \(B_{n}(a_{s})\equiv B_{n}(a_{1},\dots,a_{n})\) are complete Bell polynomials. We will keep the argument, \(a_{s}=\frac{1}{2}\partial_{w}^{s}\widehat{\varphi}(\widetilde{v}\);\(v|\chi)\), implicit for conciseness. The relations (3.7) follow from (3.4), (3.6), and standard properties of complete Bell polynomials. We then substitute the relations (3.7) back into the Taylor expansion formula (3.5) to arrive at: \[\theta(w|\psi)=D_{\psi}\theta(v|\chi)\Bigg{\{}\widehat{\psi}+\sum_{n=1}^{ \infty}\frac{1}{n!}\widehat{w}^{n}\Big{(}D_{\psi}B_{n-1}+\frac{1}{2}D_{\psi} \widehat{\varphi}B_{n-1}+\widehat{\psi}B_{n}\Big{)}(\widetilde{v}\text{;}v| \chi)\Bigg{\}} \tag{3.8}\] We next evaluate the various complete Bell polynomials making use of the sphere metric (2.2), \[\begin{split} D_{\psi}B_{n-1}&=(n-1)!(n-1)\biggl{(} \frac{-\widetilde{v}}{1+\widetilde{v}v}\biggr{)}^{n}\chi\\ \frac{1}{2}D_{\psi}\widehat{\varphi}B_{n-1}&=(n-1)! \biggl{(}\frac{-\widetilde{v}}{1+\widetilde{v}v}\biggr{)}^{n}\chi\\ \widehat{\psi}B_{n}&=n!\biggl{(}\frac{-\widetilde{v }}{1+\widetilde{v}v}\biggr{)}^{n}(\psi-\chi),\end{split} \tag{3.9}\] and substitute these into (3.8). Carrying out the sum over \(n\) and rearranging we arrive at: \[\theta(w|\psi)=D_{\psi}\theta(v|\chi)\frac{(1+\widetilde{v}v)(\psi-\chi)- \widetilde{v}(w-v)\chi}{\widetilde{v}w+1} \tag{3.10}\] We have not yet determined \(D_{\psi}\theta(v|\chi)\). Putting this aside momentarily, we next extract the corresponding expression for \(z(w|\psi)\). A simple way to construct a Taylor series for \(z(w|\psi)\) is to substitute (3.10) into the superconformal condition, \(D_{\psi}z=\theta D_{\psi}\theta\), and integrate it using the boundary condition, \(z(v|\chi)=0\), (which is inherited from the fact that the puncture is inserted at \(\widetilde{z}\);\(z|\theta=0\)). This procedure leads to the explicit expression: \[z(w|\psi)=[D_{\psi}\theta(v|\chi)]^{2}\frac{(1+\widetilde{v}v)(w-v-\psi\chi)}{ \widetilde{v}w+1} \tag{3.11}\] Notice that despite the fact that the transition functions (3.10) and (3.11) are superconformal in \(w|\psi\), they are nevertheless only smooth in the supermoduli, \(\widetilde{v}\);\(v|\chi\). Carrying out the same procedure for the anti-chiral half, the relation analogous to (3.7) is (see Sec. 
2.4.2 in [28]), \(\partial_{\widetilde{w}}^{n}\tilde{z}=B_{n-1}(\partial_{\widetilde{w}}^{*} \widehat{\varphi})\), which is evaluated at \(\widetilde{v}\);\(v|\chi\), and therefore the Taylor series expansion for \(\tilde{z}(\tilde{w})\) around \(\widetilde{w}=\widetilde{v}\) reads: \[\tilde{z}(\tilde{w})=\partial_{\widetilde{w}}\tilde{z}(\widetilde{v})\biggl{\{} \sum_{n=1}^{\infty}\frac{1}{n!}(\tilde{w}-\tilde{v})^{n}B_{n-1}\Bigl{(} \partial_{\widetilde{w}}^{*}\widehat{\varphi}\Bigr{)}\biggr{\}} \tag{3.12}\] Evaluating the complete Bell polynomial taking into account (2.2) leads to, \[B_{n-1}(\partial_{\widetilde{w}}^{*}\widehat{\varphi})=n!\biggl{(}\frac{-v}{1 +\widetilde{v}v}\biggr{)}^{n-1},\] which in turn implies that (carrying out the sum over \(n\)) (3.12) reduces to: \[\widetilde{z}(\widetilde{w})=\partial_{\widetilde{w}}\tilde{z}(\widetilde{v}) \frac{(1+\widetilde{v}v)(\widetilde{w}-\widetilde{v})}{v\widetilde{w}+1} \tag{3.13}\] Returning now to the quantity \(D_{\psi}\theta(v|\chi)\) in (3.10) or (3.11), and also now \(\partial_{\widetilde{w}}\widetilde{z}(\widetilde{v})\) in (3.13), these are in fact not independently determined by the gauge slice. Instead, it is only the combination, \(\partial_{\tilde{w}}\widetilde{z}(\widetilde{v})\,(D_{\psi}\theta(v|\chi))^{2}=1/ (1+\widetilde{v}v)^{2}\), that is determined. As in the bosonic string [27], there is an obstruction (the Euler number) to setting the phase of \(\theta\) or \(z\) to zero globally, but this will be sufficient. (For a detailed derivation of this point see Sec. 2.5 in [28] and in particular the discussion on p. 46 in [28], and also Sec. 2.4.2 therein.) This is the topological or global origin of the \(L_{0}-\widetilde{L}_{0}=0\) constraint that must be satisfied by local vertex operators in superstring perturbation theory [27, 29] or string fields in string field theory [35, 36]. Notice that the superconformal condition implies \((D_{\psi}\theta)^{2}=\partial_{w}z-\partial_{w}\theta\,\theta\), and since \(\theta(v|\chi)=0\) the ambiguity is identical to that in the bosonic string [27, 28], namely: \[\partial_{\widetilde{w}}\widetilde{z}(\widetilde{v})\,\partial_{w}z(v|\chi)= \frac{1}{(1+\widetilde{v}v)^{2}}. \tag{3.14}\] This is as expected, because (apart from the notion of a spin structure) there is no new non-trivial topological information on the worldsheet that arises due to the presence of odd variables (see Sec. 2.1.2 in [20]). From (3.14) we see that there can be no \(\chi\) dependence in either \(\partial_{w}z(v|\chi)\) or \(\partial_{\widetilde{w}}\widetilde{z}(\widetilde{v})\), because there is no other odd variable present in these transition functions. (There can still be other odd moduli associated to other punctures or handle operators inserted elsewhere on the super Riemann surface.) Requiring that these become complex conjugates when we set odd variables to zero [20] then determines each of these up to a \(v,\widetilde{v}\)-dependent phase: \[\partial_{\widetilde{w}}\widetilde{z}(\widetilde{v})=\frac{e^{-i\alpha( \tilde{v},v)}}{1+\widetilde{v}v},\quad\partial_{w}z(v|\chi)=\frac{e^{i\alpha( \tilde{v},v)}}{1+\widetilde{v}v},\quad\mbox{and}\quad D_{\psi}\theta(v|\chi) =\frac{\pm e^{\frac{i}{2}\alpha(\tilde{v},v)}}{\sqrt{1+\widetilde{v}v}}. \tag{3.15}\] The phase \(\alpha(\tilde{v},v)\) is real when we set \(\widetilde{v}=v^{*}\) (where \(v^{*}\) is the complex conjugate of \(v\)). The sign in \(D_{\psi}\theta(v|\chi)\) is meaningful and is associated to a choice of spin structure. 
Although \(\alpha\) does depend on \(\widetilde{v},v\), it must always cancel out of observable quantities. It will therefore be convenient to absorb \(\alpha\) into a redefinition \(\widetilde{z}\);\(z|\theta\to e^{i\alpha}\widetilde{z}\);\(e^{-i\alpha}z|e^{-i\alpha/2}\theta\) and instead check explicitly that the physically-meaningful quantities do not depend on such a phase. Summarising, the superconformal transformation \(\widetilde{w}\);\(w|\psi\rightarrow\widetilde{z}\);\(z|\theta\) that maps the globally-defined sphere coordinate, \(\widetilde{w}\);\(w|\psi\), to the flat superplane6 coordinate, \(\widetilde{z}\);\(z|\theta\), that in turn translates a NS puncture inserted at \(\widetilde{w}\);\(w|\psi=\widetilde{v}\);\(v|\chi\) to the point \(\widetilde{z}\);\(z|\theta=0\);\(0|0\) is given by: Footnote 6: By β€œflat superplane coordinate” we mean the superconformal frame that is associated to the metric \(g^{(z)}=e^{\widetilde{v}}\mathrm{d}\widetilde{z}(\mathrm{d}z-\mathrm{d} \theta\theta)\) which satisfies \(\varphi=0\) (and in particular (3.4)) at the puncture \(\widetilde{z}\);\(z|\theta=0\);\(0|0\). \[\widetilde{z}(\widetilde{w}) =\frac{\widetilde{w}-\widetilde{v}}{v\widetilde{w}+1} \tag{3.16}\] \[z(w|\psi) =\frac{w-v-\psi\chi}{\widetilde{v}w+1}\] \[\theta(w|\psi) =\eta\frac{\sqrt{1+\widetilde{v}v}}{(\widetilde{v}w+1)}\psi-\eta \frac{\chi}{\sqrt{1+\widetilde{v}v}},\quad\eta=\pm 1.\] We also need the inverses, \[\begin{split}\widetilde{w}(\widetilde{z})&=\frac{ \widetilde{z}+\widetilde{v}}{-v\widetilde{z}+1}\\ w(z|\theta)&=\frac{z+v+\eta\theta\chi/\sqrt{1+ \widetilde{v}v}}{-\widetilde{v}z+1-\eta\theta\chi\widetilde{v}/\sqrt{1+ \widetilde{v}v}}\\ \psi(z|\theta)&=\frac{\sqrt{1+\widetilde{v}v}\, \eta\theta+\chi}{-\widetilde{v}z+1},\end{split} \tag{3.17}\] which follow from (3.16). By construction, notice that \(\widetilde{w}=w^{*}\) when we set the odd variables equal to zero, but we have not required any stronger version of complex conjugation. The quantities (3.16) and (3.17) define the notion of a specific super frame, \(E_{A}{}^{M}\), and its inverse, \(E_{M}{}^{A}\), respectively, where \(A=\widetilde{z},z,\theta\) denotes the frame indices and \(M=\widetilde{w},w,\psi\) could be thought of as Einstein (or base coordinate) indices. 
At \(\widetilde{w}\);\(w|\psi=\widetilde{v}\);\(v|\chi\) (equivalently \(\widetilde{z}\);\(z|\theta=0\);\(0|0\)), these read explicitly: \[\begin{split} E_{M}{}^{A}&=\begin{pmatrix}E_{\widetilde{w}}{}^{\tilde{z}}&E_{\widetilde{w}}{}^{z}&E_{\widetilde{w}}{}^{\theta}\\ E_{w}{}^{\tilde{z}}&E_{w}{}^{z}&E_{w}{}^{\theta}\\ E_{\psi}{}^{\tilde{z}}&E_{\psi}{}^{z}&E_{\psi}{}^{\theta}\end{pmatrix}=\begin{pmatrix}\frac{1}{1+\widetilde{v}v}&0&0\\ 0&\frac{1}{1+\widetilde{v}v}&-\frac{\eta\chi\widetilde{v}}{(1+\widetilde{v}v)^{3/2}}\\ 0&-\frac{\chi}{1+\widetilde{v}v}&\frac{\eta}{\sqrt{1+\widetilde{v}v}}\end{pmatrix}\\ E_{A}{}^{M}&=\begin{pmatrix}E_{\widetilde{z}}{}^{\tilde{w}}&E_{\widetilde{z}}{}^{w}&E_{\widetilde{z}}{}^{\psi}\\ E_{z}{}^{\tilde{w}}&E_{z}{}^{w}&E_{z}{}^{\psi}\\ E_{\theta}{}^{\tilde{w}}&E_{\theta}{}^{w}&E_{\theta}{}^{\psi}\end{pmatrix}=\begin{pmatrix}1+\tilde{v}v&0&0\\ 0&1+\tilde{v}v&\chi\tilde{v}\\ 0&\eta\chi\sqrt{1+\widetilde{v}v}&\eta\sqrt{1+\widetilde{v}v}\end{pmatrix}\end{split} \tag{3.18}\] The individual entries are defined as expected, e.g., \(E_{w}{}^{z}=\frac{\partial z}{\partial w}|_{w=v}\), \(E_{\psi}{}^{z}=\frac{\partial z}{\partial\psi}|_{w=v}\), etc., whereas the corresponding Berezinian [20] for the change of coordinates (evaluated at the puncture) is given by: \[\mathcal{D}(\widetilde{z},z|\theta)=\mathcal{D}(\widetilde{v},v|\chi)\text{Ber}E_{M}{}^{A},\qquad\text{with}\qquad\text{Ber}E_{M}{}^{A}=\frac{\eta}{(1+\widetilde{v}v)^{3/2}}. \tag{3.19}\]

## 4 Super Curvature

It proves useful, especially in the case of more general super Riemann surfaces, to introduce the notion of super curvature. Rather than provide the details of this general discussion here, however, we will only present, very briefly, the ingredients we will be needing in the current paper. We can define the super curvature of a heterotic super Riemann surface, \(\Sigma\), as follows. Following [21], we first decompose the cotangent bundle, \(T^{*}\Sigma=T^{*}_{L}\Sigma\oplus T^{*}_{R}\Sigma\) (where \(T^{*}_{L}\Sigma\) is of rank \(1|0\) and \(T^{*}_{R}\Sigma\) is of rank \(1|1\)) by declaring that \(T^{*}_{L}\Sigma\) is generated by a quantity \(\widetilde{E}\) whereas \(T_{R}^{*}\Sigma\) is generated by \(E\) and \(F\) (see Sec. 3.6 and in particular Sec. 3.6.3 in [21]). The quantity \(E\) generates a subbundle, \({\cal D}^{-2}\subset T_{R}^{*}\Sigma\). We then introduce a connection, \(\omega\), on the line bundle, \({\cal D}^{-2}\), and postulate a gauge invariance, \[\widetilde{E}\to e^{-u}\widetilde{E},\qquad E\to e^{u}E,\qquad F\to e^{u/2}F,\qquad\omega\to\omega+{\rm d}u. \tag{4.20}\] The corresponding gauge-covariant exterior derivatives of \(\widetilde{E}\), \(E\) and \(F\) are then: \[\begin{split}{\cal D}\widetilde{E}&=({\rm d}+\omega)\widetilde{E}\\ {\cal D}E&=({\rm d}-\omega)E\\ {\cal D}F&=({\rm d}-{\textstyle\frac{1}{2}}\omega)F,\end{split} \tag{4.21}\] where \({\rm d}={\rm d}\widetilde{z}\,\partial_{\bar{z}}+\varpi\partial_{z}+{\rm d}\theta D_{\theta}\) is the ordinary exterior derivative, and a convenient component expansion for \(\omega\) in the chart \(({\cal U}_{\mathbf{z}},\widetilde{z};\!z|\theta)\) is then \(\omega={\rm d}\widetilde{z}\,\omega_{\bar{z}}+\varpi\omega_{z}+{\rm d}\theta\,\omega_{\theta}\). The super analogues of metric compatibility and vanishing torsion are encoded in: \[{\cal D}\widetilde{E}=0,\qquad{\cal D}E+F\wedge F=0 \tag{4.22}\] In practice, it is convenient to fix the above gauge invariance.
Omitting details, in a local chart \({}^{(z)}\) this analysis leads to the following explicit expressions: \[\widetilde{E}^{({\mathbf{z}})}=e^{\varphi}{\rm d}\widetilde{z},\qquad E^{({\mathbf{z }})}=\varpi,\qquad F^{({\mathbf{z}})}={\rm d}\theta+\varpi{1\over 2}D_{\theta}\varphi \tag{4.23}\] where \(\varphi(\widetilde{z};\!z|\theta)\) is a smooth function of the arguments. Notice that the "metric" we defined in Sec. 3 (and as very briefly mentioned in Sec. 2), namely \(g^{(z)}=e^{\varphi}{\rm d}\widetilde{z}\otimes\varpi\), is none other than the gauge-invariant combination, \(\widetilde{E}\otimes E\), after gauge fixing. (We could have included a term \(\lambda\widetilde{E}\otimes F^{2}\) in \(g^{(z)}\), with \(\lambda\) an odd smooth function of \(\widetilde{z};\!z|\theta\); this would also be gauge-invariant, but it is not necessary to include this since our expression for \(g^{(z)}\) is already globally-defined as it stands.) The gauge-fixed expression for the connection, \(\omega\), in turn reads: \[\omega={\rm d}\widetilde{z}\,\omega_{\bar{z}}+\varpi\,\omega_{z}+{\rm d} \theta\,\omega_{\theta},\quad\mbox{with}\quad\begin{cases}\omega_{\bar{z}}=0 \\ \omega_{z}=-\partial_{z}\varphi\\ \omega_{\theta}=-D_{\theta}\varphi.\end{cases} \tag{4.24}\] We can use these quantities to arrive at a useful notion of super curvature, \({\cal R}\), defined by: \[{\cal D}^{2}=n{\cal R},\quad\mbox{with}\quad{\cal R}=\varpi{\rm d}\widetilde{ z}\,{\cal R}_{z\bar{z}}+{\rm d}\widetilde{z}{\rm d}\theta\,{\cal R}_{\bar{z} \theta},\] and \(n\) is the \({\rm U}(1)\) weight of the superconformal tensor on which the operator \({\cal D}^{2}\) acts. We have taken into account that \({\cal D}\) preserves \({\rm U}(1)\) weight, and have defined the quantities \({\cal R}_{z\bar{z}}\equiv-D_{\theta}{\cal R}_{\bar{z}\theta}\) and: \[{\cal R}_{\bar{z}\theta}\equiv-\partial_{\bar{z}}D_{\theta}\varphi. \tag{4.25}\] We will usually refer to the component, \({\cal R}_{\bar{z}\theta}\), (out of which the entire expression for \({\cal R}\) can be reconstructed) as the super curvature. Under superconformal changes of coordinates (i.e. analytic maps \(\mathbf{z}=\tilde{z};\!z|\theta\rightarrow\mathbf{w}=\tilde{w} (\tilde{z});\!w(z|\theta)|\psi(z|\theta)\) subject to \(D_{\theta}w=\psi D_{\theta}\psi\)) it transforms as a section of \((T_{L}^{*}\Sigma)\otimes{\cal D}^{-1}\), in particular, \[{\cal R}_{\bar{z}\theta}={\cal R}_{\bar{w}\psi}(\partial_{\tilde{z}}\tilde{w}) (D_{\theta}\psi),\] the corresponding U(1) weight being therefore \(n=-\frac{1}{2}\). Super curvature thus transforms as an odd smooth section of the Berezinian. The quantity \({\cal R}\) is globally-defined, in the sense that under superconformal transformations it transforms as: \({\cal R}^{(\mathbf{z})}={\cal R}^{(\mathbf{w})}.\) We will often find it convenient to work in terms of the super curvature, \({\cal R}_{\bar{z}\theta}\), and its derivatives, since it is these quantities that appear in the path integral measure and vertex operators in heterotic string theory. If we now focus on the specific super Riemann surface of interest, namely that with the topology of a 2-sphere, in the \({}^{(w)}\) superconformal frame (defined in Sec. 2) the super curvature is given by the local expression, \({\cal R}_{\bar{w}\psi}=-\partial_{\bar{w}}D_{\psi}\hat{\varphi}\), so that according to (2.2): \[{\cal R}_{\bar{w}\psi}=\frac{2\psi}{(1+\tilde{w}w)^{2}}. 
\tag{4.26}\] Since super curvature corresponds to an odd smooth section of the Berezinian,7 \({\cal R}_{\bar{w}\psi}=(\partial_{\tilde{z}}\tilde{w}\,D_{\theta}\psi)^{-1}{\cal R}_{\bar{z}\theta}\), (the integration measure, \({\cal D}(\tilde{w};\!w|\psi)={\cal D}(\tilde{z};\!z|\theta)\partial_{\tilde{z}}\tilde{w}\,D_{\theta}\psi\), transforms in the opposite manner), the integral, Footnote 7: We are being somewhat heuristic here, a more general discussion will be presented elsewhere. \[\chi=\frac{1}{2\pi}\int{\cal D}(\tilde{w};\!w|\psi){\cal R}_{\bar{w}\psi},\] is well-defined and in fact equals the Euler characteristic of the super Riemann surface. We leave it as an exercise for the reader to check that the Euler characteristic is given by its classical value, \(\chi=2\), as expected [21]. An important point is that (on the super sphere) we only need a single coordinate chart to compute this quantity, because we incorporate super curvature locally and furthermore it dies off sufficiently rapidly at infinity. The super curvature, \({\cal R}_{\bar{w}\psi}\), given in (4.26), is in the \({}^{(w)}\) frame. We will also need the corresponding expression in the \({}^{(z)}\) frame. Since the two frames are related by \({\cal R}_{\bar{z}\theta}=\partial_{\tilde{z}}\tilde{w}D_{\theta}\psi{\cal R}_{\bar{w}\psi}\), according to (3.17) and (4.26), \[{\cal R}_{\bar{z}\theta}(\widetilde{z};\!z|\theta)=\frac{2\theta}{(1+\tilde{z}z)^{2}}+\frac{2\eta\chi}{\sqrt{1+\tilde{v}v}(1+\tilde{z}z)^{2}},\] and therefore at \(\widetilde{z}\);\(z|\theta=0\);\(0|0\) (equivalently \(\widetilde{w}\);\(w|\psi=\widetilde{v}\);\(v|\chi\)), \[{\cal R}_{\widetilde{z}\theta}=\frac{2\eta\chi}{\sqrt{1+\widetilde{v}v}},\qquad\mbox{and}\qquad\frac{1}{2}D_{\theta}{\cal R}_{\widetilde{z}\theta}=1, \tag{4.27}\] with the following (purely chiral or purely anti-chiral) higher derivatives vanishing: \(\partial^{n}_{z}D_{\theta}{\cal R}_{\widetilde{z}\theta}=\partial^{n}_{\widetilde{z}}{\cal R}_{\widetilde{z}\theta}=0\) for all \(n=1,2,\dots\). Incidentally, there is also a notion of 'torsion' on super Riemann surfaces [39]. Since we will not be making explicit use of this below, for completeness we simply mention that the torsion constraints are automatically satisfied in the gauge slice of interest, and in particular we find, \({\cal T}_{\theta\theta}{}^{z}=2\), and \({\cal T}_{z\widetilde{z}}{}^{\theta}=-\frac{1}{2}{\cal R}_{\widetilde{z}\theta}\), with all remaining components equal to zero. ## 5 Path Integral Measure To implement the gauge slice developed in Sec. 3 in the corresponding path integral we need to determine the path integral measure. For this we will need to know the change in \(z(w|\psi),\theta(w|\psi)\) with respect to small variations in the supermoduli, \(\widetilde{v}\);\(v|\chi\), keeping the coordinate, \(\widetilde{w}\);\(w|\psi\), fixed. In fact, by keeping \(\widetilde{w}\);\(w|\psi\) fixed we are also keeping the underlying metric fixed.8 This is because (as seen in (2.2)) in the \({}^{(w)}\) coordinate system the metric depends solely on the coordinates \(\widetilde{w}\);\(w|\psi\). (This is to be contrasted with \(\varphi^{(z)}\), which also depends on the supermoduli.) The explicit expression for the path integral insertion that will implement our gauge slice is: Footnote 8: This hint provides the starting point towards understanding how to carry out the corresponding computation for arbitrary super Riemann surfaces with arbitrary super curvature (subject to the Euler number constraint).
It will be discussed elsewhere. \[\int{\cal D}(\widetilde{v},\!v|\chi)\delta(\widehat{\cal B}_{\widetilde{v}})\delta(\widehat{\cal B}_{v})\delta(\widehat{\cal B}_{\chi}), \tag{5.28}\] where we adopt the shorthand, \({\cal D}(\widetilde{v},\!v|\chi)=-i[{\rm d}\widetilde{v},\!{\rm d}v|{\rm d}\chi]\). (Note that the action also depends on supermoduli, and so this must be included when we actually integrate over them.) This quantity (5.28) acts on a single fixed-picture NS vertex operator in the \(-1\) picture (that may or may not be offshell) defined in the \({}^{(z)}\) frame and inserted at \(\widetilde{z}\);\(z|\theta=0\);\(0|0\). Let \(t\) stand for any of the quantities \(\widetilde{v}\), \(v\) or \(\chi\). Since there is only a single patch overlap, \({\cal U}_{z}\cap{\cal U}_{w}\) (which in turn corresponds to an annulus or a punctured disc with the origin \(\widetilde{z}\);\(z|\theta=0\);\(0|0\) absent), the superghost insertions appearing in (5.28) are then determined from: \[\widehat{\cal B}_{t}=\frac{1}{2\pi i}\int_{C_{zw}}\!\biggl{(}-[{\rm d}z|{\rm d}\theta]\biggl{[}\frac{\partial z}{\partial t}-\frac{\partial\theta}{\partial t}\theta\biggr{]}_{w|\psi}\!B_{z\theta}+(-)^{|t|}{\rm d}\widetilde{z}\biggl{[}\frac{\partial\tilde{z}}{\partial t}\biggr{]}_{\widetilde{w}}\!\widetilde{b}_{\bar{z}\bar{z}}\biggr{)}, \tag{5.29}\] which can be derived from the expression for the measure given by Witten in [2] by a procedure precisely analogous to that in the bosonic string as derived in Sec. 9 in [1] or Sec. 3 in [28]. The derivation linking the two viewpoints is in particular precisely analogous to the derivation linking the first and second equalities in eqn. (3.245) in [28], but we will omit the details. See also [24] for some further context and a more complete discussion. The contour \(C_{zw}\) in (5.29) traverses the annular overlap, \({\cal U}_{z}\cap{\cal U}_{w}\), enclosing the origin, \(\widetilde{z}\);\(z|\theta=0\);\(0|0\), in a counterclockwise sense from the viewpoint of \({\cal U}_{z}\) (so that \(\int_{C_{zw}}[{\rm d}z|{\rm d}\theta]\theta/z=-\int_{C_{zw}}{\rm d}\widetilde{z}/\widetilde{z}=2\pi i\)). We define \(|t|\) to be \(0\) or \(1\) for \(t\) of Grassmann-even or odd parity, respectively.9 Footnote 9: It might be useful to display the even and odd Grassmann parity quantities, \(|\widetilde{v}|=|v|=|\widehat{\cal B}_{\chi}|=0\) and \(|\chi|=|\widehat{\cal B}_{\bar{v}}|=|\widehat{\cal B}_{v}|=1\) respectively. The notation for the derivatives appearing in (5.29) indicates that we differentiate the frame coordinates, \(\widetilde{z},z,\theta\), in (3.16) with respect to the supermoduli, \(t=\widetilde{v}\), \(v\) or \(\chi\), keeping \(\widetilde{w}\);\(w|\psi\) fixed. After taking these derivatives we will make use of the inverse expressions given in (3.17) to eliminate the \(\widetilde{w}\);\(w|\psi\) dependence in favour of \(\widetilde{z},z,\theta\) (since the contour integrals in (5.29) are over \(\widetilde{z},z,\theta\), and furthermore the superghosts are also defined using the \({}^{(z)}\) frame). Another technical detail is that there is some information about the phase of \(z+\delta z(z|\theta)\), with \(\delta z(z|\theta)\) generated by the aforementioned supermoduli variations.
This \(\widetilde{v},v\)-dependent phase, \(e^{2i\,{\rm Im}\frac{\widetilde{v}\,\delta v}{1+\widetilde{v}v}}\), is not physically meaningful, so we can set it to zero provided we can show that physical observables do not depend on it, and there is a similar remark for the phase, \(e^{i\,{\rm Im}\frac{\widetilde{v}\,\delta v}{1+\widetilde{v}v}}\), of \(\theta+\delta\theta(z|\theta)\) (see Appendix A). A short computation (see Appendix A) implementing the above procedure then leads to the following results for the derivatives appearing in (5.29). In terms of the quantities, \[{\cal V}_{t}(z|\theta)\equiv\left[\frac{\partial z}{\partial t}-\frac{\partial\theta}{\partial t}\theta\right]_{w|\psi},\qquad{\rm and}\qquad\widetilde{\cal V}_{t}(\widetilde{z})\equiv\left[\frac{\partial\widetilde{z}}{\partial t}\right]_{\widetilde{w}}, \tag{5.30}\] one finds in particular: \[\begin{split}{\cal V}_{\widetilde{v}}(z|\theta)&=-\frac{1}{1+\widetilde{v}v}\biggl{(}z^{2}-\frac{2\eta\chi}{\sqrt{1+\widetilde{v}v}}z\theta\biggr{)},\qquad\widetilde{\cal V}_{\widetilde{v}}(\widetilde{z})=-\frac{1}{1+\widetilde{v}v}\\ {\cal V}_{v}(z|\theta)&=-\frac{1}{1+\widetilde{v}v}\biggl{(}1+\frac{2\eta\chi}{\sqrt{1+\widetilde{v}v}}\widetilde{v}\theta\biggr{)},\qquad\widetilde{\cal V}_{v}(\widetilde{z})=-\frac{\widetilde{z}^{2}}{1+\widetilde{v}v}\\ {\cal V}_{\chi}(z|\theta)&=\frac{\eta}{\sqrt{1+\widetilde{v}v}}\biggl{(}2\theta+\frac{\eta\chi}{\sqrt{1+\widetilde{v}v}}\biggr{)},\qquad\widetilde{\cal V}_{\chi}(\widetilde{z})=0\;.\end{split} \tag{5.31}\] Notice that the dependence of these quantities on \(z|\theta\) and \(\widetilde{z}\) is, respectively, superconformal, but the dependence on the supermoduli, \(\widetilde{v}\);\(v|\chi\), is instead only smooth. Substituting these expressions (5.31) into (5.29) while taking into account the contour integral representations for the \({}^{(z)}\) frame superghost modes, \[\begin{split}\widetilde{b}_{n}^{(z)}&=-\frac{1}{2\pi i}\oint[{\rm d}\tilde{z}]\,\tilde{z}^{n+1}\widetilde{b}_{\bar{z}\bar{z}}(\tilde{z})\\ b_{n}^{(z)}&=\frac{1}{2\pi i}\int[{\rm d}z|{\rm d}\theta]z^{n+1}B_{z\theta}(z|\theta)\\ \beta_{n+1/2}^{(z)}&=\frac{1}{2\pi i}\int[{\rm d}z|{\rm d}\theta]\theta z^{n+1}B_{z\theta}(z|\theta),\end{split} \tag{5.32}\] leads to the following explicit expressions for the measure: \[\begin{split}\widehat{\mathcal{B}}_{\tilde{v}}&=\frac{1}{1+\widetilde{v}v}\bigg{(}\widetilde{b}_{-1}^{(z)}+b_{1}^{(z)}+\frac{2\eta\chi}{\sqrt{1+\widetilde{v}v}}\,\beta_{1/2}^{(z)}\bigg{)}\\ \widehat{\mathcal{B}}_{v}&=\frac{1}{1+\widetilde{v}v}\,\bigg{(}b_{-1}^{(z)}+\widetilde{b}_{1}^{(z)}-\frac{2\eta\chi\widetilde{v}}{\sqrt{1+\widetilde{v}v}}\,\beta_{-1/2}^{(z)}\bigg{)}\\ \widehat{\mathcal{B}}_{\chi}&=\frac{\eta}{\sqrt{1+\widetilde{v}v}}\bigg{(}-2\beta_{-1/2}^{(z)}+\frac{\eta\chi}{\sqrt{1+\widetilde{v}v}}\,b_{-1}^{(z)}\bigg{)},\end{split} \tag{5.33}\] and in particular the full insertion (5.28) takes the form: \[\begin{split}\boxed{\int\mathcal{D}(\widetilde{v},v|\chi)\,e^{-I_{\widetilde{v}v\chi}}\,\delta(\widehat{\mathcal{B}}_{\tilde{v}})\delta(\widehat{\mathcal{B}}_{v})\delta(\widehat{\mathcal{B}}_{\chi})=}\\ &=\eta\int\mathcal{D}(\widetilde{v},v|\chi)(1+\widetilde{v}v)^{-3/2}e^{-I_{\widetilde{v}v\chi}}\bigg{(}\widetilde{b}_{-1}^{(z)}+b_{1}^{(z)}+\frac{2\eta\chi}{\sqrt{1+\widetilde{v}v}}\,\beta_{1/2}^{(z)}\bigg{)}\\ &\qquad\times\bigg{(}b_{-1}^{(z)}+\widetilde{b}_{1}^{(z)}-\frac{2\eta\chi\widetilde{v}}{\sqrt{1+\widetilde{v}v}}\,\beta_{-1/2}^{(z)}\bigg{)}\delta\bigg{(}-2\beta_{-1/2}^{(z)}+\frac{\eta\chi}{\sqrt{1+\widetilde{v}v}}\,b_{-1}^{(z)}\bigg{)}\end{split} \tag{5.34}\]
where \(e^{-I_{\widetilde{v}v\chi}}\) encodes the entire dependence of the action on the supermoduli, \(\widetilde{v}\);\(v|\chi\). We can determine this by Taylor series expansion in \(\widetilde{v}\);\(v|\chi\) around 0;0\(|\)0 and taking into account that under a generic change in supercomplex structure the action changes by an amount \(\delta I\) as displayed in (6.51). The derivative of the action with respect to a supermodulus, \(t\), is in turn given by (6.53), and so we can completely reconstruct the quantity \(e^{-I_{\widetilde{v}v\chi}}\) using this information. Equation (5.34) is the main result of the current note, but to show that it leads to a sensible path integral it is necessary to also show that BRST-exact states decouple; we discuss this next. ## 6 Gauge Invariance One of the most important consistency checks of (5.34) is to show that when we insert a BRST-exact vertex operator into the path integral the latter should vanish, at least up to a total derivative in supermoduli space. The relevant point here is therefore that the insertion (5.34), in particular, (anti-)commutes with the BRST charge up to a total derivative. We will find it convenient in this section to work in terms of a different set of supermoduli that we will label by: \(\widetilde{\mathbf{z}}\);\(\mathbf{z}|\boldsymbol{\theta}\). In fact, we will only define these implicitly, via their variations, but this will be all we need for the purposes of this section. Let us primarily consider the superconformal vector fields (5.31), in particular: \[\begin{split}\delta\widetilde{\mathcal{V}}(\widetilde{z})&=\delta\widetilde{v}\,\widetilde{\mathcal{V}}_{\tilde{v}}(\widetilde{z})+\delta v\,\widetilde{\mathcal{V}}_{v}(\widetilde{z})+\delta\chi\,\widetilde{\mathcal{V}}_{\chi}(\widetilde{z})\\ \delta\mathcal{V}(z|\theta)&=\delta\widetilde{v}\,\mathcal{V}_{\tilde{v}}(z|\theta)+\delta v\,\mathcal{V}_{v}(z|\theta)+\delta\chi\,\mathcal{V}_{\chi}(z|\theta),\end{split} \tag{6.35}\] which according to (5.31) take the explicit form: \[\begin{split}\delta\widetilde{\mathcal{V}}&=-\frac{\delta\widetilde{v}}{1+\widetilde{v}v}-\frac{\delta v}{1+\widetilde{v}v}\,\widetilde{z}^{2}\\ \delta\mathcal{V}&=-\frac{\delta\widetilde{v}}{1+\widetilde{v}v}\bigg{(}z^{2}-\frac{2\eta\chi}{\sqrt{1+\widetilde{v}v}}z\theta\bigg{)}-\frac{\delta v}{1+\widetilde{v}v}\bigg{(}1+\frac{2\eta\chi\widetilde{v}}{\sqrt{1+\widetilde{v}v}}\theta\bigg{)}+\frac{\eta\delta\chi}{\sqrt{1+\widetilde{v}v}}\bigg{(}2\theta+\frac{\eta\chi}{\sqrt{1+\widetilde{v}v}}\bigg{)}\end{split} \tag{6.36}\] We can then extract the variations \(\delta z(z|\theta)\) and \(\delta\theta(z|\theta)\) from \(\delta\mathcal{V}(z|\theta)\) using the identity: \[\begin{split}\delta z&=\delta\mathcal{V}-\frac{1}{2}\theta D_{\theta}\delta\mathcal{V}\\ \delta\theta&=\frac{1}{2}D_{\theta}\delta\mathcal{V},\end{split} \tag{6.37}\] which in turn follow from the linearised superconformal condition, \(D_{\theta}\delta z=\theta D_{\theta}\delta\theta+\delta\theta\).
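As a quick check (a one-line verification we spell out for orientation), the expressions (6.37) are indeed compatible with the linearised superconformal condition: with \(\delta\theta=\frac{1}{2}D_{\theta}\delta\mathcal{V}\) and \(\delta z=\delta\mathcal{V}-\frac{1}{2}\theta D_{\theta}\delta\mathcal{V}\), and using \(D_{\theta}(\theta X)=X-\theta D_{\theta}X\) for \(X\) Grassmann-odd, one finds \[D_{\theta}\delta z=\tfrac{1}{2}D_{\theta}\delta\mathcal{V}+\tfrac{1}{2}\theta D_{\theta}^{2}\delta\mathcal{V}=\delta\theta+\theta D_{\theta}\delta\theta,\] as required.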
In terms of these we then have: \[\begin{split}\delta\widetilde{z}(\widetilde{z})&=-\frac{\delta\widetilde{v}}{1+\widetilde{v}v}-\frac{\delta v}{1+\widetilde{v}v}\,\widetilde{z}^{2}\\ \delta z(z|\theta)&=-\frac{\delta\widetilde{v}}{1+\widetilde{v}v}\bigg{(}z^{2}-\frac{\eta\chi}{\sqrt{1+\widetilde{v}v}}z\theta\bigg{)}-\frac{\delta v}{1+\widetilde{v}v}\bigg{(}1+\frac{\eta\chi\widetilde{v}}{\sqrt{1+\widetilde{v}v}}\theta\bigg{)}+\frac{\eta\delta\chi}{\sqrt{1+\widetilde{v}v}}\bigg{(}\theta+\frac{\eta\chi}{\sqrt{1+\widetilde{v}v}}\bigg{)}\\ \delta\theta(z|\theta)&=-\frac{\delta\widetilde{v}}{1+\widetilde{v}v}\bigg{(}z\theta+\frac{\eta\chi}{\sqrt{1+\widetilde{v}v}}z\bigg{)}+\frac{\delta v}{1+\widetilde{v}v}\frac{\eta\chi\widetilde{v}}{\sqrt{1+\widetilde{v}v}}-\frac{\eta\delta\chi}{\sqrt{1+\widetilde{v}v}},\end{split} \tag{6.38}\] and we define the supermoduli variations, \(\delta\widetilde{\mathbf{z}}\);\(\delta\mathbf{z}|\delta\boldsymbol{\theta}\), in terms of these as the change in frame at the location of the puncture: \[\begin{split}\delta\widetilde{\mathbf{z}}&:=-\delta\widetilde{z}(0)\\ \delta\mathbf{z}&:=-\delta z(0|0)\\ \delta\boldsymbol{\theta}&:=-\delta\theta(0|0)\end{split} \tag{6.39}\] From (6.38) and (6.39) it is seen that, \[\begin{split}\delta\widetilde{\mathbf{z}}&=\frac{\delta\widetilde{v}}{1+\widetilde{v}v}\\ \delta\mathbf{z}&=\frac{\delta v}{1+\widetilde{v}v}-\frac{\delta\chi\,\chi}{1+\widetilde{v}v}\\ \delta\boldsymbol{\theta}&=-\frac{\delta v\,\eta\chi\widetilde{v}}{(1+\widetilde{v}v)^{3/2}}+\frac{\delta\chi\,\eta}{\sqrt{1+\widetilde{v}v}}.\end{split} \tag{6.40}\] We then rearrange the latter two relations and substitute the resulting set into (6.36) to extract expressions for \(\delta\widetilde{\cal V}\) and \(\delta{\cal V}\) in terms of the variations, \(\delta\widetilde{\mathbf{z}}\);\(\delta\mathbf{z}|\delta\boldsymbol{\theta}\). By analogy to (6.35) we can write: \[\begin{split}\delta\widetilde{\cal V}(\widetilde{z})&=\delta\widetilde{\mathbf{z}}\,\widetilde{\cal V}_{\tilde{\mathbf{z}}}(\widetilde{z})+\delta\mathbf{z}\,\widetilde{\cal V}_{\mathbf{z}}(\widetilde{z})+\delta\boldsymbol{\theta}\,\widetilde{\cal V}_{\boldsymbol{\theta}}(\widetilde{z})\\ \delta{\cal V}(z|\theta)&=\delta\widetilde{\mathbf{z}}\,{\cal V}_{\tilde{\mathbf{z}}}(z|\theta)+\delta\mathbf{z}\,{\cal V}_{\mathbf{z}}(z|\theta)+\delta\boldsymbol{\theta}\,{\cal V}_{\boldsymbol{\theta}}(z|\theta),\end{split} \tag{6.41}\] and, in particular, this procedure leads to: \[\begin{split}\delta\widetilde{\cal V}&=\delta\widetilde{\mathbf{z}}\,(-1)+\delta\mathbf{z}\,\Big{(}-\frac{1}{2}D_{\theta}{\cal R}_{\tilde{z}\theta}\,\widetilde{z}^{2}\Big{)}+\delta\boldsymbol{\theta}\,\Big{(}-\frac{1}{2}{\cal R}_{\tilde{z}\theta}\,\widetilde{z}^{2}\Big{)}\\ \delta{\cal V}&=\delta\widetilde{\mathbf{z}}\,\Big{(}-\frac{1}{2}D_{\theta}{\cal R}_{\tilde{z}\theta}\,z^{2}+{\cal R}_{\tilde{z}\theta}\,z\theta\Big{)}+\delta\mathbf{z}\,(-1)+\delta\boldsymbol{\theta}\,(2\theta)\end{split} \tag{6.42}\] where we also took into account the super curvature expressions (4.27), namely: \[{\cal R}_{\tilde{z}\theta}=\frac{2\eta\chi}{\sqrt{1+\widetilde{v}v}},\qquad{\rm and}\qquad\frac{1}{2}D_{\theta}{\cal R}_{\tilde{z}\theta}=1. \tag{6.43}\] Of course, in the terms involving the combination \(\frac{1}{2}D_{\theta}{\cal R}_{\tilde{z}\theta}\) we can trivially replace this combination with 1, but we restored this explicitly to emphasise that the terms it appears multiplied by would have been absent in flat superspace (where instead \(D_{\theta}{\cal R}_{\tilde{z}\theta}\) would equal zero).
In particular, in conjunction with the first relation in (6.43), it allows us to differentiate between terms that appear due to super curvature of the super Riemann surface and terms that would have been present also in the absence of super curvature. To emphasise this point, according to the second relation in (6.43), it would be clearly inconsistent to naively project onto flat superspace by setting, \({\cal R}_{\tilde{z}\theta}=D_{\theta}{\cal R}_{\tilde{z}\theta}=0\), which is what we would have arrived at had we assumed a holomorphic splitting for supermoduli space. The presence of super curvature is mixing chiral and anti-chiral contributions in (6.42) in a manner that cannot be removed by a change of coordinates. Indeed, there is a topological obstruction to a "good" holomorphic splitting [17] on a super Riemann surface with the topology of a 2-sphere. Given the superconformal vector fields (6.42), the corresponding superghost contributions to the measure analogous to (5.28) in terms of \(\widetilde{\bf z}\);\({\bf z}|\mathbf{\theta}\) take the form: \[\int{\cal D}(\widetilde{\bf z},{\bf z}|\mathbf{\theta})e^{-I_{{\bf z}{ \bf z}{\bf\theta}}}\delta(\widehat{\bf B}_{\widetilde{\bf z}})\delta(\widehat{ \bf B}_{\bf z})\delta(\widehat{\bf B}_{\mathbf{\theta}}), \tag{6.44}\] where, as always, we adopt the shorthand, \({\cal D}(\widetilde{\bf z},\!{\bf z}|\mathbf{\theta})=-i[{\rm d} \widetilde{\bf z},\!{\rm d}{\bf z}|{\rm d}\mathbf{\theta}]\), and the quantity \(I_{\widetilde{\bf z}{\bf z}\mathbf{\theta}}\) encodes the entire supermoduli dependence of the full worldsheet superconformal field theory action. We can actually think of this as the full (matter plus ghosts) heterotic string theory action (6.49), discussed in further detail below. We will write: \[\begin{array}{l}I_{\widetilde{\bf z}{\bf z}\mathbf{\theta}}=I_{ \widetilde{\bf z}\mathbf{0}}+\mathbf{\theta}\partial_{ \mathbf{\theta}}I_{\widetilde{\bf z}\mathbf{0}}\\ =I_{\widetilde{\bf z}\mathbf{0}}-\mathbf{\theta}\widehat{ \partial}_{\mathbf{\theta}},\end{array}\] to denote the corresponding Taylor series expansion in \(\theta\). We have taken into account the relation (6.53), see also (6.48). Mapping \([{\rm d}\widetilde{\bf z},\!{\rm d}{\bf z}|{\rm d}\mathbf{\theta}]\) to the integral form, \({\rm d}\widetilde{\bf z}\,{\rm d}{\bf z}\,\delta({\rm d}\mathbf{\theta})\), (where each of the terms \({\rm d}\widetilde{\bf z}\), \({\rm d}{\bf z}\) and \(\delta({\rm d}\mathbf{\theta})\) have Grassmann-odd parity) and making use of (6.40) and expanding the delta function, it easily follows that the supermoduli measures are related as follows, \[{\rm d}\widetilde{\bf z}\,{\rm d}{\bf z}\,\delta({\rm d}\mathbf{\theta })={\rm d}\widetilde{v}\,{\rm d}v\,\delta({\rm d}\chi)\,\eta(1+\widetilde{v} v)^{-3/2}.\] (This is of course the same as the conclusion reached in (3.19), but we included this alternative derivation for variety.) So the overall factors outside the parentheses in the superghost expressions (5.33) are absorbed into the measure \({\cal D}(\widetilde{\bf z},\!{\bf z}|\mathbf{\theta})\) in the parametrisation in (6.44) that is determined by the superconformal vector fields (6.42). 
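As a small cross-check of the relation between the two supermoduli measures (a supplementary computation using the standard block formula \(\text{Ber}=\det(A-BD^{-1}C)/\det D\)), one can read off from (6.40) the super Jacobian of the map \((\widetilde{v},v|\chi)\mapsto(\widetilde{\mathbf{z}},\mathbf{z}|\boldsymbol{\theta})\): its even-even block is \(A=\text{diag}\big{(}\frac{1}{1+\widetilde{v}v},\frac{1}{1+\widetilde{v}v}\big{)}\), its odd-odd block is \(D=\eta/\sqrt{1+\widetilde{v}v}\), and the off-diagonal blocks \(B\), \(C\) are both proportional to \(\chi\), so that \(BD^{-1}C\propto\chi^{2}=0\) and \[\text{Ber}\,\frac{\partial(\widetilde{\mathbf{z}},\mathbf{z}|\boldsymbol{\theta})}{\partial(\widetilde{v},v|\chi)}=\frac{\det A}{\det D}=\frac{1}{(1+\widetilde{v}v)^{2}}\,\frac{\sqrt{1+\widetilde{v}v}}{\eta}=\frac{\eta}{(1+\widetilde{v}v)^{3/2}},\] which is precisely the factor relating \({\cal D}(\widetilde{\mathbf{z}},\mathbf{z}|\boldsymbol{\theta})\) to \({\cal D}(\widetilde{v},v|\chi)\) above, in agreement with (3.19).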
The derivation of (5.33) from (5.29) is in turn precisely analogous to the corresponding derivation leading to the following superghost contributions to the measure: \[\begin{split}\widehat{\bf B}_{\widetilde{\bf z}}&=\widetilde{b}_{-1}+\frac{1}{2}D_{\theta}{\cal R}_{\widetilde{z}\theta}\,b_{1}+{\cal R}_{\widetilde{z}\theta}\,\beta_{1/2}\\ \widehat{\bf B}_{\bf z}&=b_{-1}+\frac{1}{2}D_{\theta}{\cal R}_{\widetilde{z}\theta}\,\widetilde{b}_{1}\\ \widehat{\bf B}_{\mathbf{\theta}}&=-2\beta_{-1/2}-\frac{1}{2}{\cal R}_{\widetilde{z}\theta}\,\widetilde{b}_{1}\\ \widehat{\bf B}_{\mathbf{\theta}}+\mathbf{\theta}\widehat{\bf B}_{\bf z}&\equiv-2\beta_{-1/2}+\mathbf{\theta}b_{-1}-{\cal K}\,\widetilde{b}_{1}\end{split} \tag{6.45}\] where instead of (5.31) that led to (5.33) in the previous derivation we now made use of (6.42) to arrive at (6.45). We have also defined the parity-odd quantity, \[{\cal K}:=\frac{1}{2}\big{(}{\cal R}_{\widetilde{z}\theta}-\mathbf{\theta}D_{\theta}{\cal R}_{\widetilde{z}\theta}\big{)}. \tag{6.46}\] It will turn out that gauge invariance requires \({\cal K}=0\), but we want to remain agnostic about the precise coefficient of proportionality relating \({\cal R}_{\bar{z}\theta}\) to \(\boldsymbol{\theta}\) at this point. It is useful to adopt specific notation for the corresponding BRST (anti-)commutators, \[\widehat{\partial}_{\tilde{\bf z}}=\{Q_{B},\widehat{\cal B}_{\tilde{\bf z}}\},\quad\widehat{\partial}_{\bf z}=\{Q_{B},\widehat{\cal B}_{\bf z}\},\quad\widehat{D}_{\mathbf{\theta}}=[Q_{B},\widehat{\cal B}_{\mathbf{\theta}}],\quad\mbox{and}\quad\widehat{\partial}_{\mathbf{\theta}}=[Q_{B},\widehat{\cal B}_{\mathbf{\theta}}+\mathbf{\theta}\widehat{\cal B}_{\bf z}], \tag{6.47}\] where, in particular, \[\boxed{\widehat{\partial}_{\tilde{\bf z}}=\widetilde{L}_{-1}+\frac{1}{2}D_{\theta}{\cal R}_{\bar{z}\theta}\,L_{1}+\frac{1}{2}{\cal R}_{\bar{z}\theta}\,G_{1/2}} \tag{6.48}\] \[\widehat{\partial}_{\bf z}=L_{-1}+\frac{1}{2}D_{\theta}{\cal R}_{\bar{z}\theta}\,\widetilde{L}_{1}\] \[\widehat{D}_{\mathbf{\theta}}=G_{-1/2}+\frac{1}{2}{\cal R}_{\bar{z}\theta}\,\widetilde{L}_{1}\] \[\widehat{\partial}_{\mathbf{\theta}}=G_{-1/2}-\mathbf{\theta}L_{-1}+{\cal K}\widetilde{L}_{1}\] which follow from the defining relation (6.47) and explicit evaluation of the various (anti-)commutators. The operators in (6.48) are very much like derivative operators, but the objects they act on must not be annihilated by the various super Virasoro generators appearing in (6.48) in order to give a non-vanishing answer. So these operators do not quite replace the ordinary notion of a derivative. Clearly, there will also be local functions or superconformal tensors (such as \({\cal R}_{\bar{z}\theta}\)) that have non-trivial supermoduli dependence while nevertheless being annihilated by \(\widehat{\partial}_{t}\). So to complete the story we need to add ordinary supermoduli derivatives to the right-hand sides in (6.48) in order to be able to properly identify them with derivative operators that can act on both operators and ordinary superconformal tensors or functions (that do not necessarily have any operator dependence). Therefore, the total derivatives arising from the BRST (anti-)commutators that we expect to find should be of the form \(\partial_{t}+\widehat{\partial}_{t}\). (We will see in (6.60) and (6.61) that this is the combination that arises naturally.)
However, as we briefly summarise momentarily, the \(\widehat{\partial}_{t}\) contributions can be replaced by ordinary \(\partial_{t}\) derivatives of the worldsheet action (the precise relation being \(\partial_{t}e^{-I}=e^{-I}\widehat{\partial}_{t}\)), where \(I\) is the full worldsheet action of interest, namely that of the heterotic string plus superghosts: \[I=\frac{1}{2\pi}\int{\cal D}(\widetilde{z},\!z|\theta)\Big{(}\frac{1}{\alpha^{\prime}}\partial_{\bar{z}}X\,D_{\theta}X+\Lambda\,D_{\theta}\Lambda+B_{z\theta}\partial_{\bar{z}}C^{z}-\widetilde{B}_{\bar{z}\bar{z}}D_{\theta}\widetilde{C}^{\bar{z}}\Big{)}, \tag{6.49}\] where we will set \(\alpha^{\prime}=2\) and have adopted the notation in [21]. Briefly, writing \(I=I_{\rm matter}+I_{\rm ghosts}\), the matter sector, \(I_{\rm matter}\), receives contributions from the scalar superfields, \(X^{\mu}(\widetilde{z};\!z|\theta)\), \(\mu=0,\ldots,9\), that map the string worldsheet, \(\Sigma\), into flat Euclidean spacetime \({\bf R}^{10}\), and the current algebra fermions, \(\Lambda_{a}(\widetilde{z})\), with \(a=1,\ldots,32\). The latter correspond to spinor superfields taking values in \(\Pi{\cal L}\), i.e. they are fermionic fields taking values in a square root \({\cal L}\) of the line bundle \(\mbox{\it Ber}(\Sigma_{L})\). The argument of \(\Lambda_{a}(\widetilde{z})\) is meant to indicate that the line bundle \({\cal L}\) is anti-holomorphic, so that it can be constructed using anti-holomorphic transition functions (that in the indicated chart are functions of \(\widetilde{z}\) only) and so commute with \(D_{\theta}\). Accordingly, the \(\Lambda\)-matter sector in (6.49) is a section of \({\cal L}^{2}\otimes{\cal D}^{-1}\cong\mbox{\it Ber}(\Sigma_{L}\times\Sigma_{R})\) and can therefore be integrated. See Sec. 3.1-3.3 in [21] for further detail. Sums over \(\mu\) and \(a\) are implicit in (6.49). The superghost sector, \(I_{\rm ghosts}\), of the action receives contributions from the superghosts, \(B(z|\theta)\) and \(C(z|\theta)\), which are sections of \({\cal D}^{-3}\) and \(\Pi{\cal D}^{2}\) respectively, and the anti-chiral superghosts, \(\widetilde{B}(\widetilde{z})\) and \(\widetilde{C}(\widetilde{z})\), which are sections of \(\Pi\mbox{\it Ber}(\Sigma_{L})^{2}\) and \(\Pi\mbox{\it Ber}(\Sigma_{L})^{-1}\) respectively. The conclusion that \(\partial_{t}e^{-I}=e^{-I}\widehat{\partial}_{t}\) is then derived as follows.
We parametrise a small change in superconformal structure as a change in superfields, \(X,\Lambda,B,\dots\), generated by locally-defined quasi-superconformal vector superfields, \(\delta\widetilde{\cal V}^{\tilde{z}}\), \(\delta{\cal V}^{z}\), keeping the worldsheet superconformal frame fixed, \[\begin{split}\delta X&=-\delta\widetilde{\cal V}^{ \tilde{z}}\partial_{\tilde{z}}X-\delta{\cal V}^{z}\partial_{z}X-{{ \frac{1}{2}}}D_{\theta}\delta{\cal V}^{z}D_{\theta}X\\ \delta\Lambda&=-\delta\widetilde{\cal V}^{\tilde{z}} \partial_{\tilde{z}}\Lambda-{{\frac{1}{2}}}\partial_{\tilde{z}} \delta\widetilde{\cal V}^{\tilde{z}}\Lambda\\ \delta B_{z\theta}&=-\delta{\cal V}^{z}\partial_{z}B _{z\theta}-{{\frac{1}{2}}}D_{\theta}\delta{\cal V}^{z}D_{\theta}B_{z \theta}-{{\frac{3}{2}}}\partial_{z}\delta{\cal V}^{z}B_{z\theta}\\ \delta C^{z}&=-\delta{\cal V}^{z}\partial_{z}C^{z}-{ {\frac{1}{2}}}D_{\theta}\delta{\cal V}^{z}D_{\theta}C^{z}+\partial_{z} \delta{\cal V}^{z}C^{z}\\ \delta\widetilde{B}_{\tilde{z}\tilde{z}}&=-\delta \widetilde{\cal V}^{\tilde{z}}\partial_{\tilde{z}}\widetilde{B}_{\tilde{z} \tilde{z}}-2\partial_{\tilde{z}}\delta\widetilde{\cal V}^{\tilde{z}} \widetilde{B}_{\tilde{z}\tilde{z}}\\ \delta\widetilde{C}^{\tilde{z}}&=-\delta\widetilde{ \cal V}^{\tilde{z}}\partial_{\tilde{z}}\widetilde{C}^{\tilde{z}}+\partial_{ \tilde{z}}\delta\widetilde{\cal V}^{\tilde{z}}\widetilde{C}^{\tilde{z}}.\end{split} \tag{6.50}\] These variations are essentially super Lie derivatives [16], the precise expressions follow from knowledge of the spaces in which these superfields take their values [21]. The corresponding change in the action induced by (6.50) is given by, \[\delta I=\frac{1}{2\pi}\int{\cal D}(\widetilde{z},z|\theta)\Big{[}(\partial_{ \tilde{z}}\delta{\cal V}^{z}){\cal S}_{z\theta}+(D_{\theta}\delta\widetilde{ \cal V}^{\tilde{z}})\widetilde{T}_{\tilde{z}\tilde{z}}\Big{]}. \tag{6.51}\] The chiral and anti-chiral halves of the total energy-momentum tensors are defined by: \({\cal S}_{z\theta}={\cal S}_{X}+{\cal S}_{BC}\) and \(\widetilde{T}_{\tilde{z}\tilde{z}}=\widetilde{T}_{X}+\widetilde{T}_{\Lambda}+ \widetilde{T}_{\tilde{B}\tilde{C}}\) respectively, where the various contributions are, in turn, found to take the standard form: \[\begin{split}{\cal S}_{X}&=-\frac{1}{\alpha^{\prime }}D_{\theta}X\,D_{\theta}^{2}X\\ {\cal S}_{BC}&=\frac{1}{2}D_{\theta}B_{z\theta}D_{ \theta}C^{z}-\frac{3}{2}D_{\theta}^{2}C^{z}B_{z\theta}-C^{z}D_{\theta}^{2}B_{z \theta}\\ \widetilde{T}_{X}&=-\frac{1}{\alpha^{\prime}} \partial_{\tilde{z}}X\,\partial_{\tilde{z}}X\\ \widetilde{T}_{\Lambda}&=-\partial_{\tilde{z}}\Lambda \,\Lambda\\ \widetilde{T}_{\tilde{B}\tilde{C}}&=2\partial_{\tilde{z }}\widetilde{C}^{\tilde{z}}\widetilde{B}_{\tilde{z}\tilde{z}}+\widetilde{C}^{ \tilde{z}}\partial_{\tilde{z}}\widetilde{B}_{\tilde{z}\tilde{z}}.\end{split} \tag{6.52}\] Integrating by parts in (6.51) actually produces a boundary term. 
Cancelling this by an appropriate addition to the action, and using the ordinary chain rule to map the total change \(\delta{\cal V}\), \(\delta\widetilde{\cal V}\) (which are smooth in \(\widetilde{z}\);\(z|\theta\)) to a change in \(\delta{\cal V}\), \(\delta\widetilde{\cal V}\) keeping coordinates \(\widetilde{w}\);\(w|\psi\) fixed (which as we have shown is superconformal in \(z|\theta\) and \(\widetilde{z}\) respectively and given for our current purposes by (6.42)), one arrives at the following result for the derivative of the action with respect to a change in a supermodulus, \(t\), \[\frac{\partial I}{\partial t}=-\,\widehat{\partial}_{t}, \tag{6.53}\] with \(\widehat{\partial}_{t}\) as given (in the case of interest) in (6.48). We have assumed that the only chart overlap here is \({\cal U}_{w}\cap{\cal U}_{z}\), which is case of interest when moving NS punctures across the super Riemann surface. (We are again omitting details here.) So we conclude that indeed, \(\partial_{t}e^{-I}=e^{-I}\widehat{\partial}_{t}\), as advertised above. Ultimately therefore, the total derivatives associated to the decoupling of BRST-exact vertex operators will be entirely constructed out of ordinary \(\partial_{t}\) derivatives. We are now well-equipped to consider the BRST (anti-)commutator associated to the decoupling of a BRST-exact vertex operator insertion in the presence of an NS puncture, whose location on the underlying super Riemann surface is associated to a supermodulus of even\(|\)odd dimension \(2|1\). When we insert a BRST-exact vertex operator into the path integral we expect this to decouple. This is in turn required to preserve gauge invariance. In demonstrating this decoupling one encounters a number of (anti-)commutators as one unwraps the BRST charge contour off the said BRST-exact vertex operator. In the absence of other vertex operator insertions or supermoduli contributions to the measure this is trivially zero, because there is no obstruction to unwrapping the contour to a point, whereby it can be seen to vanish since the OSp\((2,1)\) vacuum (represented by the unit operator insertion) is annihilated by the BRST charge (the BRST current has non-singular OPE with the unit operator and can hence be Taylor expanded around \(z|\theta=0|0\)). If however there are supermoduli present (which might be associated to handle supermoduli or other external vertex operators) then the BRST charge contour encounters superghost contributions associated to the gauge slice of our choice. The latter is determined by how we parametrise the integral over supermoduli. The terms of interest are the superghost contributions associated to translating a NS puncture across the super Riemann surface. If the underlying super Riemann surface has the topology of a 2-sphere, the said measure contributions are given by (6.44), where in particular these take the form displayed in (6.45). On a more general super Riemann surface the insertions are similar but in general contain additional terms associated to higher derivatives of super curvature. So the (anti-)commutator that we encounter as we try to unwrap the BRST charge off the surface is the following: \[\big{\{}Q_{B},\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}}\widehat{\mathcal{B}}_{ \mathbf{z}}\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{ \theta}\widehat{\mathcal{B}}_{\mathbf{z}})\big{]}. \tag{6.54}\] We have inserted an additional factor of \(\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}}\) in the argument of the delta function. 
It will turn out to be convenient to do so, but the point to notice is that it is equivalent to the original insertion since \(\widehat{\mathcal{B}}_{\mathbf{z}}\) is Grassmann-odd. Denoting the Grassmann parity of \(\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\) by \(|\delta|\), and taking into account that the Grassmann parities of the BRST charge, \(Q_{B}\), and of the insertions, \(\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}}\) and \(\widehat{\mathcal{B}}_{\mathbf{z}}\), are odd, and that the parity of \(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}\) is even, it immediately follows that, \[\begin{split}\big{\{}Q_{B},\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}}\widehat{\mathcal{B}}_{\mathbf{z}}\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\big{]}&=\big{\{}Q_{B},\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}}\big{\}}\widehat{\mathcal{B}}_{\mathbf{z}}\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})-\big{\{}Q_{B},\widehat{\mathcal{B}}_{\mathbf{z}}\big{\}}\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}}\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\\ &\quad+\big{\{}Q_{B},\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\big{]}\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}}\widehat{\mathcal{B}}_{\mathbf{z}}-\big{[}\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}},\big{\{}Q_{B},\widehat{\mathcal{B}}_{\mathbf{z}}\big{\}}\big{]}\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\\ &\quad-(-)^{|\delta|}\big{\{}\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}},\big{\{}Q_{B},\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\big{]}\big{]}\widehat{\mathcal{B}}_{\mathbf{z}}\\ &\quad+(-)^{|\delta|}\big{\{}\widehat{\mathcal{B}}_{\mathbf{z}},\big{\{}Q_{B},\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\big{]}\big{]}\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}}\\ &\quad+\big{\{}\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}},\big{\{}\widehat{\mathcal{B}}_{\mathbf{z}},\big{\{}Q_{B},\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\big{]}\big{]}\big{]}\end{split} \tag{6.55}\] The last term vanishes, ultimately, due to the fact that \(\{\widehat{\mathcal{B}}_{s},\widehat{\mathcal{B}}_{t}]=0\). Let us rewrite (6.55) in terms of the derivative operators defined in (6.47): \[\begin{split}\big{\{}Q_{B},\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}}\widehat{\mathcal{B}}_{\mathbf{z}}\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\big{]}&=\widehat{\partial}_{\tilde{\mathbf{z}}}\,\widehat{\mathcal{B}}_{\mathbf{z}}\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})-\widehat{\partial}_{\mathbf{z}}\,\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}}\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})+\big{\{}Q_{B},\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\big{]}\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}}\widehat{\mathcal{B}}_{\mathbf{z}}-\big{[}\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}},\widehat{\partial}_{\mathbf{z}}\big{]}\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\\ &\quad-(-)^{|\delta|}\big{\{}\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}},\big{\{}Q_{B},\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\big{]}\big{]}\widehat{\mathcal{B}}_{\mathbf{z}}+(-)^{|\delta|}\big{\{}\widehat{\mathcal{B}}_{\mathbf{z}},\big{\{}Q_{B},\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\big{]}\big{]}\widehat{\mathcal{B}}_{\tilde{\mathbf{z}}}\end{split} \tag{6.56}\] For the commutators involving \(\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\) we also need the following identity.
Since \(\big{[}\widehat{\mathcal{B}}_{t_{1}},[\widehat{\mathcal{B}}_{t_{2}},[Q_{B}, \widehat{\mathcal{B}}_{t_{3}}]]\big{]}=0\), for any set of supermoduli, \(t_{j}\), it is not too hard to show that: \[\big{\{}Q_{B},\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+ \boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\big{\}} =\big{[}Q_{B},\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+ \boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}}\big{]}\,\delta^{\prime}( \widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{ \mathcal{B}}_{\mathbf{z}})\] \[\quad+\frac{1}{2}\big{[}\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+ \boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}},[Q_{B},\widehat{ \mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{ \mathbf{z}}]\big{]}\,\delta^{\prime\prime}(\widehat{\mathcal{B}}_{\boldsymbol {\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}}).\] This follows from pure combinatorics (rather than any detailed properties of these operators). From (6.47) and also, \[\frac{1}{2}\big{[}\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{ \theta}\widehat{\mathcal{B}}_{\mathbf{z}},[Q_{B},\widehat{\mathcal{B}}_{ \boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}}] \big{]}=b_{-1}=\partial_{\boldsymbol{\theta}}\big{(}\widehat{\mathcal{B}}_{ \boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}} \big{)}+(\partial_{\boldsymbol{\theta}}\mathcal{K})\,\widetilde{b}_{1}\] where we took into account the explicit expression for \(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}\) in (6.45), we learn that, \[\big{\{}Q_{B},\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+ \boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\big{\}} =\widehat{\partial}_{\boldsymbol{\theta}}\,\delta^{\prime}( \widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{ \mathcal{B}}_{\mathbf{z}})+\big{(}\partial_{\boldsymbol{\theta}}\delta^{\prime}( \widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{ \mathcal{B}}_{\mathbf{z}})\big{)}+(\partial_{\boldsymbol{\theta}}\mathcal{K})\, \widetilde{b}_{1}\delta^{\prime\prime}(\widehat{\mathcal{B}}_{\boldsymbol{ \theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\] \[=\widehat{\partial}_{\boldsymbol{\theta}}\,\delta^{\prime}( \widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{ \mathcal{B}}_{\mathbf{z}})+b_{-1}\delta^{\prime\prime}(\widehat{\mathcal{B}}_{ \boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}}).\] Computing the remaining (anti-)commutators we find: \[-\Big{[}\widehat{\mathbb{B}}_{\hat{\mathbf{z}}},\Big{\{}Q_{B}, \widehat{\mathbb{B}}_{\mathbf{z}}\Big{\}}\Big{]}\delta(\widehat{\mathbb{B}}_{ \boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathbb{B}}_{\mathbf{z}})=- \big{(}D_{\theta}\mathcal{R}_{\bar{z}\theta}(b_{0}-\widetilde{b}_{0})+ \mathcal{R}_{\bar{z}\theta}\beta_{-1/2}\big{)}\delta(\widehat{\mathbb{B}}_{ \boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathbb{B}}_{\mathbf{z}}), \tag{6.57}\] \[-(-)^{|\delta|}\Big{\{}\widehat{\mathbb{B}}_{\hat{\mathbf{z}}}, \Big{\{}Q_{B},\delta(\widehat{\mathbb{B}}_{\boldsymbol{\theta}}+\boldsymbol{ \theta}\widehat{\mathbb{B}}_{\mathbf{z}})\Big{]}\Big{]}\widehat{\mathbb{B}}_{ \mathbf{z}}=\] (6.58) \[=D_{\theta}\mathcal{R}_{\bar{z}\theta}\beta_{1/2}\widehat{ \mathbb{B}}_{\mathbf{z}}\delta^{\prime}(\widehat{\mathbb{B}}_{\boldsymbol{ \theta}}+\boldsymbol{\theta}\widehat{\mathbb{B}}_{\mathbf{z}})+2\mathcal{K} 
\big{(}b_{0}-\widetilde{b}_{0}+\boldsymbol{\theta}\beta_{-1/2}\big{)} \widehat{\mathbb{B}}_{\mathbf{z}}\delta^{\prime}(\widehat{\mathbb{B}}_{ \boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathbb{B}}_{\mathbf{z}}),\] and, \[+(-)^{|\delta|}\Big{\{}\widehat{\mathbb{B}}_{\mathbf{z}},\Big{\{}Q_{B}, \delta(\widehat{\mathbb{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat {\mathbb{B}}_{\mathbf{z}})\Big{]}\Big{]}\widehat{\mathbb{B}}_{\hat{\mathbf{z} }}=0. \tag{6.59}\] Collecting these results and substituting them into (6.56) implies that, \[\Big{\{}Q_{B}, \widehat{\mathbb{B}}_{\hat{\mathbf{z}}}\widehat{\mathbb{B}}_{ \mathbf{z}}\delta(\widehat{\mathbb{B}}_{\boldsymbol{\theta}}+\boldsymbol{ \theta}\widehat{\mathbb{B}}_{\mathbf{z}})\Big{\}}= \tag{6.60}\] \[=\widehat{\partial}_{\hat{\mathbf{z}}}^{\rm total}\bigg{(} \widehat{\mathbb{B}}_{\mathbf{z}}\delta(\widehat{\mathbb{B}}_{\boldsymbol{ \theta}}+\boldsymbol{\theta}\widehat{\mathbb{B}}_{\mathbf{z}})\bigg{)}- \widehat{\partial}_{\mathbf{z}}^{\rm total}\Big{(}\widehat{\mathbb{B}}_{\hat{ \mathbf{z}}}\delta(\widehat{\mathbb{B}}_{\boldsymbol{\theta}}+\boldsymbol{ \theta}\widehat{\mathbb{B}}_{\mathbf{z}})\Big{)}+\widehat{\partial}_{ \boldsymbol{\theta}}^{\rm total}\Big{(}\widehat{\mathbb{B}}_{\hat{\mathbf{z}}} \widehat{\mathbb{B}}_{\mathbf{z}}\delta^{\prime}(\widehat{\mathbb{B}}_{ \boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathbb{B}}_{\mathbf{z}}) \Big{)}\] \[-D_{\theta}\mathcal{R}_{\bar{z}\theta}(b_{0}-\widetilde{b}_{0}) \delta(\widehat{\mathbb{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta} \widehat{\mathbb{B}}_{\mathbf{z}})-\mathcal{K}\boldsymbol{\theta}\widehat{ \mathbb{B}}_{\mathbf{z}}\delta(\widehat{\mathbb{B}}_{\boldsymbol{\theta}}+ \boldsymbol{\theta}\widehat{\mathbb{B}}_{\mathbf{z}})\] \[+\Big{(}\partial_{\boldsymbol{\theta}}\mathcal{K}\,\widehat{b}_ {1}\widehat{\mathbb{B}}_{\hat{\mathbf{z}}}\widehat{\mathbb{B}}_{\mathbf{z}} \delta^{\prime\prime}(\widehat{\mathbb{B}}_{\boldsymbol{\theta}}+\boldsymbol{ \theta}\widehat{\mathbb{B}}_{\mathbf{z}})+2\mathcal{K}\big{(}b_{0}- \widetilde{b}_{0}+\boldsymbol{\theta}\beta_{-1/2}\big{)}\widehat{\mathbb{B}}_ {\mathbf{z}}\delta^{\prime}(\widehat{\mathbb{B}}_{\boldsymbol{\theta}}+ \boldsymbol{\theta}\widehat{\mathbb{B}}_{\mathbf{z}})\] \[+(-2\partial_{\boldsymbol{\theta}}\mathcal{K}\,\beta_{1/2}) \widehat{\mathbb{B}}_{\mathbf{z}}\delta^{\prime}(\widehat{\mathbb{B}}_{ \boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathbb{B}}_{\mathbf{z}})+ \Big{(}2\partial_{\mathbf{z}}\mathcal{K}\,\beta_{1/2}\Big{)}\delta(\widehat{ \mathbb{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathbb{B}}_{ \mathbf{z}}).\] We have written: \[\widehat{\partial}_{t}^{\rm total}=\widehat{\partial}_{t}+\partial_{t}, \tag{6.61}\] and have made use of a number of relations in arriving at this result, all of which have been determined from the explicit representations given in (6.45) and the definition of \(\mathcal{K}\) in (6.46): \[\partial_{\hat{\mathbf{z}}}\widehat{\mathbb{B}}_{\mathbf{z}} =0\] \[\partial_{\mathbf{z}}(\widehat{\mathbb{B}}_{\boldsymbol{\theta}}+ \boldsymbol{\theta}\widehat{\mathbb{B}}_{\mathbf{z}}) =-\partial_{\mathbf{z}}\mathcal{K}\,\widetilde{b}_{1}\] \[\partial_{\mathbf{z}}\widehat{\mathbb{B}}_{\mathbf{z}} =\partial_{\mathbf{z}}\mathcal{R}_{\bar{z}\theta}\,\beta_{1/2}\] \[\partial_{\boldsymbol{\theta}}\widehat{\mathbb{B}}_{\mathbf{z}} =0\] \[\partial_{\widehat{\mathbf{z}}}(\widehat{\mathbb{B}}_{\boldsymbol{ \theta}}+\boldsymbol{\theta}\widehat{\mathbb{B}}_{\mathbf{z}}) 
=-\partial_{\hat{\mathbf{z}}}\mathcal{K}\,\widetilde{b}_{1}\] \[\mathcal{R}_{\bar{z}\theta}(-2\beta_{-1/2})=\mathcal{R}_{\bar{z}\theta}(\widehat{\mathbb{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathbb{B}}_{\mathbf{z}})-2\mathcal{K}\boldsymbol{\theta}\widehat{\mathbb{B}}_{\mathbf{z}}.\] Gauge invariance requires that only the total derivative terms in (6.60) should be present. In particular, we learn that \(\mathcal{K}=0\) and \(b_{0}-\widetilde{b}_{0}\) should annihilate onshell or offshell vertex operators, \(\widehat{\mathcal{A}}_{a}\), on which this measure contribution acts, namely: \[\mathcal{R}_{\bar{z}\theta}=\boldsymbol{\theta}D_{\theta}\mathcal{R}_{\bar{z}\theta},\quad\text{and}\quad(b_{0}-\widetilde{b}_{0})\widehat{\mathcal{A}}_{a}=0. \tag{6.62}\] The fact that \((b_{0}-\widetilde{b}_{0})\) appears with coefficient \(D_{\theta}{\cal R}_{\tilde{z}\theta}\) in (6.60) indicates that the requirement \((b_{0}-\widetilde{b}_{0})\widehat{\cal A}_{a}=0\) has global origins, whereas the former, since \(D_{\theta}{\cal R}_{\tilde{z}\theta}=2\), provides the precise relation between super curvature, \({\cal R}_{\tilde{z}\theta}\), and the odd modulus, \(\boldsymbol{\theta}\). Notice also that, explicitly, \({\cal R}_{\tilde{z}\theta}=\boldsymbol{\theta}D_{\theta}{\cal R}_{\tilde{z}\theta}=2\boldsymbol{\theta}\). (The analogous expression in terms of the \(\widetilde{v}\);\(v|\chi\) supermodulus was given in (6.43).) With \(\mathcal{K}=0\) (so that in particular all of its supermoduli derivatives vanish as well), every \(\mathcal{K}\)-dependent term in (6.60) drops out, and, all in all, the final result for the BRST commutator associated to the measure contribution that generates smooth translations of NS punctures across the super Riemann surface is: \[\begin{split}\Big{\{}Q_{B},&\widehat{\cal B}_{\tilde{\bf z}}\widehat{\cal B}_{\bf z}\delta(\widehat{\cal B}_{\mathbf{\theta}}+\mathbf{\theta}\widehat{\cal B}_{\bf z})\Big{]}=-D_{\theta}{\cal R}_{\tilde{z}\theta}(b_{0}-\widetilde{b}_{0})\delta(\widehat{\cal B}_{\mathbf{\theta}}+\mathbf{\theta}\widehat{\cal B}_{\bf z})\\ &+\widehat{\partial}_{\tilde{\bf z}}^{\rm total}\Big{(}\widehat{\cal B}_{\bf z}\delta(\widehat{\cal B}_{\mathbf{\theta}}+\mathbf{\theta}\widehat{\cal B}_{\bf z})\Big{)}-\widehat{\partial}_{\bf z}^{\rm total}\Big{(}\widehat{\cal B}_{\tilde{\bf z}}\delta(\widehat{\cal B}_{\mathbf{\theta}}+\mathbf{\theta}\widehat{\cal B}_{\bf z})\Big{)}+\widehat{\partial}_{\mathbf{\theta}}^{\rm total}\Big{(}\widehat{\cal B}_{\tilde{\bf z}}\widehat{\cal B}_{\bf z}\delta^{\prime}(\widehat{\cal B}_{\mathbf{\theta}}+\mathbf{\theta}\widehat{\cal B}_{\bf z})\Big{)}\end{split} \tag{6.63}\] In fact, taking into account the relations (6.53) and (6.61), we arrive at the following result if we also wish to explicitly include the contribution of the worldsheet action, \(I\), in the path integral, \[\begin{split}\Big{\{}Q_{B},\,e^{-I}&\widehat{\cal B}_{\tilde{\bf z}}\widehat{\cal B}_{\bf z}\delta(\widehat{\cal B}_{\mathbf{\theta}}+\mathbf{\theta}\widehat{\cal B}_{\bf z})\Big{]}=-D_{\theta}{\cal R}_{\tilde{z}\theta}(b_{0}-\widetilde{b}_{0})\,e^{-I}\delta(\widehat{\cal B}_{\mathbf{\theta}}+\mathbf{\theta}\widehat{\cal B}_{\bf z})\\ &+\partial_{\tilde{\bf z}}\Big{(}e^{-I}\widehat{\cal B}_{\bf z}\delta(\widehat{\cal B}_{\mathbf{\theta}}+\mathbf{\theta}\widehat{\cal B}_{\bf z})\Big{)}-\partial_{\bf z}\Big{(}e^{-I}\widehat{\cal B}_{\tilde{\bf z}}\delta(\widehat{\cal B}_{\mathbf{\theta}}+\mathbf{\theta}\widehat{\cal B}_{\bf z})\Big{)}+\partial_{\mathbf{\theta}}\Big{(}e^{-I}\widehat{\cal B}_{\tilde{\bf z}}\widehat{\cal B}_{\bf z}\delta^{\prime}(\widehat{\cal B}_{\mathbf{\theta}}+\mathbf{\theta}\widehat{\cal B}_{\bf z})\Big{)}\end{split} \tag{6.64}\] so that the derivatives appearing now are just ordinary derivatives, making it entirely manifest that the corresponding contribution to the path integral associated to the insertion of the contribution, \(\widehat{\cal B}_{\tilde{\bf z}}\widehat{\cal B}_{\bf z}\delta(\widehat{\cal B}_{\mathbf{\theta}}+\mathbf{\theta}\widehat{\cal B}_{\bf z})\), to the measure (as we try to unwrap a BRST contour off the surface to establish the decoupling of BRST-exact states) is a total derivative in supermoduli space. (Incidentally, the insertion \(\widehat{\cal B}_{\tilde{\bf z}}\widehat{\cal B}_{\bf z}\delta(\widehat{\cal B}_{\mathbf{\theta}}+\mathbf{\theta}\widehat{\cal B}_{\bf z})\) does not depend on the remaining supermoduli.)
## 7 Discussion The path integral expression for the measure arrived at in (5.34) or (6.45), and the corresponding result for the BRST (anti-)commutator (6.64), are the main results of the present paper. The relation (5.34) provides the explicit expression for the path integral measure in heterotic string theory that translates fixed-picture NS vertex operators (in the natural \(-1\) picture) to integrated picture (corresponding to 0 picture). A crucial property is that the dependence on \(\widetilde{v}\);\(v|\chi\) is smooth, with super curvature encoded locally (as opposed to in transition functions on patch overlaps [29]). The BRST anti-commutator (6.64) demonstrates that BRST-exact vertex operators decouple from amplitudes up to boundary terms (that come from the "physical" boundary of supermoduli space as opposed to fictitious boundaries associated to patch overlaps). If there is a second NS vertex operator then we can use the same underlying chart with coordinate, \(\widetilde{w}\);\(w|\psi\), and simply place the first and second vertex operator at \(\widetilde{w}\);\(w|\psi=\widetilde{v}^{1}\);\(v^{1}|\chi^{1}\) and \(\widetilde{w}\);\(w|\psi=\widetilde{v}^{2}\);\(v^{2}|\chi^{2}\) respectively, the measure contribution being a product of terms as in (5.34) with the obvious replacements. It is worth noting that (for the super sphere) only a single coordinate chart, \(\widetilde{w}\);\(w|\psi\), is really needed in this viewpoint in a practical computation. Since super curvature is localised in the bulk of the super sphere, the point at "infinity" is "trivialised" (it does not contribute, e.g., to the Euler characteristic). (One can of course simply map to the \(u\)-chart using (2.1) to include the missing point at infinity when desirable.) So one may ask to what extent this really addresses the main issue associated to a smooth splitting, since we have not needed multiple coordinate patches to cover the supermoduli space associated to a puncture insertion on the super sphere. The main point is that we have used the invariance under superconformal transformations to pick a specific globally-defined gauge slice in the integral over supermoduli. The dependence of the resulting measure on the supermoduli is smooth. The only remaining symmetry is a residual U(1) symmetry, corresponding to a phase that cannot be fixed globally due to the non-vanishing Euler number of a super sphere. (It is necessary to check that amplitudes do not depend on this phase.) We have also demonstrated (in an explicit example, but the prescription is generally applicable) how it is possible to have a smooth dependence of the superconformal transition functions (defining a super Riemann surface) on supermoduli, while retaining our standard superconformal field theory techniques that have a well-defined notion of chiral and anti-chiral halves. (As we briefly discussed in the Introduction, in the standard descriptions it is vital that we are still able to distinguish left- from right-moving degrees of freedom in order to even define the theory.) In particular, we are still defining a super Riemann surface at fixed complex structure using superconformal transition functions, out of which corresponding superconformal frames can be constructed. And these, in turn, can be adopted to construct mode expansions, states, and local vertex operators, etc., just as one is used to doing in a corresponding radial quantisation.
For this reason it is also clear how to sum over spin structures for left- and right-moving modes independently. The measure contributions that we have derived here, in a sense, translate all of that to "integrated picture", incorporating local super curvature as necessary. What is happening, essentially, is that there is a well-defined notion of a chiral or anti-chiral half in the fixed (or \(-1\)) picture vertex operators, whereas when we go to integrated (or \(0\)) picture this distinction becomes somewhat obscured due to the presence of super curvature. (Incidentally, since it only involves superghost contributions, the measure contributions we have derived are also independent of the string background.) Elaborating a little, fixed-picture vertex operators, \(\widehat{\cal A}_{a}^{(z)}\), where \(a\) labels the state, on which the operator \(\widehat{\cal B}_{\tilde{v}}\widehat{\cal B}_{v}\delta(\widehat{\cal B}_{\chi})\) acts can be constructed using any one of the familiar techniques, such as radial quantisation. (The frame label \({}^{(z)}\) can be identified with the'superconformal normal coordinates' that we have constructed.) So, in particular, a fixed-picture vertex operator will have a well-defined notion of a chiral or anti-chiral half. It has not been at all obvious in the past that this is possible, that it is possible to have a clear distinction between chiral and anti-chiral halves while still having implemented a smooth gauge slice in the integral over supermoduli. Pursuing this further, it is perhaps useful to note that the full set of offshell fixed (or \(-1\)) picture vertex operators can be derived by cutting open the path integral across, say, an \(A_{I}\)-cycle. For the states in the NS sector, this is effectively implemented by inserting into the path integral a resolution of unity [30]: \[\sum_{a}\hat{\cal A}_{a}^{(z_{1})}|0\rangle^{1}\otimes\hat{\cal A }_{(z_{2}/q)}^{a}|0\rangle^{2}=\] \[\qquad=\frac{\alpha^{\prime}g_{D}^{2}}{8\pi i}\int\frac{{\rm d}^{ D}k}{(2\pi)^{D}}e^{ik\cdot(x_{0}^{(z_{1})}-x_{0}^{(z_{2})})}\bar{q}^{\frac{ \alpha^{\prime}}{4}k^{2}-1}q^{\frac{\alpha^{\prime}}{4}k^{2}-1/2}\] \[\qquad\times\exp\Big{[}\sum_{n=1}^{\infty}\bar{q}^{n}\Big{(}- \frac{1}{n}\tilde{\alpha}_{-n}^{(z_{1})}\cdot\tilde{\alpha}_{-n}^{(z_{2})}+ \tilde{c}_{-n}^{(z_{1})}\tilde{b}_{-n}^{(z_{2})}-\tilde{b}_{-n}^{(z_{1})} \tilde{c}_{-n}^{(z_{2})}\Big{)}\Big{]}\] \[\qquad\times\exp\Big{[}\sum_{n=1}^{\infty}i\eta\bar{q}^{n-1/2} \Big{(}\lambda_{-n+1/2}^{(z_{1})}\cdot\lambda_{-n+1/2}^{(z_{2})}\Big{)}\Big{]} \tag{7.65}\] \[\qquad\times\exp\Big{[}\sum_{n=1}^{\infty}q^{n}\Big{(}-\frac{1}{ n}\alpha_{-n}^{(z_{1})}\cdot\alpha_{-n}^{(z_{2})}+c_{-n}^{(z_{1})}b_{-n}^{(z_{2})}-b_{- n}^{(z_{1})}c_{-n}^{(z_{2})}\Big{)}\Big{]}\] \[\qquad\times\exp\Big{[}\sum_{n=1}^{\infty}i\eta q^{n-1/2}\Big{(} \beta_{-n+1/2}^{(z_{1})}\gamma_{-n+1/2}^{(z_{2})}-\gamma_{-n+1/2}^{(z_{1})} \beta_{-n+1/2}^{(z_{2})}+\psi_{-n+1/2}^{(z_{1})}\cdot\psi_{-n+1/2}^{(z_{2})} \Big{)}\Big{]}\] \[\qquad\times\eta\big{[}1+(\tilde{c}_{0}^{(z_{1})}+\tilde{c}_{0}^ {(z_{2})})(c_{0}^{(z_{1})}+c_{0}^{(z_{2})})\big{]}\tilde{c}_{1}^{(z_{1})}c_{1 }^{(z_{1})}\tilde{c}_{1}^{(z_{2})}c_{1}^{(z_{2})}\delta(\gamma_{1/2}^{(z_{1})} )\delta(\gamma_{1/2}^{(z_{2})})|0\rangle^{1}\otimes|0\rangle^{2}\] where the various modes have been defined in Appendix B, whereas \(D\) denotes the number of non-compact spacetime dimensions, \(D=10\). 
(For \(D<10\) one needs to include some additional states in the resolution of unity depending on the compactification manifold.) The frames \(z_{1}|\theta_{1}\) and \(z_{2}|\theta_{2}\) are glued on an annular patch overlap, \({\cal U}_{1}\cap{\cal U}_{2}\), with the resulting transition functions, \[z_{1}z_{2}=-\varepsilon^{2},\qquad\mbox{subject to}\qquad D_{\theta_{2}}z_{1}=\theta_{1}D_{\theta_{2}}\theta_{1}, \tag{7.66}\] with \(q=-\varepsilon^{2}\), or, more explicitly, \[\begin{split} z_{1}(z_{2}|\theta_{2})&=\frac{-\varepsilon^{2}}{z_{2}}\\ \theta_{1}(z_{2}|\theta_{2})&=\eta\varepsilon\frac{\theta_{2}}{z_{2}},\end{split}\qquad\text{with}\qquad\eta=\pm 1. \tag{7.67}\] with a similar relation for the anti-chiral half, \(\widetilde{z}_{1}\widetilde{z}_{2}=\widetilde{q}\). It is important to realise that we do not need to glue with a more general transition function, such as \((z_{1}-\theta_{1}\vartheta_{1})(z_{2}-\theta_{2}\vartheta_{2})=-\varepsilon^{2}\), because the map to integrated picture already incorporates the effect of the odd moduli (analogous to \(\vartheta_{1},\vartheta_{2}\)). (The map to integrated picture also captures the effects of super curvature, which become important when the handle moves across the underlying curved surface.) The sign, \(\eta\), (unlike for the sphere) here plays an important role: the two values define the two NS spin structures: summing over it in the path integral gives the GSO projection in the NS sector (see Sec. 6.2.3 in [21]). We also took into account that \(\delta(\gamma_{1/2}^{(z_{2}/q)})=q^{1/2}\delta(\gamma_{1/2}^{(z_{2})})\). Any one particular fixed picture (or \(-1\) picture) offshell vertex operator, \(\widehat{\mathcal{A}}_{a}^{(z)}\), can be derived from this expression (7.65) by expanding it in powers of the pinch parameters, \(q=-\varepsilon^{2}\), \(\widetilde{q}\), and then identifying the corresponding momentum integrand with the tensor product of the corresponding vertex operator, \(\widehat{\mathcal{A}}_{a}^{(z_{1})}\widehat{\mathcal{A}}_{(z_{2}/q)}^{a}\) as indicated on the left-hand side in (7.65). The "integrated picture" (or \(0\)-picture) vertex operators are then given by: \[\begin{split}\mathcal{A}_{a}^{(z)}&=\int\mathcal{D}(\widetilde{v},v|\chi)e^{-I}\widehat{\mathcal{B}}_{\widetilde{v}}\widehat{\mathcal{B}}_{v}\delta(\widehat{\mathcal{B}}_{\chi})\widehat{\mathcal{A}}_{a}^{(z)}\\ &=\int\mathcal{D}(\widetilde{\mathbf{z}},\mathbf{z}|\boldsymbol{\theta})e^{-I_{\widetilde{\mathbf{z}}\mathbf{z}\boldsymbol{\theta}}}\widehat{\mathcal{B}}_{\widetilde{\mathbf{z}}}\widehat{\mathcal{B}}_{\mathbf{z}}\,\delta(\widehat{\mathcal{B}}_{\boldsymbol{\theta}}+\boldsymbol{\theta}\widehat{\mathcal{B}}_{\mathbf{z}})\widehat{\mathcal{A}}_{a}^{(z)},\end{split} \tag{7.68}\] with the measure given in (5.34) or (6.45). Actually, this is the case when we treat all \(A_{I}\)-cycle loops on equal footing (i.e. we need to cut open all loops and insert a resolution of unity in each one), because it is only in this case that a super Riemann surface with an arbitrary number of loops can be mapped to a super Riemann surface with the topology of a 2-sphere, although this can be relaxed depending on the objective. In practice, after extracting the integrated (\(0\) picture) vertex operators of interest it may be convenient to map back to the \(\widetilde{w};\!w|\psi\) chart coordinates, so that all vertex operator insertions in the amplitude are defined using the same chart coordinates (while being inserted at different coordinate values).
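Returning briefly to the gluing (7.66)-(7.67), as a quick check (added for completeness) the explicit transition functions (7.67) are indeed superconformal in the sense of (7.66): with \(D_{\theta_{2}}=\partial_{\theta_{2}}+\theta_{2}\partial_{z_{2}}\) one finds \[D_{\theta_{2}}z_{1}=\theta_{2}\,\partial_{z_{2}}\Big{(}\frac{-\varepsilon^{2}}{z_{2}}\Big{)}=\frac{\varepsilon^{2}\theta_{2}}{z_{2}^{2}},\qquad\theta_{1}D_{\theta_{2}}\theta_{1}=\frac{\eta\varepsilon\theta_{2}}{z_{2}}\,\frac{\eta\varepsilon}{z_{2}}=\frac{\varepsilon^{2}\theta_{2}}{z_{2}^{2}},\] using \(\eta^{2}=1\) and \(\theta_{2}^{2}=0\), so the two sides of the constraint agree as required.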
Notice that there is a clear distinction between chiral and anti-chiral halves in \(\widehat{\mathcal{A}}_{a}\), and that it is the map of non-primary vertex operators to integrated picture, \(\mathcal{A}_{a}^{(z)}\), that obscures the distinction between chiral and anti-chiral halves. This observation is important, because it resolves the question of how to sum over spin structures in the presence of a smooth gauge slice. Actually, the resolution is reminiscent of the D'Hoker and Phong resolution [9], which showed that at fixed internal loop momenta there is a useful notion of chiral splitting, which in turn made it clear how to sum over spin structures and hence incorporate the GSO projection [40]. Although the original proof of chiral splitting does not hold at arbitrarily high genus due to the Donagi-Witten obstruction [18], the approach that we are presenting here does (modulo R sector contributions). The correspondence with the D'Hoker and Phong procedure [9] becomes apparent when one takes into account that the present approach fixes not only the internal loop momenta but all quantum numbers that characterise the offshell state of a string propagating through an internal loop (including the loop momenta). The resolution of unity (7.65) makes this fully explicit. This is precisely analogous to the corresponding situation discussed in great detail in the context of bosonic string theory in [28]. One of the key ingredients in constructing the smooth gauge slice that associates a supermodulus of dimension \(2|1\) to a NS puncture is the set of superconformal vector fields given in (6.42) that are associated to an underlying super Riemann surface with the topology of a 2-sphere. A natural question is to understand how these superconformal vector fields would change if we instead considered an arbitrary super Riemann surface with any number of handles (rather than a super Riemann surface whose reduced space is a 2-sphere with constant curvature). This will be discussed elsewhere, but it is more subtle and the situation is not entirely clear. With regard to gauge invariance, on a general super Riemann surface with an arbitrary number of handles and arbitrary local super curvature, there is a result similar to (6.64), but in addition there appear certain additive terms on the right-hand side that depend on bilinear products of \(\tilde{\bf z},{\bf z}\) derivatives of the super curvature evaluated at the puncture. So if the super curvature, \({\cal R}_{\tilde{z}\theta}\), is proportional to the odd modulus, \(\boldsymbol{\theta}\), of the NS puncture when evaluated at the NS puncture in question, \(\tilde{z}\);\(z|\theta=0\);\(0|0\), these additional squared super curvature terms vanish and gauge invariance is restored. It is not obvious if a more general dependence of the super curvature on the odd modulus is allowed. In general, we also expect the BRST charge to receive corrections at higher genus. The case of arbitrary super curvature and arbitrary-genus super Riemann surfaces certainly deserves further study. But it should be kept in mind that in the handle operator viewpoint [28], where all genus amplitudes are constructed on the 2-sphere with additional handle operator insertions (by an appropriate cutting and gluing procedure), the case of the super sphere discussed in this article may actually suffice. There is clearly a lot of work that remains to be done.
It would also be interesting, as a warmup, to calculate the dilaton one-point amplitude (or to derive the dilaton theorem) [27, 30, 41, 42, 43, 44] in this formalism. A more ambitious direction is to construct the full handle operators associated to this gauge slice. One reason being that, inspired by an idea in [45] (which inspired the detailed study in [28]), one can then ask under what conditions we might be able to sum over handle operators (which corresponds to summing over string loops at the level of the integrand). Since all loop orders are treated on equal footing, i.e. one handle operator insertion for every string loop inserted on a super sphere, it is tempting to speculate whether one might even be able to go beyond perturbation theory in this manner; and if so, under what assumptions. The first step however is presumably to unravel how to implement this smooth gauge slice in the Ramond sector, because it is clearly necessary [40] to include both NS and R sectors in any handle operator that is meant to exactly incorporate the full implications of a string loop insertion on a super sphere. A second step is to understand how to gauge fix the invariance under \(\mathrm{OSp}(2,1)\) in a manner that does not depend on the number of vertex operator insertions. Once we have reduced a general genus-\(\mathfrak{g}\) amplitude to a sphere with handle operator insertions, we also recover the underlying symmetries of the super sphere, which in turn need to be fixed (see [44, 46, 47, 48, 49, 50, 51]). Furthermore, how is modular invariance restored when we try to sum over string loops? What role might the background symmetries play in this story? At a more basic level, the formalism presented here immediately applies to offshell string theory in the BRST formalism (because we have constructed a globally-defined gauge slice), so one can use it to ask various questions where going offshell is important, see e.g. [51, 52], and in particular when non-primary vertex operators contribute. It would also be interesting to apply this formalism to non-trivial backgrounds involving NS-NS fluxes, a simple example being \(AdS_{3}\) (see e.g. [47] and references therein). ## Acknowledgements I am grateful to Eric D'Hoker, Edward Witten and Branislav Jurco for correspondence, to Mark Doyle for sharing his PhD thesis with me, to Imperial College for support, and especially Arkady Tseytlin for numerous insightful discussions. ## Appendix A Derivation In this Appendix we will compute the derivatives appearing in (5.31). Taking into account (3.16), and in particular, \[z(w|\psi)=\frac{w-v-\psi\chi}{\tilde{v}w+1},\] (A.69) it follows that the variation, \(\delta z\), is given by, \[\begin{split}\delta z&=\bigg{[}\delta\widetilde{v}\, \frac{\partial z}{\partial\widetilde{v}}+\delta v\,\frac{\partial z}{\partial v }+\delta\chi\,\frac{\partial z}{\partial\chi}\bigg{]}_{w|\psi}\\ &=\delta\widetilde{v}\,\bigg{(}\frac{(w-v-\psi\chi)w}{-( \widetilde{v}w+1)^{2}}\bigg{)}+\delta v\bigg{(}\frac{-1}{\widetilde{v}w+1} \bigg{)}+\delta\chi\bigg{(}\frac{\psi}{\widetilde{v}w+1}\bigg{)},\end{split}\] (A.70) where the derivatives with respect to the supermoduli, \(\widetilde{v}\);\(v|\chi\), are evaluated at fixed \(\widetilde{w}\);\(w|\psi\). As seen in (5.29), for the path integral measure we actually need these variations in terms of \(\widetilde{z}\);\(z|\theta\) rather than \(\widetilde{w}\);\(w|\psi\). 
We eliminate the dependence on the latter in favour of the former by making use of the inverse relations (3.17), \[\begin{split} w(z|\theta)&=\frac{z+v+\eta\theta \chi/\sqrt{1+\widetilde{v}v}}{-\widetilde{v}z+1-\eta\theta\chi\widetilde{v}/ \sqrt{1+\widetilde{v}v}}\\ \psi(z|\theta)&=\frac{\sqrt{1+\widetilde{v}v}\, \eta\theta+\chi}{-\widetilde{v}z+1}.\end{split}\] (A.71) Before explicitly displaying the resulting expression for \(\delta z\) however, we recall that there is also some information about the phase of \(z+\delta z(z|\theta)\) in (A.70), which as we have discussed does not have physical significance. So we would like to extract this, not because it is necessary, but because it is simplest to set it equal to any convenient value, and then check that quantities of physical interest do not depend on that choice. Adding \(z\) to both sides of (A.70), and taking the preceding comments into account, to leading order in the variation we see that: \[\begin{split} z+\delta z(z|\theta)=e^{2i\text{Im}\frac{\delta v }{1+\widetilde{v}v}}\bigg{\{}z&-\frac{\delta\widetilde{v}}{1+ \widetilde{v}v}\left(z^{2}-\frac{2\eta\chi}{\sqrt{1+\widetilde{v}v}}\frac{1}{ 2}z\theta\right)-\frac{\delta v}{1+\widetilde{v}v}\bigg{(}1+\frac{2\eta\chi}{ \sqrt{1+\widetilde{v}v}}\frac{1}{2}\theta\widetilde{v}\bigg{)}\\ &+\frac{\eta\delta\chi}{\sqrt{1+\widetilde{v}v}}\bigg{(}\theta+ \frac{\eta\chi}{\sqrt{1+\widetilde{v}v}}\bigg{)}\bigg{\}},\end{split}\] (A.72) so that we can now easily identify the overall phase. Dropping this, we learn that the variation takes the form, \[\begin{split}\delta z(z|\theta)&=-\frac{\delta \widetilde{v}}{1+\widetilde{v}v}\left(z^{2}-\frac{2\eta\chi}{\sqrt{1+ \widetilde{v}v}}\frac{1}{2}z\theta\right)-\frac{\delta v}{1+\widetilde{v}v} \bigg{(}1+\frac{2\eta\chi}{\sqrt{1+\widetilde{v}v}}\frac{1}{2}\widetilde{v} \theta\bigg{)}\\ &\qquad\qquad\qquad+\frac{\eta\delta\chi}{\sqrt{1+\widetilde{v}v }}\bigg{(}\frac{\eta\chi}{\sqrt{1+\widetilde{v}v}}+\theta\bigg{)}.\end{split}\] (A.73) The corresponding variation, \(\delta\theta\), at generic \(z|\theta\) is similarly determined from (3.16), namely: \[\theta(w|\psi)=\eta\frac{\sqrt{1+\widetilde{v}v}}{(\widetilde{v}w+1)}\psi-\eta \frac{\chi}{\sqrt{1+\widetilde{v}v}},\quad\eta=\pm 1,\] and found to be given by, \[\delta\theta(z|\theta) =\Big{[}\delta\widetilde{v}\,\frac{\partial\theta}{\partial \widetilde{v}}\theta+\delta v\,\frac{\partial\theta}{\partial v}\theta+\delta \chi\,\frac{\partial\theta}{\partial\chi}\theta\Big{]}_{w|\psi}\] \[=-\frac{\delta\widetilde{v}}{1+v\widetilde{v}}\left(z\theta+ \frac{1}{2}v\theta+\frac{z\eta\chi}{\sqrt{1+v\widetilde{v}}}\right)+\frac{ \delta v}{1+v\widetilde{v}}\left(\frac{1}{2}\widetilde{v}\theta+\frac{ \widetilde{v}\eta\chi}{\sqrt{1+v\widetilde{v}}}\right)-\frac{\eta\delta\chi}{ \sqrt{1+v\widetilde{v}}}.\] Extracting the overall phase as above, and adding this variation to \(\theta\), we find that to leading order in the variations, \[\theta+\delta\theta(z|\theta)=e^{i\text{Im}\frac{\phi\phi}{1+v\widetilde{v}}} \Big{\{}\theta-\frac{\delta\widetilde{v}}{1+v\widetilde{v}}\left(z\theta+ \frac{z\eta\chi}{\sqrt{1+v\widetilde{v}}}\right)+\frac{\delta v}{1+v \widetilde{v}}\left(\frac{\widetilde{v}\eta\chi}{\sqrt{1+v\widetilde{v}}} \right)-\frac{\eta\delta\chi}{\sqrt{1+v\widetilde{v}}}\Big{\}}.\] (A.74) Dropping the overall phase (which is half that found in \(\delta z\) as required by the superconformal condition), we learn that: \[\delta\theta(z|\theta)=-\frac{\delta\widetilde{v}}{1+v\widetilde{v}}\left(z 
\theta+\frac{\eta\chi}{\sqrt{1+v\widetilde{v}}}z\right)+\frac{\delta v}{1+v \widetilde{v}}\left(\frac{\eta\chi}{\sqrt{1+v\widetilde{v}}}\widetilde{v} \right)-\frac{\eta\delta\chi}{\sqrt{1+v\widetilde{v}}}.\] (A.75) The quantity that appears in the path integral measure are actually components of the superfield, \(\delta\mathcal{V}\equiv\delta z-\delta\theta\theta\), corresponding globally to a section of \(T\Sigma/\mathcal{D}\cong\mathcal{D}^{2}\): \[\delta\mathcal{V}^{(z)}=-\frac{\delta\widetilde{v}}{1+\widetilde{v}v}\left(z^ {2}-\frac{2\eta\chi}{\sqrt{1+\widetilde{v}v}}z\theta\right)-\frac{\delta v}{1 +\widetilde{v}v}\Bigg{(}1+\frac{2\eta\chi}{\sqrt{1+\widetilde{v}v}}\widetilde{ v}\theta\Bigg{)}+\frac{\eta\delta\chi}{\sqrt{1+\widetilde{v}v}}\Bigg{(}2\theta+ \frac{\eta\chi}{\sqrt{1+\widetilde{v}v}}\Bigg{)}\] (A.76) An analogous computation for the anti-chiral half, \(\delta\widetilde{z}(\widetilde{w})|_{\widetilde{w}\,\text{fixed}}=\delta \widetilde{v}\,\partial_{\widetilde{v}}\widetilde{z}(\widetilde{w})+\delta v \,\partial_{v}\widetilde{z}(\widetilde{w})+\delta\chi\,\partial_{\chi} \widetilde{z}(\widetilde{w})\), according to (3.16) and (3.17), and by a slight abuse of notation denoting this again by \(\delta\widetilde{z}(\widetilde{z})\), yields: \[\delta\widetilde{z}(\widetilde{z})=-\frac{\delta\widetilde{v}}{1+\widetilde{ v}v}\left(1-v\widetilde{z}\right)-\frac{\delta v}{1+\widetilde{v}v}\left( \widetilde{z}^{2}+\widetilde{v}\widetilde{z}\right)\!,\] (A.77) so that, writing \(\delta\widetilde{\mathcal{V}}^{(z)}(\widetilde{z})=\delta\widetilde{z}( \widetilde{z})\), and extracting out the phase again and dropping it we arrive at: \[\delta\widetilde{\mathcal{V}}^{(z)}(\widetilde{z})=-\frac{\delta\widetilde{v} }{1+\widetilde{v}v}-\frac{\delta v}{1+\widetilde{v}v}\,\widetilde{z}^{2}\] (A.78) It will prove efficient to introduce notation for the variations with respect to specific moduli in (A.78) and (A.76). Define: \[\begin{split}\delta\widetilde{\mathcal{V}}^{(z)}(\widetilde{z})& =\delta\widetilde{v}\widetilde{\mathcal{V}}_{\widetilde{v}}( \widetilde{z})+\delta v\widetilde{\mathcal{V}}_{v}(\widetilde{z})+\delta \chi\widetilde{\mathcal{V}}_{\chi}(\widetilde{z})\\ \delta\mathcal{V}^{(z)}(z|\theta)&=\delta\widetilde{v }\mathcal{V}_{\widetilde{v}}(z|\theta)+\delta v\mathcal{V}_{v}(z|\theta)+ \delta\chi\mathcal{V}_{\chi}(z|\theta),\end{split}\] (A.79) so that, according to (A.78) and (A.76), \[\begin{split}\widetilde{\mathcal{V}}_{\tilde{v}}(\widetilde{z})& =-\frac{1}{1+\tilde{v}v}\qquad\qquad\mathcal{V}_{\tilde{v}}(z| \theta)=-\frac{1}{1+\tilde{v}v}\bigg{(}z^{2}-\frac{2\eta\chi}{\sqrt{1+\tilde{ v}v}}z\theta\bigg{)}\\ \widetilde{\mathcal{V}}_{v}(\widetilde{z})&=-\frac{ \widetilde{z}^{2}}{1+\tilde{v}v}\qquad\qquad\mathcal{V}_{v}(z|\theta)=-\frac {1}{1+\tilde{v}v}\bigg{(}1+\frac{2\eta\chi\tilde{v}}{\sqrt{1+\tilde{v}v}} \theta\bigg{)}\\ \widetilde{\mathcal{V}}_{\chi}(\widetilde{z})&=0 \qquad\qquad\qquad\mathcal{V}_{\chi}(z|\theta)=\frac{\eta}{\sqrt{1+\tilde{v}v }}\bigg{(}2\theta+\frac{\eta\chi}{\sqrt{1+\tilde{v}v}}\bigg{)}.\end{split}\] (A.80) ## Appendix B Mode Expansions Let us consider a local frame, \((U,\bar{z};\!z|\theta)\), and mode expand the various matter and ghost superfields around \(\widetilde{z};\!z|\theta=0;\!0|0\). We will restrict attention to the NS sector. 
Neglecting auxiliary fields, for the chiral half of the superghosts in particular we write: \[\begin{split} B_{z\theta}&=\beta(z)+\theta b(z) \qquad\qquad\qquad C^{z}=c(z)+\theta\gamma(z)\\ &=\sum_{n\in\mathbf{Z}}\frac{\beta_{n+1/2}+\theta b_{n}}{z^{n+2}} \qquad\text{ and }\qquad\qquad\quad=\sum_{n\in\mathbf{Z}}\frac{c_{n}+\theta\gamma_{n-1/2}}{z^{n -1}},\end{split}\] (B.81) whereas for the anti-chiral halves, \[\begin{split}\widetilde{B}_{\bar{z}\bar{z}}=\widetilde{b}( \widetilde{z})=\sum_{n\in\mathbf{Z}}\frac{\widetilde{b}_{n}}{\widetilde{z}^{n +2}}\qquad\quad\text{ and }\qquad\widetilde{C}^{\bar{z}}=\widetilde{c}( \widetilde{z})=\sum_{n\in\mathbf{Z}}\frac{\widetilde{c}_{n}}{\widetilde{z}^{n -1}}.\end{split}\] (B.82) Similarly, for the chiral half of the matter fields the mode expansions are: \[\begin{split} D_{\theta}X^{\mu}(z|\theta)&=\psi^{ \mu}(z)+\theta\partial_{z}x^{\mu}(z)\\ &=\sum_{n\in\mathbf{Z}}\frac{\psi^{\mu}_{n+1/2}-i\theta\alpha^{ \mu}_{n}}{z^{n+1}},\end{split}\] (B.83) whereas for the anti-chiral half of the matter fields: \[\begin{split}\partial_{\bar{z}}X^{\mu}(\widetilde{z})=\partial_{ \bar{z}}x^{\mu}(\widetilde{z})=-i\sum_{n\in\mathbf{Z}}\frac{\widetilde{\alpha }^{\mu}_{n}}{z^{n+1}}\qquad\quad\text{ and }\qquad\Lambda_{a}(\bar{z})=\lambda_{a}(\tilde{z})=\sum_{n\in \mathbf{Z}}\frac{\widetilde{\lambda}^{a}_{n+1/2}}{\widetilde{z}^{n+1}}.\end{split}\] (B.84) We can now define the \(\mathrm{OSp}(2,1)\) vacuum, denoted by \(|0\rangle\), which is analogous to the \(\mathrm{SL}(2,\mathbf{C})\) vacuum for ordinary Riemann surfaces. This is simply the state corresponding to the unit operator, so is defined in the NS sector by: \[\begin{split}\beta_{n+1/2}|0\rangle&=0,\qquad n \geq-1\\ \gamma_{n-1/2}|0\rangle&=0,\qquad n\geq 2\\ \widetilde{b}_{n}|1\rangle&=b_{n}|0\rangle& =0,\qquad n\geq-1\\ \widetilde{c}_{n}|1\rangle&=c_{n}|0\rangle& =0,\qquad n\geq 2\end{split}\qquad\begin{split}\widetilde{\alpha}_{n}|0 \rangle=\alpha_{n}|0\rangle=0,\qquad n\geq 0\\ \psi_{n+1/2}|0\rangle&=0,\qquad n\geq 0\\ \lambda_{n+1/2}|0\rangle&=0,\qquad n\geq 0,\end{split}\] (B.85) \[\widetilde{L}_{n}|0\rangle=L_{n}|0\rangle=G_{n+1/2}|0\rangle=0,\qquad n\geq-1\] (B.86) where it is to be understood that the indices are shifted as necessary such that \(n\in{\bf Z}\). These are derived by re-expressing the above mode expansions as contour integrals and using that the OPE with the unit operator is non-singular. The total ghost charge is defined such that \(\widetilde{c},c,\gamma,\delta(\beta)\) have ghost charge \(1\), whereas \(\widetilde{b},b,\beta,\delta(\gamma)\) have ghost charge \(-1\).
2309.01196
A Visual Interpretation-Based Self-Improved Classification System Using Virtual Adversarial Training
The successful application of large pre-trained models such as BERT in natural language processing has attracted more attention from researchers. Since the BERT typically acts as an end-to-end black box, classification systems based on it usually have difficulty in interpretation and low robustness. This paper proposes a visual interpretation-based self-improving classification model with a combination of virtual adversarial training (VAT) and BERT models to address the above problems. Specifically, a fine-tuned BERT model is used as a classifier to classify the sentiment of the text. Then, the predicted sentiment classification labels are used as part of the input of another BERT for spam classification via a semi-supervised training manner using VAT. Additionally, visualization techniques, including visualizing the importance of words and normalizing the attention head matrix, are employed to analyze the relevance of each component to classification accuracy. Moreover, brand-new features will be found in the visual analysis, and classification performance will be improved. Experimental results on Twitter's tweet dataset demonstrate the effectiveness of the proposed model on the classification task. Furthermore, the ablation study results illustrate the effect of different components of the proposed model on the classification results.
Shuai Jiang, Sayaka Kamei, Chen Li, Shengzhe Hou, Yasuhiko Morimoto
2023-09-03T15:07:24Z
http://arxiv.org/abs/2309.01196v1
A Visual Interpretation-Based Self-Improved Classification System Using Virtual Adversarial Training + ###### Abstract The successful application of large pre-trained models such as BERT in natural language processing has attracted more attention from researchers. Since the BERT typically acts as an end-to-end black box, classification systems based on it usually have difficulty in interpretation and low robustness. This paper proposes a visual interpretation-based self-improving classification model with a combination of virtual adversarial training (VAT) and BERT models to address the above problems. Specifically, a fine-tuned BERT model is used as a classifier to classify the sentiment of the text. Then, the predicted sentiment classification labels are used as part of the input of another BERT for spam classification via a semi-supervised training manner using VAT. Additionally, visualization techniques, including visualizing the importance of words and normalizing the attention head matrix, are employed to analyze the relevance of each component to classification accuracy. Moreover, brand-new features will be found in the visual analysis, and classification performance will be improved. Experimental results on Twitter's tweet dataset demonstrate the effectiveness of the proposed model on the classification task. Furthermore, the ablation study results illustrate the effect of different components of the proposed model on the classification results. Keywords:Visual Interpretation Self-Improved Classification Spam Detection Virtual Adversarial Training. ## 1 Introduction Deep learning is a machine learning technique that has been widely applied in various fields, such as natural language processing (NLP) [8, 17], recommendation systems [16, 21], and prediction tasks [15, 18]. In the field of spam email classification, deep learning models such as Recurrent Neural Networks, especially the Long Short-Term Memory [32] and Gated Recurrent Unit [19], have made significant progress in classifying emails and identifying spam. Finding suitable labels for domain-specific classification models trained using deep learning is challenging for researchers. Previous research has shown that sentiment analysis can be used in combination with pre-trained models for spam detection in tweets, as spammers often use emotional expressions to increase users' trust in their messages [30]. However, determining effective tags for other types of social media content remains a challenge in this field [7]. In recent years, large pre-trained language models such as the BERT have achieved high accuracy when fine-tuning supervised tasks [5]. Additionally, some past work has partly studied the learning of linguistic features and examined the internal vector by probing the classifier [25]. This paper uses spam detection as an example scenario. First, a fine-tuned BERT model is used for the sentiment classification of tweet texts. The sentiment label is then input for another BERT model to determine whether it is a spam tweet. In the training process of this model, a semi-supervised training approach using virtual adversarial training (VAT) is introduced to improve accuracy and system robustness. Ablation experiments demonstrate its effectiveness. Secondly, relevant tools are used to interpret the internal workings of the BERT model. 
By comparing various models used in the experiment and visualizing word importance attribution, the contribution of each token in each layer, and the attention matrices of each layer, the reasons for the accuracy differences between different models are found, explaining the models. Furthermore, more suitable URL tags are identified through internal analysis of the model. Further training of the model results in system improvement, as demonstrated by experiments. The main contributions of this paper are as follows: * **A BERT-based model for semi-supervised learning**: The first BERT is employed for text sentiment classification. The obtained sentiment tags are combined with the text in another BERT for spam classification via VAT. * **Self-improved visual interpretation**: Word attention scores are analyzed with visual interpretation tools, and the parts with high feature weights are used to improve the system's classification performance. * **Performance improvement**: Experimental results and ablation studies on the Twitter tweets dataset have demonstrated the effectiveness of the proposed model for spam classification in a semi-supervised learning task. ## 2 Related Work ### Spam Detection Major social media sites (e.g., Twitter, Facebook, Quora) face a massive dilemma as many fall victim to spam. This information induces users to click on malicious links or uses bots to spread false news, seriously adding to the chaos in the Internet space. In recent years, many studies have been on spam detection for tweets, and many suitable and outdated features have been summarized [12]. Many studies have shown that sentiment analysis technology can enhance the differentiation of spam tweets [2]. Therefore, many studies have used traditional machine learning methods to detect spam tweets based on sentiment features [24, 23, 28]. In [27], the authors use several machine learning and deep learning techniques for sentiment analysis and spam detection to detect spam tweets in real time. However, the author does not combine the two techniques but performs spam detection and sentiment analysis on tweets separately. In [1], the authors use an LDA model to find the sentiment and topic of tweets, suggest features that identify spam tweets more accurately than previous methods, and predict how widely spam spreads on Twitter. In [11], the author first used the pre-trained BERT model to perform sentiment analysis on tweets, extracting various sentiment features. Then, an unsupervised GloVe model was used for Twitter bot detection, resulting in high accuracy. In addition, adversarial training has also been widely used in spam detection tasks. For example, in [6], the author utilized several adversarial strategies to enhance the spam classifier and achieved good results, laying the foundation for adversarial training in classification tasks. [9] used an attention mechanism for movie review spam detection and employed GAN models for adversarial training, achieving state-of-the-art results. ### Model Interpretability In today's era of widespread use of deep learning and neural network technology, the demand for their interpretability is also gradually increasing. Such models are usually black boxes in their organizational structure, where users input specific information into the model and can obtain specific outputs. However, the model still needs to answer how the outputs are obtained. 
Model interpretability aims to transform black-box models into white-box models so that users can understand why the model makes relevant predictions and identify ways to improve its validity. In addition, it eliminates ethical issues when AI models are used on a large scale in society. Interpretability on machine learning models has long been proposed, such as SHAP [20] and LIME [26]. The SHAP (SHapley Additive exPlanations) model produces a prediction value for each prediction sample, and the SHAP value is the value assigned to each feature in that sample. Lime (Local Interpretable Model-Agnostic Explanations) is an approach that uses a trained local proxy model to explain individual samples. However, when dealing with large pre-trained deep learning models with hundreds of hidden states, the situation becomes different, and simple local explanations of the model are difficult to fit. The BERT model, for example, introduced the attention mechanism [3], which became a very successful neural network component but increased the difficulty of interpreting the model. Clark et al. [4] strongly emphasize analyzing the attention head in BERT. They studied its behavior and directly extracted sentence representations from the BERT model without fine-tuning. They discovered that the attention head exhibits recognizable patterns, such as focusing on a fixed position offset or paying attention to the entire sentence. ## 3 Model To identify spam tweets, we used the public dataset "Spam Detection on Twitter" [31] posted by YASH, which contains 82,469 legitimate tweets and 97,276 spam tweets. To classify the sentiment of tweets, we utilized the Sentiment140 dataset [10], which comprises 1.6 million tweets, half positive and half negative. The model architecture is shown in Fig. 1. First, the tweets will be tagged with sentiment labels after a BERT model fine-tuned by the Sentiment140 dataset, the Sentiment Analysis Component. Then, after two fully connected layers, the Spam Detection Component will output whether the tweet is spam. Semi-supervised learning and VAT are used here to improve the training accuracy. We utilize the Twitter dataset as unlabeled data for semi-supervised learning in sentiment analysis. Finally, the interpretation method will interpret the models. ### Sentiment Analysis Component Sentiment analysis of tweets helps to comprehend public opinion on topics prevalent on social media. Twitter's usage has increased as users share news and Figure 1: Overview architecture of the proposed spam classification system. First, a fine-tuned BERT (the Sentiment Analysis Component) is used as a classifier to classify the sentiment of the text. Then, the predicted sentiment classification labels are used as part of the input of another BERT (the Spam Detection Component) for spam classification via a semi-supervised training manner using VAT. Additionally, visualization techniques, including visualizing the importance of words and normalizing the attention head matrix, are employed to analyze the relevance of each component to classification accuracy. Moreover, brand-new features are detected in the Visual Analysis Component. Finally, the classification system can be self-improved via the newly imported. personal experiences. Hence, analyzing tweet sentiment is crucial. Despite its popularity, sentiment analysis of tweets is challenging due to the 280-character limit and irregularities in tweets (e.g., spelling variations and abbreviations). 
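To make the two-stage architecture of Fig. 1 concrete before the individual components are described, here is a minimal sketch of the end-to-end flow (an illustration only, not the authors' implementation, which uses bert4torch): it assumes two already fine-tuned checkpoints at hypothetical paths, the sentiment thresholds and TAGPOS/TAGNEU/TAGNEG tags described in the component subsections below, and conventional label indices (1 = positive, 1 = spam).

```python
# Minimal sketch of the Fig. 1 pipeline (illustration; checkpoint paths are placeholders,
# label-index conventions are assumptions, and the paper's own code uses bert4torch).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

sentiment_model_dir = "path/to/finetuned-sentiment-bert"   # hypothetical path
spam_model_dir      = "path/to/finetuned-spam-bert"        # hypothetical path

sent_tok = BertTokenizer.from_pretrained(sentiment_model_dir)
sent_model = BertForSequenceClassification.from_pretrained(sentiment_model_dir).eval()

spam_tok = BertTokenizer.from_pretrained(spam_model_dir)
# the sentiment tags are assumed to have been added to the vocabulary before fine-tuning
spam_tok.add_tokens(["TAGPOS", "TAGNEU", "TAGNEG"])
spam_model = BertForSequenceClassification.from_pretrained(spam_model_dir).eval()
spam_model.resize_token_embeddings(len(spam_tok))

def classify_tweet(text: str) -> str:
    # Stage 1: sentiment score X in (0,1) from the sentiment BERT
    enc = sent_tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(sent_model(**enc).logits, dim=-1)
    x = probs[0, 1].item()   # probability of "positive" (index assumed)
    tag = "TAGNEG" if x < 0.3 else ("TAGPOS" if x > 0.7 else "TAGNEU")

    # Stage 2: attach the sentiment tag (prepended here for concreteness) and run the spam BERT
    enc = spam_tok(f"{tag} {text}", return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = spam_model(**enc).logits
    return "spam" if logits.argmax(-1).item() == 1 else "legitimate"
```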
BERT is a pre-trained deep bidirectional transformer, a powerful model for language understanding. We employed BERT for sentiment polarity classification using the Sentiment140 dataset. This dataset contains tweet content and is for a binary sentiment classification task. It includes 1.6 million tweets collected from the Twitter API and annotated as negative (0) or positive (4), making it useful for sentiment detection. Unlike previous work [11] by Heidari et al., who used the SST-2 movie review dataset for training, the Sentiment140 dataset is in the same domain as the target task, thus leading to improved accuracy in sentiment analysis of tweets. The Sentiment Analysis Component categorizes tweets as positive, neutral, or negative. BERT is trained on 1.6 million tweets. After the tokens are input into the model, the model performs word embedding processing. Among the 12 hidden layers in the BERT model, the next layer's multi-head attention calculates the attention scores of each word in the previous layer. The BERT model's training results will be presented in the next section. This component finally extracts sentiment features from the text of tweets through fine-tuned BERT. At the end of the model, the softmax layer outputs the sentence's sentiment polarity value score \(X\) (\(0<X<1\)). Then it divides the sentence into three categories: positive, neutral, and negative, according to this value. If \(X<0.3\), the tweet sentiment is negative. If \(X>0.7\), the tweet sentiment is positive. The rest are neutral. ### Spam Detection Component The Spam Detection Component also uses the fine-tuned BERT to determine if the tweet is spam. We use the _SpamDetectionOnTwitter_ dataset in [31] to learn whether a tweet is a spam. To embed emotional features into tweets and better explain the model, we add sentiment tags into each piece of data, namely _TAGPOS_, _TAGNEU_, and _TAGNEG_, and add these three tags to the dictionary of the BERT model. After exporting the BERT model, it includes two fully connected layers. The final layer with softmax will output the final result indicating whether the tweet is spam. At the same time, we also use several adversarial learning methods for training enhancement to find the best one. Adversarial training can be summarized as the following max-min formula: \[\min_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{D}}[\max_{||\delta||\leq\varepsilon }L(f_{\theta}(X+\delta),y)]. \tag{1}\] The inner layer (in square brackets) is a maximization, where \(X\) represents the input representation of the sample, \(\delta\) represents the perturbation superimposed on the input, \(f_{\theta}()\) is the neural network function, \(y\) is the label of the sample, and \(L(f_{\theta}(X+\delta),y)\) represents the loss obtained by superimposing a disturbance \(\delta\) superimposed on the sample \(X\). \(\max(L)\) is the optimization goal, that is, to find the disturbance that maximizes the loss function. The outside minimization refers to finding the most robust parameters \(\theta\), such that the predicted distribution conforms to the distribution of the original dataset. The Fast Gradient Method (FGM) is implemented by \(L2\) normalization, which divides the value of each gradient dimension by the \(L2\) norm of the gradient. Theoretically, \(L2\) normalization preserves the direction of the gradient. \[\delta=\epsilon\bullet\left(g/|\left|g\right|\right|_{2})\,. 
\tag{2}\] Among them, \(g=\nabla\;_{\text{X}}\left(L\left(f_{\theta}\left(X\right),y\right)\right)\) is the gradient of the loss function \(L\) with respect to the input \(X\). Unlike a normal FGM that only performs iteration once, PGD performs multiple iterations to find the optimal perturbation. Each iteration projects the disturbance into a specified range each time a small step is taken. The formula of the loss function in step \(t\) in PGD is shown as follows: \[g_{t}=\nabla X_{t}\left(L\left(f_{\theta}\left(X_{t}\right),y\right)\right). \tag{3}\] Although PGD is simple and effective, there is a problem that it is not computationally efficient. Without adversarial training, \(m\) iterations will only have \(m\) gradient calculations, but for PGD, each gradient descent must correspond to the \(K\) steps of gradient boosting. Therefore, PGD needs to do \(m(K+1)\) gradient calculations compared with the method without adversarial training. In VAT, the loss function for adversarial training can be expressed as [22]: \[L_{adv}(x_{l},\theta):=D[q(y|x_{l}),p(y|x_{l}+r_{adv},\theta)], \tag{4}\] \[r_{adv}:=\operatorname*{arg\,max}_{r;||r||\leq\epsilon}D[q(y|x_{l}),p(y|x_{l}+ r,\theta)], \tag{5}\] where \(D[p,q]\) is a non-negative function that measures the divergence between two distributions \(p\) and \(q\). The function \(q(y|x_{l})\) is the true distribution of the output label, which is unknown. This loss function aims to approximate the true distribution \(q(y|x_{l})\) by a parametric model \(p(y|x_{l},\theta)\) that is robust against adversarial attack to labeled input \(x_{l}\). A "virtual" label generated by the \(p(y|x,\theta)\) probability is used in VAT to represent the user-unknown \(p(y|x,\hat{\theta})\) label, and the adversarial direction is calculated based on the virtual label. Unlabeled input \(x_{ul}\) and labeled input \(x_{l}\) will be unified as \(x_{*}\). The formula is calculated as follows: \[\text{LDS}(x_{*},\theta):=D[p(y|x_{*},\hat{\theta}),p(y|x_{*}+r_{ quad},\theta)], \tag{6}\] \[r_{quadv}:=\operatorname*{arg\,max}_{r;||r||_{2}\leq\epsilon}D[p(y|x_{*},\hat {\theta}),p(y|x_{*}+r)], \tag{7}\] where the loss function of \(\text{LDS}(x_{*},\theta)\) indicates the virtual adversarial perturbation. This function can be considered a negative indicator of the local smoothness of the current model at each input data point \(x\). A reduction in this function would result in a smoother model at each data point. ### Visual Interpretation Component In this part, we begin by creating visual representations of the significance of individual words in differentiating between spam and non-spam content. Additionally, we normalize the attention head matrix to visualize all attention matrices and identify distinctions between various models, thereby demonstrating the efficacy of our proposed model. We will delve into further details in the following part, accompanied by relevant examples. #### 3.3.1 Word Importance Attribution Integrated Gradients [29] are used to compute attributions concerning the _BertEmbeddings_ layer to obtain the importance of words. 
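As a concrete illustration of this step (the formal definition of Integrated Gradients is recalled next), the per-token attributions can be computed with Captum's LayerIntegratedGradients over the embedding layer. The sketch below is indicative only: `spam_model` and `spam_tok` are assumed to be the fine-tuned spam classifier and its tokenizer, the spam class is assumed to be label index 1, and a [PAD]-token baseline is used as a common stand-in for the zero-vector baseline mentioned below.

```python
# Sketch of word-importance attribution with Integrated Gradients over BertEmbeddings
# (illustration; `spam_model`/`spam_tok` are assumed to be the fine-tuned spam classifier).
import torch
from captum.attr import LayerIntegratedGradients

def forward_func(input_ids, attention_mask):
    # probability of the "spam" class (index 1 assumed)
    logits = spam_model(input_ids=input_ids, attention_mask=attention_mask).logits
    return torch.softmax(logits, dim=-1)[:, 1]

text = "TAGPOS 19 year old genius shares Twitter tool free. Nice guys rock! http://ow.ly/Ult"
enc = spam_tok(text, return_tensors="pt")
input_ids, attention_mask = enc["input_ids"], enc["attention_mask"]

# baseline: [CLS] + [PAD]...[PAD] + [SEP], an (approximately) information-free input
baseline_ids = torch.full_like(input_ids, spam_tok.pad_token_id)
baseline_ids[0, 0] = spam_tok.cls_token_id
baseline_ids[0, -1] = spam_tok.sep_token_id

lig = LayerIntegratedGradients(forward_func, spam_model.bert.embeddings)
attributions, delta = lig.attribute(
    inputs=input_ids,
    baselines=baseline_ids,
    additional_forward_args=(attention_mask,),
    return_convergence_delta=True,
)
# collapse the embedding dimension and normalize to one importance score per token
scores = attributions.sum(dim=-1).squeeze(0)
scores = scores / torch.norm(scores)
for tok, s in zip(spam_tok.convert_ids_to_tokens(input_ids[0].tolist()), scores.tolist()):
    print(f"{tok:>12s}  {s:+.3f}")
```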
In simple terms, Integrated Gradients define the attribution of the \(i^{th}\) feature of the input as the path integral of the straight line path from the baseline \(x^{\prime}_{i}\) to the input \(x_{i}\) from [29]: \[\text{IG}_{i}(x)::=(x_{i}-x^{\prime}_{i})\cdot\int_{\alpha=0}^{1}\frac{ \partial F(x^{\prime}+\alpha(x-x^{\prime}))}{\partial x_{i}}\text{d}\alpha, \tag{8}\] where \(\frac{\partial F(x)}{\partial x_{i}}\) is the gradient of \(F\) along the \(i^{th}\) dimension at input \(x\) and baseline \(x^{\prime}\). In the NLP task described in this paper, we use the zero vector as the baseline. #### 3.3.2 Attribution in Attention Matrix We visualize the attention probabilities of 12 attention heads in all 12 layers, totaling 144. It represents the softmax normalized dot product of key and query vectors. In [4], it is an essential indicator, indicating how related a token is to another token in the text. ## 4 Experiments To demonstrate the effectiveness of the proposed model, this section first provides empirical evidence through ablation experiments, demonstrating the effectiveness of the relevant components, including the sentiment analysis component and the adversarial training component. Secondly, visualization tools are used to analyze the model's interpretability to identify the reasons for the effectiveness of the relevant components. Finally, using the above analysis, more suitable labels are identified, and the system is further trained to achieve improvements. ### Dataset #### 4.1.1 Spam Dataset We use the SpamDetectionOnTwitter dataset in [31] to learn whether a tweet is spam. This dataset contains 82,469 legitimate tweets and 97,276 spam tweets. Each tweet is tagged with user_id, tweet_id, tweet_text, time, and spam_label to show whether it is a spam tweet. Here we only select the text and spam_label for training. We select 68,919 legitimate tweets and 58,866 spam tweets as the training set, and the rest is divided into a validation set and a test set. Sentiment DatasetWe used the Sentiment140 dataset [10] as the training dataset for the part of the tweet sentiment polarity analysis component. This dataset contains 1.6 million sentiment-labeled tweets, half positive and half negative, and each tweet is accompanied by tweet_id, time, username, and tweet_text. Similar to the last part, only the tweet_text and spam_label are selected for training at this stage. We select 1.46 million tweets as the training set, and the rest are divided into the validation and test sets. ### Hyperparameter Setting In both Sentiment Analysis Component and Spam Detection Component, we used the bert-base-multilingual-cased model, which is Google's new and recommended BERT model. We set the batch size to 16 and the dropout rate to 0.1 in both Sentiment Analysis and Spam Detection Components. The learning rates of the Adam optimizer are 2e-5 in the sentiment part and 1e-5 in the spam part. According to the size of the two datasets, we set the steps of the sentiment part as 10000, while 1000 in the spam part. All experiments used Pytorch version 1.13.1, bert4torch 0.2.4, and Captum 0.6.0. ### Spam Detection After fine-tuning, Sentiment Analysis is performed on the existing spam dataset, and the sentiment distribution of the dataset can be seen, as shown in Fig. 2. The number of spam tweets with positive sentiment is the largest, followed by neutral sentiment and the least negative sentiment, with 42547, 32291, and 7629, respectively. 
Among the non-spam tweets, neutral tweets are the most numerous, while positive and negative tweets are both fewer, with negative sentiment the least common; the counts are 27619, 52049, and 17606 for positive, neutral, and negative sentiment, respectively. Figure 2: Sentiment distribution in spam dataset. To show the effectiveness of the proposed model, we performed ablation experiments on the model proposed in this paper using the same training parameters and random seed. Since PGD is an evolutionary algorithm of FGM, here we omit the experiment of FGM and only keep PGD. The results are shown in Table 1. It can be seen that the precision of the proposed model is the highest, proving its effectiveness. This part only analyzes the model's effectiveness from the model experimentation perspective. Although the proposed model has achieved the highest accuracy and recall, as a black box model, we cannot know the reasons for the differences in the experiment results inside the different models. This issue leads us to introduce model interpretability (or XAI). The next part will study this issue in depth. \begin{table} \begin{tabular}{l l l l l} \hline & Precision & Accuracy & Recall & F1 Score \\ \hline BERT & 76.06 & 75.32 & 79.95 & 77.96 \\ BERT+Sentiment & 79.14 & 77.43 & 79.65 & 79.39 \\ BERT+Sentiment+PGD & 83.21 & 76.83 & 72.10 & 77.25 \\ BERT+Sentiment+VAT & 82.61 & 76.68 & 79.54 & 78.47 \\ \hline Proposed Model & 82.58 & 77.81 & 75.41 & 78.83 \\ (1 dense layer) & & & & \\ Proposed Model & 85.97 & 77.60 & 74.92 & 78.49 \\ \hline \end{tabular} \end{table} Table 1: Experiment results of all models. ### Visual Interpretation In this section, we use the Captum [14] tool to perform visual interpretability analysis on the BERT model in the Spam Detection Component of all six models in the previous section. This tool is a Pytorch-based model interpretation library released by Facebook. The library provides interpretability for many new algorithms (such as ResNet, BERT, and some semantic segmentation networks), helping everyone better understand the specific features, neurons, and neural network layers that affect the model's prediction results. For text translation and other problems, it can visually mark the importance of different words and use a heat map to display the correlation between words. Here, we first visualize the importance of words to distinguish which words play a role in judging spam or not. Then, we visualize all the attention matrices to find the differences between different models to prove the effectiveness of the proposed model. Here is an example of the tweet "_19 year old genius shares Twitter tool free. Nice guys rock! [http://ow.ly/Ult_](http://ow.ly/Ult_)", a spam tweet with positive sentiment. #### 4.4.1 Word Importance Attribution With the formula (8), we can obtain the Word Importance Attribution of the input sentence, as shown in Fig. 3. In this case, the actual label of the input sentence is "Spam." This figure's "Predicted Label" represents the model output result and predicted probability. The rightmost is a visual explanation of the contribution value of the input sentence, green represents a positive contribution to the Predicted Label, and red represents the opposite. Additionally, the deeper the color, the higher the level of contribution, and vice versa. It can be seen from the figure that when the sentiment tag is not added, only the word _year_ makes a positive contribution, so the final probability is only 0.52, and the model has difficulty distinguishing the two classes. 
After adding the sentiment tag, although the contribution of the tag is less, the contribution of many words, especially the URL part, is significantly improved, thereby increasing the final probability. #### 4.4.2 Attribution in Attention Matrix The following figures are ordered as the simple BERT model followed by the proposed model. The \(x\)-axis and \(y\)-axis of the matrix are tokens, and each cell represents the attention score between different tokens, that is, the degree of attention obtained from the weight of attention. The brighter the cell, the higher the attention score, or level of attention, from the token on the \(x\)-axis to the token on the \(y\)-axis, and vice versa. In most attention heads, the overall trend of words is to pay more attention to themselves or the next word. Still, in Head 2-5, 3-3, 3-4, 3-6, 3-8, 3-12, and 4-2, the [CLS] token will pay more attention to the added sentiment tag, where we can see that the brightest point appears in the upper left corner, such as Head 3-6 (Fig. 4). According to [13], each sequence's initial [CLS] token is used as the sentence representation in a labeled classification task. Therefore, in the Spam Detection Component of this paper, the [CLS] token represents whether the sentence is spam. Therefore, these results show the validity of the sentiment tag. Entering layer 9, it can be seen that, as shown in Fig. 5, the attention matrix divides the tokens into two parts, the text and the URL. This phenomenon is more evident in Head 9-12, and the attention matrix is divided into four prominent parts, especially in the model after adding the adversarial training method. Other examples can also demonstrate the effectiveness of the proposed model. In Head 2-12, shown in Fig. 6, we can see the part of the URL that pays more attention to _http://_, indicating that the model has detected the URL. Figure 3: Word importance attribution. Figure 4: Attention Head 3-6 of BERT and the proposed model. Figure 5: Attention Head 9-12 of BERT and the proposed model. Figure 6: Attention Head 2-12 of BERT and the proposed model. ### Model Improvement Based on the analysis in the previous section, we found that the URL part can be detected in some attention heads. Inspired by this, we operate similarly to the sentiment tags above for the URL part, using a regular expression to extract the URL part as separate data labels. To distinguish between short and long links, we set the URL label with fewer than 24 characters as _TAGURLS_, indicating short URL links, and the rest as _TAGURLL_. We add these labels after the sentiment label. We used the optimized data to retrain the proposed model, resulting in the modified model, as shown in Table 2. We found that after adding the URL tag, the accuracy, F1 score, and precision of the modified model were improved. Next, we analyze the BERT model using interpretability methods and directly examine the corresponding attention head of the original model. In Fig. 7-8, the left part is the proposed model, and the right is the improved model. In Fig. 7, Head 3-6 is the same as before, with [CLS] paying the most attention to the sentiment tag, but the URL tag has become the second most attended token, which, to some extent, proves the validity of the URL tag. Figure 7: Attention Head 3-6 of the proposed model and improved model. Figure 8: Attention Head 9-12 of the proposed model and improved model. In Fig. 
8, Head 9-12, the URL part is more prominent than the original model, indicating that the model is paying more attention to the URL part, proving the URL tag's reliability for improving precision and accuracy. ## 5 Conclusion The main contributions of this paper can be summarized in two points. First, using sentiment analysis and adversarial training methods, we proposed a new model for spam detection, which is better than the traditional models. Secondly, applying the visual interpretability analysis method to the model, we studied the principle of internal classification of the model, found the reasons for the difference in precision in different models, and proved the effectiveness of the proposed model at the same time, further improving its performance. The large-scale pre-trained BERT model based on VAT can be extended to other tasks. The attention mechanism can analyze the in-depth features with heavy weights, and these features can effectively improve the accuracy and precision of the task. In future work, we will utilize the VAT and visual interpretation method in other pre-trained language models (e.g., ALBERT, XLNET) to further improve the performance of spam classification.
2306.11500
A remark on continued fractions for permutations and D-permutations with a weight $-1$ per cycle
We show that very simple continued fractions can be obtained for the ordinary generating functions enumerating permutations or D-permutations with a large number of independent statistics, when each cycle is given a weight $-1$. The proof is based on a simple lemma relating the number of cycles modulo 2 to the numbers of fixed points, cycle peaks (or cycle valleys), and crossings.
Bishal Deb, Alan D. Sokal
2023-06-20T12:39:04Z
http://arxiv.org/abs/2306.11500v2
# A remark on continued fractions for permutations and D-permutations with a weight \(-1\) per cycle ###### Abstract We show that very simple continued fractions can be obtained for the ordinary generating functions enumerating permutations or D-permutations with a large number of independent statistics, when each cycle is given a weight \(-1\). The proof is based on a simple lemma relating the number of cycles modulo \(2\) to the numbers of fixed points, cycle peaks (or cycle valleys), and crossings. 20 June 2023 **Key Words:** Permutation, D-permutation, continued fraction. **Mathematics Subject Classification (MSC 2020) codes:** 05A19 (Primary); 05A05, 05A15, 05A30, 30B70 (Secondary). ###### Contents * 1 Introduction * 2 Preliminaries * 2.1 Continued fractions * 2.2 Permutation statistics: The record-and-cycle classification * 2.3 Permutation statistics: Crossings and nestings * 3 Proof of Lemma 1.1 * 4 Results for permutations * 4.1 Master J-fraction * 4.2 \(p,q\) J-fraction * 4.3 Simple J-fraction * 4.4 Corollary for cycle-alternating permutations * 5 Results for D-permutations * 5.1 Master T-fraction * 5.2 \(p,q\) T-fraction * 5.3 Simple T-fraction Introduction In a recent paper, Zeng and one of us [13] proved continued fractions enumerating permutations with 10, 18 or infinitely many statistics that implement the cycle classification of indices (cycle peak, cycle valley, cycle double rise, cycle double fall, fixed point) together with an index-refined count of crossings and nestings. Subsequently, the two present authors [3] proved analogous results for D-permutations [8, 9, 10], which are a subclass of permutations of \([2n]\) counted by the Genocchi and median Genocchi numbers: our continued fractions enumerated D-permutations with 12, 22 or infinitely many statistics that implement the parity-refined cycle classification of indices (cycle peak, cycle valley, cycle double rise, cycle double fall, even fixed point, odd fixed point) together with an index-refined count of crossings and nestings. In both papers, we called these results our "first" continued fractions. In both cases, it was natural to try to extend these results by taking account also of the number of cycles: that is, by including an additional weight \(\lambda^{\mathrm{cyc}(\sigma)}\). However, it turned out that it was possible to do so only by renouncing some of the other statistics: for instance, by counting cycle valleys only with respect to crossings \(+\) nestings, rather than to crossings and nestings separately. We called these results our "second" continued fractions. Our purpose here is to make a simple but previously overlooked remark: that in addition to the trivial case \(\lambda=1\), there is one other case where one need not renounc counting any other statistics, namely, \(\lambda=-1\). The reason for this is the following simple lemma, which relates the number of cycles modulo 2 to the number of fixed points, cycle peaks (or cycle valleys), and crossings: **Lemma 1.1**.: _Let \(\sigma\in\mathfrak{S}_{n}\) be a permutation. Then the following identity holds:_ \[\mathrm{cyc} = \mathrm{fix}+\mathrm{cpeak}+\mathrm{ucross}+\mathrm{lcross}\pmod {2} \tag{1.1a}\] \[= \mathrm{fix}+\mathrm{cval}+\mathrm{ucross}+\mathrm{lcross}\pmod{ 2}. \tag{1.1b}\] We will give a precise definition of ucross (number of upper crossings) and lcross (number of lower crossings) in Section 2.3, and then a proof of this lemma in Section 3. 
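As a quick sanity check (ours, not part of the formal argument), Lemma 1.1 can be verified by brute force for small \(n\), simply implementing the definitions of the statistics recalled in Section 2:

```python
# Brute-force verification of Lemma 1.1 for all permutations of [n], n <= 7.
# (Verification script only; the statistics follow the definitions of Section 2.)
from itertools import permutations

def num_cycles(p):
    seen, cyc = set(), 0
    for i in range(1, len(p) + 1):
        if i not in seen:
            cyc += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j - 1]
    return cyc

def stats(p):
    n = len(p)
    s = {i + 1: p[i] for i in range(n)}          # sigma
    sinv = {v: k for k, v in s.items()}          # sigma^{-1}
    fix = sum(1 for i in s if s[i] == i)
    cpeak = sum(1 for i in s if sinv[i] < i > s[i])
    # upper crossing i<j<k<l with k=sigma(i), l=sigma(j)  <=>  i < j < sigma(i) < sigma(j)
    ucross = sum(1 for i in s for j in s if i < j < s[i] < s[j])
    # lower crossing i<j<k<l with i=sigma(k), j=sigma(l)  <=>  sigma(k) < sigma(l) < k < l
    lcross = sum(1 for k in s for l in s if s[k] < s[l] < k < l)
    return fix, cpeak, ucross, lcross

for n in range(1, 8):
    for p in permutations(range(1, n + 1)):
        fix, cpeak, ucross, lcross = stats(p)
        assert (num_cycles(p) - (fix + cpeak + ucross + lcross)) % 2 == 0
print("Lemma 1.1 verified for all permutations of [n], n <= 7")
```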
Using Lemma 1.1, it is easy to obtain continued fractions for the case \(\lambda=-1\) as simple corollaries of those for \(\lambda=1\). That is what we shall do in this paper. The plan of this paper is as follows: In Section 2 we give some preliminary definitions concerning continued fractions and permutation statistics. In Section 3 we give two proofs of Lemma 1.1: one topological, and one combinatorial. Then, in Sections 4 and 5, we give our results for permutations and D-permutations, respectively. We remark that since Lemma 1.1 is a general fact concerning permutations, it can be applied to _any_ result concerning _any_ subclass of permutations in which the statistics fix, cpeak and \(\mathrm{ucross}+\mathrm{lcross}\) are handled. ## 2 Preliminaries ### Continued fractions If \((a_{n})_{n\geq 0}\) is a sequence of combinatorial numbers or polynomials with \(a_{0}=1\), it is often fruitful to seek to express its ordinary generating function as a continued fraction of either Stieltjes type (_S-fraction_), \[\sum_{n=0}^{\infty}a_{n}t^{n}\ =\ \frac{1}{1-\frac{\alpha_{1}t}{1-\frac{ \alpha_{2}t}{1-\cdots}}}\, \tag{2.1}\] Thron type (_T-fraction_), \[\sum_{n=0}^{\infty}a_{n}t^{n}\ =\ \frac{1}{1-\delta_{1}t-\frac{\alpha_{1}t}{1- \delta_{2}t-\frac{\alpha_{2}t}{1-\cdots}}}\, \tag{2.2}\] or Jacobi type (_J-fraction_), \[\sum_{n=0}^{\infty}a_{n}t^{n}\ =\ \frac{1}{1-\gamma_{0}t-\frac{\beta_{1}t^{2} }{1-\gamma_{1}t-\frac{\beta_{2}t^{2}}{1-\cdots}}}. \tag{2.3}\] Both sides of these expressions are to be interpreted as formal power series in the indeterminate \(t\). ### Permutation statistics: The record-and-cycle classification Given a permutation \(\sigma\in\mathfrak{S}_{N}\), an index \(i\in[N]\) is called an * _excedance_ (exc) if \(i<\sigma(i)\); * _anti-excedance_ (aexc) if \(i>\sigma(i)\); * _fixed point_ (fix) if \(i=\sigma(i)\). Clearly every index \(i\) belongs to exactly one of these three types; we call this the _excedance classification_. We also say that \(i\) is a _weak excedance_ if \(i\leq\sigma(i)\), and a _weak anti-excedance_ if \(i\geq\sigma(i)\). A more refined classification is as follows: an index \(i\in[N]\) is called a * _cycle peak_ (cpeak) if \(\sigma^{-1}(i)<i>\sigma(i)\); * _cycle valley_ (cval) if \(\sigma^{-1}(i)>i<\sigma(i)\); * _cycle double rise_ (cdrise) if \(\sigma^{-1}(i)<i<\sigma(i)\); * _cycle double fall_ (cdfall) if \(\sigma^{-1}(i)>i>\sigma(i)\); * _fixed point_ (fix) if \(\sigma^{-1}(i)=i=\sigma(i)\). Clearly every index \(i\) belongs to exactly one of these five types; we refer to this classification as the _cycle classification_. Obviously, excedance = cycle valley or cycle double rise, and anti-excedance = cycle peak or cycle double fall. We write \[\mathrm{Cpeak}(\sigma)\;=\;\{i\colon\,\sigma^{-1}(i)<i>\sigma(i)\} \tag{2.4}\] for the set of cycle peaks and \[\mathrm{cpeak}(\sigma)\;=\;|\mathrm{Cpeak}(\sigma)| \tag{2.5}\] for its cardinality, and likewise for the others. 
On the other hand, an index \(i\in[N]\) is called a * _record_ (rec) (or _left-to-right maximum_) if \(\sigma(j)<\sigma(i)\) for all \(j<i\) [note in particular that the indices \(1\) and \(\sigma^{-1}(N)\) are always records]; * _antirecord_ (arec) (or _right-to-left minimum_) if \(\sigma(j)>\sigma(i)\) for all \(j>i\) [note in particular that the indices \(N\) and \(\sigma^{-1}(1)\) are always antirecords]; * _exclusive record_ (erec) if it is a record and not also an antirecord; * _exclusive antirecord_ (earec) if it is an antirecord and not also a record; * _record-antirecord_ (rar) (or _pivot_) if it is both a record and an antirecord; * _neither-record-antirecord_ (nrar) if it is neither a record nor an antirecord. Every index \(i\) thus belongs to exactly one of the latter four types; we refer to this classification as the _record classification_. The record and cycle classifications of indices are related as follows: * Every record is a weak excedance, and every exclusive record is an excedance. * Every antirecord is a weak anti-excedance, and every exclusive antirecord is an anti-excedance. * Every record-antirecord is a fixed point. Therefore, by applying the record and cycle classifications simultaneously, we obtain 10 disjoint categories [13]: * ereccval: exclusive records that are also cycle valleys; * ereccdrise: exclusive records that are also cycle double rises; * eareccpeak: exclusive antirecords that are also cycle peaks; * eareccdfall: exclusive antirecords that are also cycle double falls; * rar: record-antirecords (these are always fixed points); * nrcpeak: neither-record-antirecords that are also cycle peaks; * nrcval: neither-record-antirecords that are also cycle valleys; * nrcdrise: neither-record-antirecords that are also cycle double rises; * nrcdfall: neither-record-antirecords that are also cycle double falls; * nrfix: neither-record-antirecords that are also fixed points. Clearly every index \(i\) belongs to exactly one of these 10 types; we call this the _record-and-cycle classification_. When studying D-permutations, we will use the _parity-refined record-and-cycle classification_, in which we distinguish even and odd fixed points. ### Permutation statistics: Crossings and nestings We now define (following [13]) some permutation statistics that count _crossings_ and _nestings_. First we associate to each permutation \(\sigma\in\mathfrak{S}_{N}\) a pictorial representation (Figure 1) by placing vertices \(1,2,\ldots,N\) along a horizontal axis and then drawing an arc from \(i\) to \(\sigma(i)\) above (resp. below) the horizontal axis in case \(\sigma(i)>i\) [resp. \(\sigma(i)<i\)]; if \(\sigma(i)=i\) we do not draw any arc. Each vertex thus has either out-degree = in-degree = 1 (if it is not a fixed point) or out-degree = in-degree = 0 (if it is a fixed point). Of course, the arrows on the arcs are redundant, because the arrow on an arc above (resp. below) the axis always points to the right (resp. left); we therefore omit the arrows for simplicity. We then say that a quadruplet \(i<j<k<l\) forms an * _upper crossing_ (ucross) if \(k=\sigma(i)\) and \(l=\sigma(j)\); * _lower crossing_ (lcross) if \(i=\sigma(k)\) and \(j=\sigma(l)\); * _upper nesting_ (unest) if \(l=\sigma(i)\) and \(k=\sigma(j)\); * _lower nesting_ (lnest) if \(i=\sigma(l)\) and \(j=\sigma(k)\). We also consider some "degenerate" cases with \(j=k\), by saying that a triplet \(i<j<l\) forms an * _upper joining_ (ujoin) if \(j=\sigma(i)\) and \(l=\sigma(j)\) [i.e. 
the index \(j\) is a cycle double rise]; * _lower joining_ (ljoin) if \(i=\sigma(j)\) and \(j=\sigma(l)\) [i.e. the index \(j\) is a cycle double fall]; * _upper pseudo-nesting_ (upsnest) if \(l=\sigma(i)\) and \(j=\sigma(j)\); * _lower pseudo-nesting_ (lpsnest) if \(i=\sigma(l)\) and \(j=\sigma(j)\). These are clearly degenerate cases of crossings and nestings, respectively. See Figure 2. Note that \(\mbox{upsnest}(\sigma)=\mbox{lpsnest}(\sigma)\) for all \(\sigma\), since for each fixed point \(j\), the number of pairs \((i,l)\) with \(i<j<l\) such that \(l=\sigma(i)\) has to equal the number of such pairs with \(i=\sigma(l)\); we therefore write these two statistics simply as \[\mbox{psnest}(\sigma)\;\stackrel{{\rm def}}{{=}}\;\mbox{upsnest }(\sigma)\;=\;\mbox{lpsnest}(\sigma)\;. \tag{2.6}\] And of course \(\mbox{ujoin}=\mbox{cdrise}\) and \(\mbox{ljoin}=\mbox{cdfall}\). We can further refine the four crossing/nesting categories by examining more closely the status of the inner index (\(j\) or \(k\)) whose _outgoing_ arc belonged to the crossing or nesting: we say that a quadruplet \(i<j<k<l\) forms an * _upper crossing of type cval_ (ucrosscval) if \(k=\sigma(i)\) and \(l=\sigma(j)\) and \(\sigma^{-1}(j)>j\); * _upper crossing of type cdrise_ (ucrosscdrise) if \(k=\sigma(i)\) and \(l=\sigma(j)\) and \(\sigma^{-1}(j)<j\); * _lower crossing of type cpeak_ (lcrosscpeak) if \(i=\sigma(k)\) and \(j=\sigma(l)\) and \(\sigma^{-1}(k)<k\); * _lower crossing of type cdfall_ (lcrosscdfall) if \(i=\sigma(k)\) and \(j=\sigma(l)\) and \(\sigma^{-1}(k)>k\); * _upper nesting of type cval_ (unestcval) if \(l=\sigma(i)\) and \(k=\sigma(j)\) and \(\sigma^{-1}(j)>j\); * _upper nesting of type cdrise_ (unestcdrise) if \(l=\sigma(i)\) and \(k=\sigma(j)\) and \(\sigma^{-1}(j)<j\); * _lower nesting of type cpeak_ (lnestcpeak) if \(i=\sigma(l)\) and \(j=\sigma(k)\) and \(\sigma^{-1}(k)<k\); * _lower nesting of type cdfall_ (lnestcdfall) if \(i=\sigma(l)\) and \(j=\sigma(k)\) and \(\sigma^{-1}(k)>k\). See Figure 3. Please note that for the "upper" quantities the distinguished index (i.e. the one for which we examine both \(\sigma\) and \(\sigma^{-1}\)) is in second position (\(j\)), while for the "lower" quantities the distinguished index is in third position (\(k\)). In fact, a central role in our work will be played (just as in [3, 13]) by a yet further refinement of these statistics: rather than counting the _total_ numbers of quadruplets \(i<j<k<l\) that form upper (resp. lower) crossings or nestings of the foregoing types, we will count the number of upper (resp. lower) crossings or nestings that use a particular vertex \(j\) (resp. \(k\)) in second (resp. third) position. More precisely, we define the _index-refined crossing and nesting statistics_ \[\mbox{ucross}(j,\sigma) = \#\{i<j<k<l\colon\;k=\sigma(i)\mbox{ and }l=\sigma(j)\} \tag{2.7a}\] \[\mbox{unest}(j,\sigma) = \#\{i<j<k<l\colon\;k=\sigma(j)\mbox{ and }l=\sigma(i)\}\] (2.7b) \[\mbox{lcross}(k,\sigma) = \#\{i<j<k<l\colon\;i=\sigma(k)\mbox{ and }j=\sigma(l)\}\] (2.7c) \[\mbox{lnest}(k,\sigma) = \#\{i<j<k<l\colon\;i=\sigma(l)\mbox{ and }j=\sigma(k)\} \tag{2.7d}\] Figure 2: Crossing, nesting, joining and pseudo-nesting. Figure 3: Refined categories of crossing and nesting. 
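For concreteness, the index-refined statistics (2.7) and the pseudo-nesting statistic (2.8) can be implemented directly from the definitions; the short helpers below (ours, for illustration) are applied to the permutation of Figure 1, and the last assertion checks the equality of the two expressions in (2.8) at the fixed point \(j=4\).

```python
# Direct implementation of the index-refined statistics (2.7a)-(2.7d) and (2.8).
# sigma is a dict {i: sigma(i)} on [n]; these are verification helpers, not code from [13].
def ucross_at(j, sigma):   # (2.7a): i < j < sigma(i) < sigma(j)
    return sum(1 for i in sigma if i < j < sigma[i] < sigma[j])

def unest_at(j, sigma):    # (2.7b): i < j < sigma(j) < sigma(i)
    return sum(1 for i in sigma if i < j < sigma[j] < sigma[i])

def lcross_at(k, sigma):   # (2.7c): sigma(k) < sigma(l) < k < l
    return sum(1 for l in sigma if sigma[k] < sigma[l] < k < l)

def lnest_at(k, sigma):    # (2.7d): sigma(l) < sigma(k) < k < l
    return sum(1 for l in sigma if sigma[l] < sigma[k] < k < l)

def psnest_at(j, sigma):   # (2.8); meaningful when sigma(j) = j
    return sum(1 for i in sigma if i < j < sigma[i])

# Example: the permutation of Figure 1, sigma = 9 3 7 4 6 11 2 8 10 1 5
vals = [9, 3, 7, 4, 6, 11, 2, 8, 10, 1, 5]
sigma = {i + 1: v for i, v in enumerate(vals)}
total_ucross = sum(ucross_at(j, sigma) for j in sigma)
total_lcross = sum(lcross_at(k, sigma) for k in sigma)
print("ucross =", total_ucross, " lcross =", total_lcross)

# the two counts in (2.8) agree at the fixed point j = 4
assert psnest_at(4, sigma) == sum(1 for i in sigma if i > 4 and sigma[i] < 4)
```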
Note that \(\operatorname{ucross}(j,\sigma)\) and \(\operatorname{unest}(j,\sigma)\) can be nonzero only when \(j\) is an excedance (that is, a cycle valley or a cycle double rise), while \(\operatorname{lcross}(k,\sigma)\) and \(\operatorname{lnest}(k,\sigma)\) can be nonzero only when \(k\) is an anti-excedance (that is, a cycle peak or a cycle double fall). When \(j\) is a fixed point, we also define the analogous quantity for pseudo-nestings: \[\operatorname{psnest}(j,\sigma)\ \stackrel{{\mathrm{def}}}{{=}}\ \#\{i<j\colon\,\sigma(i)>j\}\ =\ \#\{i>j\colon\,\sigma(i)<j\}\;. \tag{2.8}\] (Here the two expressions are equal because \(\sigma\) is a bijection from \([1,j)\cup(j,n]\) to itself.) In [13, eq. (2.20)] this quantity was called the _level_ of the fixed point \(j\) and was denoted \(\operatorname{lev}(j,\sigma)\). ## 3 Proof of Lemma 1.1 We will give two proofs of Lemma 1.1: one topological, and one combinatorial. The topological proof is extremely satisfying from an intuitive point of view, but it requires some nontrivial results on the topology of the plane to make it rigorous. The combinatorial proof is simple and manifestly rigorous, but it relies on an identity for the number of inversions [5, Lemme 3.1][12, eq. (40)][13, Proposition 2.24] whose proof is elementary but not entirely trivial. Topological Proof. Draw the diagram representing the permutation \(\sigma\) (Figure 1) such that each arc is a \(C^{1}\) non-self-intersecting curve that has a vertical tangent at each cycle peak and cycle valley and a horizontal tangent at each cycle double rise and cycle double fall, and such that each pair of arcs intersects either zero times (if they do not represent a crossing) or once transversally (if they do represent a crossing), and also such that each intersection point involves only two arcs (see Figure 4 for the example of Figure 1 redrawn according to these rules). Then each cycle becomes a \(C^{1}\) closed curve with a finite number of self-intersections, all of which are transversal double points; following Whitney [18, pp. 280-281], we call such a curve _normal_. The total number of intersections in the diagram is \(\operatorname{ucross}+\operatorname{lcross}\). Each fixed point is of course a cycle. So we focus henceforth on cycles of length \(\geq 2\). We will prove the following two facts: * The number of self-intersections in a cycle is equal modulo 2 to the number of cycle peaks (or alternatively, cycle valleys) in that cycle, plus 1. * The number of intersections between two distinct cycles is equal modulo 2 to zero. Together these facts will prove Lemma 1.1. Proof of (a). The _rotation angle_ (or _tangent winding angle_) of a \(C^{1}\) closed curve is the total angle through which the tangent vector turns while traversing the curve.1 With the above conventions for the arc diagram (with arcs traversed in the direction of the arrows, i.e. clockwise) it is easy to see that the tangent turns by an angle \(-\pi\) from each cycle valley to the next cycle peak, and again by an angle \(-\pi\) from each cycle peak to the next cycle valley. Therefore, a cycle containing \(M\) cycle peaks (and hence \(M\) cycle valleys) has a rotation angle \(-2\pi M\). On the other hand, Whitney [18, Theorem 2] proved that the rotation angle for a \(C^{1}\) normal closed curve \(f\) is \[\gamma(f)\;=\;2\pi(\mu+N^{+}-N^{-}) \tag{3.1}\] where \(N^{+}\) (resp. \(N^{-}\)) is the number of positive (resp. 
negative) crossings, and \(\mu\) is either \(+1\) or \(-1\).2 It follows that the number of self-intersections in this cycle, namely \(N^{+}+N^{-}\), equals \(M+1\) modulo \(2\). Footnote 2: The definition of positive and negative crossings [18, p. 281] depends on the choice of a starting point on the curve; if the crossing point is visited first with tangent vector \(\mathbf{v}_{1}\) and then with tangent vector \(\mathbf{v}_{2}\), the crossing point is called _positive_ if \(\mathbf{v}_{1}\times\mathbf{v}_{2}<0\) using the right-hand rule, and _negative_ if \(\mathbf{v}_{1}\times\mathbf{v}_{2}>0\) using the right-hand rule. The hypotheses of [18, Theorem 2] require that the starting point be an _outside_ starting point, i.e. the whole curve must lie on one side of the tangent line to the curve at the starting point. That requirement is easily fulfilled here, e.g. by taking the starting point to be the smallest or largest element of the cycle. In this situation, [18, Theorem 2] also specifies explicitly whether \(\mu\) is \(+1\) or \(-1\); in the present case it is \(\mu=-1\). See also Umehara and Yamada [17, pp. 34–38] for an exposition of Whitney's proof. They use the term "generic" for what Whitney calls "normal". See e.g. [16] or [14, Section 0.3] for proofs of the Jordan Curve Theorem.

Proof of (b). This is a general property of \(C^{1}\) normal closed curves in the plane that have finitely many mutual intersections, all of which are transversal double points: in this situation the number of mutual intersections is even. This intuitively obvious fact goes back at least to Tait [15, statement III]. For completeness we give a proof: Let \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) be \(C^{1}\) normal closed curves in the plane; and suppose that \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) have finitely many intersections, all of which are transversal double points. Consider first the case in which \(\mathcal{C}_{1}\) is a simple closed curve, i.e. has no self-intersections. Then the Jordan Curve Theorem tells us that \(\mathbb{R}^{2}\setminus\mathcal{C}_{1}\) has two connected components, an interior and an exterior.3 We put an orientation on \(\mathcal{C}_{2}\) and traverse \(\mathcal{C}_{2}\) from some starting point. Each time \({\cal C}_{2}\) intersects \({\cal C}_{1}\), it must either go from the interior to the exterior of \({\cal C}_{1}\) or vice versa (because the intersections are transversal). Since \({\cal C}_{2}\) returns to its starting point, the number of intersections between \({\cal C}_{2}\) and \({\cal C}_{1}\) must be even. When \({\cal C}_{1}\) is not a simple closed curve but has finitely many self-intersections, we can write it as a union of finitely many simple closed curves \({\cal C}_{1}^{i}\) that are disjoint except for intersections at the self-intersection points of \({\cal C}_{1}\). (The graph whose vertices are the self-intersection points and whose edges are the arcs of \({\cal C}_{1}\) between two successive self-intersections is an Eulerian graph; and an Eulerian graph can be written as the edge-disjoint union of cycles.)

Figure 4: Diagram of the permutation \(\sigma=9\,3\,7\,4\,6\,11\,2\,8\,10\,1\,5=(1,9,10)\,(2,3,7)\,(4)\,(5,6,11)\,(8)\in \mathfrak{S}_{11}\) shown in Figure 1, drawn according to the rules stated in the text.
Then \({\cal C}_{2}\) has an even number of intersections with each \({\cal C}_{1}^{i}\), hence also with \({\cal C}_{1}\) (since by hypothesis none of those intersections occur at the self-intersection points of \({\cal C}_{1}\)).4 This completes the proof. Footnote 4: Equivalently, the graph \(G\) whose vertices are the self-intersection points and whose edges are the arcs of \({\cal C}_{1}\) between two successive self-intersections is Eulerian; so its dual \(G^{*}\) is bipartite. Then the closed curve \({\cal C}_{2}\) must intersect the edges of \(G\) an even number of times. This completes the proof of Lemma 1.1. \(\Box\)

Combinatorial Proof. Let \(\mbox{cyc}(\sigma)=k\), and let \(p_{1},\ldots,p_{k}\) be the sizes of the \(k\) cycles of \(\sigma\). Then \[n+k\;=\;\sum_{i=1}^{k}(p_{i}+1)\;\equiv\;\#(\mbox{cycles of $\sigma$ of even length})\quad(\mbox{mod $2$})\;. \tag{3.2}\] Therefore \[(-1)^{n+k}\;=\;(-1)^{\#(\mbox{cycles of $\sigma$ of even length})}\;. \tag{3.3}\] Here the right-hand side is simply the parity of \(\sigma\), usually denoted \(\mbox{sgn}(\sigma)\). As is well known (e.g. [11, section 7.4]), the parity of \(\sigma\) is also given by \[\mbox{sgn}(\sigma)\;=\;(-1)^{\mbox{\scriptsize inv}(\sigma)}\;, \tag{3.4}\] where \[\mbox{inv}(\sigma)\;\stackrel{{\rm def}}{{=}}\;\#\{(i,j)\colon i<j\mbox{ and }\sigma(i)>\sigma(j)\} \tag{3.5}\] is the number of inversions in \(\sigma\). We therefore have \[n+k\;\equiv\;\mbox{inv}(\sigma)\quad(\mbox{mod $2$})\;. \tag{3.6}\] On the other hand, we recall a formula [13, Proposition 2.24] for the number of inversions in terms of cycle, crossing and nesting statistics: \[\mbox{inv}\;=\;\mbox{cval}+\mbox{cdrise}+\mbox{cdfall}+\mbox{ucross}+\mbox{lcross}+2(\mbox{unest}+\mbox{lnest}+\mbox{psnest})\;. \tag{3.7}\] Combining (3.6) and (3.7) yields \[n+k\;\equiv\;(\mbox{cval}+\mbox{cdrise}+\mbox{cdfall})+(\mbox{ucross}+\mbox{lcross})\quad(\mbox{mod $2$})\;, \tag{3.8}\] which can be rewritten as \[k\;\equiv\;(\mbox{cpeak}+\mbox{fix})+(\mbox{ucross}+\mbox{lcross})\quad(\mbox{mod $2$}) \tag{3.9}\] since \(n=\mbox{cpeak}+\mbox{cval}+\mbox{cdrise}+\mbox{cdfall}+\mbox{fix}\). This proves (1.1a). Then (1.1b) follows because \(\mbox{cpeak}=\mbox{cval}\). \(\Box\)

## 4 Results for permutations

We find it convenient to start from the first "master" J-fraction for permutations [13, Theorem 2.9] and then to specialize.
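Before specializing, the parity identity (3.9) (equivalently (1.1a)) can be checked by brute force for small \(n\); the following Python sketch, not part of the paper, does exactly that:

```python
# Sketch (not from the paper): verify cyc = cpeak + fix + ucross + lcross (mod 2),
# i.e. eq. (3.9)/(1.1a), for all permutations of [n] with small n.
from itertools import combinations, permutations

def lemma_1_1_holds(n):
    for perm in permutations(range(1, n + 1)):
        s = {i + 1: v for i, v in enumerate(perm)}
        sinv = {v: k for k, v in s.items()}
        # number of cycles
        cyc, seen = 0, set()
        for start in s:
            if start not in seen:
                cyc += 1
                j = start
                while j not in seen:
                    seen.add(j)
                    j = s[j]
        fix = sum(1 for i in s if s[i] == i)
        cpeak = sum(1 for i in s if s[i] < i and sinv[i] < i)  # cycle peaks
        ucross = sum(1 for i, j, k, l in combinations(s, 4) if s[i] == k and s[j] == l)
        lcross = sum(1 for i, j, k, l in combinations(s, 4) if s[k] == i and s[l] == j)
        if (cyc - (cpeak + fix + ucross + lcross)) % 2 != 0:
            return False
    return True

print(lemma_1_1_holds(5))  # True
```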
### Master J-fraction

Following [13, Section 2.7], we introduce five infinite families of indeterminates \(\mathbf{a}=(\mathfrak{a}_{\ell,\ell^{\prime}})_{\ell,\ell^{\prime}\geq 0}\), \(\mathbf{b}=(\mathfrak{b}_{\ell,\ell^{\prime}})_{\ell,\ell^{\prime}\geq 0}\), \(\mathbf{c}=(\mathfrak{c}_{\ell,\ell^{\prime}})_{\ell,\ell^{\prime}\geq 0}\), \(\mathbf{d}=(\mathfrak{d}_{\ell,\ell^{\prime}})_{\ell,\ell^{\prime}\geq 0}\), \(\mathbf{e}=(\mathfrak{e}_{\ell})_{\ell\geq 0}\) and then define the polynomials

\[P_{n}(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d},\mathbf{e},\lambda)\;=\;\sum_{\sigma\in\mathfrak{S}_{n}}\lambda^{\mathrm{cyc}(\sigma)}\prod_{i\in\mathrm{Cval}(\sigma)}\mathfrak{a}_{\mathrm{ucross}(i,\sigma),\,\mathrm{unest}(i,\sigma)}\prod_{i\in\mathrm{Cpeak}(\sigma)}\mathfrak{b}_{\mathrm{lcross}(i,\sigma),\,\mathrm{lnest}(i,\sigma)}\;\times\]
\[\prod_{i\in\mathrm{Cdfall}(\sigma)}\mathfrak{c}_{\mathrm{lcross}(i,\sigma),\,\mathrm{lnest}(i,\sigma)}\;\prod_{i\in\mathrm{Cdrise}(\sigma)}\mathfrak{d}_{\mathrm{ucross}(i,\sigma),\,\mathrm{unest}(i,\sigma)}\prod_{i\in\mathrm{Fix}(\sigma)}\mathfrak{e}_{\mathrm{psnest}(i,\sigma)}\;. \tag{4.1}\]

By Lemma 1.1, the case \(\lambda=-1\) is obtained from the first master J-fraction for permutations at \(\lambda=1\) [13, Theorem 2.9] by inserting a factor \(-1\) for each fixed point, for each cycle peak (or alternatively, cycle valley), and for each upper or lower crossing. We therefore have:

**Proposition 4.1** (Master J-fraction for permutations, \(\lambda=-1\)).: _The ordinary generating function of the polynomials \(P_{n}(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d},\mathbf{e},-1)\) has the J-type continued fraction_

\[\sum_{n=0}^{\infty}P_{n}(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d},\mathbf{e},-1)\;t^{n}\;=\;\frac{1}{1+\mathfrak{e}_{0}t+\frac{\mathfrak{a}_{00}\mathfrak{b}_{00}t^{2}}{1-(\mathfrak{c}_{00}+\mathfrak{d}_{00}-\mathfrak{e}_{1})t+\frac{(\mathfrak{a}_{01}-\mathfrak{a}_{10})(\mathfrak{b}_{01}-\mathfrak{b}_{10})t^{2}}{1-\cdots}}} \tag{4.2}\]

_with coefficients_

\[\gamma_{0}\;=\;-\mathfrak{e}_{0} \tag{4.3a}\]
\[\gamma_{n}\;=\;-\mathfrak{e}_{n}\,+\,\sum_{\ell=0}^{n-1}(-1)^{\ell}\,\mathfrak{c}_{\ell,n-1-\ell}\,+\,\sum_{\ell=0}^{n-1}(-1)^{\ell}\,\mathfrak{d}_{\ell,n-1-\ell}\qquad\text{for }n\geq 1 \tag{4.3b}\]
\[\beta_{n}\;=\;-\left(\sum_{\ell=0}^{n-1}(-1)^{\ell}\,\mathfrak{a}_{\ell,n-1-\ell}\right)\left(\sum_{\ell=0}^{n-1}(-1)^{\ell}\,\mathfrak{b}_{\ell,n-1-\ell}\right) \tag{4.3c}\]

### \(p,q\) J-fraction

Consider now the polynomial

\[P_{n}(x_{1},x_{2},y_{1},y_{2},u_{1},u_{2},v_{1},v_{2},\mathbf{w},p_{+1},p_{+2},p_{-1},p_{-2},q_{+1},q_{+2},q_{-1},q_{-2},s,\lambda)\;=\]
\[\sum_{\sigma\in\mathfrak{S}_{n}}x_{1}^{\mathrm{eareccpeak}(\sigma)}x_{2}^{\mathrm{eareccdfall}(\sigma)}y_{1}^{\mathrm{ereccval}(\sigma)}y_{2}^{\mathrm{ereccdrise}(\sigma)}\,u_{1}^{\mathrm{nrcpeak}(\sigma)}u_{2}^{\mathrm{nrcdfall}(\sigma)}v_{1}^{\mathrm{nrcval}(\sigma)}v_{2}^{\mathrm{nrcdrise}(\sigma)}\;\times\]
\[p_{-1}^{\mathrm{lcrosscpeak}(\sigma)}p_{-2}^{\mathrm{lcrosscdfall}(\sigma)}p_{+1}^{\mathrm{ucrosscval}(\sigma)}p_{+2}^{\mathrm{ucrosscdrise}(\sigma)}\,q_{-1}^{\mathrm{lnestcpeak}(\sigma)}q_{-2}^{\mathrm{lnestcdfall}(\sigma)}q_{+1}^{\mathrm{unestcval}(\sigma)}q_{+2}^{\mathrm{unestcdrise}(\sigma)}\;\times\]
\[s^{\mathrm{psnest}(\sigma)}\,\Bigl(\prod_{i\in\mathrm{Fix}(\sigma)}w_{\mathrm{psnest}(i,\sigma)}\Bigr)\,\lambda^{\mathrm{cyc}(\sigma)}\;. \tag{4.6}\]

(The various statistics appearing here are those of [13].) It follows that the polynomial (4.6) is obtained from (4.1) by making the specializations [13, eq. (2.81)]
\[\mathsf{a}_{\ell,\ell^{\prime}} = p_{+1}^{\ell}q_{+1}^{\ell^{\prime}}\,\times\,\begin{cases}y_{1}&\text{if }\ell^{\prime}=0\\ v_{1}&\text{if }\ell^{\prime}\geq 1\end{cases} \tag{4.7a}\]
\[\mathsf{b}_{\ell,\ell^{\prime}} = p_{-1}^{\ell}q_{-1}^{\ell^{\prime}}\,\times\,\begin{cases}x_{1}&\text{if }\ell^{\prime}=0\\ u_{1}&\text{if }\ell^{\prime}\geq 1\end{cases} \tag{4.7b}\]
\[\mathsf{c}_{\ell,\ell^{\prime}} = p_{-2}^{\ell}q_{-2}^{\ell^{\prime}}\,\times\,\begin{cases}x_{2}&\text{if }\ell^{\prime}=0\\ u_{2}&\text{if }\ell^{\prime}\geq 1\end{cases} \tag{4.7c}\]
\[\mathsf{d}_{\ell,\ell^{\prime}} = p_{+2}^{\ell}q_{+2}^{\ell^{\prime}}\,\times\,\begin{cases}y_{2}&\text{if }\ell^{\prime}=0\\ v_{2}&\text{if }\ell^{\prime}\geq 1\end{cases} \tag{4.7d}\]
\[\mathsf{e}_{\ell} = s^{\ell}w_{\ell} \tag{4.7e}\]

Making these specializations in Proposition 4.1 -- or equivalently, attaching a minus sign to the variables \(x_{1},u_{1},p_{+1},p_{+2},p_{-1},p_{-2},w_{i}\) in [13, Theorem 2.7] -- we obtain:

**Proposition 4.2** (\(p,q\) J-fraction for permutations, \(\lambda=-1\)).: _The ordinary generating function of the polynomials (4.6) at \(\lambda=-1\) has the J-type continued fraction_

\[\sum_{n=0}^{\infty}P_{n}(x_{1},x_{2},y_{1},y_{2},u_{1},u_{2},v_{1},v_{2},\mathbf{w},p_{+1},p_{+2},p_{-1},p_{-2},q_{+1},q_{+2},q_{-1},q_{-2},s,-1)\,t^{n}\;=\]
\[\frac{1}{1+w_{0}t+\frac{x_{1}y_{1}t^{2}}{1-(x_{2}+y_{2}-sw_{1})t+\frac{(-p_{-1}x_{1}+q_{-1}u_{1})(-p_{+1}y_{1}+q_{+1}v_{1})t^{2}}{1-\cdots}}} \tag{4.8}\]

_with coefficients_

\[\gamma_{0}\;=\;-w_{0} \tag{4.9a}\]
\[\gamma_{n}\;=\;(-p_{-2})^{n-1}x_{2}+q_{-2}[n-1]_{-p_{-2},q_{-2}}u_{2}\,+\,(-p_{+2})^{n-1}y_{2}+q_{+2}[n-1]_{-p_{+2},q_{+2}}v_{2}\,-\,s^{n}w_{n}\qquad\text{for }n\geq 1 \tag{4.9b}\]
\[\beta_{n}\;=\;-\bigl{(}(-p_{-1})^{n-1}x_{1}+q_{-1}[n-1]_{-p_{-1},q_{-1}}u_{1}\bigr{)}\bigl{(}(-p_{+1})^{n-1}y_{1}+q_{+1}[n-1]_{-p_{+1},q_{+1}}v_{1}\bigr{)} \tag{4.9c}\]

### Simple J-fraction

Finally, we define the simple polynomial

\[P_{n}(x_{1},x_{2},y_{1},y_{2},u_{1},u_{2},v_{1},v_{2},\mathbf{w},\lambda)\;\stackrel{{\rm def}}{{=}}\;P_{n}(x_{1},x_{2},y_{1},y_{2},u_{1},u_{2},v_{1},v_{2},\mathbf{w},1,1,1,1,1,1,1,1,1,\lambda)\;, \tag{4.10}\]

that is, the polynomial obtained by setting \(p_{+1}=p_{+2}=p_{-1}=p_{-2}=q_{+1}=q_{+2}=q_{-1}=q_{-2}=s=1\) in (4.6).
Making this same specialization in Proposition 4.2 and observing that \[[n-1]_{-1,1}\;=\;\begin{cases}1&\text{if $n$ is even}\\ 0&\text{if $n$ is odd}\end{cases} \tag{4.11}\] we obtain: **Proposition 4.3** (Simple J-fraction for permutations, \(\lambda=-1\)).: _The ordinary generating function of the polynomials (4.10) at \(\lambda=-1\) has the J-type continued fraction_ \[\sum_{n=0}^{\infty}P_{n}(x_{1},x_{2},y_{1},y_{2},u_{1},u_{2},v_{1 },v_{2},\mathbf{w},-1)\;t^{n}\;=\] \[\frac{1}{1+w_{0}t+\frac{x_{1}y_{1}t^{2}}{1-(x_{2}\!+\!y_{2}\!-\!w_ {1})t+\frac{(x_{1}\!-\!u_{1})(y_{1}\!-\!v_{1})t^{2}}{1-(-x_{2}\!+\!u_{2}\!-\!y_ {2}\!+\!v_{2}\!-\!w_{2})t+\frac{x_{1}y_{1}t^{2}}{1-\cdots}}}}\] _with coefficients_ \[\gamma_{0} = -w_{0} \tag{4.13a}\] \[\gamma_{n} = \begin{cases}x_{2}+y_{2}-w_{n}&\text{if $n$ is odd}\\ -x_{2}+u_{2}-y_{2}+v_{2}-w_{n}&\text{if $n$ is even and $\geq 2$}\end{cases}\] (4.13b) \[\beta_{n} = \begin{cases}-x_{1}y_{1}&\text{if $n$ is odd}\\ -(x_{1}-u_{1})(y_{1}-v_{1})&\text{if $n$ is even}\end{cases} \tag{4.13c}\] ### Corollary for cycle-alternating permutations We recall [4, 6, 13] that a _cycle-alternating permutation_ is a permutation of \([2n]\) that has no cycle double rises, cycle double falls, or fixed points; Deutsch and Elizalde [6, Proposition 2.2] showed that the number of cycle-alternating permutations of \([2n]\) is the secant number \(E_{2n}\) (see also Dumont [7, pp. 37, 40] and Biane [2, section 6]). In this subsection, we will obtain continued fractions for cycle-alternating permutations at \(\lambda=-1\) by specializing our master J-fraction (Proposition 4.1) to suppress cycle double rises, cycle double falls and fixed points, and then using [4, Lemma 4.2] to interpret the parity of cycle peaks and cycle valleys in terms of crossings and nestings. Let \(P_{n}(\mathbf{a},\mathbf{b},\lambda)\) denote the polynomial (4.1) specialized to \(\mathbf{c}=\mathbf{d}=\mathbf{e}=\mathbf{0}\); it enumerates cycle-alternating permutations according to the index-refined crossing and nesting statistics associated to its cycle peaks and cycle valleys. Note that \(P_{n}\) is nonvanishing only for even \(n\). The J-fraction of Proposition 4.1 then becomes an S-fraction in the variable \(t^{2}\); after changing \(t^{2}\) to \(t\), we have: **Proposition 4.4** (Master S-fraction for cycle-alternating permutations, \(\lambda=-1\)).: _The ordinary generating function of the polynomials \(P_{2n}(\mathbf{a},\mathbf{b},-1)\) has the S-type continued fraction_ \[\sum_{n=0}^{\infty}P_{2n}(\mathbf{a},\mathbf{b},-1)\,t^{n}\;=\; \frac{1}{1+\frac{\mathsf{a}_{00}\mathsf{b}_{00}t}{1+\frac{(\mathsf{a}_{01}- \mathsf{a}_{10})(\mathsf{b}_{01}-\mathsf{b}_{10})t}{1+\frac{(\mathsf{a}_{02}- \mathsf{a}_{11}+\mathsf{a}_{20})(\mathsf{b}_{02}-\mathsf{b}_{11}+\mathsf{b}_{2 0})t}{1-\cdots}}}} \tag{4.14}\] _with coefficients_ \[\alpha_{n}\;=\;-\left(\sum_{\ell=0}^{n-1}(-1)^{\ell}\,\mathsf{a}_{\ell,n-1- \ell}\right)\left(\sum_{\ell=0}^{n-1}(-1)^{\ell}\,\mathsf{b}_{\ell,n-1-\ell} \right)\,. \tag{4.15}\] We can use this master S-fraction to obtain a continued fraction that distinguishes cycle peaks and cycle valleys according to their parity. 
To do this, we use [4, Lemma 4.2]:

**Lemma 4.5** (Key lemma from [4]).: _If \(\sigma\) is a cycle-alternating permutation of \([2n]\), then_
\[\mbox{cycle valleys:}\quad\mbox{ucross}(i,\sigma)+\mbox{unest}(i,\sigma)\;\equiv\;i+1\quad(\mbox{mod $2$}) \tag{4.16}\]
\[\mbox{cycle peaks:}\quad\;\,\mbox{lcross}(i,\sigma)+\mbox{lnest}(i,\sigma)\;\equiv\;i\quad(\mbox{mod $2$})\]
_for every cycle valley \(i\) and every cycle peak \(i\), respectively._

Now denote by \(Q_{n}(x_{\rm e},y_{\rm e},u_{\rm e},v_{\rm e},x_{\rm o},y_{\rm o},u_{\rm o},v_{\rm o},p_{-1},p_{-2},p_{+1},p_{+2},q_{-1},q_{-2},q_{+1},q_{+2},\lambda)\) the polynomial (4.17), taken from [4] with a factor \(\lambda^{\rm cyc(\sigma)}\) included, which enumerates cycle-alternating permutations of \([2n]\) and distinguishes cycle peaks and cycle valleys according to their parity.
It follows that the polynomials (4.17) can be obtained from the master polynomials \(P_{n}(\mathbf{a},\mathbf{b},\lambda)\) by making the specializations [4, eq. (4.33)] \[\mathbf{a}_{\ell,\ell^{\prime}} = \begin{cases}p_{+1}^{\ell}y_{\text{o}}&\text{if $\ell^{\prime}=0$ and $\ell+\ell^{\prime}$ is even}\\ p_{+1}^{\ell}q_{+1}^{\ell^{\prime}}v_{\text{o}}&\text{if $\ell^{\prime}\geq 1$ and $\ell+\ell^{\prime}$ is even}\\ p_{+2}^{\ell}y_{\text{e}}&\text{if $\ell^{\prime}=0$ and $\ell+\ell^{\prime}$ is odd}\\ p_{+2}^{\ell}q_{+2}^{\ell^{\prime}}v_{\text{e}}&\text{if $\ell^{\prime}\geq 1$ and $\ell+\ell^{\prime}$ is odd}\end{cases} \tag{4.20a}\] \[\mathbf{b}_{\ell,\ell^{\prime}} = \begin{cases}p_{-1}^{\ell}x_{\text{e}}&\text{if $\ell^{\prime}=0$ and $\ell+\ell^{\prime}$ is even}\\ p_{-1}^{\ell}q_{-1}^{\ell^{\prime}}u_{\text{e}}&\text{if $\ell^{\prime}\geq 1$ and $\ell+\ell^{\prime}$ is even}\\ p_{-2}^{\ell}x_{\text{o}}&\text{if $\ell^{\prime}=0$ and $\ell+\ell^{\prime}$ is odd}\\ p_{-2}^{\ell}q_{-2}^{\ell^{\prime}}u_{\text{o}}&\text{if $\ell^{\prime}\geq 1$ and $\ell+\ell^{\prime}$ is odd}\end{cases} \tag{4.20b}\] Inserting these specializations into Proposition 4.4, we obtain:

**Proposition 4.6** (\(p,q\) S-fraction for cycle-alternating permutations, \(\lambda=-1\)).: _The ordinary generating function of the polynomials (4.17) at \(\lambda=-1\) has the S-type continued fraction_ \[\sum_{n=0}^{\infty}Q_{n}(x_{\text{e}},y_{\text{e}},u_{\text{e}},v_{\text{e}},x_{\text{o}},y_{\text{o}},u_{\text{o}},v_{\text{o}},p_{-1},p_{-2},p_{+1},p_{+2},q_{-1},q_{-2},q_{+1},q_{+2},-1)\;t^{n}\] \[=\;\frac{1}{1+\frac{x_{\text{e}}y_{\text{o}}t}{1+\frac{(-p_{-2}x_{\text{o}}+q_{-2}u_{\text{o}})(-p_{+2}y_{\text{e}}+q_{+2}v_{\text{e}})t}{1+\frac{(p_{-1}^{2}x_{\text{e}}+q_{-1}[2]_{-p_{-1},q_{-1}}u_{\text{e}})(p_{+1}^{2}y_{\text{o}}+q_{+1}[2]_{-p_{+1},q_{+1}}v_{\text{o}})t}{1-\cdots}}}} \tag{4.21}\] _with coefficients_ \[\alpha_{2k-1} = -(p_{-1}^{2k-2}x_{\text{e}}+q_{-1}[2k-2]_{-p_{-1},q_{-1}}u_{\text{e}})\;(p_{+1}^{2k-2}y_{\text{o}}+q_{+1}[2k-2]_{-p_{+1},q_{+1}}v_{\text{o}}) \tag{4.22a}\] \[\alpha_{2k} = -(-p_{-2}^{2k-1}x_{\text{o}}+q_{-2}[2k-1]_{-p_{-2},q_{-2}}u_{\text{o}})\;(-p_{+2}^{2k-1}y_{\text{e}}+q_{+2}[2k-1]_{-p_{+2},q_{+2}}v_{\text{e}}) \tag{4.22b}\]

Finally, denote by \(Q_{n}(x_{\text{e}},y_{\text{e}},u_{\text{e}},v_{\text{e}},x_{\text{o}},y_{\text{o}},u_{\text{o}},v_{\text{o}},\lambda)\) the polynomial (4.17) specialized to \(p_{+1}=p_{+2}=p_{-1}=p_{-2}=q_{+1}=q_{+2}=q_{-1}=q_{-2}=1\). Setting \(\lambda=-1\), we obtain:

**Proposition 4.7** (Simple S-fraction for cycle-alternating permutations, \(\lambda=-1\)).: _The ordinary generating function of the polynomials \(Q_{n}(x_{\text{e}},y_{\text{e}},u_{\text{e}},v_{\text{e}},x_{\text{o}},y_{\text{o}},u_{\text{o}},v_{\text{o}},-1)\) has the S-type continued fraction_ \[\sum_{n=0}^{\infty}Q_{n}(x_{\text{e}},y_{\text{e}},u_{\text{e}},v_{\text{e}},x_{\text{o}},y_{\text{o}},u_{\text{o}},v_{\text{o}},-1)\;t^{n}\;=\;\frac{1}{1+\frac{x_{\text{e}}y_{\text{o}}t}{1+\frac{(x_{\text{o}}-u_{\text{o}})(y_{\text{e}}-v_{\text{e}})t}{1+\frac{x_{\text{e}}y_{\text{o}}t}{1-\cdots}}}} \tag{4.23}\] _with coefficients_ \[\alpha_{2k-1} = -x_{\rm e}y_{\rm o} \tag{4.24a}\] \[\alpha_{2k} = -(x_{\rm o}-u_{\rm o})\,(y_{\rm e}-v_{\rm e}) \tag{4.24b}\] This proves the continued fraction that was conjectured in [4, eq. (A.6)].
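A quick numerical sanity check, not part of the paper: setting all variables other than \(\lambda\) equal to \(1\) in Proposition 4.7 gives \(\alpha_{2k}=0\), so the S-fraction collapses to \(1/(1+t)\) and hence \(\sum_{\sigma}(-1)^{\mathrm{cyc}(\sigma)}=(-1)^{n}\), where the sum runs over cycle-alternating permutations of \([2n]\). The sketch below verifies this by enumeration:

```python
# Sketch (not from the paper): check sum over cycle-alternating sigma in S_{2n}
# of (-1)^{cyc(sigma)} equals (-1)^n, the all-variables-equal-to-1 consequence
# of Proposition 4.7.
from itertools import permutations

def is_cycle_alternating(s):
    # no fixed points, cycle double rises, or cycle double falls
    sinv = {v: k for k, v in s.items()}
    for i in s:
        if s[i] == i:
            return False
        if (s[i] > i) != (sinv[i] > i):   # a double rise or double fall
            return False
    return True

def cyc(s):
    seen, c = set(), 0
    for i in s:
        if i not in seen:
            c += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = s[j]
    return c

for n in range(1, 4):
    total = 0
    for perm in permutations(range(1, 2 * n + 1)):
        s = {i + 1: v for i, v in enumerate(perm)}
        if is_cycle_alternating(s):
            total += (-1) ** cyc(s)
    print(n, total, (-1) ** n)   # the last two columns agree
```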
## 5 Results for D-permutations

We recall [3, 8, 9, 10] that a _D-permutation_ is a permutation of \([2n]\) satisfying \(2k-1\leq\sigma(2k-1)\) and \(2k\geq\sigma(2k)\) for all \(k\); D-permutations provide a combinatorial model for the Genocchi and median Genocchi numbers. We write \(\mathfrak{D}_{2n}\) for the set of D-permutations of \([2n]\). We proceed in the same way as in the preceding section, beginning with the "master" T-fraction and then obtaining the others by specialization.

### Master T-fraction

Following [3, Section 3.4], we introduce six infinite families of indeterminates \(\mathbf{a}=(\mathbf{a}_{\ell,\ell^{\prime}})_{\ell,\ell^{\prime}\geq 0}\), \(\mathbf{b}=(\mathbf{b}_{\ell,\ell^{\prime}})_{\ell,\ell^{\prime}\geq 0}\), \(\mathbf{c}=(\mathbf{c}_{\ell,\ell^{\prime}})_{\ell,\ell^{\prime}\geq 0}\), \(\mathbf{d}=(\mathbf{d}_{\ell,\ell^{\prime}})_{\ell,\ell^{\prime}\geq 0}\), \(\mathbf{e}=(\mathbf{e}_{\ell})_{\ell\geq 0}\), \(\mathbf{f}=(\mathbf{f}_{\ell})_{\ell\geq 0}\) and then define the polynomials \[Q_{n}(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d},\mathbf{e},\mathbf{f},\lambda)\;=\] \[\sum_{\sigma\in\mathfrak{D}_{2n}}\lambda^{\rm cyc(\sigma)}\prod_{i\in{\rm Cval}(\sigma)}\mathbf{a}_{{\rm ucross}(i,\sigma),\,{\rm unest}(i,\sigma)}\prod_{i\in{\rm Cpeak}(\sigma)}\mathbf{b}_{{\rm lcross}(i,\sigma),\,{\rm lnest}(i,\sigma)}\;\times\] \[\prod_{i\in{\rm Cdfall}(\sigma)}\mathbf{c}_{{\rm lcross}(i,\sigma),\,{\rm lnest}(i,\sigma)}\prod_{i\in{\rm Cdrise}(\sigma)}\mathbf{d}_{{\rm ucross}(i,\sigma),\,{\rm unest}(i,\sigma)}\;\times\] \[\prod_{i\in{\rm Evenfix}(\sigma)}\mathbf{e}_{{\rm psnest}(i,\sigma)}\prod_{i\in{\rm Oddfix}(\sigma)}\mathbf{f}_{{\rm psnest}(i,\sigma)}\;. \tag{5.1}\] (This is [3, eq. (3.30)] with a factor \(\lambda^{\rm cyc(\sigma)}\) included.) Then the first master T-fraction for D-permutations [3, Theorem 3.11] handles the case \(\lambda=1\): it states that the ordinary generating function of the polynomials \(Q_{n}(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d},\mathbf{e},\mathbf{f},1)\) has the T-type continued fraction \[\sum_{n=0}^{\infty}Q_{n}(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d},\mathbf{e},\mathbf{f},1)\;t^{n}\;=\;\frac{1}{1-\mathbf{e}_{0}\mathbf{f}_{0}t-\frac{\mathbf{a}_{00}\mathbf{b}_{00}t}{1-\frac{(\mathbf{c}_{00}+\mathbf{e}_{1})(\mathbf{d}_{00}+\mathbf{f}_{1})t}{1-\frac{(\mathbf{a}_{01}+\mathbf{a}_{10})(\mathbf{b}_{01}+\mathbf{b}_{10})t}{1-\frac{(\mathbf{c}_{01}+\mathbf{c}_{10}+\mathbf{e}_{2})(\mathbf{d}_{01}+\mathbf{d}_{10}+\mathbf{f}_{2})t}{1-\cdots}}}}} \tag{5.2}\] with coefficients \[\alpha_{2k-1}\;=\;\biggl{(}\sum_{\ell=0}^{k-1}\mathsf{a}_{\ell,k-1-\ell}\biggr{)}\biggl{(}\sum_{\ell=0}^{k-1}\mathsf{b}_{\ell,k-1-\ell}\biggr{)} \tag{5.3a}\] \[\alpha_{2k}\;=\;\biggl{(}\mathsf{e}_{k}\,+\,\sum_{\ell=0}^{k-1}\mathsf{c}_{\ell,k-1-\ell}\biggr{)}\biggl{(}\mathsf{f}_{k}\,+\,\sum_{\ell=0}^{k-1}\mathsf{d}_{\ell,k-1-\ell}\biggr{)} \tag{5.3b}\] \[\delta_{1}\;=\;\mathsf{e}_{0}\mathsf{f}_{0} \tag{5.3c}\] \[\delta_{n}\;=\;0\qquad\text{for }n\geq 2 \tag{5.3d}\] By Lemma 1.1, we obtain the case \(\lambda=-1\) by inserting a factor \(-1\) for each even or odd fixed point, for each cycle peak (or alternatively, cycle valley), and for each lower or upper crossing.
We therefore have:

**Proposition 5.1** (Master T-fraction for D-permutations, \(\lambda=-1\)).: _The ordinary generating function of the polynomials \(Q_{n}(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d},\mathbf{e},\mathbf{f},-1)\) has the T-type continued fraction_ \[\sum_{n=0}^{\infty}Q_{n}(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d},\mathbf{e},\mathbf{f},-1)\,t^{n}\;=\;\frac{1}{1-\mathsf{e}_{0}\mathsf{f}_{0}t+\frac{\mathsf{a}_{00}\mathsf{b}_{00}t}{1-\frac{(\mathsf{c}_{00}-\mathsf{e}_{1})(\mathsf{d}_{00}-\mathsf{f}_{1})t}{1+\frac{(\mathsf{a}_{01}-\mathsf{a}_{10})(\mathsf{b}_{01}-\mathsf{b}_{10})t}{1-\frac{(\mathsf{c}_{01}-\mathsf{c}_{10}-\mathsf{e}_{2})(\mathsf{d}_{01}-\mathsf{d}_{10}-\mathsf{f}_{2})t}{1-\cdots}}}}} \tag{5.4}\] _with coefficients_ \[\alpha_{2k-1}\;=\;-\biggl{(}\sum_{\ell=0}^{k-1}(-1)^{\ell}\,\mathsf{a}_{\ell,k-1-\ell}\biggr{)}\biggl{(}\sum_{\ell=0}^{k-1}(-1)^{\ell}\,\mathsf{b}_{\ell,k-1-\ell}\biggr{)} \tag{5.5a}\] \[\alpha_{2k}\;=\;\biggl{(}-\mathsf{e}_{k}\,+\,\sum_{\ell=0}^{k-1}(-1)^{\ell}\,\mathsf{c}_{\ell,k-1-\ell}\biggr{)}\biggl{(}-\mathsf{f}_{k}\,+\,\sum_{\ell=0}^{k-1}(-1)^{\ell}\,\mathsf{d}_{\ell,k-1-\ell}\biggr{)} \tag{5.5b}\] \[\delta_{1}\;=\;\mathsf{e}_{0}\mathsf{f}_{0} \tag{5.5c}\] \[\delta_{n}\;=\;0\qquad\text{for }n\geq 2 \tag{5.5d}\]

### \(p,q\) T-fraction

Consider now the polynomial \[P_{n}(x_{1},x_{2},y_{1},y_{2},u_{1},u_{2},v_{1},v_{2},w_{\rm e},w_{\rm o},z_{\rm e},z_{\rm o},p_{-1},p_{-2},p_{+1},p_{+2},q_{-1},q_{-2},q_{+1},q_{+2},s_{\rm e},s_{\rm o},\lambda)\;=\] \[\sum_{\sigma\in\mathfrak{D}_{2n}}x_{1}^{\rm eareccpeak(\sigma)}x_{2}^{\rm eareccdfall(\sigma)}y_{1}^{\rm ereccval(\sigma)}y_{2}^{\rm ereccdrise(\sigma)}\;\times\] \[u_{1}^{\rm nrcpeak(\sigma)}u_{2}^{\rm nrcdfall(\sigma)}v_{1}^{\rm nrcval(\sigma)}v_{2}^{\rm nrcdrise(\sigma)}\;\times\] \[w_{\rm e}^{\rm evennrfix(\sigma)}w_{\rm o}^{\rm oddnrfix(\sigma)}z_{\rm e}^{\rm evenrar(\sigma)}z_{\rm o}^{\rm oddrar(\sigma)}\;\times\] \[p_{-1}^{\rm lcrosscpeak(\sigma)}p_{-2}^{\rm lcrosscdfall(\sigma)}p_{+1}^{\rm ucrosscval(\sigma)}p_{+2}^{\rm ucrosscdrise(\sigma)}\;\times\] \[q_{-1}^{\rm lnestcpeak(\sigma)}q_{-2}^{\rm lnestcdfall(\sigma)}q_{+1}^{\rm unestcval(\sigma)}q_{+2}^{\rm unestcdrise(\sigma)}\;\times\] \[s_{\rm e}^{\rm epsnest(\sigma)}s_{\rm o}^{\rm opsnest(\sigma)}\lambda^{\rm cyc(\sigma)}. \tag{5.6}\] (This is [3, eq. (3.22)] with a factor \(\lambda^{\rm cyc(\sigma)}\) included.) The various statistics have been defined in [3, Sections 2.7 and 2.8 and eq. (3.22)]. This polynomial is obtained from (5.1) by making the specializations [3, eqs.
(6.40)-(6.45)] \[\mathfrak{a}_{\ell,\ell^{\prime}} = p_{+1}^{\ell}q_{+1}^{\ell^{\prime}}\,\times\,\begin{cases}y_{1}& \mbox{if $\ell^{\prime}=0$}\\ v_{1}&\mbox{if $\ell^{\prime}\geq 1$}\end{cases} \tag{5.7a}\] \[\mathfrak{b}_{\ell,\ell^{\prime}} = p_{-1}^{\ell}q_{-1}^{\ell^{\prime}}\,\times\,\begin{cases}x_{1}& \mbox{if $\ell^{\prime}=0$}\\ u_{1}&\mbox{if $\ell^{\prime}\geq 1$}\end{cases}\] (5.7b) \[\mathfrak{c}_{\ell,\ell^{\prime}} = p_{-2}^{\ell}q_{-2}^{\ell^{\prime}}\,\times\,\begin{cases}x_{2}& \mbox{if $\ell^{\prime}=0$}\\ u_{2}&\mbox{if $\ell^{\prime}\geq 1$}\end{cases}\] (5.7c) \[\mathfrak{d}_{\ell,\ell^{\prime}} = p_{+2}^{\ell}q_{+2}^{\ell^{\prime}}\,\times\,\begin{cases}y_{2}& \mbox{if $\ell^{\prime}=0$}\\ v_{2}&\mbox{if $\ell^{\prime}\geq 1$}\end{cases}\] (5.7d) \[\mathfrak{e}_{k} = \begin{cases}z_{\rm e}&\mbox{if $k=0$}\\ s_{\rm e}^{k}w_{\rm e}&\mbox{if $k\geq 1$}\end{cases}\] (5.7e) \[\mathfrak{f}_{k} = \begin{cases}z_{\rm o}&\mbox{if $k=0$}\\ s_{\rm o}^{k}w_{\rm o}&\mbox{if $k\geq 1$}\end{cases} \tag{5.7f}\] Making these specializations in Proposition 5.1 -- or equivalently, attaching a minus sign to the variables \(x_{1},u_{1},p_{+1},p_{+2},p_{-1},p_{-2},w_{\rm e},w_{\rm o},z_{\rm e},z_{\rm o}\) in [3, Theorem 3.9] -- we obtain: **Proposition 5.2** (\(p,q\) T-fraction for D-permutations, \(\lambda=-1\)).: _The ordinary generating function of the polynomials (5.6) at \(\lambda=-1\) has the T-type continued fraction_ \[\sum\limits_{n=0}^{\infty}P_{n}(x_{1},x_{2},y_{1},y_{2},u_{1},u_{2},v_{1},v_{2},w_{\rm e},w_{\rm o},z_{\rm e},z_{\rm o},p_{-1},p_{-2},p_{+1},p_{+2},q_{-1},q_ {-2},q_{+1},q_{+2},s_{\rm e},s_{\rm o},-1)\,t^{n}\ =\] \[\frac{1}{1-z_{\rm e}z_{\rm o}\,t+\frac{x_{1}y_{1}\,t}{1+\frac{(x_{2}\!-\!s_{ \rm e}w_{\rm e})(y_{2}\!-\!s_{\rm o}w_{\rm o})\,t}{1-\frac{(-p_{-1}x_{1}\!+\!q_ {-1}u_{1})(-p_{+1}y_{1}\!+\!q_{+1}v_{1})\,t}{1+\frac{(p_{-2}^{2}x_{1}\!+\!q_{- 1}[2]_{-p_{-1},q_{-1}}u_{1})(p_{+1}^{2}y_{1}\!+\!q_{+1}[2]_{-p_{+1},q_{+1}v_{1} )\,t}{1-\frac{(p_{-2}^{2}x_{2}\!+\!q_{-2}[2]_{-p_{-2},q_{-2}}u_{2}\!-\!s_{\rm o} ^{3}w_{\rm e})(p_{+2}^{2}y_{2}\!+\!q_{+2}[2]_{-p_{+2},q_{+2}v_{2}}\!-\!s_{\rm o }^{3}w_{\rm o})\,t}{1-\cdots}}}}}}} \tag{5.8}\] _with coefficients_ \[\alpha_{2k-1} = -\big{(}(-p_{-1})^{k-1}x_{1}+q_{-1}[k-1]_{-p_{-1},q_{-1}}u_{1}\big{)} \left((-p_{+1})^{k-1}y_{1}+q_{+1}[k-1]_{-p_{+1},q_{+1}}v_{1}\right) \tag{5.9a}\] \[\alpha_{2k} = \big{(}(-p_{-2})^{k-1}x_{2}+q_{-2}[k-1]_{-p_{-2},q_{-2}}u_{2}-s_{ \rm e}^{k}w_{\rm e}\big{)}\left((-p_{+2})^{k-1}y_{2}+q_{+2}[k-1]_{-p_{+2},q_{+ 2}v_{2}}-s_{\rm o}^{k}w_{\rm o}\right)\] (5.9b) \[\delta_{1} = z_{\rm e}z_{\rm o}\] (5.9c) \[\delta_{n} = 0\qquad\text{for $n\geq 2$} \tag{5.9d}\] ### Simple T-fraction Finally, denote by \(P_{n}(x_{1},x_{2},y_{1},y_{2},u_{1},u_{2},v_{1},v_{2},w_{\rm e},w_{\rm o},z_{ \rm e},z_{\rm o},\lambda)\) the polynomial (5.6) specialized to \(p_{+1}=p_{+2}=p_{-1}=p_{-2}=q_{+1}=q_{+2}=q_{-1}=q_{-2}=s_{\rm e}=s_{\rm o}=1\). This polynomial was introduced in [3, eq. (4.2)]. 
Making this same specialization in Proposition 5.2 and using (4.11), we obtain:

**Proposition 5.3** (Simple T-fraction for D-permutations, \(\lambda=-1\)).: _The ordinary generating function of the polynomials \(P_{n}(x_{1},x_{2},y_{1},y_{2},u_{1},u_{2},v_{1},v_{2},w_{\rm e},w_{\rm o},z_{\rm e},z_{\rm o},-1)\) has the T-type continued fraction_ \[\sum_{n=0}^{\infty}P_{n}(x_{1},x_{2},y_{1},y_{2},u_{1},u_{2},v_{1},v_{2},w_{\rm e},w_{\rm o},z_{\rm e},z_{\rm o},-1)\,t^{n}\;=\] \[\frac{1}{1-z_{\rm e}z_{\rm o}\,t+\frac{x_{1}y_{1}\,t}{1-\frac{(x_{2}\!-\!w_{\rm e})(y_{2}\!-\!w_{\rm o})\,t}{1+\frac{(x_{1}\!-\!u_{1})(y_{1}\!-\!v_{1})\,t}{1-\frac{(x_{2}\!-\!u_{2}\!+\!w_{\rm e})(y_{2}\!-\!v_{2}\!+\!w_{\rm o})\,t}{1+\frac{x_{1}y_{1}\,t}{1-\frac{(x_{2}\!-\!w_{\rm e})(y_{2}\!-\!w_{\rm o})\,t}{1-\cdots}}}}}}} \tag{5.10}\] _with coefficients_ \[\alpha_{2k-1}\;=\;\begin{cases}-x_{1}y_{1}&\text{if $k$ is odd}\\ -(x_{1}-u_{1})(y_{1}-v_{1})&\text{if $k$ is even}\end{cases} \tag{5.11a}\] \[\alpha_{2k}\;=\;\begin{cases}(x_{2}-w_{\text{e}})(y_{2}-w_{\text{o}})&\text{if $k$ is odd}\\ (x_{2}-u_{2}+w_{\text{e}})(y_{2}-v_{2}+w_{\text{o}})&\text{if $k$ is even}\end{cases} \tag{5.11b}\] \[\delta_{1}\;=\;z_{\text{e}}z_{\text{o}} \tag{5.11c}\] \[\delta_{n}\;=\;0\qquad\text{for $n\geq 2$} \tag{5.11d}\]

Finally, as a special case of Proposition 5.3, we can obtain a J-fraction that was conjectured in [3, Appendix, case \(\lambda=-1\)]. It suffices to specialize the polynomials \(P_{n}(x_{1},x_{2},y_{1},y_{2},u_{1},u_{2},v_{1},v_{2},w_{\text{e}},w_{\text{o}},z_{\text{e}},z_{\text{o}},\lambda)\) by setting \(x_{1}=x_{2}=z_{\text{e}}=z_{\text{o}}=x\), \(y_{1}=y_{2}=y\), and \(u_{1}=u_{2}=v_{1}=v_{2}=w_{\text{e}}=w_{\text{o}}=1\); this yields the polynomials \[P_{n}(x,y,\lambda)\;=\;\sum_{\sigma\in\mathfrak{D}_{2n}}x^{\text{arec}(\sigma)}y^{\text{erec}(\sigma)}\lambda^{\text{cyc}(\sigma)} \tag{5.12}\] that were introduced in [3, eqs. (4.1) and (A.1)]. Inserting this specialization in Proposition 5.3 gives, for \(\lambda=-1\), a T-fraction with coefficients \[\alpha_{2k-1}\;=\;\begin{cases}-xy&\text{if $k$ is odd}\\ -(x-1)(y-1)&\text{if $k$ is even}\end{cases} \tag{5.13a}\] \[\alpha_{2k}\;=\;\begin{cases}(x-1)(y-1)&\text{if $k$ is odd}\\ xy&\text{if $k$ is even}\end{cases} \tag{5.13b}\] \[\delta_{1}\;=\;x^{2} \tag{5.13c}\] \[\delta_{n}\;=\;0\qquad\text{for $n\geq 2$} \tag{5.13d}\] Using the even contraction for T-fractions with \(\delta_{2}=\delta_{4}=\delta_{6}=\ldots=0\) [3, Proposition 2.1], we can rewrite this as a J-fraction:

**Corollary 5.4**.: _The ordinary generating function of the polynomials (5.12) has the J-type continued fraction_ \[\sum_{n=0}^{\infty}P_{n}(x,y,-1)\;t^{n}\;=\;\frac{1}{1-x(x\!-\!y)\,t+\frac{xy\,(x-1)(y-1)\,t^{2}}{1+\frac{xy\,(x-1)(y-1)\,t^{2}}{1+\frac{xy\,(x-1)(y-1)\,t^{2}}{1+\cdots}}}} \tag{5.14}\] _with coefficients_ \[\gamma_{0}\;=\;x(x-y) \tag{5.15a}\] \[\gamma_{n}\;=\;0\quad\text{for $n\geq 1$} \tag{5.15b}\] \[\beta_{n}\;=\;-xy(x-1)(y-1) \tag{5.15c}\] This J-fraction was conjectured in [3, Appendix, case \(\lambda=-1\)].

## Acknowledgments

One of us (B.D.) wishes to thank Jakob Stein for helpful discussions concerning the topology of the plane.
2305.08501
Label Smoothing is Robustification against Model Misspecification
Label smoothing (LS) adopts smoothed targets in classification tasks. For example, in binary classification, instead of the one-hot target $(1,0)^\top$ used in conventional logistic regression (LR), LR with LS (LSLR) uses the smoothed target $(1-\frac{\alpha}{2},\frac{\alpha}{2})^\top$ with a smoothing level $\alpha\in(0,1)$, which causes squeezing of values of the logit. Apart from the common regularization-based interpretation of LS that leads to an inconsistent probability estimator, we regard LSLR as modifying the loss function and consistent estimator for probability estimation. In order to study the significance of each of these two modifications by LSLR, we introduce a modified LSLR (MLSLR) that uses the same loss function as LSLR and the same consistent estimator as LR, while not squeezing the logits. For the loss function modification, we theoretically show that MLSLR with a larger smoothing level has lower efficiency with correctly-specified models, while it exhibits higher robustness against model misspecification than LR. Also, for the modification of the probability estimator, an experimental comparison between LSLR and MLSLR showed that this modification and squeezing of the logits in LSLR have negative effects on the probability estimation and classification performance. The understanding of the properties of LS provided by these comparisons allows us to propose MLSLR as an improvement over LSLR.
Ryoya Yamasaki, Toshiyuki Tanaka
2023-05-15T09:57:04Z
http://arxiv.org/abs/2305.08501v1
# Label Smoothing is Robustification ###### Abstract Label smoothing (LS) adopts smoothed targets in classification tasks. For example, in binary classification, instead of the one-hot target \((1,0)^{\top}\) used in conventional logistic regression (LR), LR with LS (LSLR) uses the smoothed target \((1-\frac{\alpha}{2},\frac{\alpha}{2})^{\top}\) with a smoothing level \(\alpha\in(0,1)\), which causes squeezing of values of the logit. Apart from the common regularization-based interpretation of LS that leads to an inconsistent probability estimator, we regard LSLR as modifying the loss function and consistent estimator for probability estimation. In order to study the significance of each of these two modifications by LSLR, we introduce a modified LSLR (MLSLR) that uses the same loss function as LSLR and the same consistent estimator as LR, while not squeezing the logits. For the loss function modification, we theoretically show that MLSLR with a larger smoothing level has lower efficiency with correctly-specified models, while it exhibits higher robustness against model misspecification than LR. Also, for the modification of the probability estimator, an experimental comparison between LSLR and MLSLR showed that this modification and squeezing of the logits in LSLR have negative effects on the probability estimation and classification performance. The understanding of the properties of LS provided by these comparisons allows us to propose MLSLR as an improvement over LSLR. Label smoothing, logistic regression, asymptotic statistics, robust statistics, smoothed KL-divergence ## I Introduction Label smoothing (LS) adopts smoothed targets in classification problems. Conventional logistic regression (LR) uses a one-hot vector as a target (Section II-B), while LR with LS (LSLR) [1] uses a smoothed target vector that replaces the component 1 in the one-hot vector with a smaller value and 0 with a larger value (Section II-C). Previous studies have provided, mostly through experimental considerations, several heuristic findings on behaviors of LSLR, for example, 1. It prevents the largest logit from becoming much larger than all others (squeezes the logits) and encourages the model to be less confident [1]. 2. It generally improves adversarial robustness against a variety of attacks [2]. 3. It can often significantly improve the generalization of a (multi-class) neural network [3]. Motivated by these supportive findings, LS has recently been actively adopted, together with a neural network model, in various modern applications such as speech recognition [4], machine translation [5, 6], image classification [7, 8], and visual tracking [9]. However, in spite of the above-mentioned findings on LS and its wide use, the underlying mechanism of LS has not been fully explored yet. In this paper we study it and provide further understanding on LS, including verification of the significance of squeezing of the logits stated in A1 and supportive arguments on A2 and A3, which is the first of two contributions of this paper. Although most previous studies have interpreted LSLR as entropy-regularized LR that squeezes the logits (Section II-D), we propose in this paper an alternative interpretation of LSLR in which LSLR performs probability estimation using a different loss function and consistent estimator than LR (Section II-E). 
We then study the significance of each of these two modifications, via introducing a novel method, modified LSLR (MLSLR), that uses the same loss function as LSLR and the same consistent probability estimator as LR, which results in no squeezing of the logits (Section II-F). For the modification of the loss function, we give theoretical comparisons between LR and MLSLR: Compared with LR, MLSLR with a larger smoothing level has lower efficiency with correctly-specified models (Section III-B) but higher robustness against model misspecification (Section III-C). To the best of the authors' knowledge, this paper is the first to report the low efficiency as a disadvantage of LS, and the robustness is a basis for the adversarial robustness A2 of LS. Also, the trade-off between the low efficiency and the high robustness explains the better practical performance A3. Moreover, for the modification of the consistent probability estimator, we prove that an estimator of LSLR can have output with an unnecessarily large range like \((-0.1,1.1,0,\ldots,0)^{\top}\), implying that it is inappropriate as a probability estimator. We also experimentally compare LSLR and MLSLR using a neural network model, and the result shows that LSLR performed worse than MLSLR (Section IV). This experimental result implies that modifying the consistent probability estimator and squeezing the logits are not effective in improving the probability estimation and classification performance of LR, but rather have a negative impact, as opposed to the previous understanding A1. In other words, MLSLR based on a consistent probability estimator with an appropriate range would be recommended over LSLR, in typical usages with a large-size model such as a deep neural network model. Collaterally, we propose to practically use MLSLR as the second of two contributions of this paper. Besides these findings on behaviors of LS and the proposal of MLSLR as an improvement over LSLR, the final section (Section V) gives reference to other findings on LS A4-6 by previous studies, relevant topics, and future prospects.

Fig. 1: Relationship between LR, LSLR, and MLSLR, and summary of the understanding on LS that this paper gives.

## II Preliminaries

### _Problem Formulation, Notation, and Terminology_

In this section, we discuss interpretations of LS. We first formulate probability estimation and classification tasks, along with preparing required notations and terminologies. Suppose that one has the data \((\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{n},y_{n})\in\mathbb{R}^{d}\times[K]\) from the joint distribution of the explanatory random variable \(\mathbf{X}\) and target random variable \(Y\), where \(K\) is the number of different values that \(y_{1},\ldots,y_{n}\) take and \([K]\coloneqq\{1,\ldots,K\}\). A classification task is defined in this paper as a task to obtain a good classifier \(f:\mathbb{R}^{d}\to[K]\) such that the task risk \(\mathcal{R}_{\text{tsk}}(f;\ell)\coloneqq\mathbb{E}_{(\mathbf{X},Y)}[\ell(f(\mathbf{X}),Y)]\), defined as the expectation value \(\mathbb{E}_{(\mathbf{X},Y)}\) with respect to the pair \((\mathbf{X},Y)\) of a user-specified task loss function \(\ell:[K]^{2}\to[0,\infty)\), is made small. The task of minimizing misclassification rate corresponds to using the zero-one loss \(\ell_{\mathrm{zo}}(j,k)\coloneqq\mathbb{1}\left(j\neq k\right)\) as the task loss, where \(\mathbb{1}\left(c\right)\) takes the value \(1\) if a condition \(c\) is true and \(0\) otherwise.
Classification methods that we discuss in this paper rely on the framework of so-called empirical (surrogate) risk minimization (ERM). Let \(\mathcal{G}=\{\mathbf{g}:\mathbb{R}^{d}\to\mathcal{Z}\}\) be a learner class, where \(\mathcal{Z}\) denotes the set of values the learners in \(\mathcal{G}\) may output. A classifier is constructed as \(f=h\circ\mathbf{g}\) with a labeling \(h:\mathcal{Z}\to[K]\) and a learner \(\mathbf{g}\in\mathcal{G}\) that minimizes the empirical surrogate risk \(\frac{1}{n}\sum_{i=1}^{n}\phi(\mathbf{g}(\mathbf{x}_{i}),y_{i})\) (which is an empirical counterpart of the surrogate risk \(\mathcal{R}_{\text{sur}}(\mathbf{g};\phi)\coloneqq\mathbb{E}_{(\mathbf{X},Y)}[\phi(\mathbf{g}(\mathbf{X}),Y)]\)) for a surrogate loss function \(\phi:\mathcal{Z}\times[K]\to[0,\infty)\) that is continuous in its first argument. ERM can be interpreted as estimation of the conditional probability distribution (CPD) function \(\mathbf{p}(\cdot)=(\Pr(Y=1|\mathbf{X}=\cdot),\ldots,\Pr(Y=K|\mathbf{X}=\cdot))^{\top}:\mathbb{R}^{d}\to\Delta_{K-1}\), where \(\Delta_{K-1}\) is the probability simplex in \(\mathbb{R}^{K}\), and the labeling \(h\) is designed according to that interpretation. Note that our notations show possibly-multivariate objects in bold and often omit the \(K\)-dependence for brevity.

### _LR: Conventional Logistic Regression_

For the learner class \(\mathcal{G}=\{\mathbf{g}:\mathbb{R}^{d}\to\mathbb{R}^{K}\}\), LR solves \[\min_{\mathbf{g}\in\mathcal{G}}\!\!\left[-\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{K}t_{k}(y_{i})\ln\!\left(\frac{e^{\mathbf{g}_{k}(\mathbf{x}_{i})}}{\sum_{k=1}^{K}e^{\mathbf{g}_{k}(\mathbf{x}_{i})}}\right)\right], \tag{1}\] where \(\mathbf{t}\coloneqq(t_{1},\ldots,t_{K})^{\top}\) is the one-hot encoding function, whose \(k\)-th component is \(t_{k}(y)=\mathbb{1}\left(k=y\right)\) for \(k,y\in[K]\). The obtained functions \(\{g_{k}\}_{k\in[K]}\) are also called the logits. As \(\mathbf{t}(y)\) satisfies \(\sum_{y=1}^{K}t_{k}(y)\Pr(Y=y|\mathbf{X}=\mathbf{x})=\Pr(Y=k|\mathbf{X}=\mathbf{x})\), a mean of the targets \(\{\mathbf{t}(y_{i})\}_{\mathbf{x}_{i}=\mathbf{x}}\) can be seen as an empirical estimate of the CPD function \(\mathbf{p}(\mathbf{x})\). The KL-divergence, defined for probability mass functions \(\mathbf{p}=(p_{1},\ldots,p_{K})^{\top},\mathbf{q}=(q_{1},\ldots,q_{K})^{\top}\in\Delta_{K-1}\) as \[D_{\text{KL}}(\mathbf{p}||\mathbf{q})\coloneqq\sum_{k=1}^{K}p_{k}\ln\frac{p_{k}}{q_{k}}, \tag{2}\] has the consistency property \[\operatorname*{arg\,min}_{\mathbf{q}\in\Delta_{K-1}}D_{\text{KL}}(\mathbf{p}||\mathbf{q})=\mathbf{p},\quad\text{for all }\mathbf{p}\in\Delta_{K-1}. \tag{3}\] This property shows that LR applies the logit model \[\mathbf{q}(\mathbf{x})=\mathbf{q}_{\text{L}}(\mathbf{g}(\mathbf{x}))\coloneqq\left(\frac{e^{\mathbf{g}_{1}(\mathbf{x})}}{\sum_{k=1}^{K}e^{\mathbf{g}_{k}(\mathbf{x})}},\ldots,\frac{e^{\mathbf{g}_{K}(\mathbf{x})}}{\sum_{k=1}^{K}e^{\mathbf{g}_{k}(\mathbf{x})}}\right)^{\top} \tag{4}\] as a consistent estimator (see also Corollary 1), in estimation of the true CPD function \(\mathbf{p}(\mathbf{x})\) through minimization of an empirical estimate of the mean KL-divergence \(\mathbb{E}_{\mathbf{X}}\left[D_{\text{KL}}(\mathbf{p}(\mathbf{X})||\mathbf{q}(\mathbf{X}))\right]\), where \(\mathbb{E}_{\mathbf{X}}\) denotes the expectation value regarding the random variable \(\mathbf{X}\).
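For concreteness, here is a minimal NumPy sketch (not from the paper; the linear parameterization of \(\mathbf{g}\) is an illustrative assumption) of the logit model (4) and the empirical risk (1):

```python
# Minimal sketch of LR as ERM: logit (softmax) model, eq. (4), and empirical risk, eq. (1).
# The linear model g(x) = W x + b is an illustrative assumption, not part of the paper.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stabilization
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def lr_empirical_risk(W, b, X, y, K):
    """Eq. (1): mean cross-entropy between one-hot targets and q_L(g(x))."""
    G = X @ W.T + b                        # logits g(x_i), shape (n, K)
    Q = softmax(G)                         # logit model q_L(g(x_i)), eq. (4)
    T = np.eye(K)[y]                       # one-hot targets t(y_i)
    return -np.mean(np.sum(T * np.log(Q), axis=1))

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3)); y = rng.integers(0, 2, size=8); K = 2
W = rng.normal(size=(K, 3)); b = np.zeros(K)
print(lr_empirical_risk(W, b, X, y, K))
```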
### _LSLR: Logistic Regression with Label Smoothing_ For the smoothing function \[\mathbf{s}_{\alpha}(\mathbf{v})\coloneqq(1-\alpha)\mathbf{v}+\frac{\alpha}{K} \tag{5}\] with a smoothing level \(\alpha\) that is conventionally in \((0,1)\), LSLR applies the smoothed target \(\mathbf{s}_{\alpha}(\mathbf{t}(y_{i}))=(s_{\alpha}(t_{1}(y_{i})),\ldots,s_{\alpha}(t_{ K}(y_{i})))^{\top}\) instead of the one-hot encoded target \(\mathbf{t}(y_{i})\) of LR. Namely, LSLR considers the learning process \[\min_{\mathbf{g}\in\mathcal{G}}\!\left[-\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{K}s_{ \alpha}(t_{k}(y_{i}))\ln\!\left(\frac{e^{\mathbf{g}_{k}(\mathbf{x}_{i})}}{\sum_{k=1}^ {K}e^{\mathbf{g}_{k}(\mathbf{x}_{i})}}\right)\right]. \tag{6}\] ### _Regularization View: LSLR is Entropy Regularized LR_ Since the smoothed target satisfies \(\sum_{y=1}^{K}s_{\alpha}(t_{k}(y))\Pr(Y=y|\mathbf{X}=\mathbf{x})=s_{\alpha}(\Pr(Y=k|\mathbf{X}= \mathbf{x}))\), a mean of the smoothed targets \(\{\mathbf{s}_{\alpha}(\mathbf{t}(y_{i}))\}_{\mathbf{x}_{i}=\mathbf{x}}\) can be seen as an empirical estimate of the smoothed CPD function \(s_{\alpha}(\mathbf{p}(\mathbf{x}))=(s_{\alpha}(\Pr(Y=1|\mathbf{X}=\mathbf{x})),\ldots,s_{\alpha}( \Pr(Y=K|\mathbf{X}=\mathbf{x}))^{\top}\). On the basis of this consideration, along with an implicit supposition that LSLR adopts the logit model \(\mathbf{q}(\mathbf{x})=\mathbf{q}_{\text{L}}(\mathbf{g}(\mathbf{x}))\) for estimation of the true CPD function \(\mathbf{p}(\mathbf{x})\), and the equation \[\begin{split} D_{\text{KL}}(\mathbf{s}_{\alpha}(\mathbf{p})||\mathbf{q})=& (1-\alpha)D_{\text{KL}}(\mathbf{p}||\mathbf{q})+\alpha D_{\text{KL}}(\mathbf{1}/K||\mathbf{q}) \\ &+(\mathbf{q}\text{-independent term})\end{split} \tag{7}\] with the all-1 \(K\)-dimensional vector \(\mathbf{1}\coloneqq(1,\ldots,1)^{\top}\), most previous studies regard LSLR as LR with the entropy regularization term \(D_{\text{KL}}(\mathbf{1}/K||\mathbf{q})\) which penalizes the deviation of \(\mathbf{q}\) from the uniform CPD function \(\mathbf{x}\mapsto\mathbf{1}/K\); See [1, 2, 3], [10]. As a result, the learned logit model \(\mathbf{q}_{\text{L}}(\mathbf{g}(\mathbf{x}))\) is expected not to take an extreme probability estimate like \((1,0,\ldots)^{\top}\), or equivalently, the logits by LSLR will be squeezed; See Theorem 1, B6. Many previous studies claim that this squeezing helps avoid over-fitting of the model, as in the finding A1. ### _Our Loss View: LSLR modifies Loss Function and Consistent Probability Estimator from LR_ We here introduce an alternative view of LS that LSLR adopts a loss function and consistent estimator different from those of LR for probability estimation (the loss view). First, we define the smoothed KL (SKL)-divergence \[D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q})\coloneqq\sum_{k=1}^{K}s_{\alpha}(p_{k}) \ln\frac{s_{\alpha}(p_{k})}{s_{\alpha}(q_{k})} \tag{8}\] for \(\mathbf{p},\mathbf{q}\in s_{\alpha}^{-1}(\Delta_{K-1})\coloneqq\{s_{\alpha}^{-1}(\bm {p})\mid\mathbf{p}\in\Delta_{K-1}\}\). The SKL-divergence satisfies the consistency property \[\underset{\mathbf{q}\in\mathcal{S}}{\arg\min}\ D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q })=\mathbf{p},\quad\text{for all }\mathbf{p}\in\mathcal{S} \tag{9}\] for \(\mathcal{S}=\Delta_{K-1}\) or \(s_{\alpha}^{-1}(\Delta_{K-1})\) and \(\alpha\) in a certain range (see Theorem 1, B3). 
Then, we view that LSLR (6) adopts not the logit model \(\mathbf{q}_{\text{L}}(\mathbf{g}(\mathbf{x}))\) but what we call the roughened logit (R-logud) model, which is defined with an unconventional link function as \[\begin{split}\mathbf{q}_{\text{RL},\alpha}(\mathbf{g}(\mathbf{x}))\coloneqq \mathbf{s}_{\alpha}^{-1}(\mathbf{q}_{\text{L}}(\mathbf{g}(\mathbf{x})))\\ =\Big{(}s_{\alpha}^{-1}\Big{(}\frac{\sigma^{\text{gL}(\mathbf{x})}} {\sum_{k=1}^{K}\sigma^{\text{gL}(\mathbf{x})}}\Big{)},\ldots,s_{\alpha}^{-1} \Big{(}\frac{\sigma^{\text{gL}(\mathbf{x})}}{\sum_{k=1}^{K}\sigma^{\text{gL}(\mathbf{ x})}}\Big{)}\Big{)}^{\top},\end{split} \tag{10}\] as an estimator (as \(\mathbf{q}(\mathbf{x})\)) of the true CPD function \(\mathbf{p}(\mathbf{x})\) through minimizing an empirical estimate of the mean SKL-divergence \(\mathbb{E}_{\mathbf{X}}[D_{\text{SKL},\alpha}(\mathbf{p}(\mathbf{X})||\mathbf{q}(\mathbf{X}))]\). As one can see from the fact that \(\mathbf{s}_{\alpha}\) appearing in the loss (8) and \(\mathbf{s}_{\alpha}^{-1}\) in the model (10) cancel out each other, this estimator, not the logit model, is consistent to the true CPD for LSLR.1 Footnote 1: One may see that LSLR estimates the smoothed CPD function \(\mathbf{s}_{\alpha}(\mathbf{p}(\mathbf{x}))\) with the logit model \(\mathbf{q}_{\text{L}}(\mathbf{g}(\mathbf{x}))\). However, since this view changing properties of the data makes it difficult to compare with the original LR, our paper will not discuss this view further. Note that the logit model is not the only probability estimator; For example, another well-known probability estimator is the probit model in probit regression. Our loss view uniformly determines which parts we treat as a probability estimator and which parts we treat as a loss function for a probability estimation method, according to the consistency of the probability estimator to the true CPD function, for fair comparisons to be performed in this paper. We here list basic properties of the SKL-divergence \(D_{\text{SKL},\alpha}\) and R-logud model \(\mathbf{q}_{\text{RL},\alpha}\), which are the KL-divergence \(D_{\text{KL}}\) and logit model \(\mathbf{q}_{\text{L}}\) when \(\alpha=0\): **Theorem 1**.: _For any \(K\geq 2\),_ 1. \(q_{\text{RL},\alpha,k}(\mathbf{g})\) _for_ \(\mathbf{g}\in\mathbb{R}^{K}\) _and_ \(k=1,\ldots,K\) _can range_ \([-\frac{\alpha}{K(1-\alpha)},\frac{K-\alpha}{K(1-\alpha)}]\) _or_ \([\frac{K-\alpha}{K(1-\alpha)},-\frac{\alpha}{K(1-\alpha)}]\) _for_ \(\alpha\in[0,1)\)__\(\alpha\in(1,\frac{K}{K-1}]\)_, and these intervals cover_ \([0,1]\)_._ 2. \(\sum_{k=1}^{K}q_{\text{RL},\alpha,k}(\mathbf{g})=1\) _for any_ \(\mathbf{g}\in\mathbb{R}^{K}\) _and_ \(\alpha\neq 1\)_._ 3. \(D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q})\geq 0\)_, and_ \(D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q})=0\) _if and only if_ \(\mathbf{p}=\mathbf{q}\)_, for any_ \(\mathbf{p},\mathbf{q}\in\mathbf{s}_{\alpha}^{-1}(\Delta_{K-1})\) _and_ \(\alpha\in[0,1)\cup(1,\frac{K}{K-1}]\)_. Also,_ \(D_{\text{SKL},1}(\mathbf{p}||\mathbf{q})=0\) _for any_ \(\mathbf{p},\mathbf{q}\in\mathbf{s}_{\alpha}^{-1}(\Delta_{K-1})\)_._ 4. 
_\(D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q})\) is convex in \((\mathbf{p},\mathbf{q})\) for any \(\alpha\in[0,\frac{K}{K-1}]\): \(D_{\text{SKL},\alpha}(r\mathbf{p}_{1}+(1-r)\mathbf{p}_{2}||r\mathbf{q}_{1}+(1-r)\mathbf{q}_{2})\leq rD_{\text{SKL},\alpha}(\mathbf{p}_{1}||\mathbf{q}_{1})+(1-r)D_{\text{SKL},\alpha}(\mathbf{p}_{2}||\mathbf{q}_{2})\), for any \(\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{q}_{1},\mathbf{q}_{2}\in\mathbf{s}_{\alpha}^{-1}(\Delta_{K-1})\) and \(r\in[0,1]\)._ 5. _\(\operatorname*{arg\,min}_{\mathbf{g}\in\mathbb{R}^{K},\,g_{K}=0}\ D_{\text{SKL},\alpha}((1,0,\ldots)^{\top}||\mathbf{q}_{\text{L}}(\mathbf{g}))=(\infty,0,\ldots)^{\top}\) for \(\alpha\in[0,1)\cup(1,\frac{K}{K-1}]\)._ 6. _\(\operatorname*{arg\,min}_{\mathbf{g}\in\mathbb{R}^{K},\,g_{K}=0}\ D_{\text{SKL},\alpha}((1,0,\ldots)^{\top}||\mathbf{q}_{\text{RL},\alpha}(\mathbf{g}))=(\infty,0,\ldots)^{\top}\), \((\ln(\frac{K}{\alpha}+1-K),0,\ldots)^{\top}\), or \((-\infty,0,\ldots)^{\top}\) for \(\alpha=0\), any \(\alpha\in(0,1)\cup(1,\frac{K}{K-1})\), or \(\alpha=\frac{K}{K-1}\), respectively._ 7. _\(D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q})=D_{\text{SKL},K-(K-1)\alpha}(\frac{\mathbf{1}-\mathbf{p}}{K-1}||\frac{\mathbf{1}-\mathbf{q}}{K-1})\) and \(D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q}_{\text{RL},\alpha}(\mathbf{g}))=D_{\text{SKL},K-(K-1)\alpha}(\frac{\mathbf{1}-\mathbf{p}}{K-1}||\mathbf{q}_{\text{RL},K-(K-1)\alpha}(\mathbf{g}))\), where \(K-(K-1)\alpha\in[0,1)\) and \(\frac{\mathbf{1}-\mathbf{p}}{K-1},\frac{\mathbf{1}-\mathbf{q}}{K-1}\in\mathbf{s}_{K-(K-1)\alpha}^{-1}(\Delta_{K-1})\), for any \(\mathbf{p},\mathbf{q}\in\mathbf{s}_{\alpha}^{-1}(\Delta_{K-1})\), \(\mathbf{g}\in\mathbb{R}^{K}\), and \(\alpha\in(1,\frac{K}{K-1}]\)._ _For \(K=2\),_ 8. _\(D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q}_{\text{L}}(\mathbf{g}))=D_{\text{SKL},2-\alpha}(\mathbf{1}-\mathbf{p}||\mathbf{q}_{\text{L}}(-\mathbf{g}))=D_{\text{SKL},2-\alpha}(\mathbf{p}||\mathbf{q}_{\text{L}}(\mathbf{g}))\) and \(D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q}_{\text{RL},\alpha}(\mathbf{g}))=D_{\text{SKL},2-\alpha}(\mathbf{1}-\mathbf{p}||\mathbf{q}_{\text{RL},2-\alpha}(\mathbf{g}))=D_{\text{SKL},2-\alpha}(\mathbf{p}||\mathbf{q}_{\text{RL},2-\alpha}(-\mathbf{g}))\), where \(2-\alpha\in[0,1)\) and \(\mathbf{1}-\mathbf{p}\in\mathbf{s}_{2-\alpha}^{-1}(\Delta_{1})\), for any \(\mathbf{p}\in\mathbf{s}_{\alpha}^{-1}(\Delta_{1})\), \(\mathbf{g}\in\mathbb{R}^{2}\), and \(\alpha\in(1,2]\)._ The constraint \(g_{K}=0\) in B5 and B6 is for removing the degree of freedom of translation of the minimizers; Consider that, for example, if \(\mathbf{g}=\bar{\mathbf{g}}\) minimizes \(D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q}_{\text{L}}(\mathbf{g}))\), then \(\mathbf{g}=\bar{\mathbf{g}}+v\mathbf{1}\) also minimizes that SKL-divergence for any \(v\in\mathbb{R}\). ### _MLSLR: Modified LSLR with the Logit Model_ For a comparison with LR in which only the loss function differs, we also consider the modified LSLR (MLSLR), which combines the SKL-divergence loss with the ordinary logit model \(\mathbf{q}_{\text{L}}(\mathbf{g}(\mathbf{x}))\), that is, the learning process \[\min_{\mathbf{g}\in\mathcal{G}}\!\left[-\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{K}s_{\alpha}(t_{k}(y_{i}))\ln s_{\alpha}\!\left(\frac{e^{g_{k}(\mathbf{x}_{i})}}{\sum_{l=1}^{K}e^{g_{l}(\mathbf{x}_{i})}}\right)\right]. \tag{11}\] ### _LSQLR: MLSLR with \(\alpha\to 1\)_ We find it useful to consider the limit \(\alpha\to 1\) of MLSLR in understanding properties of MLSLR and LSLR. **Theorem 2**.: _For any \(K\geq 2\) and \(\mathbf{p},\mathbf{q}\in\mathbf{s}_{\alpha}^{-1}(\Delta_{K-1})\), \(\frac{2\alpha}{(1-\alpha)^{2}K}D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q})\approx\|\mathbf{p}-\mathbf{q}\|^{2}\) as \(\alpha\to 1\), where \(\|\cdot\|\) is the Euclidean norm in \(\mathbb{R}^{K}\)._ This theorem indicates, in particular, that MLSLR with a limiting smoothing level \(\alpha\to 1\) approaches least squares logistic regression (LSQLR) \[\min_{\mathbf{g}\in\mathcal{G}}\left[\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{K}\left(t_{k}(y_{i})-\frac{e^{g_{k}(\mathbf{x}_{i})}}{\sum_{l=1}^{K}e^{g_{l}(\mathbf{x}_{i})}}\right)^{2}\right].
\tag{12}\] LSQLR adopts the logit model \(\mathbf{q}_{\mathrm{L}}(\mathbf{g}(\mathbf{x}))\) as a consistent estimator (as \(\mathbf{q}(\mathbf{x})\)) of the true CPD function \(\mathbf{p}(\mathbf{x})\) via minimizing an empirical version of the mean squared distance \(\mathbb{E}_{\mathbf{X}}\big[D_{\mathrm{SQ}}(\mathbf{p}(\mathbf{X})\|\mathbf{q}(\mathbf{X}))\big]\) (plus the \(\mathbf{q}\)-independent quantity \(1-\mathbb{E}_{\mathbf{X}}\left[\|\mathbf{p}(\mathbf{X})\|^{2}\right]\)), where \(D_{\mathrm{SQ}}(\mathbf{p}\|\mathbf{q})\coloneqq\|\mathbf{p}-\mathbf{q}\|^{2}\). Then, one can regard MLSLR with \(\alpha\in(0,1)\) as interpolating between LR (\(\alpha=0\)) and LSQLR (\(\alpha\to 1\)). In contrast, even taking the consequence of Theorem 2 into account, LSLR with \(\alpha\to 1\), \[\min_{\mathbf{g}\in\mathcal{G}}\left[\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{K}\left\{t_{k}(y_{i})-s_{\alpha}^{-1}\left(\frac{e^{g_{k}(\mathbf{x}_{i})}}{\sum_{l=1}^{K}e^{g_{l}(\mathbf{x}_{i})}}\right)\right\}^{2}\right], \tag{13}\] retains the \(\alpha\)-dependence, and this method is not practical since \(s_{\alpha}^{-1}\) diverges almost everywhere as \(\alpha\to 1\). Due to this trouble, we do not study LSLR using \(\alpha\) very close to \(1\). ### _Summary on Our Comparing Methods_ LR applies the KL-divergence loss and logit model (see Section II-B and (1)), LSLR applies the SKL-divergence loss and R-logit model (see Sections II-C and II-E, and (6)), and MLSLR including LSQLR applies the SKL-divergence loss and logit model (see Sections II-F and II-G, and (11) and (12)). This paper formulated these methods according to the framework of ERM, and these methods do not have an explicit term for regularization in our formulations (so we do not use the terminology 'regularization' except when discussing statements of existing studies). We will compare these methods that use different loss functions and consistent probability estimators, under settings regarding the underlying data distribution and data characteristics, and the model size or representation ability of the probability estimators, according to classical analysis for ERM methods. One may be concerned about the remaining one of the four combinations of the KL- or SKL-divergence loss and the logit or R-logit model. The following problem corresponds to the combination of the KL-divergence loss and the R-logit model: \[\min_{\mathbf{g}\in\mathcal{G}}\left[-\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{K}t_{k}(y_{i})\ln s_{\alpha}^{-1}\left(\frac{e^{g_{k}(\mathbf{x}_{i})}}{\sum_{l=1}^{K}e^{g_{l}(\mathbf{x}_{i})}}\right)\right]. \tag{14}\] However, an element of the R-logit model can take a negative value, so the KL-divergence for this model will be ill-defined and the optimization will fail. This method is therefore not promising and will not be considered in this study. ## III Statistical Analysis: LR versus MLSLR ### _Setting, Notation, and Basic Properties_ In the theoretical analysis presented in this section, for the sake of interpretability of the comparison results, we consider the simple case of binary classification (\(K=2\)) for the task of minimizing misclassification rate (\(\ell=\ell_{\mathrm{zo}}\)). See Appendix B for analysis under more general settings including multi-class cases and cost-sensitive tasks.
We write the distribution \(\Pr(\mathbf{X}=\mathbf{x})\) of \(\mathbf{X}\) as \(p_{0}(\mathbf{x})\), relabel \(Y=1,2\) to \(+1,-1\), and abbreviate the conditional probability \(\Pr(Y=+1|\mathbf{X}=\mathbf{x})\) as \(p_{1}(\mathbf{x})\). Then, for the real-valued learner class \(\mathcal{G}=\{g:\mathbb{R}^{d}\to\mathbb{R}\}\), the surrogate loss functions \(\phi\) for LR, LSLR, MLSLR, and LSQLR are respectively given by \(\phi(v,y)=\varphi(yv)\) with \[\begin{split}\varphi_{\mathrm{LR}}(v)&=-\ln\Bigl(\frac{1}{1+e^{-v}}\Bigr),\\ \varphi_{\mathrm{LS},\alpha}(v)&=-\Bigl(1-\frac{\alpha}{2}\Bigr)\ln\Bigl(\frac{1}{1+e^{-v}}\Bigr)-\frac{\alpha}{2}\ln\Bigl(\frac{1}{1+e^{v}}\Bigr),\\ \varphi_{\mathrm{MLS},\alpha}(v)&=-\Bigl(1-\frac{\alpha}{2}\Bigr)\ln\Bigl(\frac{1-\alpha}{1+e^{-v}}+\frac{\alpha}{2}\Bigr)-\frac{\alpha}{2}\ln\Bigl(\frac{1-\alpha}{1+e^{v}}+\frac{\alpha}{2}\Bigr),\\ \varphi_{\mathrm{LSQ}}(v)&=\frac{1}{2(1+e^{v})^{2}}\ \Bigl(\propto\Bigl(1-\frac{1}{1+e^{-v}}\Bigr)^{2}+\Bigl(0-\frac{1}{1+e^{v}}\Bigr)^{2}\Bigr),\end{split} \tag{15}\] where \(\varphi\) itself is also called a surrogate loss (the subscript LR, LS, MLS, or LSQ of an object indicates that it is for LR, LSLR, MLSLR, or LSQLR).2 The loss function \(\varphi_{\mathrm{LSQ}}\) is also known as Savage loss [11]. Also, a labeling function \(h\) is fixed to the sign function, \(h_{\ell_{\mathrm{zo}}}(v)\coloneqq-1\) (if \(v\leq 0\)), \(\coloneqq+1\) (if \(v>0\)), considering the task and surrogate losses. Footnote 2: The learner model \(g\) is written as a real-valued function in Section III; this is a simplification, in the binary case, of the formulations (1), (6), (11), and (12) of the four methods, which adopt an \(\mathbb{R}^{2}\)-valued function, obtained by letting \((g(\mathbf{x}),0)^{\top}\) be the model. First, we summarize results showing that LR, MLSLR, and LSQLR can consistently perform probability estimation via the logit model, while LSLR can via the R-logit model: **Corollary 1**.: _Assume \(\alpha\in[0,1)\), and let \(\tilde{g}\in\arg\min_{g:\mathbb{R}^{d}\to\mathbb{R}}\mathcal{R}_{\mathrm{sur}}(g;\phi)\). Then, regardless of the distribution of \((\mathbf{X},Y)\), \(\frac{1}{1+e^{-\tilde{g}(\mathbf{x})}}=p_{1}(\mathbf{x})\) a.s. for LR, MLSLR, and LSQLR, and \(s_{\alpha}^{-1}\bigl(\frac{1}{1+e^{-\tilde{g}(\mathbf{x})}}\bigr)=p_{1}(\mathbf{x})\) a.s. for LSLR._ Also, it can be found that the surrogate loss \(\phi\) is properly designed for the task loss \(\ell_{\mathrm{zo}}\) under the labeling function \(h_{\ell_{\mathrm{zo}}}\). **Corollary 2**.: _Assume \(\alpha\in[0,1)\), and let \(\tilde{g}\in\arg\min_{g:\mathbb{R}^{d}\to\mathbb{R}}\mathcal{R}_{\mathrm{sur}}(g;\phi)\). Then, regardless of the distribution of \((\mathbf{X},Y)\), \(\mathcal{R}_{\mathrm{risk}}(h_{\ell_{\mathrm{zo}}}\circ\tilde{g};\ell_{\mathrm{zo}})=\inf_{f:\mathbb{R}^{d}\to[K]}\mathcal{R}_{\mathrm{risk}}(f;\ell_{\mathrm{zo}})\) for LR, LSLR, MLSLR, and LSQLR._ This result can be proved from the fact that the loss \(\varphi\) is classification calibrated; Refer to [12]. Also, this result can be generalized to the multi-class cases (\(K>2\)) and cost-sensitive tasks (\(\ell\neq\ell_{\mathrm{zo}}\)) [13]. These properties form an important basis for analysis of the performance in probability estimation and classification tasks with an empirical surrogate risk minimizer, which are the subjects below. These results indicate that the methods have no difference in the limit of the performances, and suggest that we should discuss their estimation performance (error) and the adequacy of the models in more specific settings.
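The binary surrogate losses in (15) are easy to check numerically. The sketch below (ours; it assumes the margin convention \(\phi(v,y)=\varphi(yv)\) stated above) evaluates the population surrogate risk at a single point with \(p_{1}(\mathbf{x})=0.8\) over a fine grid of logits and confirms the links of Corollary 1: LR, MLSLR, and LSQLR recover \(p_{1}\) through the sigmoid, while LSLR recovers it only after applying \(s_{\alpha}^{-1}\).

```python
import numpy as np

def phi_lr(v):
    # Logistic loss: -log sigma(v) with sigma(v) = 1 / (1 + exp(-v)).
    return np.log1p(np.exp(-v))

def phi_ls(v, a):
    # LSLR surrogate: smoothed targets (1 - a/2, a/2) with plain log-loss terms.
    return -(1 - a / 2) * np.log(1 / (1 + np.exp(-v))) - (a / 2) * np.log(1 / (1 + np.exp(v)))

def phi_mls(v, a):
    # MLSLR surrogate: the model probabilities are also smoothed before the log.
    return (-(1 - a / 2) * np.log((1 - a) / (1 + np.exp(-v)) + a / 2)
            - (a / 2) * np.log((1 - a) / (1 + np.exp(v)) + a / 2))

def phi_lsq(v):
    # Savage / least-squares surrogate, with the normalization used in (15).
    return 0.5 / (1 + np.exp(v)) ** 2

# Population check of Corollary 1 at a single x with p1 = P(Y=+1|x) = 0.8:
# the logit minimizing each surrogate risk recovers p1 through the stated link.
p1, a = 0.8, 0.4
g = np.linspace(-8, 8, 200001)
risk = lambda phi: p1 * phi(g) + (1 - p1) * phi(-g)
sig = lambda t: 1 / (1 + np.exp(-t))
g_lr  = g[np.argmin(risk(phi_lr))]                   # sigmoid(g_lr)  ~ 0.8
g_mls = g[np.argmin(risk(lambda v: phi_mls(v, a)))]  # sigmoid(g_mls) ~ 0.8
g_lsq = g[np.argmin(risk(phi_lsq))]                  # sigmoid(g_lsq) ~ 0.8
g_ls  = g[np.argmin(risk(lambda v: phi_ls(v, a)))]   # sigmoid(g_ls)  ~ s_a(0.8)
print(sig(g_lr), sig(g_mls), sig(g_lsq), (sig(g_ls) - a / 2) / (1 - a))
```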
### _Lower Efficiency with Correctly-Specified Models_ #### Iv-B1 Theories on Asymptotic Behaviors In order to make a detailed comparison, we focus in this section on a specific case where the data are distributed in association with a certain linear model and the methods adopt a linear learner class \(\mathcal{G}\) (the correctly-specified model). Namely, assuming that the true conditional positive probability is \(p_{1}(\mathbf{x})=\frac{1}{1+e^{-\tilde{\mathbf{\beta}}^{\top}\mathbf{x}}}\) for any \(\mathbf{x}\in\mathbb{R}^{d}\), we study the estimation result of the true parameter \(\tilde{\mathbf{\beta}}\) by LR, MLSLR, and LSQLR that use the same logit model with the learner class \(\mathcal{G}=\{\mathbf{g}(\cdot)=\mathbf{\beta}^{\top}\cdot\mid\mathbf{\beta}\in\mathbb{R}^{d}\}\) as a probability estimator. Since LSLR adopts a consistent probability estimator (the R-logit model) different from the others, it will not give a consistent parameter estimate of \(\tilde{\mathbf{\beta}}\) under the above-mentioned setting. Thus, it is difficult to perform fair comparisons of LSLR with the other methods, and so we here consider only LR and MLSLR including LSQLR. The only difference between these compared methods lies in their loss functions. With the correctly-specified model, one can show consistency, asymptotic normality, and asymptotic mean squared error (AMSE) of an empirical parameter estimate defined by \[\hat{\mathbf{\beta}}_{n}\coloneqq\operatorname*{arg\,min}_{\mathbf{\beta}\in\mathbb{R}^{d}}\ \frac{1}{n}\sum_{i=1}^{n}\phi(\mathbf{\beta}^{\top}\mathbf{x}_{i},y_{i}), \tag{16}\] on the basis of well-established theories for generalized linear models (see [14, Section 3] for LR) or M-estimators (see [15] or [16, Chapter 6] for MLSLR and LSQLR). **Theorem 3**.: _For LR, MLSLR, or LSQLR, assume \(\alpha\in[0,1)\),_ 1. _\(\phi\) is a surrogate loss function defined by (15)._ 2. _\(\Pr(\mathbf{\beta}^{\top}\mathbf{X}=0)=0\) for any \(\mathbf{\beta}\in\mathbb{R}^{d}\) such that \(\mathbf{\beta}\neq 0\)._ 3. _\(p_{1}(\mathbf{x})=\frac{1}{1+e^{-\tilde{\mathbf{\beta}}^{\top}\mathbf{x}}}\) for any \(\mathbf{x}\in\mathbb{R}^{d}\) and some \(\tilde{\mathbf{\beta}}\in\mathbb{R}^{d}\)._ _Then, \(\hat{\mathbf{\beta}}_{n}\) defined by (16) converges almost surely to \(\tilde{\mathbf{\beta}}\)._ **Theorem 4**.: _Assume C1-C3 in Theorem 3, \(\alpha\in[0,1)\), and_ 1.
_\(\mathbb{E}_{\mathbf{X}}\big[\|\mathbf{X}\|^{2}\big]<\infty\) for LR, or \(\mathbb{E}_{\mathbf{X}}\big[\|\mathbf{X}\|^{3}\big]<\infty\) for MLSLR and LSQLR._ _Then, for \(\hat{\mathbf{\beta}}_{n}\) defined by (16), \(\sqrt{n}(\hat{\mathbf{\beta}}_{n}-\tilde{\mathbf{\beta}})\) converges in distribution to a \(d\)-dimensional normal distribution with mean \(\mathbf{0}\) and covariance matrix \(\mathrm{C}=\mathrm{B}^{-1}\mathrm{A}\mathrm{B}^{-1}\) with_ \[\begin{split}&\mathrm{A}=\mathbb{E}_{(\mathbf{X},Y)}\big[\nabla\phi(\tilde{\mathbf{\beta}}^{\top}\mathbf{X},Y)\nabla\phi(\tilde{\mathbf{\beta}}^{\top}\mathbf{X},Y)^{\top}\big],\\ &\mathrm{B}=\mathbb{E}_{(\mathbf{X},Y)}\big[\nabla^{2}\phi(\tilde{\mathbf{\beta}}^{\top}\mathbf{X},Y)\big],\end{split} \tag{17}\] _where \(\nabla\) and \(\nabla^{2}\) are respectively the nabla and Hessian operators with respect to the model parameter, and where \(\mathrm{A}\) and \(\mathrm{B}\) for LR, MLSLR, and LSQLR are_ \[\begin{split}\mathrm{A}_{\mathrm{LR}}&=\mathrm{B}_{\mathrm{LR}}=\mathbb{E}_{\mathbf{X}}\big[p_{1}(\mathbf{X})\{1-p_{1}(\mathbf{X})\}\mathbf{X}\mathbf{X}^{\top}\big],\\ \mathrm{A}_{\mathrm{MLS},\alpha}&=\mathbb{E}_{\mathbf{X}}\Big[\tfrac{\{p_{1}(\mathbf{X})\}^{3}\{1-p_{1}(\mathbf{X})\}^{3}\mathbf{X}\mathbf{X}^{\top}}{\{p_{1}(\mathbf{X})-\alpha(p_{1}(\mathbf{X})-\frac{1}{2})\}^{2}\{1-p_{1}(\mathbf{X})+\alpha(p_{1}(\mathbf{X})-\frac{1}{2})\}^{2}}\Big],\\ \mathrm{B}_{\mathrm{MLS},\alpha}&=\mathbb{E}_{\mathbf{X}}\Big[\tfrac{\{p_{1}(\mathbf{X})\}^{2}\{1-p_{1}(\mathbf{X})\}^{2}\mathbf{X}\mathbf{X}^{\top}}{\{p_{1}(\mathbf{X})-\alpha(p_{1}(\mathbf{X})-\frac{1}{2})\}\{1-p_{1}(\mathbf{X})+\alpha(p_{1}(\mathbf{X})-\frac{1}{2})\}}\Big],\\ \mathrm{A}_{\mathrm{LSQ}}&=\mathbb{E}_{\mathbf{X}}\big[\{p_{1}(\mathbf{X})\}^{3}\{1-p_{1}(\mathbf{X})\}^{3}\mathbf{X}\mathbf{X}^{\top}\big],\\ \mathrm{B}_{\mathrm{LSQ}}&=\mathbb{E}_{\mathbf{X}}\big[\{p_{1}(\mathbf{X})\}^{2}\{1-p_{1}(\mathbf{X})\}^{2}\mathbf{X}\mathbf{X}^{\top}\big].\end{split} \tag{18}\] In the large-sample limit \(n\to\infty\), the AMSE of \(\hat{\mathbf{\beta}}_{n}\) becomes \[n\mathbb{E}\big[\|\hat{\mathbf{\beta}}_{n}-\tilde{\mathbf{\beta}}\|^{2}\big]\to\mathrm{tr}(\mathrm{C}). \tag{19}\] Therefore, the ratio \(\mathrm{tr}(\mathrm{C}_{\mathrm{LR}})/\mathrm{tr}(\mathrm{C})\) (which is called the asymptotic relative efficiency, ARE) serves as an indicator of the estimation performance of a method corresponding to the matrix \(\mathrm{C}\); The smaller the ARE is, the lower the asymptotic efficiency of the method is (a larger-size sample is needed to achieve the same level of estimation performance as LR). Note that it can be theoretically found that LR gives the asymptotically most efficient estimate by considering the Cramér-Rao bound, since LR is a maximum likelihood method. Table I shows AREs of MLSLR and LSQLR in the example where the data with a 2-dimensional covariate follow the distribution \(\bar{F}\), in which the probability density at \((\mathbf{X},Y)=(\mathbf{x},+1)\) is given as a product of \[p_{0}(\mathbf{x})=\delta_{X_{1}}(1)\cdot\frac{1}{\sqrt{2\pi}}\exp\bigl(-\tfrac{1}{2}x_{2}^{2}\bigr),\quad p_{1}(\mathbf{x})=\frac{1}{1+e^{-\tilde{\mathbf{\beta}}^{\top}\mathbf{x}}} \tag{20}\] with \(\tilde{\beta}_{1}=0,1,2\), \(\tilde{\beta}_{2}=1,2,4\), where \(\delta_{Z}(z)\) is a point mass distribution at \(Z=z\).
It indicates that the asymptotic efficiency of MLSLR tends to decrease, as the smoothing level \(\alpha\) increases to \(1\). #### Iv-B2 Simulation Experiment We are also interested in estimation performance of MLSLR and LSQLR in a finite-sample situation as well as the large-sample limit. The numerical experiment in this section was performed to see it.3 Footnote 3: Our program codes for experiments in Sections III-B2, III-C2, and IV and Appendix D are in [https://github.com/yamasakizygos/LSIS](https://github.com/yamasakizygos/LSIS). We consider the task with the zero-one task loss \(\ell_{\mathbf{z}_{0}}\). With the correctly-specified model (20), we calculated \(\hat{\mathbf{\beta}}_{n}\) with a training sample of size \(n=25,50,\ldots,800\) and evaluated the size of bias (SoB) \(\|\hat{\mathbf{\beta}}_{n}-\hat{\mathbf{\beta}}\|\) and test task risk (TTR) \(\frac{1}{m}\sum_{t=1}^{m}\ell_{\mathbf{z}_{0}}(h_{\ell_{\mathbf{z}_{0}}}(\hat{\mathbf{\beta}}_{ n}^{\top}\mathbf{x}_{i}),y_{i})\) (i.e., misclassification rate) with a test sample of size \(m=10^{4}\), for LR, MLSLR with \(\alpha=0.2,0.4,0.6,0.8\), and LSQLR. Note that it makes no sense to compare the test surrogate risk (TSR) \(\frac{1}{m}\sum_{i=1}^{m}\phi(\hat{\mathbf{\beta}}_{n}^{\top}\mathbf{x}_{i},y_{i})\) for different surrogate losses. We trained the model parameter using (batch-version) Adam with learning rate that is multiplied by \(10^{-1/2}\) every 50 epochs from 0.01 for 150 epochs, from the initial point \(\hat{\mathbf{\beta}}\) for LR or together with multi-start strategy with 30 initial points scattered from \(\hat{\mathbf{\beta}}\) for MLSLR and LSQLR. Figure 2 shows some of the results on the mean and standard deviation (STD) of the errors over 100 randomized trials. Consequently, similarly to the ARE results in Table I, it was also experimentally confirmed that parameter estimation per formance with the correctly-specified model decreased along with increase of the smoothing level \(\alpha\) even when the sample size is small (see mean of SoB in Figure 2). Along with this behavior, TTR for MLSLR and LSQLR also deteriorated. ### _Robustness against Model Misspecification_ #### Iv-C1 Similarity to Existing Robust LRs Use of a loss function that leads to low efficiency with correctly-specified models may be motivated by expectation for robustness against model misspecification, as also suggested by [17]: One may want to make the estimation result stable even when the data distribution deviates from the specified model. Various studies have confirmed that LR lacks robustness against model misspecification [18], and then proposed robust LRs via the idea of \(\rho\)-transformation (see below) of the loss function \(\varphi_{\text{LR}}\)[15, 19, 20]; See also the monograph [21, Chapter 7.2]. [19] proposed to use a \(\rho\)-transformed loss \[\varphi(v)=\rho(\varphi_{\text{LR}}(v)), \tag{21}\] where \(\rho:[0,\infty)\to\mathbb{R}\) is a function with sublinear growth (e.g., \(\rho_{\text{P},c}(v)=v\) (if \(v\leq c\)), \(=2(cv)^{1/2}-c\) (if \(v>c\)) with some constant \(c>0\), inspired by a Huber-type loss). However, his robust estimator based on \(\rho_{\text{P},c}\) is not consistent even with correctly-specified models, and it has been pointed out that it is not sufficiently robust against outliers [22]. 
Thus, seeking a robust LR with consistency (having a guarantee like Corollary 1), [15]4 Footnote 4: They originally defined their estimator by the loss function \[\varphi(v)=\zeta_{1,c}\left(\varphi_{\text{LR}}(v)\right)+\eta\big{(}\frac{1} {1\epsilon^{-c}v}\big{)}+\eta\big{(}\frac{1}{1\epsilon^{c}v}\big{)}, \tag{22}\] where \(\zeta_{1,c}(v)=v-v^{2}/(2c)\) (if \(v\leq c\)), \(=c/2\) (if \(v>c\)) with some constant \(c>0\), and where \(\eta(v)=\int_{0}^{c}\zeta_{1,c}^{\prime}(-\ln t)\ dt\). The form (23)-(24) in the body is that our study derived so that it follows the unified formulation through the \(\rho\)-transformation (21). \[\rho_{\text{BY},c}(v)=\zeta_{1,c}(v)+\zeta_{2,c}(e^{-v})+\zeta_{2,c}(1-e^{-v}) \tag{23}\] that is non-decreasing in \(v\geq 0\) and bounded, where \[\begin{split}\zeta_{2,c}(v)=\big{[}v-e^{-c}+\frac{1}{c}\big{\{} e^{-c}(c+1)\\ +v(\ln v-1)\big{\}}\big{]}\big{]}\,\mathbb{1}\,(v\geq e^{-c}).\end{split} \tag{24}\] Also, [20] used, besides a data-weighting scheme similar to [23], \(\bar{\zeta}_{1,c}(v)=ve^{-\sqrt{c}}\) (if \(v\leq c\)), \(=-2e^{-\sqrt{v}}(1+\sqrt{v})+e^{-\sqrt{c}}(2(1+\sqrt{c})+c)\) (if \(v>c\)) instead of \(\zeta_{1,c}\) in the formulation (22), yielding an increasing and bounded \(\rho\)-transformation \(\rho_{\text{CH},c}\). Our observation here is that MLSLR and LSQLR can be regarded as instances of \(\rho\)-transformation-based robust LR, with \[\begin{split}\rho_{\text{MLS},\alpha}(v)&=-\big{(}1 -\frac{\alpha}{2}\big{)}\ln((1-\alpha)e^{-v}+\frac{\alpha}{2})\\ &\quad-\frac{\alpha}{2}\ln((1-\alpha)(1-e^{-v})+\frac{\alpha}{2} \big{)},\\ \rho_{\text{LSQ}}(v)&=\frac{1}{2}(1-e^{-v})^{2}. \end{split} \tag{25}\] Both of these transformations are increasing (in \(v\geq 0\)) and bounded. This property and Figure 3 show close similarity of MLSLR, LSQLR, and the existing robust LRs by [15, 20], which will suggest the robustness of MLSLR and LSQLR. #### Iv-C2 Simulation Experiment In robust statistics, it is common to study the performance of an estimator under (point-mass) data contaminations, which cause model misspecification. The discussion in this section follows that approach. Fig. 2: Errorbar plots of SoB and TTR for LR (red), MLSLR with \(\alpha=0.2,0.4,0.6,0.8\) (green, blue, cyan, yellow), and LSQLR (magenta) versus \(n\), where circles and errorbars represent the mean and STD of the errors (for Section III-B2). We introduce the \(\mathbb{R}^{d}\)-valued functional \(\mathbf{T}\), defined on the space of probability distribution functions \(F\) of random variables \((\mathbf{X},Y)\in\mathbb{R}^{d}\times[K]\), that represents the procedure of the surrogate risk minimization: \[\mathbf{T}(F)\coloneqq\operatorname*{arg\,min}_{\mathbf{\beta}\in\mathbb{R}^{d}}\ \mathbb{B}_{(\mathbf{X},\mathbf{Y})\sim F}\left[\phi(\mathbf{\beta}^{\top}\mathbf{X},Y)\right], \tag{26}\] where the expectation is taken with respect to \((\mathbf{X},Y)\sim F\). Now, we are interested in behaviors of the estimate \(\mathbf{\beta}_{\mathbf{\varepsilon},(\mathbf{x}_{c},y_{c})}=\mathbf{T}(F_{\mathbf{\varepsilon}, (\mathbf{x}_{c},y_{c})})\) for the contaminated distribution \[F_{\mathbf{\varepsilon},(\mathbf{x}_{c},y_{c})}=(1-\epsilon)\bar{F}+\epsilon\delta_{( \mathbf{X},\mathbf{Y})}(\mathbf{x}_{c},y_{c}), \tag{27}\] where \(\bar{F}\) is a nominal distribution of \((\mathbf{X},Y)\) such that \(p_{1}(\mathbf{x})=\frac{1}{1+\epsilon^{-\beta^{\top}\mathbf{x}}}\) implying \(\mathbf{\beta}=\mathbf{T}(\bar{F})\), and \(\epsilon\ll 1\) is a small ratio. 
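The \(\rho\)-transformation view in (21) and (25) can also be verified numerically. The following sketch (ours) checks, on a grid, that \(\rho_{\text{MLS},\alpha}\) and \(\rho_{\text{LSQ}}\) are non-decreasing and bounded, and that composing them with the LR loss reproduces the MLSLR and LSQLR surrogates of (15).

```python
import numpy as np

def rho_mls(w, a):
    # rho-transform for MLSLR (Eq. (25)); w plays the role of the LR loss value.
    return (-(1 - a / 2) * np.log((1 - a) * np.exp(-w) + a / 2)
            - (a / 2) * np.log((1 - a) * (1 - np.exp(-w)) + a / 2))

def rho_lsq(w):
    # rho-transform for LSQLR (Eq. (25)).
    return 0.5 * (1 - np.exp(-w)) ** 2

def phi_lr(v):
    return np.log1p(np.exp(-v))

a = 0.4
v = np.linspace(-6.0, 6.0, 7)
w = np.linspace(0.0, 20.0, 2001)

# (i) Bounded and non-decreasing: the transforms flatten out for large LR-loss
#     values, which is what limits the influence of grossly misfit points.
print(rho_mls(w[-1], a), rho_lsq(w[-1]))          # finite plateaus
print(np.all(np.diff(rho_mls(w, a)) >= -1e-12))   # non-decreasing on the grid

# (ii) Composing them with the LR loss recovers the MLSLR / LSQLR surrogates
#      of Eq. (15), i.e., phi = rho(phi_LR) as in Eq. (21).
phi_mls = (-(1 - a / 2) * np.log((1 - a) / (1 + np.exp(-v)) + a / 2)
           - (a / 2) * np.log((1 - a) / (1 + np.exp(v)) + a / 2))
print(np.allclose(rho_mls(phi_lr(v), a), phi_mls))                   # True
print(np.allclose(rho_lsq(phi_lr(v)), 0.5 / (1 + np.exp(v)) ** 2))   # True
```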
Although theoretical discussion to guarantee the robustness of an estimator often adopts the notion of 'breakdown point' (\(\inf\{\epsilon\mid\sup_{(\mathbf{x}_{c},y_{c})}\|\mathbf{\beta}_{\mathbf{\varepsilon},( \mathbf{x}_{c},y_{c})}\|=\infty\}\)), it has been known that it does not bring about meaningful results in the context of binary classification; For example, [24, Theorem 1]5 states that the breakdown (in the above-mentioned sense) in the finite-sample situation will not happen even for LR, which is in conflict with the observed instability of LR as confirmed by [18]. Due to such difficulty in theoretical discussion, we study the robustness by resorting to a simulation experiment here. The focuses of this experiment are to see if MLSLR and LSQLR actually make LR robust and how the choice of smoothing level \(\alpha\) affects the robustness of those methods. Footnote 5: [24, Theorem 2] discussed an unconventional version of breakdown point (\(\inf\{\epsilon\mid\inf_{(\mathbf{x}_{c},y_{c})}\|\mathbf{\beta}_{\mathbf{\varepsilon},( \mathbf{x}_{c},y_{c})}\|=0\}\)) too, under the linear learner class \(\mathcal{G}\). However, such breakdown is different from the situation in which the largest logit takes a quite large value, which is regarded as a trouble in studies on LS. Also, it does not provide suggestions for cases where flexible models such as a neural network model are used, because it can occur depending heavily on the linearity of the learner. This paper thus adopts only the conventional version of the breakdown point. We consider the setting with the task with the zero-one task loss \(\ell_{\text{zo}}\), the nominal distribution \(\bar{F}\) given by (20), and the point-mass contamination of \(\mathbf{\epsilon}=0.05\), \(x_{c,2}=-10,-9.9,\ldots,10\) (\(x_{c,1}=1\)), and \(y_{c}=\pm 1\). We could not find a closed-form representation of \(\mathbf{\beta}_{\mathbf{\varepsilon},(\mathbf{x}_{c},y_{c})}\), so we estimated it with a sample from \(F_{\mathbf{\varepsilon},(\mathbf{x}_{c},y_{c})}\) of sufficiently large size such that the resulting variance gets negligibly small (denote it \(\hat{\mathbf{\beta}}_{\mathbf{\varepsilon},(\mathbf{x}_{c},y_{c})}\)). We calculated \(\hat{\mathbf{\beta}}_{\mathbf{\varepsilon},(\mathbf{x}_{c},y_{c})}\) with a training sample of size \(n=10^{4}\) and evaluated SoB \(\|\hat{\mathbf{\beta}}_{\mathbf{\varepsilon},(\mathbf{x}_{c},y_{c})}-\hat{\mathbf{\beta}}\|\) and \(\text{TTR}\ \frac{1}{n}\sum_{i=1}^{m}\ell_{20}(h_{\ell_{\text{zo}}}(\hat{\mathbf{\beta}}_{ \mathbf{\varepsilon},(\mathbf{x}_{c},y_{c})}^{\top}\mathbf{x}_{i}),\mathbf{y}_{i})\) with a sample of size \(m=10^{4}\) (both samples follow the distribution (27)), for LR, MLSLR with \(\alpha=0.2,0.4,0.6,0.8\), and LSQLR (by 1 trial). We trained the model parameter in the same way as one in Section III-B2. Figure 4 shows some of the results. Using a larger smoothing level \(\alpha\) tended to improve SoB and TTR for many \((\mathbf{x}_{c},y_{c})\)'s that are so anomalous for (20) that \(\frac{1}{1+\epsilon^{-\gamma}\beta^{\top}\mathbf{x}_{c}}\) gets quite small (in the setting with larger \(\hat{\mathbf{\beta}}_{2}\)). This result indicates that LS greatly robustified LR with larger \(\alpha\). Also, the improvement of TTR was larger than that of the experiment in Section III-B2, which clarifies that LS is promising for better classification performance. 
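As a compact illustration of the contamination setup (26)-(27), the sketch below (ours) re-creates a scaled-down version of this experiment: it samples from the nominal model (20), replaces an \(\epsilon\)-fraction of the data by a point mass of anomalous positives, and fits LR (\(\alpha=0\)) and MLSLR by direct minimization of the empirical surrogate risk. The sample size, seed, and single-start BFGS optimizer are our simplifications (the paper uses Adam with a multi-start strategy for the non-convex losses), so the printed numbers are only indicative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def phi(v, a):
    # Binary MLSLR surrogate from Eq. (15); a = 0 gives the plain LR loss.
    if a == 0.0:
        return np.logaddexp(0.0, -v)
    return (-(1 - a / 2) * np.log((1 - a) / (1 + np.exp(-v)) + a / 2)
            - (a / 2) * np.log((1 - a) / (1 + np.exp(v)) + a / 2))

def sample(n, beta, eps, xc2, yc):
    # Nominal model (20): x = (1, x2), x2 ~ N(0,1), P(Y=+1|x) = sigmoid(beta.x);
    # an eps-fraction is replaced by the point mass at ((1, xc2), yc) as in (27).
    x = np.column_stack([np.ones(n), rng.standard_normal(n)])
    y = np.where(rng.random(n) < 1 / (1 + np.exp(-x @ beta)), 1.0, -1.0)
    m = rng.random(n) < eps
    x[m] = (1.0, xc2)
    y[m] = yc
    return x, y

def fit(x, y, a):
    risk = lambda b: np.mean(phi(y * (x @ b), a))
    return minimize(risk, np.zeros(2), method="BFGS").x

beta_true = np.array([0.0, 2.0])
x, y = sample(100_000, beta_true, eps=0.05, xc2=-8.0, yc=+1)  # anomalous positives
for a in (0.0, 0.4, 0.8):
    print(a, fit(x, y, a))  # larger a should stay closer to (0, 2)
```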
Fig. 4: Plots of SoB and TTR for LR (red), MLSLR with \(\alpha=0.2,0.4,0.6,0.8\) (green, blue, cyan, yellow), and LSQLR (magenta) versus \(x_{c,2}\), where solid and dashed lines are for \(y_{c}=+1,-1\) (for Section III-C2). ## IV Experiment: LSLR versus MLSLR In this section, so as to study the significance of LSLR using a consistent probability estimator different from LR and MLSLR, we perform an experimental comparison between LSLR and MLSLR, which respectively apply probability estimators based on the R-logit model and the logit model, while using intrinsically the same (surrogate) loss functions based on the SKL-divergence for probability estimation. Several previous works claim that squeezing of the logits in LSLR (see Theorem 1, B6) works as regularization and is an advantage of LS. Recalling the contrasting property that the logits of MLSLR can output quite large values (see Theorem 1, B5), if this claim were correct, LSLR would perform better than MLSLR. The experiment will test whether a regularization mechanism based on logit-squeezing actually works. Once LSLR and MLSLR select a (common) model class \(\mathcal{G}\) (e.g., the network architecture), the optima of their surrogate risk \(\min_{\mathbf{g}\in\mathcal{G}}\mathcal{R}_{\text{sur}}(\mathbf{g};\phi)\) will be different. In order to reduce such a difference stemming from the lack of representation ability of the model and focus on their estimation performance, we perform an experiment with a large-size learner model, unlike those experiments in Sections III-B2 and III-C2. Following the experiments by [3] with the CIFAR-10 dataset, we trained LR, LSLR and MLSLR with \(\alpha=0.2,0.4,0.6,0.8\), and LSQLR based on the ResNet-18 architecture (while we did not use the weight-decay [25] in the implementation of [3]) with the training sample of size \(n=5\times 10^{4}\), and evaluated \(\text{TSR}\ \frac{1}{m}\sum_{i=1}^{m}\phi(\mathbf{g}(\mathbf{x}_{i}),y_{i})\) and \(\text{TTR}\ \frac{1}{m}\sum_{i=1}^{m}\ell_{\text{zo}}(h_{\ell_{\text{zo}}}(\mathbf{g}(\mathbf{x}_{i})),y_{i})\) (where \(h_{\ell_{\text{zo}}}(\mathbf{v})=\arg\max_{k\in[K]}\{v_{k}\}\)) with \(m=10^{4}\). We trained each model for 150 epochs using Nesterov's accelerated SGD similar to [3], and adopted a model at the point in time when each test risk achieved its minimum among those evaluated at the end of each epoch. The results for 20 trials are summarized in Table II (mean and STD of TTR for LR and LSQLR were .0784 \(\pm\) .0046 and .0762 \(\pm\) .0019). Note that it is meaningless to compare TSRs for different \(\alpha\)'s, and we compare TSRs or TTRs of LSLR and MLSLR with the same \(\alpha\), or TTRs with different \(\alpha\)'s. The R-logit model \(q_{\text{RL},\alpha}(\mathbf{g}(\mathbf{x}))\), a consistent probability estimator of LSLR, has an unnecessarily larger range than that of the CPD function \(\mathbf{p}(\mathbf{x})\), \(\Delta_{K-1}\) (see Theorem 1, B1 and Figure 5). This fact can be interpreted as learning a probability estimator from an unnecessarily larger hypothesis space, and we consider that it may prevent proper learning and degrade the prediction performance of LSLR, in contrast to the positive statement by existing studies in favor of logit-squeezing.
To evaluate the degree to which the R-logit model deviates from the probability simplex, we calculated the following three criteria: Table III shows outlier probability distribution estimate rate (OPDER) \[\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\left\{\mathbf{q}_{\text{RL},\alpha}(\mathbf{g}( \mathbf{x}_{i}))\notin\Delta_{K-1}\right\}, \tag{28}\] outlier probability estimate rate (OPER) \[\frac{1}{n\times K}\sum_{i=1}^{n}\sum_{k=1}^{K}\mathbb{1}\left\{q_{\text{RL},\alpha,k}(\mathbf{g}(\mathbf{x}_{i}))\notin[0,1]\right\}, \tag{29}\] and mean size of residual (MSoR) \[\frac{\text{OPER}}{n\times K}\sum_{i=1}^{n}\sum_{k=1}^{K}\left(\left|q_{\text {RL},\alpha,k}(\mathbf{g}(\mathbf{x}_{i}))-\frac{1}{2}\right|-\frac{1}{2}\right)_{+}, \tag{30}\] evaluated for training and test (with \(m\) instead of \(n\)) sets at an epoch with the minimum TSR, where \((\cdot)_{+}\coloneqq\max\{0,\cdot\}\). Comparing TSRs or TTRs of LSLR and MLSLR with the same \(\alpha\) (see Table II), it can be seen that the modification of the consistent probability estimator or squeezing of the logits does not help to improve the probability estimation and classification performance. Rather, MLSLR had stable and better performance in many cases. Table III indicates that the R-logit model \(q_{\text{RL},\alpha}(\mathbf{g}(\mathbf{x}))\) of the LSLR often, and greatly for larger \(\alpha\), deviates from the probability simplex, as Theorem 1, B1 and Figure 5 also suggest. This result is coherent with the fact that the TTR of LSLR is much worse than that of MLSLR for large \(\alpha\), and supports our hypothesis about the trouble of LSLR. These observations and considerations are novel findings and recommend MLSLR over LSLR when one uses a large-size learner model. Besides, the best method with respect to the TTR was MLSLR with an intermediate smoothing level \(\alpha=0.4\). Although this result is apart from the preset purpose of the experiment in this section, it is also notable and can be understood from the trade-off between efficiency and robustness discussed in Sections III-B and III-C: Even a large-size neural network model \((\arg\min_{\mathbf{g}\in\mathcal{G}}\mathcal{R}_{\text{surr}}(\mathbf{g};\mathbf{\phi}))\) cannot completely represent the optimal solution \((\arg\min_{\mathbf{g}\in\mathcal{R}^{d}\rightarrow\mathbb{R}^{K}}\mathcal{R}_{ \text{surr}}(\mathbf{g};\mathbf{\phi}))\), and due to such deviations (model misspecification), robustification by LS would have contributed to improve the probability estimation and classification performance. ## V Conclusion and Future Prospect This paper has proposed the loss view, that LS adopts a loss function different from that of LR, in contrast to the regularization view, that LS is a sort of regularization techniques, adopted in most existing studies. This loss view will also provide theoretical generalization analysis of LSLR; See Appendix E. Also, we introduced MLSLR, for fair comparison with LR, that adopts the same logit model as a consistent probability estimator. Previous studies have stated 1. If a teacher network is trained with LS, knowledge distillation into a student network is much less effective [3, 26]. 2. LS is competitive with loss-correction techniques under label noise [27]. 3. LS can help to speed up the convergence of SGD by reducing the variance [28]. but they regarded the inconsistent logit model as a probability estimator of LSLR, which does not result in a fair comparison with LR. 
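The three deviation criteria can be computed directly from the logits. The sketch below (ours) evaluates them for the R-logit model on random logits; note that the normalization of MSoR in (30) is ambiguous in our copy of the text, so the code simply averages the clipped residuals over all \(n\times K\) entries.

```python
import numpy as np

def r_logit(logits, a):
    # R-logit model (10): inverse smoothing applied to the softmax output.
    z = logits - logits.max(axis=1, keepdims=True)
    q_l = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    K = logits.shape[1]
    return (q_l - a / K) / (1 - a)

def deviation_criteria(logits, a):
    q = r_logit(logits, a)
    # OPDER (Eq. (28)): fraction of inputs whose R-logit output leaves the simplex
    # (rows always sum to 1, so it suffices to check the [0, 1] range entrywise).
    opder = np.mean(np.any((q < 0) | (q > 1), axis=1))
    # OPER (Eq. (29)): fraction of individual probability estimates outside [0, 1].
    oper = np.mean((q < 0) | (q > 1))
    # MSoR (cf. Eq. (30)): average distance by which estimates fall outside [0, 1].
    residual = np.maximum(np.abs(q - 0.5) - 0.5, 0.0)
    msor = residual.mean()
    return opder, oper, msor

# Toy illustration: confident logits make the R-logit estimate leave the simplex.
rng = np.random.default_rng(0)
logits = 3.0 * rng.standard_normal((1000, 10))
print(deviation_criteria(logits, a=0.6))
```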
Thus, it may be still meaningful to re-consider these statements in the introduced alternative view, coupled with the fact that MLSLR provided better probability estimation and classification performance than LSLR when they depend on a large-size neural network model. In Sections III-B and III-C, we showed that MLSLR and LSQLR are less efficient but more robust than LR: This tendency becomes more pronounced as the smoothing level is increased. As demonstrated in Section IV, the selection of the smoothing level controls the trade-off between efficiency and robustness and is practically important for better classification performance. For example, [29] studied and proposed covariate-dependent adaptation of the smoothing level: it decides the smoothing level locally according to an estimated maximum conditional probability and estimated marginal distribution of the covariate. Also, [30] adopted target-dependent adaptation of the smoothing level, so called non-uniform LS. The trade-off that we have discovered may lead to a more sensible selection of the smoothing level. Since the SKL divergence, on which LS is based, is a divergence that provides a robust statistical procedure, it may also be associated with another class of robustifying divergence, such as density power divergence [31, 32, 33]. Also, although entropy regularization techniques [34, 35] actively used in reinforcement learning would be more difficult to express its corresponding loss function or divergence in a closed form and theoretically analyze them, it might be able to interpret them and other similar logit-squeezing [36, 37] as robustification in the same way as LSLR. These related topics are the subject of future work, and these methods have potential to be improved like MLSLR against LSLR. ## Appendix A Proof of Theorems 1 and 2 First, we present a proof of Theorem 1. Proof of Theorem 1.: The statements B1, B2, B7, and B8 can be proved by trivial calculations, so we omit the proof. The method of Lagrange multiplier, \[\begin{split}&\frac{\partial}{\partial u_{2}}\big{\{}D_{\text{ SKL},\alpha}(\mathbf{p}||\mathbf{q})-\lambda\big{(}\sum_{k=1}^{K}q_{k}-1\big{)}\big{\}}\\ &=-(1-\alpha)\frac{(1-\alpha)p_{1}+\frac{\alpha}{K}}{(1-\alpha) q_{k}+\frac{\alpha}{K}}-\lambda=0,\quad\text{for $k=1,\ldots,K$},\end{split} \tag{31}\] shows that the optimal solution is determined to \(\mathbf{p}\) as far as \(\alpha\neq 1\). This result and \(D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{p})=0\) prove B3. On the basis of the calculus of the second derivatives of \(D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q})\) in \((\mathbf{p},\mathbf{q})\), that is, for \(k,l=1,\ldots,K\) s.t. 
\(k\neq l\), \[\begin{split}&\frac{\partial^{2}D_{\text{SKL},\alpha}(\mathbf{p}|| \mathbf{q})}{\partial p_{1}^{2}}=\frac{(1-\alpha)^{2}}{s_{\alpha}(\mathbf{p}_{k})}, \ \frac{\partial^{2}D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q})}{\partial q_{k}^{2}}= \frac{(1-\alpha)^{2}s_{\alpha}(\mathbf{p}_{k})}{s_{\alpha}(\mathbf{q}_{k})^{2}},\\ &\frac{\partial^{2}D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q})}{ \partial p_{1}\mathbf{q}_{k}}=\frac{\partial^{2}D_{\text{SKL},\alpha}(\mathbf{p}|| \mathbf{q})}{\partial q_{k}\mathbf{q}_{l}}=\frac{\partial^{2}D_{\text{SKL},\alpha}( \mathbf{p}||\mathbf{q})}{\partial p_{2}\mathbf{q}_{l}}=0,\end{split} \tag{32}\] one has that, for \(\mathbf{v}=(v_{1},\ldots,v_{K}),\mathbf{w}=(w_{1},\ldots,w_{K})\in\mathbb{R}^{K}\), \[\begin{split}&\left(\mathbf{v}\ \mathbf{w}\right)\nabla^{2}D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q})\left(\mathbf{v}\ \mathbf{w}\right)^{\top}\\ &=(1-\alpha)^{2}\left(\mathbf{v}\ \mathbf{w}\right)\left(\begin{matrix}\text{diag} \big{\{}\frac{1}{s_{\alpha}(\mathbf{p}_{k})}\big{\}}&\text{diag}\big{\{}\frac{s_{ \alpha}(\mathbf{q}_{k})}{s_{\alpha}(\mathbf{q}_{k})^{2}}\big{\}}\end{matrix}\right) \left(\mathbf{w}\right)\\ &=(1-\alpha)^{2}\sum_{k=1}^{K}\Bigl{(}\frac{v_{k}}{s_{\alpha}(\mathbf{p }_{k})^{\top/2}}-\frac{s_{\alpha}(\mathbf{p}_{k})^{\top/2}w_{k}}{s_{\alpha}(\mathbf{q }_{k})}\Bigr{)}^{2}\geq 0,\end{split} \tag{33}\] which clarifies the statement B4. The constraint \(g_{K}=0\), together with the symmetry, implies \(g_{2}=\cdots=g_{K}=0\). Then, as \(q_{1,K}(\mathbf{g})=\frac{1}{(K-1)\epsilon e^{\beta_{1}}}=0\), one has \(g_{1}=\infty\) (statement B5). Also, since \(\frac{q_{\text{SKL},\alpha,K}(\mathbf{g})}{(K-1)\epsilon e^{\beta_{1}}}=\frac{ \alpha}{K}=0\), one has \(g_{1}=\ln(\frac{K}{\alpha}+1-K)\) if \(\frac{\alpha}{\alpha}+1-K>0\), i.e., for \(\alpha\in(0,1)\cup(1,\frac{K}{K-1})\) (statement B6). Next, we prove Theorem 2. Proof of Theorem 2.: Taylor expansion \(\ln(1+v)\approx v-\frac{1}{2}v^{2}\) for \(v\in\mathbb{R}\) with a small absolute value shows \[\begin{split}& D_{\text{SKL},\alpha}(\mathbf{p}||\mathbf{q})=\sum_{k=1}^ {K}\{(1-\alpha)p_{k}+\frac{\alpha}{K}\}\ln\frac{(1-\alpha)p_{k}+\frac{ \alpha}{K}}{(1-\alpha)q_{k}+\frac{\alpha}{K}}\\ &\rightarrow\sum_{k=1}^{K}\Bigl{[}\bigl{\{}(1-\alpha)p_{k}+\frac{ \alpha}{K}\bigr{\}}\bigl{\{}\frac{(1-\alpha)K}{\alpha}p_{k}-\frac{1}{2} \bigl{(}\frac{(1-\alpha)K}{\alpha}p_{k}\bigr{)}^{2}\bigr{\}}\\ &\quad-\bigl{\{}(1-\alpha)p_{k}+\frac{\alpha}{K}\bigr{\}}\bigl{\{} \frac{(1-\alpha)K}{\alpha}q_{k}-\frac{1}{2}\bigl{(}\frac{(1-\alpha)K}{\alpha} q_{k}\bigr{)}^{2}\bigr{\}}\Bigr{]}\\ &=\frac{(1-\alpha)^{2}K}{2\alpha}\|\mathbf{p}-\mathbf{q}\|^{2}+O\bigl{(} \frac{(1-\alpha)^{2}}{\alpha^{2}}\bigr{)},\end{split} \tag{34}\] when \(\alpha\to 1\). ## Appendix B Generalized Version of Corollaries 1 and 2 This section describes the generalized version of Corollaries 1 and 2 for the multi-class settings (\(K>2\)), cost-sensitive tasks (\(\ell\neq t_{\text{zo}}\)), and \(\alpha\in[0,1)\cup(1,\frac{K}{K-1}]\). 
For an \(\mathbb{R}^{K}\)-valued learner, the surrogate loss function \(\phi\) for LR, LSLR, MLSLR, and LSQLR can be represented as \[\begin{split}\phi_{\text{LR}}(\mathbf{v},y)&=-\sum_{k=1} ^{K}t_{k}(y)\ln\Bigl{(}e^{v_{k}}/\sum_{l=1}^{K}e^{v_{l}}\Bigr{)},\\ \phi_{\text{LS},\alpha}(\mathbf{v},y)&=-\sum_{k=1}^{K}s_{ \alpha}(t_{k}(y))\ln\bigl{(}e^{v_{k}}/\sum_{l=1}^{K}e^{v_{l}}\bigr{)},\\ \phi_{\text{MLS},\alpha}(\mathbf{v},y)&=-\sum_{k=1}^{K}s_{ \alpha}(t_{k}(y))\ln s_{\alpha}\bigl{(}e^{v_{k}}/\sum_{l=1}^{K}e^{v_{l}} \bigr{)},\\ \phi_{\text{LSQ}}(\mathbf{v},y)&=\sum_{k=1}^{K}\bigl{[}t_{ k}(y)-e^{v_{k}}/\sum_{l=1}^{K}e^{v_{l}}\bigr{)}^{2}.\end{split} \tag{35}\] Then, one has the following results: **Proposition 1**.: _Assume \(\alpha\in[0,1)\cup(1,\frac{K}{K-1}]\), and let \(\bar{\mathbf{g}}\in\arg\min_{\mathbf{g}\in\mathbb{R}^{d}\to\mathbb{R}^{K}}\mathcal{R}_{sur}( \mathbf{g};\phi)\). Then, regardless of the distribution of \((\mathbf{X},Y)\), \(\mathbf{q}_{\mathrm{L}}(\tilde{\mathbf{g}}(\mathbf{x}))=\mathbf{p}(\mathbf{x})\) a.s. for LR, MLSLR, and LSQLR, and \(\mathbf{q}_{\mathrm{RL},\alpha}(\tilde{\mathbf{g}}(\mathbf{x}))=\mathbf{p}(\mathbf{x})\) a.s. for LSLR._ **Proposition 2**.: _Assume \(\alpha\in[0,1)\cup(1,\frac{K}{K-1}]\), and let \(\tilde{\mathbf{g}}\in\arg\min_{\mathbf{g}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{K}} \mathcal{R}_{\text{sur}}(\mathbf{g};\mathbf{\phi})\). Then, regardless of the distribution of \((\mathbf{X},Y)\), \(h_{\ell}(\mathbf{v})=\arg\min_{t\in[K]}\sum_{k=1}^{K}q_{\mathrm{L},k}(\mathbf{v})\ell (1,k)\) satisfies \(\mathcal{R}_{\text{stk}}(h_{\ell}\circ\tilde{\mathbf{g}};\ell)=\inf_{f:\mathbb{R} ^{d}\rightarrow[K]}\mathcal{R}_{\text{stk}}(f;\ell)\) for LR, MLSLR, and LSQLR, and \(h_{\ell}(\mathbf{v})=\arg\min_{t\in[K]}\sum_{k=1}^{K}q_{\mathrm{RL},\alpha,k}(\bm {v})\ell(1,k)\) satisfies \(\mathcal{R}_{\text{stk}}(h_{\ell}\circ\tilde{\mathbf{g}};\ell)=\inf_{f:\mathbb{R }^{d}\rightarrow[K]}\mathcal{R}_{\text{stk}}(f;\ell)\) for LSLR._ Theorem 1, B3 shows Proposition 1, and Proposition 2 is trivial from Proposition 1, considering the form of the task risk, \(\mathcal{R}_{\text{stk}}(f,\ell)=\mathbb{E}_{\mathbf{X}}\left[\sum_{k=1}^{K} \Pr(Y=k|\mathbf{X})\ell(f(X),k)\right]\). Also, one has to pay attention to the labeling function of LSLR with \(\alpha\in(1,\frac{K}{K-1}]\) even for the task with the zero-one task loss \(\ell_{20}\). **Corollary 3**.: _Under the assumption of Proposition 2, \(h_{\ell_{\text{stk}}}(\mathbf{v})=\arg\max\{v_{k}\}_{k}\) for LR, LSLR with with \(\alpha\in[0,1)\), MLSLR with \(\alpha\in[0,1)\cup(1,\frac{K}{K-1}]\), and LSQLR, and \(h_{\ell_{\text{stk}}}(\mathbf{v})=\arg\min\{v_{k}\}_{k}\) for LSLR with with \(\alpha\in(1,\frac{K}{K-1}]\)._ ## Appendix C Proof of Theorems 3 and 4 [14] shows Theorems 3 and 4 for the loss \(\varphi_{\mathrm{LR}}\). Also, the surrogate losses \(\varphi_{\mathrm{MLS},\alpha}\), \(\alpha\in(0,1)\) and \(\varphi_{\mathrm{SQ}}\) are bounded, and Theorems 3 and 4 for these losses can be proved in a way similar to that for [15, Theorems 2.3 and 2.4], which is based on [16, Chapter 6]; Refer to these studies for the proof of Theorems 3 and 4. 
Additionally, we note that these theorems can be extended to the case \(\alpha\in(1,2]\) (which is \((1,\frac{K}{K-1}]\) for \(K=2\)): As Theorem 1, B8, \(\Lambda_{\mathrm{MLS},\alpha}=\Lambda_{\mathrm{MLS},2-\alpha}\), and \(\mathrm{B}_{\mathrm{MLS},\alpha}=\mathrm{B}_{\mathrm{MLS},2-\alpha}\) suggest, MLSLRs with \(\alpha\in(1,2]\) and \(2-\alpha\in[0,1)\) provide the same result. ## Appendix D Discussion on LSLR and MLSLR with \(\alpha\in(1,\frac{K}{K-1}]\) In the binary setting \(K=2\), Theorem 1, B8 suggests that LSLRs or MLSLRs with \(\alpha\in(1,2]\) and \(2-\alpha\in[0,1)\) give the same probability estimation and classification performance. On the other hand, in a multi-class setting \(K>2\), an analogy for LSLRs or MLSLRs with \(\alpha\in(1,\frac{K}{K-1}]\) and \(K-(K-1)\alpha\in[0,1)\) may not exactly hold; See Theorem 1, B7. It would be more difficult to understand the results for \(\alpha\in(1,\frac{K}{K-1}]\) than to understand the results for \(\alpha\in[0,1)\) in a multi-class setting \(K>2\) in our way of interpreting the LS technique with respect to LR, because the difference from LR (\(\alpha=0\)) gets bigger (just like that the mathematical approximation becomes less accurate for a more distant point). Therefore, we here report only considerations based on the experimental observations. We took CIFAR-10 experiments similar to those in Section IV for \(\alpha\in(1,\frac{K}{K-1}]\). We tried LSLR and MLSLR for \(\alpha=\frac{9.2}{9},\frac{9.4}{9},\frac{9.6}{9},\frac{9.8}{9},\frac{9.9}{9}\) under the task with the zero-one task loss \(\ell_{20}\), where we used the labeling function \(h_{\ell_{\text{stk}}}\) described in Corollary 3. The results are shown in Tables IV and V. As Figure 5 and Table V show, the R-logit model \(\mathbf{q}_{\mathrm{RL},\alpha}(\mathbf{g}(\mathbf{x}))\) of the LSLR often and largely deviates from the probability simplex, and LSLR gave worse TSR and TTR than MLSLR with the same \(\alpha\). When \(\alpha\in(1,\frac{K}{K-1}]\), the smaller \(\alpha\) is, the larger the deviation tended, which negatively affected the probability estimation and classification performance of LSLR. Also, all TTRs of MLSLR with \(\alpha\in(1,\frac{K}{K-1}]\) were worse than TTR of MLSLR with \(\alpha=0.4\), the best result of MLSLR with \(\alpha\in[0,1)\). In the CIFAR-10 experiments (\(K=10\)), we could not find advantage of choosing \(\alpha\in(1,\frac{K}{K-1}]\). ## Appendix E Discussion on Generalization Analysis Several previous studies have provided experimental investigations of the generalization performance of LSLR, but there has not been theoretical analysis. This may be attributed to the fact that many studies view the LS technique as regularization added to LR. The loss view gives generalization analysis of LSLR collaterally. Although it does not present a meaningful comparison between LR and LSLR, we here give discussions on generalization analysis in probability estimation and classification tasks for LSLR from the loss view to compensate for the absence of theory, and mention challenges in this direction. According to [12, Theorem 4], one can obtain a generalization bound for LSLR (and LR) that uses a classification calibrated, Lipschitz continuous and convex loss function \(\varphi\) in a simple setting of Section III-A. This bound is governed by a covexified variational transformation of the loss \(\varphi\) (called \(\psi\)-transform in [12]), and the tightest possible upper bound uniform over all probability distributions. 
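A quick numerical check of the binary case discussed at the beginning of this appendix (ours, using the surrogate \(\varphi_{\mathrm{MLS},\alpha}\) of (15)) confirms the claimed equivalence: for \(K=2\), the MLSLR losses with smoothing levels \(\alpha\) and \(2-\alpha\) coincide pointwise, so the two settings give the same estimation result.

```python
import numpy as np

def phi_mls(v, a):
    # Binary MLSLR surrogate from Eq. (15).
    return (-(1 - a / 2) * np.log((1 - a) / (1 + np.exp(-v)) + a / 2)
            - (a / 2) * np.log((1 - a) / (1 + np.exp(v)) + a / 2))

# For K = 2, the smoothing levels alpha and 2 - alpha give identical losses
# (the binary counterpart of Theorem 1, B8).
v = np.linspace(-6, 6, 121)
for a in (1.2, 1.6, 2.0):
    print(a, np.allclose(phi_mls(v, a), phi_mls(v, 2 - a)))  # True
```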
By viewing the LS technique as modification of the surrogate loss, a lot of theories by existing research can be directly applied to analyze the properties of LSLR in other various settings; See [12, Theorem 5] under low-noise assumption (which assumes that \(p_{1}(\mathbf{x})\) is unlikely to be close to \(1/2\)), [38, Chapter 10] for a norm-regularized version using a kernel-based learner, and [13] for multi-class cost-sensitive tasks. However, it also should be noted that such results grounded on learning theories are often loose bounding-based evaluation and contain quantities such like \(\inf_{g\in\mathcal{G}}\mathcal{R}_{\text{surf}}(\mathbf{g};\phi)\) or \(\psi\)-transform that cannot be known in advance or may vary for methods using different surrogate losses, making it difficult to make clear comparisons between methods using different losses. ## Acknowledgment This work was supported by Grant-in-Aid for JSPS Fellows, Number 20J23367.
2307.08378
eGPU: A 750 MHz Class Soft GPGPU for FPGA
This paper introduces the eGPU, a SIMT soft processor designed for FPGAs. Soft processors typically achieve modest operating frequencies, a fraction of the headline performance claimed by modern FPGA families, and obtain correspondingly modest performance results. We propose a GPGPU architecture structured specifically to take advantage of both the soft logic and embedded features of the FPGA. We also consider the physical location of the embedded memories and DSP Blocks relative to the location and number of soft logic elements in order to have a design with balanced resources. Our goal is to create a high performance soft processor able to implement complex portions of FPGA system designs, such as the linear solvers commonly used in wireless systems, through push-button compilation from software. The eGPU architecture is a streaming multiprocessor (SM) machine with 512 threads. Each SM contains 16 scalar processors (SP). Both IEEE754 FP32 and INT32 integer arithmetic are supported. We demonstrate a single SM eGPU in an Intel Agilex device, requiring 5600 ALMs and 24 DSP Blocks, which closes timing at over 770 MHz from a completely unconstrained compile. Multiple eGPUs can also be tightly packed together into a single Agilex FPGA logic region, with minimal speed penalty.
Martin Langhammer, George Constantinides
2023-07-17T10:32:51Z
http://arxiv.org/abs/2307.08378v1
# eGPU: A 750 MHz Class Soft GPGPU for FPGA ###### Abstract This paper introduces the eGPU, a SIMT soft processor designed for FPGAs. Soft processors typically achieve modest operating frequencies, a fraction of the headline performance claimed by modern FPGA families, and obtain correspondingly modest performance results. We propose a GPGPU architecture structured specifically to take advantage of both the soft logic and embedded features of the FPGA. We also consider the physical location of the embedded memories and DSP Blocks relative to the location and number of soft logic elements in order to have a design with balanced resources. Our goal is to create a high performance soft processor able to implement complex portions of FPGA system designs, such as the linear solvers commonly used in wireless systems, through push-button compilation from software. The eGPU architecture is a streaming multiprocessor (SM) machine with 512 threads. Each SM contains 16 scalar processors (SP). Both IEEE754 FP32 and INT32 integer arithmetic are supported. We demonstrate a single SM eGPU in an Intel Agilex device, requiring 5600 ALMs and 24 DSP Blocks, which closes timing at over 770 MHz from a completely unconstrained compile. Multiple eGPUs can also be tightly packed together into a single Agilex FPGA logic region, with minimal speed penalty. GPGPU; FPGA ## I Introduction Soft processors have been a longstanding feature of the FPGA landscape [1, 2]. Typically these have been low performance (both in terms of operating frequency and processing capability), and have been used for handling ancillary functions, or in some cases to control larger datapath structures implemented in the FPGA fabric. Many FPGAs also include embedded processors, typically ARM based [3, 4], to support higher performance requirements. Often only a small fraction of the computation is done in the hard processor (almost all of the processing is offloaded to the soft logic array) [5, 6]. In some cases the entire FPGA acts as a processor, for example Microsoft Project Brainwave [7]. Integration and flexibility are the key value propositions of the FPGA. While there are many discrete processors that can outperform soft processors, the ability to tightly integrate peripherals and accelerators with a soft processor can often give the FPGA an overall performance advantage. Certain features of the FPGA are already hardened (_e.g._ the DSP Blocks), meaning that FPGAs [8] and GPUs [9] now support similar levels of floating point density at similar process nodes. This suggests that a well-crafted soft GPU could achieve similar performance density to a hard GPU; by leveraging flexibility in data movement and custom processing elements, the FPGA could outperform it in a system setting. The eGPU is designed from the outset to be a high performance soft processor. There is often a large gap between the maximum clock frequency of the FPGA device and the actual speed achieved for a complex design. An FPGA has a natural speed limit, which is governed by the slowest feature, such as the clock network, embedded memories, or DSP Blocks. In practice, the critical path is usually in the soft logic portion of the design. With our focus on effective design of the soft logic, it is the Agilex DSP Blocks configured in FP32 mode that are the limiting factor for the eGPU at 771 MHz.
The combination of a simple memory hierarchy and shallow processing pipeline of the eGPU provides a low latency and computationally efficient processor for signal processing algorithms that are commonly used in FPGA systems. Our eGPU makes the following novel contributions: 1. Performance: eGPU closes timing at 771 MHz in a current Agilex FPGA without any synthesis, placement, or timing constraints. This is a higher Fmax than any other soft processor we are aware of, of any complexity. 2. Resource Efficiency: the resource balance of eGPU - logic, DSP, and memory - is approximately the same ratio as found in the FPGA. Multiple instances can be specified while maintaining a high efficiency design. In addition, the overall resource requirements are considerably smaller than those of previously published soft GPUs. 3. Flexible ISA: eGPU can target a subset of the initialized threads on an instruction by instruction basis. Processing efficiency for operations such as reduction is boosted without requiring thread divergence. Our goal is not to replace standard GPGPUs, or to compete directly with them, but rather to use the SIMT architecture as a basis for an efficient and effective component of FPGA system design, implementing some algorithms that are challenging to code and maintain in RTL. We are building a different type of GPGPU, one that works well for small datasets, in terms of both processing efficiency and latency. ## II Comparison to Other Soft GPU Architectures A number of soft GPU architectures have been published, including Guppy [10], FGPU [11], FlexGrip [12] and MIAOW [13]. The capabilities of FGPU and MIAOW have been improved by others in DO-GPU [14] and SCRATCH [15], but at the cost of considerable additional resources and/or Fmax reduction. Of all the previous soft GPU architectures we surveyed, only the FGPU (and its derivatives) were faster than 100 MHz. Vector processors have also been studied for FPGA, including VEGAS [16], VENICE [17], and VectorBlox [18], the last of which has been commercialized by Microsemi [19]. The Fmax of these is still modest, with the fastest configuration running at 154 MHz on a recent device. FlexGrip and FGPU are compared with our proposed eGPU in Table I. This comparison is only representative; there are significant differences between eGPU and the other soft GPUs. FGPU and FlexGrip support enhanced features (like thread divergence), and have a much more complex memory system, with caches. Both can issue instructions across multiple SMs; for eGPU this must be manually controlled by an external agent. But FGPU and FlexGrip have much deeper pipelines than eGPU. While the pipeline depth of eGPU is only 9 (for both Integer and FP operations), FGPU requires 21 cycles for INT and 44 for FP. However, eGPU is an order of magnitude smaller compared to FlexGrip, and nearly an order of magnitude faster. Although FGPU and FlexGrip are mapped onto older FPGAs (28nm and 40nm planar nodes, respectively), this would not explain the performance difference, especially with the deep pipelines on those processors. When FGPU was ported to a newer Intel Stratix 10 (14nm FinFET process) [14], the clock frequency remained approximately the same. ## III Architecture The eGPU is organized in a typical SIMT machine topology, but is designed from the start for an FPGA target. We will use the term _wavefront_ to denote a number of parallel threads issued in a single clock cycle, and _thread block_ to denote the number of wavefronts required to run all of the initialized threads.
Our flexible ISA feature allows both the wavefront and thread block to be varied on an instruction by instruction basis. Each SM (see Figure 1) contains 16 SPs, and additional processing structures, such as dot product operators and special function units (SFUs), can be optionally added. Fig. 1: Representative SM Architecture. The dot product performs a FP32 vector multiplication and reduction across an entire wavefront, which is supported directly by DSP Block features. The SFU used here is a FP32 inverse square root. Routing fan-out and resource balancing were the main considerations in the choice of the 16 SP per SM. The instruction section is not shown, but will be described later in this section. The read bandwidth to the shared memory is four ports, with one write port back from the SPs. Global access is by direct writes into, and reads out of the shared memory at one 32-bit word wide lane. The eGPU supports a maximum of 512 threads, and has a fixed 16 registers per thread. (We chose these values as they would fit into a single M20K memory, and they were not too dissimilar to older Nvidia GPGPUs [20]). Currently, a 2D thread space can be defined. All data, whether floating point or integer, is 32 bits wide. Correspondingly, wavefronts are 16 wide, and a thread block is up to 32 wavefronts deep. Load (memory and immediate), store, and processing (both FP32 and INT32) have different latencies. Hazards have to be managed by the programmer; there are no hardware interlocks. These hazards, however (especially dependencies inside the SP), are typically only exposed for small thread blocks. ### _Shared Memory_ The shared memory is configured as a four read port, one write port memory, which requires 4 identical copies of the shared memory space to be maintained. The four read ports are transferred to the 16 SPs in a four phase sequence. Writeback is a 16 phase sequence. The shared memory bandwidth, especially the single word store, is one of the most significant performance bottlenecks for the eGPU. Later in this section, we will describe some novel architectural features which mitigate these limitations. ### _SP Architecture_ The architecture of the SP is shown in Figure 2. A register file with a total of 512 32-bit words is configured as a two read port, one write port memory. The SP can read from, and write to, shared memory, and execute both IEEE754 single precision floating point (FP32) and 32-bit integer (INT32) instructions. We expect that most of the work will be done by the FP ALU, and the INT ALU will mostly be used for address generation. At this time predicates (a conditional branch on a thread by thread basis) are not supported, as we have found that the benchmarks we are most interested in (such as FFT and matrix decomposition) do not make data dependent decisions. All SPs have the same micro-architecture, except the first lane, as the dot product and SFU cores write into this lane. The FP ALU uses the DSP blocks, and does not require any soft logic other than an input multiplexer to convert the pre-configured FP32 multiply-add datapath into an adder. The INT ALU is constructed out of ALMs, plus half a DSP Block for the INT multiply. The multiply is 16x16 with a 32-bit output, which will typically be used for address generation. In addition to the multiplier, the ALU contains logic functions (AND/OR/XOR/NOT), add/sub, and shifters, all with a pipe depth to match the FP ALU.
Some integer functions (especially the shifters and the add/sub) could restrict the 770 MHz goal when used in a full-chip or otherwise densely packed design, but the added logic depth here allows us to spread some of these INT32 functions over two pipeline stages. As an example, the adders are implemented using a carry select structure.
Fig. 2: SP Architecture.
### _Instruction Section_ The instruction section consists of an instruction fetch, instruction memory, instruction decoder, sequencer and thread generator. The instruction fetch stage also supports zero-overhead loops, similar in style to DSP processors [21]. A loop counter is initialized by an instruction (which requires only a single cycle) and is automatically decremented at the bottom of each loop (another single cycle instruction). The instruction memory (I-MEM) contains 40-bit wide words, and has a parameterizable depth. The program size for the expected applications is relatively small. A single M20K can hold a 512x40 block, so we expect that one to four memories would likely be sufficient, although there is no upper limitation on the I-MEM size. (Two of the benchmarks in this paper, the 256-point radix-2 FFT and the 16x16 QRD, require 135 and 40 instructions, respectively, which suggests that multiple programs can easily be contained in a single M20K). The I-MEM can be independently reloaded or updated, including during execution of a program - the external agent just has to be aware of the memory space and update a portion of the I-MEM not currently being accessed. The sequencer controls the application of the decoded instruction signals to the SPs. Some instructions are single cycle, but most require multiple cycles. Operation instructions (FP or INT) typically run for as many wavefronts as are needed until all threads have been executed. Load and store instructions similarly run for all threads, but loads require one clock per four threads loaded, and stores take one cycle per thread. The sequencer also adjusts the number of wavefronts with the variable thread block capability explained below. The 40-bit instruction word is divided into eight fields. The most significant field is the 4-bit Variable field, which can modify the thread block in real time. This field is described later in this section. Next is the 6-bit Opcode field. This allows for 64 instructions, and we currently implement 23 (see Table II). The 2-bit Type field selects INT32, UINT32, or FP32 numerics when an operation is run. The next three fields, 4 bits each, select the destination and two source registers, or alternately the address register in case of an indexed load or store. The single bit X field enables thread snooping, explained below. The 15-bit immediate field is sign extended to 32 bits. It also contains the register address extensions when thread snooping is selected.
Fig. 3: I-WORD.
### _Novel Architectural Features_ The flexible ISA enables the thread block - in both depth and width - to be changed on an instruction by instruction basis, while keeping a constant thread initialization. No data flush is required - the switch occurs instantly with no latency, other than to manually control hazards as the depth changes. This effectively allows the eGPU to switch personalities between SIMT, Vector Processor (VP), multi-threaded CPU, and simple MCU architectures at any point. This can significantly reduce the negative impact of low memory bandwidth.
We will see the effect of this in some of the benchmarks, especially in matrix decomposition, where a normalization value is calculated by a dot product. The result of the dot product will be written back to only the first lane. Through the ability to save only a single thread per wavefront, or even a single thread within a block, a large number of cycles is saved. This feature is controlled by the upper 4 bits of the I-Word. Bits [40:39] set the width of the wavefront (full width, 1/2 width, 1/4 width, and single thread), and bits [38:37] set the depth of the block (full depth, 1/2 depth, 1/4 depth, and single cycle). The Modified Gram-Schmidt (MGS) QRD algorithm demonstrates the power of this variable ISA. Our benchmark is on a 16x16 matrix, which uses 256 threads. A norm is computed on a single column, and then applied to all remaining columns. First, a single wavefront is isolated, to compute the norm. Then a single thread is isolated to write the norm into shared memory, from where it can be broadcast to all threads. A regular SIMT instruction is issued to apply the norm to the remaining columns. With a standard GPU architecture, thread divergence would be used to isolate subsets of the initialized thread space, requiring all threads to be run (whether or not their results are used). One of the limitations of the eGPU is the writeback bandwidth; but using the flexible ISA, the norm writeback only requires a single clock cycle. The VP mode also uses the thread snooping feature. When the X bit in the instruction word is set, two 5-bit sub-fields in the immediate section of the I-Word provide the upper five bits of the operation source registers. This allows the threads of the first wavefront (threads 0 to 15) to access any register in their lane. An example where thread snooping can be used is in the reduction benchmark. The dot product operator writes the reduction of each wavefront into the first lane (threads within the first SP). The first thread of the first lane can then access all of these threads directly, without having to go through the shared memory. ### _FPGA Mapping_ One of our design goals was that the resource usage of the eGPU be balanced. The most common sector type on Agilex devices contains 237 M20Ks, 164 DSP Blocks, and 16,400 ALMs [22]. All of these resources are arranged in columns, 41 LABs high; the mix of resources can be seen in the floorplan output by Quartus. As most of the soft logic is in the INT ALU, we can trade off various feature sets in this block to enable fitting into specific geometries, such as sector boundaries. We will now consider some possibilities for a balanced design inside a sector. Our base eGPU architecture needs 24 DSP Blocks (16 for the FP ALU and 8 for the INT ALU). Each SP uses two M20Ks for register files (32 in total). To pack four SMs per sector requires 128 M20Ks and 96 DSP blocks. This leaves 109 M20Ks remaining for shared memory, which is 27 512x32 memories per eGPU. For a quad read port shared memory, this allows us a 6-deep (3K word) shared memory, or 12K bytes. There are 68 DSP Blocks remaining, or 16 per eGPU, which is how many DSP Blocks are required to implement the dot product core. The sector contains 1640 LABs (16400 ALMs), which gives us a budget of 4100 ALMs per eGPU. ## IV Benchmarks We demonstrate the performance and utility of the eGPU by running two non-trivial benchmarks, FFT and QRD.
We also profile the code to provide an analysis of the strengths and weaknesses of the eGPU architecture, and to provide a starting point for future architectural enhancements. ### _Fft_ We coded a radix-2 (R2) decimation-in-frequency (DIF) FFT, and analyzed the performance for lengths 32 and 256. The R2 butterfly takes two complex inputs. They are added for the upper output. The lower output is the subtraction of the two inputs, followed by a rotation, implemented as a complex multiply by a coefficient. Here, we map each butterfly to its own thread. The 32-point FFT therefore requires only 16 threads, which maps to a single wavefront of the eGPU. The 256-point FFT requires eight wavefronts, which is slightly less than the pipeline depth of the SP. This creates a RAW hazard at one point in the address generation code, which we handle by inserting a NOP. We will see that the shared memory bandwidth limitation is the most significant performance limitation for this benchmark. As an example of the assembly code style, the following code segment calculates the address for each thread at the start of each pass (R1 contains the threadID, R3 and R4 contain address masks, R5 the address rotate value - always '1' for radix-2, and R9 the twiddle addressing increment for this pass). As the eGPU is a SIMT machine, this code runs for all active threads, with the threadID creating a unique address for each one. We will follow the execution for thread ID 110 ("01101110") for the 256-point FFT in pass 2 as an example. The initial values are R3 = "01000000", R4 = "00111111", R5 = 1, and R9 = 2. The resultant data address is 174 ("10101110").

```
AND.INT32 R6,R1,R3;   // R6 = "01000000"
AND.INT32 R7,R1,R4;   // R7 = "00101110"
LSL.INT32 R8,R6,R5;   // R8 = "10000000"
ADD.INT32 R6,R7,R8;   // R6 = "10101110"
NOP;                  // prevent RAW hazard
ADD.INT32 R2,R6,R6;   // R2 = "101011100"
LSL.INT32 R3,R7,R9;   // R3 = "0010111000"
RTS
```

Table III shows the distribution of the instruction types for a radix-2 FFT. The address calculation takes 12% of the cycles, and the actual butterflies 13%, with the shared memory access accounting for 75% of the total. Every pass needs to go through the shared memory so that the butterflies obtain the correct input data locations. The combination of the low memory bandwidth for the eGPU and the number of passes for the R2 FFT makes this example relatively inefficient. ### _QRD_ We also coded a QRD using the Modified Gram-Schmidt algorithm [23], and show results for a small 16x16 matrix. This size of matrix would perform particularly poorly on standard GPUs [24]. The combination of the dot product core and the SFU makes the calculation of the norm value (in the outer loop) very quick. The ability to specify the execution of both a subset of the thread space and/or a subset of a wavefront means that indexed store operations may need to run for as little as a single clock cycle. The results can be seen in Table IV. The FP operations (actual QRD calculation work) are 22% of the total cycles, and writeback to the shared memory is only 11%. Interestingly, the broadcast of the norm from a single thread to all threads (which needs to go through the shared memory) requires almost half of the total time, which is a potential focus area for future architectural optimizations. The hazard-mitigating NOPs require 15% of the cycles, but these would largely disappear for larger matrices.
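For reference, the following minimal NumPy sketch (ours, not the eGPU program) shows the Modified Gram-Schmidt factorization used in this benchmark; the column norm and the per-column inner products are the steps that map onto the dot-product core and SFU, and the single-value norm writeback is where the flexible ISA saves cycles. The function name `mgs_qr` is illustrative.

```python
import numpy as np

def mgs_qr(A):
    """Modified Gram-Schmidt QR: normalize one column at a time and
    immediately orthogonalize the remaining columns against it."""
    Q = np.array(A, dtype=float)            # work on a copy of the input
    n = Q.shape[1]
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(Q[:, k])   # column norm: dot product plus (inverse) square root
        Q[:, k] /= R[k, k]                  # the norm is broadcast to every element of the column
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ Q[:, j]     # one dot product per remaining column
            Q[:, j] -= R[k, j] * Q[:, k]
    return Q, R

A = np.random.randn(16, 16)                 # the 16x16 benchmark size (256 threads)
Q, R = mgs_qr(A)
assert np.allclose(Q @ R, A)
```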
The QR decomposition of these smaller matrices on hard GPUs, even with vendor libraries such as cuBLAS, often has efficiencies measured in the small single digits [25]. But when the actual number of arithmetic operations per instruction is taken into account, the true efficiency of the eGPU is much higher. The FP32 dot product runs for 6% of the cycles, but each dot product instruction performs 31 operations (16 multiplies and 15 adds). This structure is particularly effective on the type of operations needed for MGS, while most of the hard GPU examples we surveyed [24, 25] used Householder reflections [26]. ## V Results We performed a completely unconstrained compile of the eGPU into an Intel Agilex AGFB014R24A1E1V device using Quartus 22.4.0 Pro Edition. The resource requirements are shown in Table V. This closes timing at 771 MHz, with the DSP Block in FP32 multiply-add mode forming the critical path. The other failing paths are primarily soft logic inside the integer ALU. Further experiments with additional pipelining and alternate constructions of the INT ALU achieve only marginal gains, with a soft logic Fmax of 831 MHz. As a result, we can be confident that our core is performant by design, _i.e._, it will compile to the maximum performance (limited by the DSP Block) without any constraints. Because of the resource ratio-driven design methodology, we are able to replicate the eGPU core four times, in a spatially regular fashion, by locking the design into a sector boundary. The logic is slightly larger than the contents of the sector, so about 20% of this design spilled over into the next sector in the Y direction, although still within the horizontal sector boundaries. The same process allows placement of any number of tightly packed quad-eGPU structures side by side if needed, without any reduction in performance on a single quad construct. (There is no routing spillage over the boundaries horizontally). There is a slight (\(\sim 5\%\)) performance degradation in the quad packing to 738 MHz. ## VI Conclusions This paper describes the architecture and implementation of a high performance embedded GPU, which can achieve over 770 MHz operating frequency with an unconstrained compile. The area is modest, and the design has a balanced resource usage of memories, DSP Blocks, and soft logic. As a consequence, we can also tightly pack multiple cores together with a minimal performance degradation. The flexibility of the FPGA means that we can optionally add accelerator cores such as dot-product cores and operators such as elementary function cores. We have also coded and demonstrated benchmarks such as FFTs and QR matrix decomposition.
2306.07362
Large-Scale Multiple Testing of Composite Null Hypotheses Under Heteroskedasticity
Heteroskedasticity poses several methodological challenges in designing valid and powerful procedures for simultaneous testing of composite null hypotheses. In particular, the conventional practice of standardizing or re-scaling heteroskedastic test statistics in this setting may severely affect the power of the underlying multiple testing procedure. Additionally, when the inferential parameter of interest is correlated with the variance of the test statistic, methods that ignore this dependence may fail to control the type I error at the desired level. We propose a new Heteroskedasticity Adjusted Multiple Testing (HAMT) procedure that avoids data reduction by standardization, and directly incorporates the side information from the variances into the testing procedure. Our approach relies on an improved nonparametric empirical Bayes deconvolution estimator that offers a practical strategy for capturing the dependence between the inferential parameter of interest and the variance of the test statistic. We develop theory to show that HAMT is asymptotically valid and optimal for FDR control. Simulation results demonstrate that HAMT outperforms existing procedures with substantial power gain across many settings at the same FDR level. The method is illustrated on an application involving the detection of engaged users on a mobile game app.
Bowen Gang, Trambak Banerjee
2023-06-12T18:36:53Z
http://arxiv.org/abs/2306.07362v1
# Large-Scale Multiple Testing of Composite Null Hypotheses Under Heteroskedasticity ###### Abstract Heteroskedasticity poses several methodological challenges in designing valid and powerful procedures for simultaneous testing of composite null hypotheses. In particular, the conventional practice of standardizing or re-scaling heteroskedastic test statistics in this setting may severely affect the power of the underlying multiple testing procedure. Additionally, when the inferential parameter of interest is correlated with the variance of the test statistic, methods that ignore this dependence may fail to control the type I error at the desired level. We propose a new Heteroskedasticity Adjusted Multiple Testing (HAMT) procedure that avoids data reduction by standardization, and directly incorporates the side information from the variances into the testing procedure. Our approach relies on an improved nonparametric empirical Bayes deconvolution estimator that offers a practical strategy for capturing the dependence between the inferential parameter of interest and the variance of the test statistic. We develop theory to show that HAMT is asymptotically valid and optimal for FDR control. Simulation results demonstrate that HAMT outperforms existing procedures with substantial power gain across many settings at the same FDR level. The method is illustrated on an application involving the detection of engaged users on a mobile game app. _Keywords:_ Composite null hypotheses; Deconvolution estimates; Empirical Bayes; False discovery rate; Heteroskedasticity; Multiple testing with covariates. ## 1 Introduction Suppose \(X_{i}\), \(i=1,\cdots,m\), are independent summary statistics arising from the following random mixture model: \[X_{i} = \mu_{i}+\epsilon_{i},\;\epsilon_{i}\stackrel{{ ind.}} {{\sim}}N(0,\sigma_{i}^{2}) \tag{1}\] \[\mu_{i}\mid\sigma_{i} \stackrel{{ ind.}}{{\sim}} g_{\mu}(\cdot\mid\sigma_{i}),\;\sigma_{i}\stackrel{{ i.i.d.}}{{\sim}}g_{\sigma}(\cdot), \tag{2}\] where \(g_{\mu}(\cdot\mid\sigma_{i})\) and \(g_{\sigma}(\cdot)\) are, respectively, the probability density functions of the unknown mixing distributions of \(\mu\) given \(\sigma_{i}\) and \(\sigma_{i}\). Model (1)-(2) find substantial use in large-scale inference problems (Efron, 2004, 2012, Efron and Tibshirani, 2007, Jin and Cai, 2007), where the Gaussian distribution assumption in Equation (1) often provides a good approximation to the distribution of the summary statistics \(X_{i}\). Following Fu et al. (2022), Sun and McLain (2012), Weinstein et al. (2018), Xie et al. (2012), we assume that \(\sigma_{i}\) are known or can be well estimated from the data. Upon observing the pair \((X_{i},\sigma_{i})\), the goal is to simultaneously test the following \(m\) hypotheses: \[H_{0,i}:\mu_{i}\in\mathcal{A}\quad\text{versus}\quad H_{1,i}:\mu_{i}\notin \mathcal{A},\ i=1,\ldots,m, \tag{3}\] where \(\mathcal{A}\) represents the indifference region such that the researcher is indifferent to the effects in \(\mathcal{A}\)(Sun and McLain, 2012). Here \(H_{0,i}\) represents a composite null hypothesis as opposed to a simple null hypothesis when \(\mathcal{A}\) is singleton. Much of the focus of extant multiple testing methods is directed towards simultaneously testing simple null hypotheses against composite alternatives. 
A typical example arises in genome-wide association studies involving millions of single nucleotide polymorphisms (SNPs), where the primary goal is to discover SNPs that are statistically associated with a specific trait or disease of interest (Basu et al., 2018, Uffelmann et al., 2021). The simultaneous inference problem in these applications requires testing \(m\) hypotheses of the form \(H_{0,i}:\mu_{i}=0\ vs\ H_{1,i}:\mu_{i}\neq 0\) where \(\mu_{i}\) is the unknown effect of SNP \(i\) on the disease response, such as cholesterol level. However, across numerous medical and social science applications it is important to detect if \(\mu_{i}\notin\mathcal{A}\). For instance, Gu and Shen (2018) and Pop-Eleches and Urquiola (2013) study the effect of attending a more selective school on the exam grade of high-school students in Romania. There the inferential objective is to identify schools with a positive effect on the average exam grade, and it is desirable for the null hypothesis to include both zero and negative effects, i.e., to test a one-sided composite null hypothesis \(H_{0,i}:\mu_{i}\in\mathcal{A}\) against the alternative \(H_{1,i}:\mu_{i}\notin\mathcal{A},\ i=1,\ldots,m\), where \(\mathcal{A}=(-\infty,0]\). In high-throughput gene sequencing studies, a fundamental task is to discover genes that exhibit differential expression levels that exceed a biologically relevant threshold \(\mu_{0}\) (Love et al., 2014). So, for each gene \(i\) a two-sided composite null hypothesis \(H_{0,i}:\mu_{i}\in\mathcal{A}\) is tested against the alternative \(H_{1,i}:\mu_{i}\notin\mathcal{A}\) where \(\mathcal{A}=[-\mu_{0},\mu_{0}]\). The standard practice for simultaneously testing a large number of hypotheses involves constructing significance indices, such as \(p-\)values or local false discovery rate (Lfdr) statistics (Basu et al., 2018, Efron, 2012, Sun and Cai, 2007, Sun and McLain, 2012), for ranking the hypotheses and then estimating a threshold along the ranking for type I error control. However, for testing composite null hypotheses, procedures based on \(p-\)values are not as powerful since the \(p-\)values may fail to adapt to the potential asymmetry of the alternative about the null (Sun and Cai, 2007, Sun and McLain, 2012) and tend to concentrate near \(1\) under the null. The Lfdr statistic, on the contrary, adapts to such asymmetry by incorporating information about the null as well as the alternative distribution of the test statistic. Given a summary statistic \(X_{i}\) of \(\mu_{i}\), the Lfdr statistic represents the posterior probability of a case being null and relies on the density of \(X_{i}\) under the null, and its mixture density under the null and the alternative. When testing composite null hypotheses, both these densities are unknown in practical applications and must be estimated from the available data. The heteroskedasticity in the summary statistics raises two main challenges in estimating the composite null and mixture densities. _Effect of heteroskedasticity on the inferential parameter of interest_ - In heteroskedastic settings, the parameter \(\mu_{i}\) and the standard deviation \(\sigma_{i}\) may be correlated (Weinstein et al., 2018). For instance, in a restaurant rating app it is often the case that extremely good and extremely bad restaurants tend to receive a large number of reviews. Thus, if the goal is to identify restaurants within a certain rating range then both the mean and variance of the ratings are related to the number of reviews.
A key to constructing reliable estimates of the composite null and mixture densities depends on a deconvolution step that learns the distribution of \(\mu_{i}\) from the data and can effectively capture the dependence between \(\mu_{i}\) and \(\sigma_{i}\). However, existing approaches for empirical Bayes deconvolution, such as Efron (2016), Koenker and Mizera (2014), assume independence between \(\mu_{i}\) and \(\sigma_{i}\), which is often violated in practice. In Section 3.3, we demonstrate via a numerical example that procedures for testing composite null hypotheses may incur power loss and even fail to control the FDR when their underlying deconvolution estimator ignores this dependence. _Power distortion due to standardization_ - The conventional approach to mitigate the impact of heteroskedasticity is to re-scale each \(X_{i}\) by \(\sigma_{i}\) and construct \(z-\)values \(Z_{i}=X_{i}/\sigma_{i}\) so that the Lfdr statistics can be estimated using the homoskedastic \(Z_{i}\)'s. However, for two-sided composite null hypotheses standardization distorts the underlying scientific question (Sun and McLain, 2012) and, recently, Fu et al. (2022) demonstrate that such a data reduction step may severely affect the power of multiple testing procedures even in the case of simple null hypotheses. In Section 2.3, we present illustrative examples to demonstrate that standardization may lead to considerable power loss while testing one-sided composite null hypotheses as the power of testing procedures can vary substantially with \(\sigma_{i}\). In this article, we propose a new heteroskedasticity-adjusted multiple testing (HAMT) procedure for composite null hypotheses. HAMT represents an effective strategy for incorporating the side-information in the standard deviations for simultaneous testing of composite nulls and it operates in two steps: in step (1) HAMT constructs a significance index for ranking the hypotheses and then in step (2) it estimates a threshold along the ranking for identifying interesting hypotheses. The significance index is a new Lfdr statistic that addresses the methodological challenges discussed earlier in dealing with heteroskedasticity. First, our Lfdr statistic utilizes the full data, namely the summary statistic and its standard deviation, thus avoiding standardization to \(z\)-values and the potential power distortion due to data reduction. Second, the construction of the Lfdr statistic relies on an improved nonparametric empirical Bayes deconvolution estimator that provides a practical strategy for incorporating the dependence between \(\mu_{i}\) and \(\sigma_{i}\), and yields consistent estimates of the composite null and mixture densities in the heteroskedastic setting. HAMT is designed for problems where the number of hypotheses being tested is large, which allows the deconvolution estimator to efficiently learn the latent structural relationship between \(\mu_{i}\) and \(\sigma_{i}\) in the data. Our theoretical results show that for such large-scale problems, HAMT is valid for FDR control and is as powerful as the oracle procedure that has full knowledge of the underlying data generating process under our hierarchical model (Equations (1)-(2)). In our numerical experiments, we find that HAMT exhibits substantial power gains over existing methods across many settings while controlling FDR at the target level. 
Our work is closely related to Sun and McLain (2012), who develop an FDR controlling procedure based on Lfdr statistics for testing composite null hypotheses under heteroskedasticity. However, HAMT differs in two important aspects. First, and in contrast to Sun and McLain (2012), we allow \(\mu_{i}\) and \(\sigma_{i}\) to be dependent in our hierarchical model, which presents a challenging deconvolution problem for estimating the composite null and mixture densities. Second, the kernel method developed in Sun and McLain (2012) for estimating these densities is highly unstable (Fu et al., 2022). Here, we develop a nonparametric empirical Bayes deconvolution estimator which is scalable to large problems and provides consistent estimates of the composite null and mixture densities. In the terminology of Efron (2014), our deconvolution estimator is related to the \(g-\)modeling strategy for empirical Bayes estimation. While existing \(g-\)modeling approaches, such as Efron (2016), Koenker and Mizera (2014), ignore the dependence between \(\mu_{i}\) and \(\sigma_{i}\), we develop a simple yet effective technique for modeling such dependence while estimating the distribution of \(\mu_{i}\). Recently, Gu and Shen (2018) propose an FDR controlling method for one-sided composite null hypotheses. Their approach is based on \(z-\)values and relies on the deconvolution estimate obtained from nonparametric maximum likelihood (Kiefer and Wolfowitz, 1956, Laird, 1978) techniques to estimate the Lfdr. The illustrative examples in Section 2.3 show that such an approach based on standardization may lead to substantial power loss when \(\mu_{i}\) and \(\sigma_{i}\) are correlated. Since variance can be viewed as a covariate in multiple testing problems, our work is also connected to the rapidly expanding literature on multiple testing with generic covariates. Here, proposals for heteroskedasticity adjustment of multiple testing methods vary from using \(\sigma_{i}\) as a potential covariate for pre-ordering the hypotheses (Cao et al., 2022, G'Sell et al., 2016, Lei and Fithian, 2016, Li and Barber, 2017) to grouping methods based on the magnitudes of \(\sigma_{i}\) (Cai and Sun, 2009, Efron, 2008, Hu et al., 2010, Liu et al., 2016). However, such a pre-ordering or grouping based on \(\sigma_{i}\) may not always be informative since a larger \(\sigma_{i}\) does not necessarily imply a relatively higher or lower likelihood of rejecting the null hypothesis. More recently, several methods have been proposed that seek to directly use the covariate information along with the \(p-\)values to develop powerful testing procedures (see for example Boca and Leek (2018), Chao and Fithian (2021), Ignatiadis and Huber (2021), Lei and Fithian (2018), Li and Barber (2019), Zhang et al. (2019), Zhang and Chen (2022) and the references therein). While testing composite null hypotheses, the aforementioned testing procedures, however, can suffer from low power when the null \(p\)-values are overly conservative. Methods that estimate the Lfdr statistic utilizing the test statistic \(X_{i}\) and additional covariates have also been developed (see for instance Chao and Fithian (2021), Leung and Sun (2021), Scott et al. (2015), Tansey et al.
(2018) use the covariate information to estimate the null proportion in an empirical Bayes two-groups model while Chao and Fithian (2021) posit a Gaussian mixture model with \(K\) classes to model the conditional distribution of \(\mu_{i}\) given the covariates, where only the class probabilities depend on the covariates. In contrast to these works, HAMT does not rely on any pre-ordering or grouping of the hypotheses based on the magnitude of \(\sigma_{i}\). Instead, HAMT is based on a Lfdr statistic that directly characterizes the impact of heteroskedasticity on the composite null and mixture densities of the test statistic. For estimating the Lfdr statistics, our approach utilizes an empirical Bayes deconvolution estimator that does not depend on any parametric representation of the distribution of \(\mu_{i}\) conditional on \(\sigma_{i}\). In the following sections, we formally describe the multiple testing problem involving composite null hypotheses, present the oracle procedure, and then introduce the HAMT procedure and its asymptotic properties. ## 2 Multiple testing of composite null hypotheses ### Problem formulation Let \(\theta_{i}=I(\mu_{i}\notin\mathcal{A})\) be an indicator function that gives the true state of the \(i\)th testing problem in Equation (3). For instance, if \(\theta_{i}=1\) then the alternative hypothesis \(H_{1,i}\) is true. Let \(\delta_{i}\in\{0,1\}\) be the decision we make about hypothesis test \(i\), with \(\delta_{i}=1\) being a decision to reject \(H_{0,i}\). Denote the vector of all \(m\) decisions \(\boldsymbol{\delta}=(\delta_{1},\cdots,\delta_{m})\in\{0,1\}^{m}\). A selection error, or false positive, occurs if we assert that \(\mu_{i}\) is not in \(\mathcal{A}\) when it actually is. In large-scale multiple testing problems, false positive decisions are inevitable if we wish to discover interesting effects with a reasonable power. Instead of aiming to avoid any false positives, a practical goal is to keep the false discovery rate (FDR) (Benjamini and Hochberg, 1995) small, which is the expected proportion of false positives among all selections, \[\text{FDR}(\boldsymbol{\delta})=E\left[\frac{\sum_{i=1}^{m}(1-\theta_{i}) \delta_{i}}{\max\{\sum_{i=1}^{m}\delta_{i},1\}}\right].\] The power of a testing procedure is measured by the expected number of true positives (ETP) where, \[\text{ETP}(\boldsymbol{\delta})=E\left(\sum_{i=1}^{m}\theta_{i}\delta_{i} \right)=E\left(\sum_{i=1}^{m}I(\mu_{i}\notin\mathcal{A})\delta_{i}\right).\] Hence, the multiple testing problem in Equation (3) can be formulated as \[\text{maximize}_{\boldsymbol{\delta}}\text{ETP}(\boldsymbol{\delta})\text{ subject to }\text{FDR}(\boldsymbol{\delta})\leq\alpha,\] where \(\alpha\in(0,1)\) is a user-defined cap on the maximum acceptable FDR. A quantity that is closely related to the FDR is the marginal false discovery rate (mFDR) where, \[\text{mFDR}(\mathbf{\delta})=\frac{E\{\sum_{i=1}^{m}(1-\theta_{i})\delta_{i}\}}{E\{ \sum_{i=1}^{m}\delta_{i}\}}.\] Under certain first and second-order conditions on the number of rejections, the mFDR and the FDR are asymptotically equivalent (Basu et al., 2018, Genovese and Wasserman, 2002), and for theoretical convenience we will aim to control mFDR instead. Formally, we study the following problem for the rest of the article: \[\text{maximize}_{\mathbf{\delta}}\ \text{ETP}(\mathbf{\delta})\ \text{subject to mFDR}(\mathbf{\delta})\leq\alpha. 
\tag{4}\] ### Oracle procedure In this section we assume that the mixture densities \(g_{\mu}(\cdot\mid\sigma)\) and \(g_{\sigma}(\cdot)\) in Model (2) are known by the oracle and present the oracle procedure that solves Problem (4). There are two steps involved in the derivation of the oracle procedure: the first step constructs the optimal ranking of hypotheses and the second step determines the best threshold along the ranking that satisfies the mFDR constraint in Problem (4). To rank the \(m\) hypotheses, consider the oracle conditional local FDR (Clfdr) statistic which is defined as, \[T_{i}^{\text{OR}}=T^{\text{OR}}(x_{i},\sigma_{i})=P(\mu_{i}\in\mathcal{A}|x_{i },\sigma_{i})=\frac{f_{0}(x_{i}|\sigma_{i})}{f(x_{i}|\sigma_{i})}, \tag{5}\] where \[f_{0}(x|\sigma)=\int_{\mu\in\mathcal{A}}\phi_{\sigma}(x-\mu)g_{\mu}(\mu\mid \sigma)\mathrm{d}\mu,\ f(x|\sigma)=\int_{\mathbb{R}}\phi_{\sigma}(x-\mu)g_{ \mu}(\mu\mid\sigma)\mathrm{d}\mu \tag{6}\] denote, respectively, the composite null density and the marginal density of \(X\) given \(\sigma\) under Model (1)-(2). In Equation (6), \(\phi_{\sigma}(\cdot-\mu)\) is the density of a Gaussian random variable with mean \(\mu\) and standard deviation \(\sigma\). Next, to derive the best threshold, suppose \(Q(t)\) denotes the mFDR level of the testing procedure \(\mathbf{\delta}^{\text{OR}}(t)=\{I(T_{i}^{\text{OR}}\leq t):1\leq i\leq m\}\) for some \(t\in(0,1)\). We propose the following oracle procedure for Problem (4), \[\mathbf{\delta}^{\text{OR}}(t^{*})=\{I(T_{i}^{\text{OR}}<t^{*}):1\leq i\leq m\}, \tag{7}\] where \(t^{*}=\sup\{t\in(0,1):Q(t)\leq\alpha\}\). Denote \(\mathbf{X}=(X_{1},\ldots,X_{m})\) and \(\mathbf{\sigma}=(\sigma_{1},\ldots,\sigma_{m})\). In Theorem 1 we show that \(\mathbf{\delta}^{\text{OR}}(t^{*})\) has the highest power among all procedures based on \((\mathbf{X},\mathbf{\sigma})\) that control the mFDR at level \(\alpha\). **Theorem 1**.: _Consider Model (1)-(2). The oracle procedure \(\mathbf{\delta}^{\text{OR}}(t^{*})\) in Equation (7) controls mFDR at level \(\alpha\). Additionally if \(\mathbf{\delta}\) is any other procedure based on \((\mathbf{X},\mathbf{\sigma})\) that controls mFDR at level \(\alpha\) then we have \(\text{ETP}\{\mathbf{\delta}^{\text{OR}}(t^{*})\}\geq\text{ETP}(\mathbf{\delta})\)._ Theorem 1 establishes that the oracle procedure \(\mathbf{\delta}^{\text{OR}}(t^{*})\) is valid and optimal for mFDR control. However, \(\mathbf{\delta}^{\text{OR}}(t^{*})\) is not implementable in practice since both \(T_{i}^{\text{OR}}\) and \(t^{*}\) are unknown in practical applications. In Section 3, we describe the proposed HAMT procedure that relies on a nonparametric empirical Bayes deconvolution estimator of \(g_{\mu}(\cdot|\sigma_{i})\) to construct a data-driven estimate of \(T_{i}^{\text{OR}}\) and uses a step-wise procedure to estimate \(t^{*}\). ### Power loss due to standardization: illustrative examples While \(\mathbf{\delta}^{\text{OR}}(t^{*})\) is the optimal solution to Problem (4) based on \((\mathbf{X},\mathbf{\sigma})\), a plausible approach for solving Problem (4) is to construct \(z-\)values \(Z_{i}=X_{i}/\sigma_{i}\) and then reject the null hypothesis for suitably small values of \(\mathcal{Z}_{i}^{\text{OR}}\) where \(\mathcal{Z}_{i}^{\text{OR}}=P(\mu_{i}\in\mathcal{A}|z_{i})\). In fact, Sun and Cai (2007) show that this approach is the most powerful \(z\)-value method. 
The apparent advantage of this data reduction step is that it transforms the heteroskedastic multiple testing problem to a homoskedastic one, and enables a like-for-like comparison of the \(m\) study units under consideration. However, in the case of two-sided composite null hypothesis, such a standardization may distort the underlying scientific question (Sun and McLain, 2012). Moreover, Fu et al. (2022) demonstrate that data reduction via standardization could lead to power loss for multiple testing procedures even in the case of simple null hypotheses. In this section we consider two illustrative examples to demonstrate that power loss due to standardization can be substantial while testing one-sided composite null hypotheses. **Example 1**.: _Suppose data are generated from Model (1) with \(\sigma_{i}\stackrel{{ i.i.d}}{{\sim}}U(0.5,4)\) and \(\mu_{i}\mid\sigma_{i}\stackrel{{ ind.}}{{\sim}}\) \(0.9\delta_{0}(\cdot)+0.1\delta_{\sigma_{i}^{1.5}}(\cdot),\) where \(\delta_{a}(\cdot)\) is a Dirac delta function indicating a point mass at \(a\). In this example \(\sigma_{i}\) controls the magnitude of the non-zero \(\mu_{i}\) and we are interested in Problem (3) with \(\mathcal{A}=(-\infty,0]\). We first consider the oracle procedure based on the \(z-\)values \(\mathbf{Z}=(Z_{1},\ldots,Z_{m})\). In Section A we show that this oracle procedure is a thresholding rule of the form \(\mathbf{\delta}^{\mathsf{ZOR}}(t_{z})=\{I(Z_{i}>t_{z}):1\leq i\leq m\}\) where \(t_{z}=3.273\) at \(\alpha=0.1\). Next, recall from Equation (7) that the oracle procedure \(\mathbf{\delta}^{\mathsf{OR}}(t^{*})\) based on \((\mathbf{X},\mathbf{\sigma})\) is of the form \(\{I(T_{i}^{\mathsf{OR}}<t^{*}):1\leq i\leq m\}\). This is equivalent to a thresholding rule \(\{I(Z_{i}>\lambda_{\sigma_{i}}(t^{*})):1\leq i\leq m\}\) (details provided in Section A), where_ \[\lambda_{\sigma}(t)=\frac{1}{\sqrt{\sigma}}\Big{[}-\log\Bigl{\{}\frac{0.1t}{( 1-t)0.9}\Bigr{\}}+0.5\sigma\Big{]},\] _and \(t^{*}=0.177\) at \(\alpha=0.1\)._ _While both \(\mathbf{\delta}^{\mathsf{ZOR}}\) and \(\mathbf{\delta}^{\mathsf{OR}}\) control the mFDR exactly at \(\alpha\), their powers are substantially different in this example: power of \(\mathbf{\delta}^{\mathsf{ZOR}}(t_{z})\) is \(0.0432\) and that of \(\mathbf{\delta}^{\mathsf{OR}}\) is \(0.0611\). To further examine the power gain of \(\mathbf{\delta}^{\mathsf{OR}}(t^{*})\), we consider the left panel of Figure 1 that plots the rejection regions of \(\mathbf{\delta}^{\mathsf{OR}}(t^{*})\) and \(\mathbf{\delta}^{\mathsf{ZOR}}(t_{z})\) as a function of \(Z_{i}\) and \(\sigma_{i}\). In the red shaded region \(\mathbf{\delta}^{\mathsf{ZOR}}(t_{z})\) rejects while \(\mathbf{\delta}^{\mathsf{OR}}(t^{*})\) does not, in the blue region \(\mathbf{\delta}^{\mathsf{OR}}(t^{*})\) rejects while \(\mathbf{\delta}^{\mathsf{ZOR}}(t_{z})\) does not and both procedures reject in the white region. Finally, in the gray shaded region neither procedures reject. The black dots represent instances where the null hypothesis is false and fall within the three rejection regions. While it is clear that a vast majority of the non-null cases appear in the white region, approximately \(64\%\), the blue region captures relatively more non-null cases than the red region, \(30\%\) versus \(6\%\). 
Thus, \(\mathbf{\delta}^{\mathsf{OR}}(t^{*})\) rejects an overall higher percentage of the non-null cases than \(\mathbf{\delta}^{\mathsf{ZOR}}(t_{z})\), which explains the power gain of the former over the latter._ **Example 2**.: _Unlike the previous setting, in this example \(\sigma_{i}\) controls the sparsity as well as the magnitude of the non-zero \(\mu_{i}\). Data are generated from Model (1) with \(\sigma_{i}\stackrel{{ i.i.d.}}{{\sim}}U(0.5,4)\) and \(\mu_{i}\mid\sigma_{i}\stackrel{{\text{ind.}}}{{\sim}}\delta_{0}(\cdot)I\{\sigma_{i}\leq 3.65\}+\delta_{\sigma_{i}^{1.5}}(\cdot)I\{\sigma_{i}>3.65\},\) where \(P(\sigma_{i}\leq 3.65)=0.9\). We are interested in Problem (3) with \(\mathcal{A}=(-\infty,0]\). The oracle procedure based on \(\mathbf{Z}\) is of the form \(\mathbf{\delta}^{\mathsf{ZOR}}(t_{z})=\{I(Z_{i}>t_{z}):1\leq i\leq m\}\) where \(t_{z}=4.124\) at \(\alpha=0.1\) with power \(0.0015\). In contrast, \(\mathbf{\delta}^{\mathsf{OR}}(t^{*})\) in this example simply observes if \(\sigma_{i}>3.65\) to detect if \(H_{0,i}\) is false and thus provides a perfect classification rule with FDR equal to \(0\) and power equal to \(1\). The stark contrast in the power of these two procedures is further elucidated in the right panel of Figure 1. Here, the rejection regions continue to have the same interpretation as in the left panel. However, the blue region now captures almost \(99\%\) of all the non-null cases that fall within the three regions while the white region only accounts for the remaining \(1\%\). Moreover, the red region does not capture any non-null case, thus explaining the substantially low power of \(\boldsymbol{\delta}^{\text{ZOR}}(t_{z})\) in this setting._ The preceding examples illustrate that data reduction via standardization may lead to power loss even when testing one-sided composite null hypotheses. While standardization is a natural pre-processing step for testing heteroskedastic units, Examples 1 and 2 demonstrate that such a step suppresses the information contained in the standard deviations that can boost the power of these tests. Our numerical experiments in Section 5.2 and Appendix C corroborate this observation, where we find that \(z-\)value procedures are, in general, not as powerful as the proposed HAMT procedure which is based on \((\boldsymbol{X},\boldsymbol{\sigma})\).
Figure 1: Rejection regions of \(\boldsymbol{\delta}^{\text{OR}}(t^{*})\) and \(\boldsymbol{\delta}^{\text{ZOR}}(t_{z})\) as functions of \(Z_{i}\) and \(\sigma_{i}\) for Example 1 (left panel) and Example 2 (right panel): only \(\boldsymbol{\delta}^{\text{ZOR}}(t_{z})\) rejects in the red region, only \(\boldsymbol{\delta}^{\text{OR}}(t^{*})\) rejects in the blue region, both reject in the white region and neither rejects in the gray region; black dots mark non-null cases.
## 3 Heteroskedasticity adjusted multiple testing procedure for composite null hypotheses ### Improved empirical Bayes deconvolution This section develops a data-driven procedure to mimic the oracle. We discuss the estimation of \(T_{i}^{\text{OR}}\) and \(t^{*}\), and present the HAMT procedure in Definition 2. Our approach for estimating \(T_{i}^{\text{OR}}\) involves constructing a nonparametric empirical Bayes deconvolution estimate of the unknown mixing density \(g_{\mu}(\cdot\mid\sigma_{i})\).
While there are several popular approaches to estimating an unknown mixing density, we demonstrate in Section 3.3 that existing methods that fail to account for the dependence between \(\mu_{i}\) and \(\sigma_{i}\) can suffer from power loss and may not even provide FDR control. Here we present a practical approach for estimating \(g_{\mu}(\cdot\mid\sigma_{i})\) that effectively accounts for this dependence. Suppose \(g_{\mu}(\cdot\mid\sigma_{i})\) is continuous in \(\sigma_{i}\) and the parameter space of \(\mu_{i}\) is a finite discrete set \(\mathcal{T}=\{u_{1},\ldots,u_{S}\}\) of size \(S\). The assumption on the discreteness of \(\mathcal{T}\) is a convenience that aids with the practical implementation of our method. See for example Efron (2016) for a similar assumption while defining their deconvolution estimator. Let \(g_{j}(\sigma_{i})=g_{\mu}(u_{j}\mid\sigma_{i})\) denote the prior probability mass on \(u_{j}\) conditional on \(\sigma_{i}\) where \(j=1,\ldots,S\). Since each \(g_{j}(\sigma_{i})\) depends on \(\sigma_{i}\), we approximate \(g_{j}(\sigma_{i})\) as a linear combination of \(K\) basis functions as follows: \[g_{j}(\sigma_{i})\approx\sum_{k=1}^{K}w_{jk}q_{k}(\sigma_{i})=\mathbf{w}_{j}^{T} \mathbf{q}(\sigma_{i}). \tag{8}\] In Equation (8), \(\mathbf{w}_{j}\) is a \(K-\)dimensional vector of unknown weights and \(\mathbf{q}(\sigma_{i})\) is a known vector of basis functions that depend on \(\sigma_{i}\). We discuss the choice of these basis functions in Section 5.1. In this discrete setting, and using Equation (8), the marginal densities in Equation (6) have the following representation: \[\tilde{f}_{0}(x\mid\sigma_{i})=\sum_{j:u_{j}\in\mathcal{A}}\phi_{\sigma_{i}}(x -u_{j})\mathbf{w}_{j}^{T}\mathbf{q}(\sigma_{i}),\ \tilde{f}(x\mid\sigma_{i})=\sum_{j=1}^{S}\phi_{\sigma_{i}}(x-u_{j})\mathbf{w}_{j}^{ T}\mathbf{q}(\sigma_{i}).\] Denote \(\mathcal{S}^{S}=\{\mathbf{\eta}\in\mathbf{R}^{S}:\mathbf{1}^{T}\mathbf{\eta}=1,\ \mathbf{\eta}\succeq \mathbf{0}\}\) as the \(S-\)dimensional unit simplex. Our goal is to estimate the \(KS-\)dimensional vector \(\mathcal{W}=(\mathbf{w}_{1}^{T},\ldots,\mathbf{w}_{S}^{T})^{T}\) such that \(\mathbf{g}_{i}=\{\mathbf{w}_{j}^{T}\mathbf{q}(\sigma_{i}):1\leq j\leq S\}\in\mathcal{S}^{S}\) for \(i=1,\ldots,m\). A possible formulation of an optimization problem to estimate \(\mathcal{W}\) is to consider the following minimization problem: \[\min_{\mathcal{W}\in\mathbb{R}^{KS}}\sum_{i=1}^{m}\Bigl{\{}f(x_{i}\mid\sigma_ {i})-\tilde{f}(x_{i}\mid\sigma_{i})\Bigr{\}}^{2}\quad\text{subject to}\quad\mathbf{g} _{i}\in\mathcal{S}^{S}\ \text{for}\ i=1,\ldots,m. \tag{9}\] However, the density \(f(x_{i}\mid\sigma_{i})\) in Equation (9) is not known in practice and estimating it directly from the data is difficult as we only have one pair of observation \((X_{i},\sigma_{i})\) for estimating each density. Recently, Fu et al. (2022) consider a heteroskedasticity adjusted bivariate kernel density estimator \(\hat{\varphi}^{m}(x,\sigma_{i})\) for \(f(x\mid\sigma_{i})\) where \[\hat{\varphi}^{m}(x,\sigma_{i})=\sum_{j=1}^{m}\frac{\phi_{h_{\sigma}}(\sigma_{i }-\sigma_{j})}{\sum_{k=1}^{m}\phi_{h_{\sigma}}(\sigma_{i}-\sigma_{k})}\phi_{h_{ xj}}(x-x_{j}). \tag{10}\] In Equation (10), \(h_{xj}=h_{x}\sigma_{j}\) and \(\mathbf{h}=(h_{x},h_{\sigma})\) is a pair of bandwidths. 
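To make Equation (10) concrete, the following minimal NumPy sketch (ours, not the authors' code) implements the heteroskedasticity-adjusted pilot estimator; the function names `normal_pdf` and `pilot_density` are illustrative, and `h_x` and `h_s` play the roles of \(h_{x}\) and \(h_{\sigma}\).

```python
import numpy as np

def normal_pdf(u, h):
    # Gaussian kernel phi_h(u) with bandwidth (standard deviation) h
    return np.exp(-0.5 * (u / h) ** 2) / (np.sqrt(2.0 * np.pi) * h)

def pilot_density(x, sigma_i, X, sigma, h_x, h_s):
    """Heteroskedasticity-adjusted bivariate kernel estimate of f(x | sigma_i),
    as in Equation (10): observations j with sd close to sigma_i receive large
    weights, and each observation uses the variable bandwidth h_x * sigma_j."""
    w = normal_pdf(sigma_i - sigma, h_s)      # phi_{h_sigma}(sigma_i - sigma_j)
    w = w / w.sum()                           # normalize the pooling weights
    return float(np.sum(w * normal_pdf(x - X, h_x * sigma)))
```

Here `X` and `sigma` are the length-\(m\) arrays of summary statistics and their standard deviations; in practice the bandwidths would be chosen as described in Section 5.1.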
The weights \(\phi_{h_{\sigma}}(\sigma_{i}-\sigma_{j})/\sum_{k=1}^{m}\phi_{h_{\sigma}}(\sigma _{i}-\sigma_{k})\) are designed to borrow strength from observations with variability close to \(\sigma_{i}\), while placing little weight on points where \(\sigma_{i}\) and \(\sigma_{j}\) are far apart. The variable bandwidth \(h_{xj}\) adjusts for the heteroskedasticity in the data by inducing flatter kernels for data points that are observed with a higher variance. Furthermore, Fu et al. (2022) show that \(\hat{\varphi}^{m}(x,\sigma_{i})\) is a consistent estimator of \(f(x\mid\sigma_{i})\) in the sense that \(E\int\{\hat{\varphi}^{m}(x,\sigma_{i})-f(x\mid\sigma_{i})\}^{2}\mathrm{d}x\to 0\) as \(m\to\infty\) for all \(\sigma_{i}>0\). In our analysis, we use \(\hat{\varphi}^{m}(x_{i},\sigma_{i})\) as a pilot estimate of \(f(x_{i}\mid\sigma_{i})\) and solve the following constrained optimization problem with respect to \(\mathcal{W}\): \[\min_{\mathcal{W}\in\mathbb{R}^{KS}}\sum_{i=1}^{m}\Bigl{\{}\hat{\varphi}^{m}(x _{i},\sigma_{i})-\tilde{f}(x_{i}\mid\sigma_{i})\Bigr{\}}^{2}\quad\text{subject to}\quad\mathbf{g}_{i}\in\mathcal{S}^{S}\text{ for }i=1,\ldots,m. \tag{11}\] Equation (11) is a convex optimization problem in \(\mathcal{W}\) and in Section 5.1 we provide the implementation details for solving Problem (11) along with the recommended choices for \(\mathcal{T}\), \(S\) and \(K\). In the next section, we present our data-driven HAMT procedure that relies on the solution \(\hat{\mathcal{W}}_{m}\) to Problem (11). ### Proposed HAMT procedure We first present the estimator of the oracle Clfdr statistic \(T_{i}^{\text{OR}}\) in Definition 1. **Definition 1**.: _Let \(\hat{\mathcal{W}}_{m}=(\hat{\mathbf{w}}_{1,m},\ldots,\hat{\mathbf{w}}_{S,m})\) be the solution to Problem (11). The data-driven Clfdr statistic is given by_ \[\hat{T}_{i,m}=\frac{\hat{f}_{0}^{m}(x_{i}\mid\sigma_{i})}{\hat{f}^{m}(x_{i} \mid\sigma_{i})},\quad\text{where}\] \[\hat{f}_{0}^{m}(x\mid\sigma_{i})=\sum_{j:u_{j}\in\mathcal{A}}\phi_{\sigma_{i}} (x-u_{j})\hat{\mathbf{w}}_{j,m}^{T}\mathbf{q}(\sigma_{i}),\;\hat{f}^{m}(x\mid\sigma_{ i})=\sum_{j=1}^{S}\phi_{\sigma_{i}}(x-u_{j})\hat{\mathbf{w}}_{j,m}^{T}\mathbf{q}( \sigma_{i}).\] Next, in Definition 2 we present the proposed HAMT procedure that relies on the estimate \(\hat{T}_{i,m}\) and uses a step-wise procedure from Sun and McLain (2012) to estimate \(t^{*}\). **Definition 2**.: _(HAMT procedure) Denote \(\hat{T}_{(1),m}\leq\ldots\leq\hat{T}_{(m),m}\) the sorted Clfdr statistics and \(H_{(1)},\ldots,H_{(m)}\) the corresponding hypotheses. Suppose_ \[r=\max\Bigl{\{}j:\frac{1}{j}\sum_{i=1}^{j}\hat{T}_{(i),m}\leq\alpha\Bigr{\}}.\] _Then, the HAMT procedure rejects the ordered hypotheses \(H_{(1)},\ldots,H_{(r)}\). Furthermore, in comparison to the oracle procedure \(\boldsymbol{\delta}^{\text{OR}}(t^{*})\) in Equation (7), HAMT has the following form:_ \[\boldsymbol{\delta}^{\text{HAMT}}(\hat{t}_{m}^{*})=\{I(\hat{T}_{i,m}<\hat{t}_{ m}^{*}):1\leq i\leq m\},\text{ where }\hat{t}_{m}^{*}=\hat{T}_{(r),m}.\] In Definition 2, the estimate \(\hat{t}_{m}^{*}\) of \(t^{*}\) is based on the intuition that when the first \(j\) ordered hypotheses are rejected then a good estimate of the false discovery proportion is given by the moving average \((1/j)\sum_{i=1}^{j}\hat{T}_{(i),m}\) and the condition \((1/j)\sum_{i=1}^{j}\hat{T}_{(i),m}\leq\alpha\) then helps fulfill the FDR constraint. 
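As a sketch of Definitions 1 and 2 (ours, not the authors' implementation), the snippet below computes the data-driven Clfdr statistic from an already fitted weight matrix \(\hat{\mathcal{W}}_{m}\) and applies the step-wise thresholding rule. The names `clfdr_hat` and `hamt_decisions` are illustrative, the inputs `u` (grid), `q_sigma` (basis evaluations) and `in_A` (indicator of grid points in \(\mathcal{A}\)) are assumed to be available, and the simplex-constrained fit of Problem (11) itself is left to a convex solver as described in Section 5.1.

```python
import numpy as np
from scipy.stats import norm

def clfdr_hat(x_i, sigma_i, u, W, q_sigma, in_A):
    """Definition 1: estimated Clfdr = f0_hat(x_i | sigma_i) / f_hat(x_i | sigma_i).
    u: grid of S support points; W: S x K fitted weights; q_sigma: K basis values
    at sigma_i; in_A: boolean mask of grid points lying in the null region A."""
    g = W @ q_sigma                                # prior mass g_j(sigma_i), j = 1,...,S
    phi = norm.pdf(x_i, loc=u, scale=sigma_i)      # phi_{sigma_i}(x_i - u_j)
    return np.sum(phi[in_A] * g[in_A]) / np.sum(phi * g)

def hamt_decisions(T_hat, alpha=0.10):
    """Definition 2: reject the r hypotheses with the smallest Clfdr statistics,
    where r is the largest j whose running mean of sorted Clfdr values is <= alpha."""
    T_hat = np.asarray(T_hat, dtype=float)
    order = np.argsort(T_hat)
    running_mean = np.cumsum(T_hat[order]) / np.arange(1, T_hat.size + 1)
    below = np.nonzero(running_mean <= alpha)[0]
    reject = np.zeros(T_hat.size, dtype=bool)
    if below.size > 0:
        reject[order[: below[-1] + 1]] = True
    return reject
```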
In Section 4 we show that for large \(m\), \(\hat{T}_{i,m}\) is asymptotically close to \(T_{i}^{\text{OR}}\) uniformly in \(i\), and the HAMT procedure in Definition 2 is a good approximation to the oracle procedure \(\boldsymbol{\delta}^{\text{OR}}(t^{*})\). ### Effect of ignoring the dependence between \(\mu_{i}\) and \(\sigma_{i}\) Here, we consider a numerical example to illustrate the effect on the power and validity of multiple testing procedures if the underlying deconvolution estimator for constructing the Clfdr statistics ignores the dependence between \(\mu_{i}\) and \(\sigma_{i}\). We fix \(m=10^{4}\) and sample \(X_{1},\ldots,X_{m}\) from Model (1) with \(\mu_{i}=3\sigma_{i}\) and \(\sigma_{i}\stackrel{{ i.i.d.}}{{\sim}}U(0.5,2)\). The goal is to test \(H_{0,i}:\mu_{i}\in\mathcal{A}\ vs\ H_{1,i}:\mu_{i}\notin\mathcal{A},\ i=1,\ldots,m\) where \(\mathcal{A}=(-\infty,4]\) and \(\alpha=0.1\). The following three testing procedures are evaluated in this example: the procedure that relies on the deconvolution estimate obtained from nonparametric maximum likelihood (NPMLE) techniques (Kiefer and Wolfowitz, 1956, Koenker and Gu, 2017, Laird, 1978) to estimate the Clfdr statistic, the procedure that uses the deconvolution estimate from Efron (2016) (DECONV) to estimate \(T_{i}^{\text{OR}}\), and the HAMT procedure from Definition 2. While these procedures employ different methods for estimating \(T_{i}^{\text{OR}}\), they all rely on Definition 2 to estimate the threshold \(t^{*}\). The first row of Figure 2 highlights in red the hypotheses that were rejected by the three procedures. Here the dotted horizontal line is \(\sigma=4/3\) and represents the oracle decision rule which rejects any hypothesis above that line. The rightmost panel presents the hypotheses that were rejected by HAMT, which appears to correctly discover a substantially larger proportion of the non-null cases than NPMLE and DECONV while, at the same time, keeping the number of false discoveries in check. For instance, across \(200\) repetitions of this multiple testing problem the average false discovery proportions for NPMLE, DECONV and HAMT are, respectively, \(0.157\), \(0.186\) and \(0.010\) while their average proportions of true discoveries are \(0.142\), \(0.231\) and \(0.845\). The relatively poorer performance of NPMLE and DECONV in this example is related to the fact that the underlying deconvolution estimator for both these procedures ignores the dependence between \(\mu_{i}\) and \(\sigma_{i}\). To see that, we present the estimate of \(f(\cdot\mid\sigma)\) for \(\sigma\in\{1,1.5,2\}\) in Figure 3. Across the three panels, the deconvolution estimates from NPMLE and DECONV result in marginal density estimates that are substantially different from the ground truth.
Figure 2: Hypotheses rejected (in red) at \(\alpha=0.1\) by the procedures based on the NPMLE (left) and DECONV (center) deconvolution estimates and by HAMT (right); the dotted horizontal line marks the oracle decision rule, which rejects any hypothesis above that line.
Figure 3: The oracle marginal density \(f(x\mid\sigma)\) (in green) and the estimated marginal densities implied by the NPMLE, DECONV and HAMT deconvolution estimates for \(\sigma\in\{1,1.5,2\}\); the dotted vertical line marks the mean of the distribution of \(X\) given \(\sigma\).
The deconvolution estimator underlying the HAMT procedure, on the other hand, seems to generate marginal density estimates that are relatively closer to \(f(\cdot\mid\sigma)\). In Section 4 we present formal theories supporting this intuition and establish that \(\hat{f}_{0}^{m}(\cdot\mid\sigma_{i})\) and \(\hat{f}^{m}(\cdot\mid\sigma_{i})\) in Definition 1 are, in fact, consistent estimators of \(f_{0}(\cdot\mid\sigma_{i})\) and \(f(\cdot\mid\sigma_{i})\), respectively, as \(m\to\infty\). ## 4 Theory In this section we study the asymptotic properties of HAMT under the setting where the grid size \(S=S(m)\) and the number of bases \(K=K(m)\) vary with \(m\). The following regularity conditions are needed in our technical analysis. **(A1)** \(g_{\mu}(\cdot\mid\sigma)\) is continuous in \(\sigma\) and supported on a compact interval \([-M,M]\) for some \(M<\infty\). **(A2)** The density \(g_{\sigma}(\cdot)\) is bounded and supported on a compact interval \([M_{1},M_{2}]\) for some \(M_{2}<\infty\) and \(M_{1}>0\). **(A3)** The bandwidths \((h_{x},h_{\sigma})\) satisfy \(h_{x}=O(m^{-\eta_{x}})\), \(h_{\sigma}=O(m^{-\eta_{s}})\) where \(\eta_{x}\) and \(\eta_{s}\) are small positive constants such that \(0<\eta_{s}+\eta_{x}<1\). Assumption (A1) on the continuity of \(g_{\mu}(\cdot|\sigma)\) in \(\sigma\) is a necessary condition in our proofs for information pooling across the heteroskedastic units. The compactness of the supports of \(g_{\mu}(\cdot|\sigma)\) and \(g_{\sigma}(\cdot)\) in Assumptions (A1) and (A2) is a standard regularity condition for empirical Bayes deconvolution problems (see for example Dicker and Zhao (2016)) and is satisfied in most practical scenarios where the true mean \(\mu\) often represents a score. Finally, Assumption (A3) is satisfied by common choices of bandwidths in Silverman (1986) and Wand and Jones (1994). Proposition 1 formally establishes the asymptotic consistency of \(\hat{f}_{0}^{m}(\cdot|\sigma)\) and \(\hat{f}^{m}(\cdot|\sigma)\) as \(m\to\infty\). **Proposition 1**.: _Consider Model (1)-(2) and suppose assumptions (A1) - (A3) hold. Then as \(m,S(m),K(m)\to\infty\), we have, for every fixed \(\sigma>0\),_ \[E\|\hat{f}^{m}(\cdot|\sigma)-f(\cdot|\sigma)\|^{2}=E\int\{\hat{f}^{m}(x|\sigma)-f(x|\sigma)\}^{2}\mathrm{d}x\to 0\text{ and }\] \[E\|\hat{f}_{0}^{m}(\cdot|\sigma)-f_{0}(\cdot|\sigma)\|^{2}=E\int\{\hat{f}_{0}^{m}(x|\sigma)-f_{0}(x|\sigma)\}^{2}\mathrm{d}x\to 0,\] _where the expectation is taken over \((\mathbf{X},\mathbf{\sigma})\)._ With appropriate choices of \(S(m),K(m),h_{x}\) and \(h_{\sigma}\), \(E\|\hat{f}^{m}(\cdot|\sigma)-f(\cdot|\sigma)\|^{2}=O(m^{-2/3})\) in Proposition 1. This follows from the fact that \(E\|\hat{\varphi}^{m}(\cdot,\sigma)-f(\cdot|\sigma)\|^{2}=O\{(mh_{x}h_{\sigma})^{-1}+h_{x}^{4}+h_{\sigma}^{4}\}\) (Wand and Jones, 1994), where the optimal rate is \(O(m^{-2/3})\) when \(h_{x}\) and \(h_{\sigma}\) are \(O(m^{-1/6})\). To achieve this rate in our context, it is sufficient for the grid size \(S(m)\) to be \(O(m^{1/3}\sqrt{\log m})\). This is formally established in Remark 1 in Appendix B.2.
Moreover, on the appropriate choice of the number of basis functions \(K(m)\) in this setting, Remark 2 (Appendix B.2) shows that if \(g_{j}(\cdot)=\sum_{k=1}^{\infty}w_{jk}q_{k}(\cdot)\) and \(\mathbf{w}_{j}=\{w_{jk}:k=1,2,\dots,\}\) belong to the Sobolev ellipsoid \(\Theta(\gamma,c)\) with order \(\gamma>0\) and radius \(c<\infty\) for \(j=1,\dots,S(m)\), then \(K(m)=O\{m^{1/(2\gamma)}(\log m)^{1/(4\gamma)}\}\). Section 5.1 provides recommendations on the practical choices of \(S(m)\) and \(K(m)\) that work well in our numerical experiments and real data analyses. A consequence of Proposition 1 is Corollary 1 which establishes that the data-driven Clfdr statistic \(\hat{T}_{i,m}\) in Definition 2 converges in probability to its oracle counterpart as \(m\to\infty\). **corollary 1**.: _Under the conditions of Proposition 1 and uniformly in \(i\), \(\hat{T}_{i,m}{\rightarrow}T_{i}^{\mathsf{OR}}\) in probability as \(m\to\infty\)._ Next, we state the main theorem of this section which is related to the asymptotic performance of HAMT as \(m\to\infty\). **Theorem 2**.: _Consider Model (1)-(2). Under assumptions (A1) - (A3) and as \(m\to\infty\), we have (i) the mFDR and FDR of \(\mathbf{\delta}^{\mathsf{HAMT}}(\hat{t}_{m})\) are controlled at level \(\alpha+o(1)\), and (ii) \(ETP\{\mathbf{\delta}^{\mathsf{HAMT}}(\hat{t}_{m})\}/ETP\{\mathbf{\delta}^{\mathsf{OR}} (t^{*})\}=1+o(1)\)._ Together with Theorem 1, Theorem 2 establishes that the proposed HAMT procedure is asymptotically valid for FDR control and attains the performance of the oracle procedure as \(m\to\infty\). ## 5 Numerical experiments ### Implementation We first discuss Problem (11). While Section 4 provides guidance on the asymptotic choices of the grid size \(S(m)\) and the number of basis functions \(K(m)\), in our implementation we fix \(S=50\) and \(K=10\), which work well in all of our numerical and real data examples. For the grid support \(\mathcal{T}\), HAMT uses \(S\) equi-spaced points in \([X_{(1)},X_{(m)}]\) where \(X_{(1)}=\min\{X_{1},\ldots,X_{m}\}\) and \(X_{(m)}=\max\{X_{1},\ldots,X_{m}\}\). Finally, the conic interior-point optimizer in MOSEK (MOSEK, 2019) solves Problem (11). Next, for the basis functions \(\mathbf{q}(\sigma_{i})=(q_{1,i},\ldots,q_{K,i})\) in Equation (8) we use the cosine basis \(q_{k,i}=\cos(k\sigma_{i})\). Since we assume the dependence of \(g_{\mu}(\cdot|\sigma)\) on \(\sigma\) is continuous, the number of cosine basis functions used in Equation (8) can be interpreted as the user's belief about the smoothness of such dependence. Lastly, the pilot estimator \(\hat{\varphi}^{m}(x_{i},\sigma_{i})\) in Equation (11) is borrowed from Fu et al. (2022) and depends on a pair of bandwidths \(\mathbf{h}=(h_{x},h_{\sigma})\). We follow the author's recommendation in choosing these bandwidths which rely on Silverman's rule of thumb (Silverman, 1986). ### Experiments involving one-sided composite null hypotheses In this section we assess the numerical performance of HAMT for one-sided composite null hypotheses. Specifically, we test \(m=10^{4}\) hypotheses of the form \(H_{0i}:\mu\in\mathcal{A}\ vs\ H_{1i}:\mu\notin\mathcal{A}\) where \(\mathcal{A}=(-\infty,\mu_{0}]\). The following six competing testing procedures are evaluated in addition to HAMT: **AdaPTGMM** - the \(p-\)value procedure from Chao and Fithian (2021) that uses \(\mathbf{\sigma}\) as an additional covariate, **BH** - the \(p-\)value Benjamini-Hochberg procedure from Benjamini et al. 
(2006) which is designed to overcome the conservativeness of the original Benjamini and Hochberg (1995) procedure by including a correction in size, **DECONV** - the Clfdr procedure that uses the empirical Bayes deconvolution method from Efron (2016) to estimate \(T_{i}^{\text{OR}}\) and then relies on Definition 2 to estimate the threshold \(t^{*}\), **GS 1** - the testing procedure from Gu and Shen (2018) that is based on the standardized statistic \(Z_{i}=(X_{i}-\mu_{0})/\sigma_{i}\) and relies on the deconvolution estimate obtained from nonparametric maximum likelihood estimation to construct the Lfdr, **GS 2** - another procedure from Gu and Shen (2018) that allows for the possibility that in some applications, there might be a non-trivial probability mass at \(\mu_{0}\) which may lead to poor FDR control if not accounted for while estimating the marginal density of \(Z_{i}\) and **OR** - the oracle procedure from Equation (7). The aforementioned seven procedures are evaluated on five different simulation settings with \(\alpha\) fixed at \(0.1\). For each simulation setting, the data are generated from Model (1)-(2), and the average false discovery proportion \(\text{FDP}(\mathbf{\delta})=\sum_{i=1}^{m}\{(1-\theta_{i})\delta_{i}\}/\max(\sum_{ i=1}^{m}\delta_{i},1)\) and the average proportion of true positives discovered \(\text{PTP}(\mathbf{\delta})=\sum_{i=1}^{m}\theta_{i}\delta_{i}/\max(\sum_{i=1}^{m }\theta_{i},1)\) across \(200\) Monte-Carlo repetitions are reported. In the first setting \((\mu_{i},\sigma_{i})\) are independent. We sample \(\sigma_{i}\stackrel{{ i.i.d.}}{{\sim}}U(0.5,u)\) and let \(\mu_{i}=0\) with probability \(0.9\) and \(\mu_{i}\stackrel{{ i.i.d.}}{{\sim}}N(3,1)\) with probability \(0.1\). We vary \(u\in\{1,1.2,1.4,1.6,1.8,2\}\) and take \(\mu_{0}=2\). Figure 4 reports the average FDP and PTP for the competing procedures in this setting. We observe that BH is the most conservative and confirms the findings in Sun and McLain (2012) where the author's simulation study demonstrate that the BH procedure is unsuitable for testing composite null hypothesis. The procedure GS 2 closely follows BH in FDR control but exhibits substantially better power. The remaining methods have an overall similar performance in this setting although GS1 fails to control the FDR level at \(10\%\) for small values of \(u\). The second setting represents a scenario where \(\mu_{i}\) and \(\sigma_{i}\) are correlated and have discrete distributions. Setting 2 is presented in Figure 5 where \(\sigma_{i}\) can take three values \(\{0.5,1,2\}\) with equal probabilities. Conditional on \(\sigma_{i}\), \(\mu_{i}=0\) with probability \(0.9\) or \(\mu_{i}=u\sigma_{i}\) with probability \(0.1\). We set \(\mu_{0}=2\) and find that all methods control the FDR level in Figure 5. Among the data-driven procedures, HAMT has the highest power and is closely followed by GS 1. DECONV, which completely ignores the dependence between \(\mu_{i}\) and \(\sigma_{i}\) exhibits a substantially lower power than both GS 1 and HAMT. The remaining three settings present scenarios where HAMT provides a substantial improvement over competing methods, both in terms of FDR control and power. In the third setting, \(\sigma_{i}\stackrel{{ i.i.d}}{{\sim}}0.9U(0.5,1)+0.1U(1,u)\), \(\mu_{i}=0\), if \(\sigma_{i}\leq 1\) and \(2/\sigma_{i}\), otherwise. Thus, in this setting \(\sigma_{i}\) controls both the sparsity level of \(\mu_{i}\) and the distribution of its non-zero effects. 
The performance of the competing methods is presented in Figure 6 where \(\mu_{0}=1\). The oracle procedure (OR) in this setting perfectly classifies each \(\mu_{i}\) as satisfying \(\mu_{i}\leq\mu_{0}\) or \(\mu_{i}>\mu_{0}\) simply by observing if \(\sigma_{i}\leq 1\) or \(\sigma_{i}>1\) and \(2/\sigma_{i}>\mu_{0}\). Thus in Figure 6, OR has FDP equal to 0 and PTP equal to 1 for all \(u\). While, all other methods control the FDR at \(10\%\), HAMT exhibits a substantially higher power in this setting for all values of \(u\). For Setting 4, \(\sigma_{i}\stackrel{{ i.i.d.}}{{\sim}}U(0.5,u)\) and conditional on \(\sigma_{i}\), \(\mu_{i}\stackrel{{ ind.}}{{\sim}}0.9N(-\sigma_{i},0.5)+0.1\delta _{(2\sigma_{i}^{2})}\), where \(\delta_{(a)}\) represents a point mass at \(a\). Setting 4 is presented in Figure 7 where \(\mu_{0}=1\). We find that GS 1 fails to control the FDR level at \(10\%\) while DECONV controls the FDR at all values of \(u\), except the first two where it exhibits an FDR value bigger than \(0.2\) at \(u=1\). HAMT effectively captures the dependence between \(\mu_{i}\) and \(\sigma_{i}\) and is, by far, the best testing procedure in this setting. In the fifth setting, we allow \(\mu_{i}\) and \(\sigma_{i}\) to be perfectly correlated. Specifically, \(U(0.25,u)\), \(\mu_{i}=3\sigma_{i}\) and \(\mu_{0}=4\). In Figure 8, GS 1 and DECONV fail to control the FDR at \(10\%\) and for some values of \(u\), they exhibit FDR values bigger than \(0.2\). The left panel of Figure 8 excludes those values of \(u\) for GS 1 and DECONV. The oracle procedure in this setting simply observes if \(3\sigma_{i}>\mu_{0}\) for rejecting the null hypothesis and its data-driven counterpart, HAMT, has the highest power amongst all other testing procedures considered here. Overall, the aforementioned simulation experiments reveal that HAMT, which relies on an improved deconvolution estimator for constructing the Clfdr statistic, provides a substantially more powerful multiple testing procedure than competing methods at the same FDR level. Additionally, we find that the \(p-\)value based procedures, such as BH and AdaPTGMM, in these experiments are considerably more conservative while the Lfdr methods that ignore the dependence between \(\mu_{i}\) and \(\sigma_{i}\) may even fail to control the FDR at the desired level. In Section C we present an additional simulation study to assess the numerical performance of HAMT for two-sided composite null hypotheses. ## 6 Real data analysis In this section we analyze a dataset from Banerjee et al. (2019) that hold daily player-level gaming information over 60 days from a mobile app game. For monetization of these games, managers are often interested in identifying a group of players who are most engaged with the game so that personalized promotional offers can be pushed to their devices. While there are several ways of measuring game engagement, such as engagement via purchases or through social media activity, here we use the daily duration of play as a measure of how engaged each player is with the game. However, a positive daily duration of play does not necessarily mean that the player is highly engaged. Rather, from a game manager's perspective, sustained player activities translate to high levels of engagement, either through purchases or social media activities. 
Thus, in this analysis we focus on players who have logged-in to the game for at least 5 days in the 60 day period and the goal is to select those players whose mean daily duration of play exceeds \(30\) minutes. Formally, let \(Y_{ij}>0\) denote the duration of play in minutes for player \(i\) on day \(j\) where \(j=1,\ldots,n_{i}\) and \(i=1,\ldots,m\). Here \(n_{i}\in[5,60]\) denotes the number days that player \(i\) has logged-in to the game and there are \(m=10,336\) such players in our data. Following Banerjee et al. (2019), we work with the log duration of play \(X_{ij}=\log Y_{ij}\) and denote \(X_{i}=n_{i}^{-1}\sum_{j=1}^{n_{i}}\log Y_{ij}\). We assume that \(X_{i}\mid(n_{i},\mu_{i},\sigma_{i})\stackrel{{ ind.}}{{\sim}}N(\mu_{i}, \sigma_{i}^{2})\), and test \(H_{0,i}:\mu_{i}\leq\log(30)\ vs\ H_{1,i}:\mu_{i}>\log(30)\). Since \(\sigma_{i}\) are unknown in this example, we calculate the sample standard deviation \(S_{i}\) and consider the \(m\) pairs \((X_{i},\sigma_{i})\) for the testing problem, where we set \(\sigma_{i}=S_{i}/\sqrt{n_{i}}\) with some abuse of notation. We first discuss the estimate of prior probabilities arising from the deconvolution estimator that HAMT relies on. The heatmap in Figure 9 presents the \(m\times S\) matrix \(\mathcal{G}=(\hat{\mathbf{g}}_{1},\ldots,\hat{\mathbf{g}}_{m})^{T}\) of the estimated prior probabilities where \(\hat{\mathbf{g}}_{i}=\{\hat{\mathbf{w}}_{j}^{T}\mathbf{q}(\sigma_{i}):1\leq j\leq S\}\). The x-axis represents the support of \(\mu_{i}\) which is give by the grid \(S\), truncated to \([1,4.6]\) for ease of presentation, and the y-axis is \(\sigma_{i}\). It is interesting to note that when \(\sigma_{i}\) are small, most of the prior mass is concentrated in \([2,4]\). As \(\sigma_{i}\) increases, the deconvolution estimator adjusts and assigns more mass in the interval \([1,3]\). This is further elucidated in Figure 10 where we plot \(\hat{\mathbf{g}}_{i}\) for \(\sigma_{i}=\sigma\in\{0.1,0.5,1\}\) and notice a change in the spread of the estimated prior density as \(\sigma\) increases from left to right. Deconvolution estimators that ignore the dependence between \(\mu_{i}\) and \(\sigma_{i}\) are incapable of demonstrating such patterns in the estimated prior density. For the multiple testing problem described earlier, HAMT relies on the deconvolution estimates \(\hat{\mathbf{g}}_{i}\) to estimate the oracle Clfdr statistic \(T_{i}^{\text{OR}}\). Table 1 reports the percentage of players selected by each method for different choices of the FDR level \(\alpha\) and we find that both DECONV and HAMT reject more hypotheses than GS 1 and GS 2. In Figure 11, the red dots indicate the hypotheses rejected by the four methods at \(\alpha=0.1\). The rejection regions of GS 1 and GS2 only depend on \(Z_{i}=(X_{i}-\mu_{0})/\sigma_{i}\). In contrast, the rejection region of HAMT depends on both \(Z_{i}\) and \(\sigma_{i}\). Moreover, in comparison to the other three methods, HAMT Figure 10: Plot of the estimated prior masses \(\hat{\mathbf{g}}_{i}\) for \(\sigma_{i}=\sigma\in\{0.1,0.5,1\}\). The x-axis is truncated to \([1,4.6]\) as the estimated probability mass is negligible outside this interval. when \(\sigma_{i}\) is small and, unlike DECONV and GS 1, HAMT does not reject any hypotheses when \(\sigma_{i}\) is large, particularly bigger than \(0.5\). 
The rejection region of DECONV gives the impression that it depends on both \(Z_{i}\) and \(\sigma_{i}\), however as seen in our simulation experiments, DECONV may suffer from low power and may even fail to control the FDR at the desired level in case \(\mu_{i}\) and \(\sigma_{i}\) are correlated as its deconvolution estimator is not designed to capture this dependence. ## 7 Discussion Heteroskedasticity presents a challenging setting for designing valid and powerful multiple testing procedures. For testing composite null hypotheses, we show that the conventional practice \begin{table} \begin{tabular}{c c c c c} \hline \(\alpha\) & GS 1 & GS 2 & DECONV & HAMT \\ \hline 0.05 & 2.38\% & 1.34\% & 3.40\% & 3.37\% \\ \hline 0.1 & 3.30\% & 1.75\% & 4.76\% & 4.44\% \\ \hline 0.15 & 4.04\% & 2.11\% & 6.07\% & 5.38\% \\ \hline \end{tabular} \end{table} Table 1: Percentage of players selected by each method. Figure 11: Scatter plot of \((Z_{i},\sigma_{i}),\;i=1,\ldots,m\) where \(Z_{i}=(X_{i}-\mu_{0})/\sigma_{i}\). The red dots indicate the hypotheses rejected by the three methods. The x-axis is truncated below \(0\) as all rejections are made when \(Z_{i}>0\). of standardizing heteroskedastic test statistics may severely affect the power of the underlying testing procedure. Additionally, when the inferential parameter of interest is correlated with the variance of the test statistic, existing methods that ignore this dependence may fail to control the type I error at the desired level. In this article, we propose HAMT which is a general framework for simultaneously testing composite null hypotheses under heteroskedasticity. HAMT avoids data reduction by standardization and directly incorporates the side information from the variances into the testing procedure. It ranks the hypotheses using Clfdr statistics that rely on a carefully designed deconvolution estimator that captures the dependence between \(\mu_{i}\) and \(\sigma_{i}\). Our asymptotic analysis establishes that HAMT is valid and optimal for FDR control. In the numerical experiments, HAMT demonstrates substantial power gain against competing methods, particularly in the settings where \(\mu_{i}\) and \(\sigma_{i}\) are correlated. We conclude this article with a brief discussion on potential areas for future research. _First_, it is of tremendous interest to develop powerful and valid multiple testing procedures that can pool side information from several covariate sequences (see for example Chao and Fithian (2021), Zhang and Chen (2022) and the references therein). In the context of testing composite null hypotheses, HAMT can handle just one such sequence given by the \(\sigma_{i}\)'s and it is desirable to develop methods that can incorporate other side information, such as a grouping structure, in addition to heteroskedasticity. Given a \(p-\)dimensional side information vector \(\mathbf{Y}_{i}\in\mathbb{R}^{p}\) for hypothesis \(i\), the hierarchical Model (1)-(2) may be modified as follows: \[X_{i} = \mu_{i}+\epsilon_{i},\;\epsilon_{i}\overset{ind.}{\sim}N(0, \sigma_{i}^{2}),\] \[\mu_{i}\mid(\sigma_{i},\mathbf{y}_{i}) \overset{ind.}{\sim} g_{\mu}(\cdot|\sigma_{i},\mathbf{y}_{i}),\;(\sigma_{i},\mathbf{Y}_{i}) \overset{i.i.d}{\sim}g_{\sigma,\mathbf{y}}(\cdot),\] where \(g_{\mu}(\cdot\mid\sigma_{i},\mathbf{y}_{i})\) and \(g_{\sigma,\mathbf{y}}(\cdot)\) are, respectively, the probability density functions of the unknown mixing distributions of \(\mu\) given \((\sigma_{i},\mathbf{y}_{i})\) and \((\sigma_{i},\mathbf{Y}_{i})\). 
A major methodological challenge towards extending HAMT in this direction will be to develop a reliable deconvolution estimator of \(g_{\mu}(\cdot\mid\sigma_{i},\mathbf{y}_{i})\) for constructing the Clfdr statistic. _Second_, our testing framework assumes that \(\sigma_{i}\) are known and while a numerical experiment in Appendix C shows that using sample variances HAMT still controls the FDR level, it would be of great interest to further study the impact of estimating \(\sigma_{i}\) on the power and validity of multiple testing procedures. _Third_, HAMT relies on a novel \(g-\)modeling approach for estimating the Clfdr statistic. The methodology developed here uses a simple yet effective basis expansion step in Equation (8) that allows us to nonparametrically model the dependence between \(\mu_{i}\) and \(\sigma_{i}\). Further investigation of this approach in conjunction with existing \(g-\)modeling approaches, such as Efron (2016), is desirable for developing sophisticated deconvolution estimators for a variety of large-scale inferential problems. _Finally_, while HAMT is guaranteed to provide asymptotic FDR control, it will be of interest to modify HAMT so that it can provably control FDR in finite samples. Promising ideas in this direction include the construction of knockoffs or mirror sequences as done in Barber and Candes (2015), Leung and Sun (2021), or the use of conformal techniques as pursued in Bates et al. (2021), Guan and Tibshirani (2022). ## Acknowledgement B. Gang's research was supported by National Natural Science Foundation of China grant 12201123. T. Banerjee was partially supported by the University of Kansas General Research Fund allocation #2302216. Supplement to "Large-Scale Multiple Testing of Composite Null Hypotheses Under Heteroskedasticity" This supplement is organized as follows: the calculations for Examples 1 and 2 in Section 2.3 are presented in Section A. The proofs of all other theoretical results in the paper are presented in Section B. Additional numerical experiments involving two-sided composite null hypotheses are provided in Section C. ## Appendix A Calculations for Section 2.3 **Example 1 -** we first consider the oracle rule based on the standardized statistic \(Z_{i}=X_{i}/\sigma_{i}\). The marginal density function of \(Z_{i}\) under the alternative is \[f_{a}(z)=\int_{0.5}^{4}\frac{1}{3.5\sqrt{2\pi}}\exp\left\{-\frac{(z-\sqrt{ \sigma})^{2}}{2}\right\}d\sigma,\] and the distribution function of \(Z_{i}\) under the alternative is \[F_{a}(t)=P(Z<t)=\int_{-\infty}^{t}f_{a}(z)dz=\int_{0.5}^{4}\frac{1}{3.5}\Phi(t -\sqrt{\sigma})d\sigma,\] where \(\Phi\) is the distribution function of \(N(0,1)\). Then, using the definition of mFDR, it is not hard to see that the oracle procedure based on \(\mathbf{Z}=(Z_{1},\ldots,Z_{m})\) is of the form \(\mathbf{\delta}^{\text{ZOR}}(t_{z})=\{I(Z_{i}>t_{z}):1\leq i\leq m\}\) where, \[t_{z}=\inf\left\{t>0:\frac{0.9\{1-\Phi(t)\}}{0.9\{1-\Phi(t)\}+0.1\{1-F_{a}(t) \}}\leq\alpha\right\}.\] When \(\alpha=0.1\), the above display can be solved numerically for \(t\) to get \(t_{z}=3.273\) and the power of \(\mathbf{\delta}^{\text{ZOR}}(t_{z})\) is \(1-F_{a}(t_{z})=0.0432\). Next, consider the oracle rule \(\mathbf{\delta}^{\text{OR}}(t^{*})\). Recall that \(\mathbf{\delta}^{\text{OR}}(t^{*})\) is of the form \(\{I(T_{i}^{\text{OR}}<t^{*}):1\leq i\leq m\}\). 
Using the definition of Clfdr in Equation (5), it is straightforward to show that this rule is equivalent to \(\{I(Z_{i}>\lambda_{\sigma}(t^{*})):1\leq i\leq m\}\), where \[\lambda_{\sigma}(t)=\frac{-\log(\frac{0.1t}{0.9(1-t)})+\frac{1}{2}\sigma}{ \sqrt{\sigma}},\] \[t^{*}=\sup\left[t\in[0,1]:\frac{0.9\int(1-\Phi\{\lambda_{\sigma}(t)\})\mathrm{d} \sigma}{0.9\int(1-\Phi\{\lambda_{\sigma}(t)\})\mathrm{d}\sigma+0.1\int(1-\Phi\{ \lambda_{\sigma}(t)-\sqrt{\sigma}\})\mathrm{d}\sigma}\leq\alpha\right].\] When \(\alpha=0.1\), the above display can be solved numerically to get \(t^{*}=0.177\) and the power of \(\boldsymbol{\delta}^{\text{OR}}(t^{*})\) is given by \((1/3.5)\int(1-\Phi\{\lambda_{\sigma}(t^{*})-\sqrt{\sigma}\})\mathrm{d}\sigma =0.0611\). \(\blacksquare\) **Example 2 -** for the oracle rule based on \(Z_{i}\), the calculations from Example 1 give \(t_{z}=4.124\) at \(\alpha=0.1\) and the power of \(\boldsymbol{\delta}^{\text{ZOR}}(t_{z})=0.0015\). Now, consider the oracle rule based on \(T_{i}^{\text{OR}}\). Note that \(T_{i}^{\text{OR}}=1\) if \(\sigma_{i}\leq 3.65\) and \(0\) otherwise. So, \(T_{i}^{\text{OR}}\) perfectly classifies each case as being null or non-null based on \((X_{i},\sigma_{i})\). Consequently, the power of this procedure is \(1\) while the FDR is \(0\). ## Appendix B Proofs ### Proof of Theorem 1 We divide the proof into two parts. In Part (a), we establish two properties of the testing rule \(\boldsymbol{\delta}^{\text{OR}}(t)=\{I(T_{i}^{\text{OR}}<t):1\leq i\leq m\}\) for an arbitrary \(0<t<1\). In Part (b) we show that the oracle rule \(\boldsymbol{\delta}^{\text{OR}}(t^{*})\) attains the mFDR level exactly and is optimal amongst all mFDR procedures at level \(\alpha\). **Part (a).** Denote \(\alpha(t)\) the mFDR level of \(\boldsymbol{\delta}^{\text{OR}}(t)\). We shall show that (i) \(\alpha(t)<t\) for all \(0<t<1\) and that (ii) \(\alpha(t)\) is nondecreasing in \(t\). First, note that \(E\left\{\sum_{i=1}^{m}(1-\theta_{i})\delta_{i}^{\text{OR}}(t)\right\}=E_{ \boldsymbol{X},\boldsymbol{\sigma}}\{\sum_{i=1}^{m}T_{i}^{\text{OR}}\delta_{i} ^{\text{OR}}(t)\}\). Then, according to the definition of \(\alpha(t)\), we have \[E_{\boldsymbol{X},\boldsymbol{\sigma}}\left\{\sum_{i=1}^{m}\left\{T_{i}^{ \text{OR}}-\alpha(t)\right\}I(T_{i}^{\text{OR}}\leq t)\right\}=0. \tag{12}\] We claim that \(\alpha(t)<t\). Otherwise if \(\alpha(t)\geq t\), then we must have \(T_{i}^{\text{OR}}<t\leq\alpha(t)\). It follows that the LHS must be negative, contradicting (12). Next we show (ii), i.e, \(\alpha(t)\) is nondecreasing in \(t\). Let \(\alpha(t_{j})=\alpha_{j}\). We claim that if \(t_{1}<t_{2}\) then we must have \(\alpha_{1}\leq\alpha_{2}\). We argue by contradiction. Suppose that \(t_{1}<t_{2}\) but \(\alpha_{1}>\alpha_{2}\). 
Then \[(T_{i}^{\mathsf{OR}}-\alpha_{2})I(T_{i}^{\mathsf{OR}}<t_{2}) = (T_{i}^{\mathsf{OR}}-\alpha_{1})I(T_{i}^{\mathsf{OR}}<t_{1})+( \alpha_{1}-\alpha_{2})I(T_{i}^{\mathsf{OR}}<t_{1})\] \[+(T_{i}^{\mathsf{OR}}-\alpha_{2})I(t_{1}\leq T_{i}^{\mathsf{OR}}< t_{2})\] \[\geq (T_{i}^{\mathsf{OR}}-\alpha_{1})I(T_{i}^{\mathsf{OR}}<t_{1})+( \alpha_{1}-\alpha_{2})I(T_{i}^{\mathsf{OR}}<t_{1})\] \[+(T_{i}^{\mathsf{OR}}-\alpha_{1})I(t_{1}\leq T_{i}^{\mathsf{OR}}< t_{2}).\] It follows that \(E\left\{\sum_{i=1}^{m}(T_{i}^{\mathsf{OR}}-\alpha_{2})I(T_{i}^{\mathsf{OR}}<t_{2})\right\}>0\) since \(E\left\{\sum_{i=1}^{m}(T_{i}^{\mathsf{OR}}-\alpha_{1})I(T_{i}^{\mathsf{OR}}< t_{1})\right\}=0\) according to (12), \(\alpha_{1}>\alpha_{2}\) and \(T_{i}^{\mathsf{OR}}\geq t_{1}>\alpha_{1}\). However, this contradicts Equation (12) and so we must have \(\alpha_{1}<\alpha_{2}\). **Part (b).** Let \(\bar{\alpha}=\alpha(1)\). In Part (a), we showed that \(\alpha(t)\) is non-decreasing in \(t\). It follows that for all \(\alpha<\bar{\alpha}\), there exists a \(t^{*}\) such that \(t^{*}=\sup\{t:\alpha(t^{*})=\alpha\}\). By definition, \(t^{*}\) is the oracle threshold. Consider an arbitrary decision rule \(\boldsymbol{\delta}=(\delta_{1},\ldots,\delta_{m})\in\{0,1\}^{m}\) such that \(\text{mFDR}(\boldsymbol{\delta})\leq\alpha\). We have \(\mathbb{E}\left\{\sum_{i=1}^{m}(T_{i}^{\mathsf{OR}}-\alpha)\delta_{i}^{ \mathsf{OR}}(t^{*})\right\}=0\) and \(E\left\{\sum_{i=1}^{m}(T_{i}^{\mathsf{OR}}-\alpha)\delta_{i}\right\}\leq 0\). Hence \[E\Big{[}\sum_{i=1}^{m}\{\delta_{i}^{\mathsf{OR}}(t^{*})-\delta_{i}\}(T_{i}^{ \mathsf{OR}}-\alpha)\Big{]}\geq 0. \tag{13}\] Consider the transformation \(h(x)=(x-\alpha)/(1-x)\). Note that since \(h(x)\) is monotone, we can rewrite \(\delta_{i}^{\mathsf{OR}}(t^{*})=I\left[\left\{(T_{i}^{\mathsf{OR}}-\alpha)/(1- T_{i}^{\mathsf{OR}})\right\}<\lambda\right]\), where \(\lambda=(t^{*}-\alpha)/(1-t^{*})\). In Part (a) we have shown that \(\alpha<t^{*}<1\), which implies that \(\lambda>0\). Hence \[E\left[\sum_{i=1}^{m}\left\{\delta_{i}^{\mathsf{OR}}(t^{*})-\delta_{i}\right\} \left\{(T_{i}^{\mathsf{OR}}-\alpha)-\lambda(1-T_{i}^{\mathsf{OR}})\right\} \right]\leq 0. \tag{14}\] To see this, consider the terms where \(\delta_{i}^{\mathsf{OR}}(t^{*})-\delta_{i}\neq 0\). Then we have two cases: (i) \(\delta_{i}^{\mathsf{OR}}(t^{*})>\delta_{i}\) or (ii) \(\delta_{i}^{\mathsf{OR}}<\delta_{i}\). In case (i), \(\delta_{i}^{\mathsf{OR}}(t^{*})=1\), implying that \(\left\{(T_{i}^{\mathsf{OR}}-\alpha)/(1-T_{i}^{\mathsf{OR}})\right\}<\lambda\). In case (ii), \(\delta_{i}^{\mathsf{OR}}(t^{*})=0\), implying that \(\left\{(T_{i}^{\mathsf{OR}}-\alpha)/(1-T_{i}^{\mathsf{OR}})\right\}\geq\lambda\). Therefore, we always have \(\{\delta_{i}^{\mathsf{OR}}(t^{*})-\delta_{i}\}\{(T_{i}^{\mathsf{OR}}-\alpha)- \lambda(1-T_{i}^{\mathsf{OR}})\}\leq 0\). Summing over the \(m\) terms and taking the expectation yield (14). Now, combining (13) and (14), we obtain \[0\leq E\left[\sum_{i=1}^{m}\{\delta_{i}^{\mathsf{OR}}(t^{*})-\delta_{i}\}(T_{ i}^{\mathsf{OR}}-\alpha)\right]\leq\lambda E\left[\sum_{i=1}^{m}\{\delta_{i}^{ \mathsf{OR}}(t^{*})-\delta_{i}\}(T_{i}^{\mathsf{OR}}-\alpha)\right].\] Since \(\lambda>0\), it follows that \(E\left[\sum_{i=1}^{m}\{\delta_{i}^{\mathsf{OR}}(t^{*})-\delta_{i}\}(T_{i}^{ \mathsf{OR}}-\alpha)\right]>0\). 
Finally, we apply the definition of ETP to conclude that \(\text{ETP}\{\delta^{\mathsf{OR}}(t^{*})\}\geq\text{ETP}(\boldsymbol{\delta})\) for all \(\boldsymbol{\delta}\in\{0,1\}^{m}\) such that \(\text{mFDR}(\boldsymbol{\delta})\leq\alpha\). \(\blacksquare\) ### Proof of Proposition 1 We first state two useful lemmata where \(\delta_{u}(\cdot)\) denotes a point mass at \(u\) **Lemma 1**.: _Let \(\phi_{\tau}(\cdot)\) be the density function of \(N(0,\tau^{2})\). For any \(g\) with support \(supp(g)\subset[-M,M]\), and any \(\epsilon>0\), \(\tau>0\), with \(S\) large enough (depending on \(M,\epsilon,\tau\) only), there exists \(g^{\prime}\in\{\sum_{j=1}^{S}\theta_{j}\delta_{u_{j}}(\cdot)|\sum_{j=1}^{S} \theta_{j}=1,\ \theta_{j}\geq 0\ \forall j\}\) with \(u_{j}=-M+2M(j-1)/(S-1)\) such that \(|g*\phi_{\tau}(x)-g^{\prime}*\phi_{\tau}(x)|^{2}<\epsilon\) for all \(x\)._ **Lemma 2**.: _Suppose \(\hat{f}(x|\sigma)=\hat{g}*\phi_{\sigma}(x)\) and \(f(x|\sigma)=g*\phi_{\sigma}(x)\). Then \(E_{\boldsymbol{x},\boldsymbol{\sigma}}E_{x,\sigma}|\hat{f}(x|\sigma)-f(x| \sigma)|^{2}\to 0\) implies \(E_{\boldsymbol{x},\boldsymbol{\sigma}}\|\hat{g}*\phi_{\tau}-g*\phi_{\tau}\|_{ 2}^{2}\to 0\) for any fixed \(\tau>0\). Here \(E_{\boldsymbol{x},\boldsymbol{\sigma}}\) is taken with respect to the data used to construct \(\hat{f}\) and \(\hat{g}\), \(E_{x,\sigma}\) is taken with respect to the input for \(\hat{f}\) and \(f\)._ Using standard arguments in density estimation theory (e.g. Wand and Jones (1994) page 21), we have \(E\|\hat{\varphi}^{m}(\cdot,\sigma)-f(\cdot|\sigma)\|_{2}^{2}=O\{(mh_{x}h_{ \sigma})^{-1}+h_{x}^{4}+h_{\sigma}^{4}\}\). By assumption (A3) \((mh_{x}h_{\sigma})^{-1}+h_{x}^{4}+h_{\sigma}^{4}\to 0\), it follows that \[\frac{1}{m}\sum_{i=1}^{m}\{\hat{f}^{m}(x_{i}|\sigma_{i})-\hat{\varphi}^{m}(x_ {i},\sigma_{i})\}^{2}\xrightarrow{p}\frac{1}{m}\sum_{i=1}^{m}\{\hat{f}^{m}(x_ {i}|\sigma_{i})-f(x_{i}|\sigma_{i})\}^{2}. \tag{15}\] For any \(\epsilon>0\), since \(supp\{g_{\mu}(\cdot|\sigma)\}\subset[-M,M]\) and \(g_{\mu}(\cdot|\sigma)\) is continuous in \(\sigma\), by Lemma 1 there exists continuous functions \(g_{j},\ j=1,\ldots,S\) such that \(g^{\prime}_{\mu}(\cdot|\sigma)=\sum_{j=1}^{S}g_{j}(\sigma)\delta_{u_{j}}(\cdot)\) and \(|g^{\prime}_{\mu}(\cdot|\sigma)*\phi_{\tau}(x)-g_{\mu}(\cdot|\sigma)*\phi_{ \tau}(x)|^{2}<\epsilon\) for all \(x\). Let \(\{q_{k}\}_{k=1}^{\infty}\) be an orthonormal basis for \(L^{2}[M_{1},M_{2}]\). Since \(g_{j}\)'s are bounded and continuous they all belongs to \(L^{2}[M_{1},M_{2}]\), hence they can be written as \(g_{j}(\sigma)=\sum_{k=1}^{\infty}w_{jk}q_{k}(\sigma)\). For each \(g_{j}\) there exists \(N_{j}\) such that we can find \(\tilde{w}_{jk}\) with \(\|g_{j}(\cdot)-\sum_{k=1}^{N_{j}}\tilde{w}_{jk}q_{k}(\cdot)\|_{2}^{2}<\epsilon/S\). Take \(K=\max_{j}N_{j}\). Then, there exists \(\tilde{w}_{jk},j=1,\ldots,S,\ k=1,\ldots,K\), such that \(\|g_{j}(\cdot)-\sum_{k=1}^{K}\tilde{w}_{jk}q_{k}(\cdot)\|_{2}^{2}<\epsilon/S\) for all \(j\). Write \(\tilde{g}_{j}(\cdot)=\sum_{k=1}^{K}\tilde{w}_{jk}q_{k}(\cdot)\). Let \(\tilde{g}_{\mu}(\cdot|\sigma)=\sum_{j=1}^{K}\tilde{g}_{j}(\sigma)\delta_{u_{j}}(\cdot)\). 
Then for every \(\sigma>0\) and any fixed \(\tau>0\) we have \[\|g_{\mu}^{\prime}(\cdot|\sigma)*\phi_{\tau}-\tilde{g}_{\mu}(\cdot|\sigma)*\phi_ {\tau}\|_{2}^{2}=\|\sum_{j=1}^{S}(\tilde{g}_{j}(\sigma)-g_{j}(\sigma))\phi_{ \tau}(\cdot-u_{j})\|_{2}^{2}=O(\epsilon).\] Hence, in the feasible region, it is possible to find \(\hat{f}^{m}\) such that \[\frac{1}{m}\sum_{i=1}^{m}\{\hat{f}^{m}(x_{i}|\sigma_{i})-f(x_{i}|\sigma_{i}) \}^{2}\leq\epsilon.\] Using (15), we see that the solution to the optimization problem indeed satisfies the above inequality with probability converging to 1. The Proposition then follows from Lemma 2. \(\blacksquare\) **Remark 1**.: _(Grid Size) Note that_ \[\frac{1}{m}\sum_{i=1}^{m}\{\hat{f}^{m}(x_{i}|\sigma_{i})-f(x_{i}|\sigma_{i}) \}^{2}=O(E\|\hat{f}^{m}-f\|_{2}^{2})=O\{(mh_{x}h_{\sigma})^{-1}+h_{x}^{4}+h_{ \sigma}^{4}\}.\] _The optimal rate of \((mh_{x}h_{\sigma})^{-1}+h_{x}^{4}+h_{\sigma}^{4}\) is \(m^{-2/3}\) and is achieved when \(h_{x}\sim h_{\sigma}\sim m^{-1/6}\). Hence, when choosing the grid size we only need_ \[\big{|}\frac{1}{m}\sum_{j=1}^{m}\phi_{\tau}(x-\mu_{j})-\frac{1}{m}\sum_{j=1}^{ m}\phi_{\tau}(x-u_{i(j)})\big{|}^{2}=O(m^{-2/3}),\] _where \(u_{i(j)}\in\{u_{1},\ldots,u_{S}\}\) is such that \(|u_{i(j)}-\mu_{j}|=O(1/S)\). Since \(g_{\mu}(\cdot|\sigma)\) has bounded support, such \(u_{i(j)}\) can always be found. Let \(\epsilon=|u_{i(j)}-\mu_{j}|\), then_ \[|\phi_{\tau}(x-\mu_{j})-\phi_{\tau}(x-u_{i(j)})|^{2}=\frac{1}{2\pi\tau^{2}}e^ {-\frac{\pi^{2}}{\tau^{2}}}|1-e^{\frac{2\pi\epsilon-\epsilon^{2}}{2\tau^{2}} }|^{2}. \tag{16}\] _We want the above to be of order \(O(m^{-2/3})\) uniformly for any \(x\). If \(x\) has order greater than \(\sqrt{\log m}\) then the RHS of (16) is \(O(m^{-2/3})\). When \(x\) has order less than \(\sqrt{\log m}\), since \(e^{-\frac{\epsilon^{2}}{\tau^{2}}}=O(1)\), we focus on \(|1-e^{\frac{2\pi\epsilon-\epsilon^{2}}{2\tau^{2}}}|^{2}.\) By Taylor expansion,_ \[|1-e^{\frac{2\pi\epsilon-\epsilon^{2}}{2\tau^{2}}}|^{2}=O\left\{\left(\frac{2 \epsilon-\epsilon^{2}}{2\tau^{2}}\right)^{2}\right\}.\] _If \(\epsilon=O(m^{-1/3}(\log m)^{-1/2})\) then the above is \(O(m^{-2/3})\), and it follows that the grid size of \(S(m)=O(m^{1/3}(\log m)^{1/2})\) is sufficient. \(\blacksquare\) **Remark 2** (Number of Basis Functions).: _In the proof of Proposition 1 we used the fact that for each \(g_{j}\) there exists \(N_{j}\) such that we can find \(\tilde{w}_{jk}\) with \(\|g_{j}(\cdot)-\sum_{k=1}^{N_{j}}\tilde{w}_{jk}q_{k}(\cdot)\|_{2}^{2}<\epsilon/S.\) If we take \(\tilde{w}_{ji}=w_{ji}\) then_ \[\|g_{j}(\cdot)-\sum_{k=1}^{N_{j}}\tilde{w}_{jk}q_{k}(\cdot)\|_{2}^{2}=\sum_{i= N_{j}}^{\infty}w_{ji}^{2}.\] _Since \(\{w_{ji}\}_{i=1}^{\infty}\in\Theta(\gamma,c)\) we have_ \[\sum_{i=N_{j}}^{\infty}w_{ji}^{2}=o\left(\int_{N_{j}}^{\infty}x^{-2\gamma-1}dx \right)=O(N_{j}^{-2\gamma}).\] _Hence, for \(\|g_{j}(\cdot)-\sum_{k=1}^{N_{j}}w_{jk}q_{k}(\cdot)\|_{2}^{2}<\epsilon/S\) we only need \(N_{j}^{2\gamma}>S/\epsilon\). Since we have argued that the order of \(S\) does not have to be larger than \(m^{1/3}(\log m)^{1/2}\), if we take \(\epsilon\) to be of order \(m^{-2/3}\) then the order of \(K(m)=\max_{j}N_{j}\) does not have to be much larger than \(m^{1/(2\gamma)}(\log m)^{1/(4\gamma)}\). \(\blacksquare\)_ ### Proof of Corollary 1 Note that \(f(\cdot|\sigma)\) is continuous, then there exists \(K_{1}=[-M,M]\) such that \(P(x\in K_{1}^{c})\to 0\) as \(M\to\infty\). 
Let \(\inf_{x\in K_{1}}f(x|\sigma)=l_{0}\) and \(A_{l_{0}}=\{x:|\hat{f}^{m}(x|\sigma)-f(x|\sigma)|\geq l_{0}/2\}\). Since \(E\int|\hat{f}^{m}(x|\sigma)-f(x|\sigma)|^{2}dx\geq(l_{0}/2)^{2}P(A_{l_{0}})\); it follows that \(P(A_{l_{0}})\to 0\). Thus \(\hat{f}^{m}(\cdot|\sigma)\) and \(f(\cdot|\sigma)\) are bounded below by a positive number for large \(m,S,K\) except for an event that has a low probability. Similar arguments can be applied to the upper bound of \(\hat{f}^{m}(\cdot|\sigma)\) and \(f(\cdot|\sigma)\), as well as to the upper and lower bounds for \(\hat{f}^{m}_{0}(\cdot|\sigma)\) and \(f_{0}(\cdot|\sigma)\). Therefore, we conclude that \(\hat{f}^{m}_{0}(\cdot|\sigma)\), \(\hat{f}^{m}(\cdot|\sigma)\), \(f_{0}(\cdot|\sigma)\) and \(f(\cdot|\sigma)\). are all bounded in the interval \([l_{a},l_{b}]\), \(0<l_{a}<l_{b}<\infty\) for large \(m,S,K\) except for an event, say \(A_{\epsilon}\) that has low probability. Let \(\hat{T}_{m}(x,\sigma)=\hat{f}^{m}_{0}(x|\sigma)/\hat{f}^{m}(x|\sigma)\) and \(T^{\mathsf{OR}}(x,\sigma)=f_{0}(x|\sigma)/f(x|\sigma)\). We have \[\hat{T}_{m}(x,\sigma)-T^{\mathsf{OR}}(x,\sigma)=(\hat{f}^{m}_{0}(x|\sigma)f(x| \sigma)-f_{0}(x|\sigma)\hat{f}^{m}(x|\sigma))/(\hat{f}^{m}(x|\sigma)f(x|\sigma )).\] It is easy to see that \((\hat{T}_{m}-T^{\mathsf{OR}})^{2}\) is bounded by 1. Then \[E\{\hat{T}_{m}(x,\sigma)-T^{\mathsf{OR}}(x,\sigma)\}^{2}\leq P(A_{l_{0}})+c_{1 }E\{\hat{f}^{m}_{0}(x|\sigma)-f_{0}(x|\sigma)\}^{2}+c_{2}E\{\hat{f}^{m}(x| \sigma)-f(x|\sigma)\}^{2}.\] Thus, \(E\{\hat{T}_{m}(x,\sigma)-T^{\mathsf{OR}}(x,\sigma)\}^{2}\to 0.\) Let \(B_{\delta}=\{x|\sigma:|\hat{T}_{m}(x,\sigma)-T^{\mathsf{OR}}(x,\sigma)|>\delta\}\). Then \(\delta^{2}P(B_{\delta})\leq E\{\hat{T}_{m}(x,\sigma)-T^{\mathsf{OR}}(x,\sigma) \}^{2}\to 0\), and the result follows. ### Proof of Theorem 2 We begin with a summary of notation used throughout the proof: * \(Q(t)=m^{-1}\sum_{i=1}^{m}(T_{i}^{\mathsf{OR}}-\alpha)I\{T_{i}^{\mathsf{OR}}<t\}\). * \(\hat{Q}(t)=m^{-1}\sum_{i=1}^{m}(\hat{T}_{i,m}-\alpha)I\{\hat{T}_{i,m}<t\}\). * \(Q_{\infty}(t)=E\{(T^{\mathsf{OR}}-\alpha)I\{T^{\mathsf{OR}}<t\}\}\). * \(t_{\infty}=\sup\{t\in(0,1):Q_{\infty}(t)\leq 0\}\): the "ideal" threshold. For \(\hat{T}_{(k),m}<t<\hat{T}_{(k+1),m}\), define a continuous version of \(\hat{Q}(t)\) as \[\hat{Q}_{C}(t)=\frac{t-\hat{T}_{(k),m}}{\hat{T}_{(k+1),m}-\hat{T}_{(k),m}}\hat {Q}_{k}+\frac{\hat{T}_{(k+1),m}-t}{\hat{T}_{(k+1),m}-\hat{T}_{(k),m}}\hat{Q}_{ k+1},\] where \(\hat{Q}_{k}=\hat{Q}\left(\hat{T}_{(k),m}\right)\). Since \(\hat{Q}_{C}(t)\) is continuous and monotone, its inverse \(\hat{Q}_{C}^{-1}\) is well-defined, continuous and monotone. Next we show the following two results in turn: (i) \(\hat{Q}(t)\stackrel{{ p}}{{\rightarrow}}Q_{\infty}(t)\) and (ii) \(\hat{Q}_{C}^{-1}(0)\stackrel{{ p}}{{\rightarrow}}t_{\infty}\). To show (i), note that \(Q(t)\stackrel{{ p}}{{\rightarrow}}Q_{\infty}(t)\) by the WLLN, so that we only need to establish that \(\hat{Q}(t)-Q(t)\stackrel{{ p}}{{\rightarrow}}0\). We need the following lemma, which is proven in Section B.7. **Lemma 3**.: _Let \(U_{i}=(T_{i}-\alpha)I(T_{i}<t)\) and \(\hat{U}_{i}=(\hat{T}_{i}-\alpha)I\{\hat{T}_{i}<t\}\). Then \(E\left(\hat{U}_{i}-U_{i}\right)^{2}=o(1)\)._ By Lemma 3 and Cauchy-Schwartz inequality, \(E\left\{\left(\hat{U}_{i}-U_{i}\right)\left(\hat{U}_{j}-U_{j}\right)\right\}= o(1)\). Let \(S_{m}=\sum_{i=1}^{m}\left(\hat{U}_{i}-U_{i}\right)\). 
It follows that \[Var\left(m^{-1}S_{m}\right)\leq m^{-2}\sum_{i=1}^{m}E\left\{\left(\hat{U}_{i }-U_{i}\right)^{2}\right\}+O\left(\frac{1}{m^{2}}\sum_{i,j:i\neq j}E\left\{ \left(\hat{U}_{i}-U_{i}\right)\left(\hat{U}_{j}-U_{j}\right)\right\}\right)=o (1).\] By Corollary 1, \(E(m^{-1}S_{m})\to 0\), applying Chebyshev's inequality, we obtain \(m^{-1}S_{m}=\hat{Q}(t)-Q(t)\stackrel{{ p}}{{\rightarrow}}0\). Hence (i) is proved. Notice that \(Q_{\infty}(t)\) is continuous by construction, we also have \(\hat{Q}(t)\stackrel{{ p}}{{\rightarrow}}\hat{Q}_{C}(t)\). Next we show (ii). Since \(\hat{Q}_{C}(t)\) is continuous, for any \(\varepsilon>0\), we can find \(\eta>0\) such that \[P\left\{\left|\hat{Q}_{C}\left(t_{\infty}\right)\right|>\eta\right\}\geq P\left\{ \left|\hat{Q}_{C}^{-1}(0)-\hat{Q}_{C}^{-1}\left\{\hat{Q}_{C}\left(t_{\infty} \right)\right\}\right|>\varepsilon\right\}.\] Corollary 1 and the WLLN imply that \(\hat{Q}_{C}(t)\overset{p}{\rightarrow}Q_{\infty}(t).\) Note that \(Q_{\infty}\left(t_{\infty}\right)=0\). Then \(P\left(\left|\hat{Q}_{C}\left(t_{\infty}\right)\right|>\eta\right)\to 0\). Hence we have \(\hat{Q}_{C}^{-1}(0)\overset{p}{\rightarrow}\hat{Q}_{C}^{-1}\left\{\hat{Q}_{C }\left(t_{\infty}\right)\right\}=t_{\infty}\), completing the proof of (ii). To show \(\text{FDR}(\boldsymbol{\delta}^{\text{HAMT}}(\hat{t}_{m}))=\text{FDR}( \boldsymbol{\delta}^{\text{OR}}(t^{*}))+o(1)=\alpha+o(1)\), we only need to show \(\text{mFDR}(\boldsymbol{\delta}^{\text{HAMT}}(\hat{t}_{m}))=\text{mFDR}( \boldsymbol{\delta}^{\text{OR}}(t^{*}))+o(1)\). The result then follows from the asymptotic equivalence of FDR and mFDR, which was proven in Basu et al. (2018). Define the continuous version of \(Q(t)\) as \(Q_{C}(t)\) and the corresponding threshold as \(Q_{C}^{-1}(0)\). Then by construction, we have \[\boldsymbol{\delta}^{\text{HAMT}}(\hat{t}_{m})=\left[I\left\{\hat{T}_{i,m} \leq\hat{Q}_{C}^{-1}(0)\right\}:1\leq i\leq m\right]\quad\text{and}\quad \boldsymbol{\delta}^{\text{OR}}(t^{*}))=\left[I\left\{T_{i}\leq Q_{C}^{-1}(0) \right\}:1\leq i\leq m\right].\] Following the previous arguments, we can show that \(Q_{C}^{-1}(0)\overset{p}{\rightarrow}t_{\infty}\). It follows that \(\hat{Q}_{C}^{-1}(0)=Q_{C}^{-1}(0)+o_{p}(1)\). By construction \(\text{mFDR}(\boldsymbol{\delta}^{\text{OR}})=\alpha\). The mFDR level of \(\boldsymbol{\delta}^{\text{HAMT}}\) is \[\text{mFDR}(\boldsymbol{\delta}^{\text{HAMT}})=\frac{P_{H_{0}}\left\{\hat{T}_ {i,m}\leq\hat{Q}_{C}^{-1}(0)\right\}}{P\left\{\hat{T}_{i,m}\leq\hat{Q}_{C}^{-1 }(0)\right\}}.\] From Corollary 1, \(\hat{T}_{i,m}\overset{p}{\rightarrow}T_{i}^{\text{OR}}\). Using the continuous mapping theorem, \(\text{mFDR}\left(\boldsymbol{\delta}^{\text{HAMT}}\right)=\text{mFDR}\left( \boldsymbol{\delta}^{\text{OR}}\right)+o(1)=\alpha+o(1)\). The desired result follows. Finally, using the fact that \(\hat{T}_{i,m}\overset{p}{\rightarrow}T_{i}^{\text{OR}}\) and \(\hat{Q}_{C}^{-1}(0)\overset{p}{\rightarrow}Q_{C}^{-1}(0)\), we can similarly show that \(\text{ETP}(\boldsymbol{\delta}^{\text{HAMT}})/\text{ETP}(\boldsymbol{\delta} ^{\text{OR}})=1+o(1)\). ### Proof of Lemma 1 Suppose \(\mu_{i}\overset{iid}{\sim}g\), for \(i=1,...,m\). Let \(\hat{g}\) be the empirical density function \(\sum_{i=1}^{m}\delta_{\mu_{i}}(\cdot)\). Let \(f(x|\tau)=g*\phi_{\tau}(x)\) and \(\hat{f}(x|\tau)=\hat{g}*\phi_{\tau}(x)\). 
Then \[E\hat{f}(x|\tau)=E\sum_{i=1}^{m}\frac{1}{m}\phi_{\tau}(x-\mu_{i})=E\phi_{\tau} (x-\mu)=\int_{-\infty}^{\infty}\phi_{\tau}(x-\mu)g(\mu)d\mu=f(x|\tau).\] Also since \(\phi_{\tau}\) is bounded, it follows that \(\text{Var}\{\phi_{\tau}(x-\mu_{i})\}<\infty\). Therefore \[\text{Var}\hat{f}(x|\tau) =\text{Var}\left\{\int_{-\infty}^{\infty}\phi_{\tau}(\mu-x)\hat{g} (\mu)d\mu\right\}\] \[=\text{Var}\left\{\frac{1}{m}\sum_{i=1}^{m}\phi_{\tau}(x-\mu_{i})\right\}\] \[=\frac{1}{m}\text{Var}\{\phi_{\tau}(x-\mu_{i})\}\to 0.\] It follows that \(E_{\boldsymbol{\mu}}|f(x|\tau)-\hat{f}(x|\tau)|^{2}\to 0\) as \(n\to\infty\). The above implies it is possible to find a set \(\{\mu_{1},\ldots,\mu_{m}\}\) and \(\hat{f}(x|\tau)=\frac{1}{m}\sum_{i=1}^{m}\phi_{\tau}(x-\mu_{i})\) such that for all \(x\), \(|f(x|\tau)-\hat{f}(x|\tau)|^{2}\to 0\). Consider the set of functions \(\{\sum_{j=1}^{k}\theta_{j}\phi_{\tau}(x-u_{j})|\sum_{j=1}^{k}\theta_{j}=1, \theta_{j}\geq 0\ \ \forall j.\}\). We can make the grid fine enough so that for any \(\epsilon^{\prime}>0\) and \(j\), there exists \(u_{i(j)}\in\{u_{1},\ldots,u_{k}\}\) such that \(|\mu_{j}-u_{i(j)}|<\epsilon^{\prime}\). We can choose \(\epsilon^{\prime}\) small enough so that \(|\phi_{\tau}(x-u_{j})-\phi_{\tau}(x-\mu_{i(j)})|^{2}<\epsilon\). Hence, \[\big{|}\frac{1}{m}\sum_{j=1}^{m}\phi_{\tau}(x-\mu_{j})-\frac{1}{m }\sum_{j=1}^{m}\phi_{\tau}(x-u_{i(j)})\big{|}^{2} =\frac{1}{m^{2}}\big{|}\sum_{j=1}^{m}\phi_{\tau}(x-\mu_{j})-\sum_ {j=1}^{m}\phi_{\tau}(x-u_{i(j)})\big{|}^{2}\] \[\leq\frac{1}{m}\sum_{j=1}^{m}|\phi_{\tau}(x-\mu_{j})-\phi_{\tau}( x-u_{i(j)})|^{2}\leq\epsilon.\] By the triangle inequality we have \(|f(x|\tau)-\frac{1}{m}\sum_{j=1}^{m}\phi_{\tau}(x-u_{i(j)})|^{2}\leq\epsilon\), we can let \(g^{\prime}(\cdot)=\frac{1}{m}\sum_{j=1}^{m}\delta_{u_{i(j)}}(\cdot)\). \(\blacksquare\) ### Proof of Lemma 2 By Fubini's theorem, we have \[E|\hat{f}(x|\sigma)-f(x|\sigma)|^{2} =E_{\sigma}E_{\boldsymbol{x},\boldsymbol{\sigma}}E_{x|\sigma}| \hat{f}(x|\sigma)-f(x|\sigma)|^{2}\] \[=E_{\sigma}E_{\boldsymbol{x},\boldsymbol{\sigma}}\int_{-\infty}^{ \infty}|\hat{f}(x|\sigma)-f(x|\sigma)|^{2}f(x|\sigma)dx\] Hence, \(E|\hat{f}(x|\sigma)-f(x|\sigma)|^{2}\to 0\), implies there exists \(\sigma\) such that \[E\int_{-\infty}^{\infty}|\hat{f}(x|\sigma)-f(x|\sigma)|^{2}f(x|\sigma)dx\to 0.\] Given any fixed \(\tau>0\), suppose \(E\int_{-\infty}^{\infty}|\hat{g}*\phi_{\tau}(x)-g*\phi_{\tau}(x)|^{2}dx\to 0\), then there exists a set \(\mathcal{X}\) with \(m(\mathcal{X})>0\) and \(\epsilon_{1}>0\) such that \(E|\hat{g}*\phi_{\tau}(x)-g*\phi_{\tau}(x)|>\epsilon_{1}\) on \(\mathcal{X}\), here \(m(\mathcal{X})\) is the Lebesgue measure of \(\mathcal{X}\). Since \(m(\mathcal{X})>0\), and \(M_{1}<\sigma<M_{2}\) it follows that \(\int_{\mathcal{X}}f(x|\sigma)dx>0\) and on \(\mathcal{X}\)\(E|\hat{f}(x|\sigma)-f(x|\sigma)|^{2}>\epsilon_{2}\) for some \(\epsilon_{2}>0\). This contradicts the fact \(E\int_{-\infty}^{\infty}|\hat{f}(x|\sigma)-f(x|\sigma)|^{2}f(x|\sigma)dx\to 0\). 
\(\blacksquare\) ### Proof of lemma 3 Using the definitions of \(\hat{U}_{i}\) and \(U_{i}\), we can show that \[\left(\hat{U}_{i}-U_{i}\right)^{2} =\left(\hat{T}_{i,m}-T_{i}^{\mathsf{OR}}\right)^{2}I\left(\hat{T }_{i,m}\leq t,T_{i}^{\mathsf{OR}}\leq t\right)+\left(\hat{T}_{i,m}-\alpha \right)^{2}I\left(\hat{T}_{i,m}\leq t,T_{i}^{\mathsf{OR}}>t\right)\] \[+\left(T_{i}^{\mathsf{OR}}-\alpha\right)^{2}I\left(\hat{T}_{i,m}> t,T_{i}^{\mathsf{OR}}\leq t\right).\] Denote the three sums on the RHS as \(I\), \(II\), and \(III\) respectively. By Corollary 1, \(E(I)=o(1)\). Let \(\varepsilon>0\). Consider \[P\left(\hat{T}_{i,m}\leq t,T_{i}^{\mathsf{OR}}>t\right) \leq P\left(\hat{T}_{i,m}\leq t,T_{i}^{\mathsf{OR}}\in(t,t+ \varepsilon)\right)+P\left(\hat{T}_{i,m}\leq t,T_{i}^{\mathsf{OR}}\geq t+ \varepsilon\right)\] \[\leq P\left\{T_{i}^{\mathsf{OR}}\in(t,t+\varepsilon)\right\}+P \left(\left|\hat{T}_{i,m}-T_{i}^{\mathsf{OR}}\right|>\varepsilon\right)\] The first term on the right hand is vanishingly small as \(\varepsilon\to 0\) because \(T_{i}^{\mathsf{OR}}\) is a continuous random variable. The second term converges to \(0\) by Corollary 1. we conclude that \(II=o(1)\). In a similar fashion, we can show that \(III=o(1)\), thus proving the lemma. \(\blacksquare\) ## Appendix C Experiments involving two-sided composite null hypotheses Here we assess the numerical performance of HAMT for two-sided composite null hypotheses. We evaluate the following three competing testing procedures from Section 5.2: AdaPTGMM, DECONV and OR. Additionally, we consider the testing procedure NPMLE that relies on the deconvolution estimate obtained from nonparametric maximum likelihood estimation to construct the Lfdr statistic. Note that for both DECONV and NPMLE, the underlying deconvolution estimator ignores the dependence between \(\mu_{i}\) and \(\sigma_{i}\). We fix \(m=10^{4}\), \(\alpha=0.1\) and evaluate the aforementioned five methods across three simulation settings. Setting 1, presented in Figure 12, is a modification of Setting 2 from Section 5.2. Here \(\sigma_{i}=0.5,1\) or \(3\) with equal probabilities. Conditional on \(\sigma_{i}\), \(\mu_{i}=0\) with probability \(0.9\), and \(\mu_{i}=u\sigma_{i}\) or \(-u\sigma_{i}\) with probability \(0.05\) each. We take \(\mathcal{A}=[-5,5]\) for this setting and find that HAMT controls the FDR level at \(\alpha\) and dominates AdaPTGMM, DECONV and NPMLE in power. For Setting 2, we sample \(\sigma_{i}\) independently from \(U(0.5,u)\) but consider a three component mixture distribution for \(\mu_{i}\) conditional on \(\sigma_{i}\). In particular \(\mu_{i}=0\) with probability \(0.9\) and \(\mu_{i}\stackrel{{ ind.}}{{\sim}}N(3,\sigma_{i})\) or \(N(-3,\sigma_{i})\) each with probability \(0.05\). Here we let \(\mathcal{A}=[-2,2]\). In Figure 13 we find that HAMT continues to deliver a better performance than AdaPTGMM, DECONV and NPMLE. NPMLE, in particular, fails to control the FDR level while AdaPTGMM is the most conservative among the five competing testing procedures. In Setting 3, we allow \(\sigma_{i}^{2}\) to be unknown and use sample variances instead. Specifically, for the \(i^{th}\) hypothesis testing problem we sample \(X_{ij}\stackrel{{ i.i.d}}{{\sim}}N(\mu_{i},\sigma_{i}^{2})\) for \(j=1,\ldots,100\). Conditional on \(\sigma_{i}\), \(\mu_{i}=0\) with probability \(0.9\), and \(\mu_{i}=u\sigma_{i}\) or \(-u\sigma_{i}\) with probability \(0.05\) each, as in Setting 1. 
For the standard deviation, we take \(\sigma_{i}=0.5\sqrt{100},\sqrt{100}\) or \(3\sqrt{100}\) with equal probabilities and fix \(\mathcal{A}=[-5,5]\) for this setting. In Figure 14 we find that although HAMT controls the FDR level at \(\alpha\), it is relatively more conservative and demonstrates lower power than Setting 1 where \(\sigma_{i}\) were known.
2307.10537
Wide-band Unambiguous Quantum Sensing via Geodesic Evolution
We present a quantum sensing technique that utilizes a sequence of $\pi$ pulses to cyclically drive the qubit dynamics along a geodesic path of adiabatic evolution. This approach effectively suppresses the effects of both decoherence noise and control errors while simultaneously removing unwanted resonance terms, such as higher harmonics and spurious responses commonly encountered in dynamical decoupling control. As a result, our technique offers robust, wide-band, unambiguous, and high-resolution quantum sensing capabilities for signal detection and individual addressing of quantum systems, including spins. To demonstrate its versatility, we showcase successful applications of our method in both low-frequency and high-frequency sensing scenarios. The significance of this quantum sensing technique extends to the detection of complex signals and the control of intricate quantum environments. By enhancing detection accuracy and enabling precise manipulation of quantum systems, our method holds considerable promise for a variety of practical applications.
Ke Zeng, Xiaohui Yu, Martin B. Plenio, Zhen-Yu Wang
2023-07-20T02:31:58Z
http://arxiv.org/abs/2307.10537v1
# Wide-band Unambiguous Quantum Sensing via Geodesic Evolution ###### Abstract We present a quantum sensing technique that utilizes a sequence of \(\pi\) pulses to cyclically drive the qubit dynamics along a geodesic path of adiabatic evolution. This approach effectively suppresses the effects of both decoherence noise and control errors while simultaneously removing unwanted resonance terms, such as higher harmonics and spurious responses commonly encountered in dynamical decoupling control. As a result, our technique offers robust, wide-band, unambiguous, and high-resolution quantum sensing capabilities for signal detection and individual addressing of quantum systems, including spins. To demonstrate its versatility, we showcase successful applications of our method in both low-frequency and high-frequency sensing scenarios. The significance of this quantum sensing technique extends to the detection of complex signals and the control of intricate quantum environments. By enhancing detection accuracy and enabling precise manipulation of quantum systems, our method holds considerable promise for a variety of practical applications. _Introduction. --_ Accurate characterization of the qubit environment holds significant importance across a range of applications, spanning from quantum information processing to quantum sensing [1; 2; 3; 4]. A widely utilized technique for achieving this is the implementation of dynamical decoupling (DD) pulse sequences [5; 6]. These sequences serve to filter out environmental noise, thereby extending the quantum coherence time, as well as to extract and amplify signals of specific frequencies [3; 4; 7]. Consequently, qubits under such sequences become highly sensitive quantum sensors, presenting diverse applications. For instance, nitrogen-vacancy (NV) centers [8; 9; 10] subjected to DD pulse sequences enable nanoscale nuclear magnetic resonance (NMR) [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26], spin label detection [27; 28; 29], spin cluster imaging [30; 31; 32; 33; 34; 35; 36; 37; 38], and AC field sensing [39; 40; 41; 42; 43; 44; 45; 46]. Furthermore, they can be employed for controlling nearby single nuclear spins [47; 48; 49; 50; 51; 52] in the context of quantum information processing [53; 54], quantum simulations [55], and quantum networks [56; 57; 58; 59]. However, DD pulse sequences used for quantum sensing, such as the commonly employed Carr-Purcell-Meiboom-Gill (CPMG) [60; 61] and XY8 [62] sequences encounter an issue known as spectral leakage and spurious resonance. These complications make the interpretation of the sensor's recorded signal challenging and can result in ambiguous signal identification [63; 64]. DD pulses introduce abrupt temporal state flipping of the qubit sensor, causing a pronounced frequency modulation at a specific frequency [indicated by the gray line in Fig. 1(d)]. This modulation leads to a strong resonance at the flipping frequency, enabling frequency-selective sensing. However, it also generates resonances at other frequencies, as evident in its Fourier transform [represented by the gray squares in Fig. 1(d)]. Consequently, signals outside the target sensing frequency band can contribute to the sensor's response and each individual frequency signal produces multiple signal peaks in the measured spectrum. Complex signals with a wide frequency range often exhibit significant signal overlap, posing challenges for analysis. 
Additionally, quantum heterodyne methods employed to down-convert high-frequency signals for sensing can exacerbate signal overlap [43; 44; 45]. Furthermore, the limited power of control pulses introduces spurious signals at unexpected frequencies [63]. These factors collectively present obstacles to reliably measuring environmental signals, particularly when various background noise sources are not fully characterized. With this goal in mind sequence timing has been optimised [65; 66] to eliminate higher-harmonics and sequence randomization has been explored to mitigates spurious signals [67; 68]. However, both approaches can address only specific their aspects and can solve these only partially. In this Letter, we present a novel approach called cyclic geodesic driving, which enables unambiguous sensing of signals across a broad frequency range on qubit sensors. Our method involves the application of a sequence of \(\pi\)-pulses to implement accelerated high-fidelity quantum adiabatic driving, effectively inducing periodic evolution of the quantum sensor along the geodesic path on the Bloch sphere. As a result, the resonance frequency of the quantum sensor aligns with the frequency of the periodic evolution, while simultaneously mitigating the impact of environmental background noise. By employing this technique, each individual frequency signal generates a single, distinct signal response within the wide frequency band. This eliminates undesirable signal overlap, enabling precise characterization of complex external environments and signals. For achieving arbitrary high frequency resolution, our method can be combined with synchronized readout techniques [39; 40]. Furthermore, our approach exhibits remarkable resilience against control errors, as it is the counterpart of quantum adiabatic control. Overall, our proposed method of cyclic geodesic driving offers a robust and effective solution for wide-band, unambiguous signal sensing, enabling accurate analysis of intricate quantum systems and external environments. _Adiabatic shortcut by jumping. --_ Our sensing scheme utilizes a sequence of periodic \(\pi\) pulses [Fig. 1 (a)] which achieves cyclic quantum adiabatic evolution along a geodesic [69; 70; 71; 72] defined by the control Hamiltonian for the qubit sensor \[H_{c}(t)=\frac{E(t)}{2}(|+_{\phi}\rangle\langle+_{\phi}|-|-_{\phi}\rangle \langle-_{\phi}|), \tag{1}\] where the instantaneous eigenstates \(|\pm_{\phi}\rangle\) are varied by a parameter \(\phi=\phi(t)\) starting with \(\phi(0)=0\) at the initial time \(t=0\). For adiabatic evolution of the parameter \(\phi(t)\), the evolution operator for the quantum state driven by \(H_{c}(t)\) reads [73; 74; 69] \[U_{c}(\phi)=e^{-i\varphi_{+}(t)}|+_{\phi}\rangle\langle+_{0}|+e^{-i\varphi_{-} (t)}|-_{\phi}\rangle\langle-_{0}|, \tag{2}\] which transfers the initial state \(|\pm_{0}\rangle\) to the instantaneous eigenstate \(|\pm_{\phi}\rangle\) at a later time. We use the Born-Fock gauge \(\langle\pm_{\phi}|\frac{d}{d\phi}|\pm_{\phi}\rangle=0\) such that \(\varphi_{\pm}(t)=\pm\frac{1}{2}\int_{0}^{t}E(t^{\prime})dt^{\prime}\) are the dynamic phases. To realize the adiabatic evolution \(U_{c}(\phi)\) by \(H_{c}(t)\) in finite times, we choose \[\begin{split}|+_{\phi}\rangle=&\cos(\frac{\phi}{2} )|+_{0}\rangle+\sin(\frac{\phi}{2})|-_{0}\rangle,\\ |-_{\phi}\rangle=&-\sin(\frac{\phi}{2})|+_{0}\rangle +\cos(\frac{\phi}{2})|-_{0}\rangle,\end{split} \tag{3}\] which connect two orthonormal states \(|\pm_{0}\rangle\) via a geodesic curve [75]. 
In our scheme we apply a sequence of control \(\pi\) pulses via Eq. (1) with \(\phi=\phi_{j}\) at the moments \(T_{j}=T_{\rm scan}\frac{2j-1}{2N}\) (\(j=1,2,\ldots\)), where \(N\) is the pulse number in one periodic of evolution, see Fig. 1(a). We use the linear relation \(\phi_{j}=\omega_{\rm scan}T_{j}\), where frequency \(\omega_{\rm scan}\) can be negative or positive depending on the change of \(\phi_{j}\) in time. Each control \(\pi\) pulse has a time duration \(t_{j}\) such that the pulse area \(\int_{T_{j}-t_{j}/2}^{T_{j}+t_{j}/2}E(t)dt=\pi\). Between the \(\pi\) pulses, there is no control, i.e., \(E(t)=0\) if \(\phi\notin\{\phi_{j}\}\). We remove the dynamic phases at the final time of the evolution, we introduce a \(\pi\) phase shift to the pulses in the second-half of the sequence. According to Refs. [69; 70; 71], the sequence realizes \(U_{c}(\phi)\) with unit-fidelity at the middle of any successive path points \(\phi=\bar{\phi}_{j}\equiv(\phi_{j}+\phi_{j-1})/2\). For other values of \(\phi\notin\{\bar{\phi}_{j}\}\) the difference between \(U_{c}(\phi)\) and the actual evolution implemented by \(H_{c}(t)\) is negligible when \(N\) is sufficiently large. In contrast to conventional shortcuts to adiabaticity [74] to accerate adiabatic process, in our method the instantaneous eigenstates of the control Hamiltonian \(H_{c}(t)\) are the same as the evolution path \(|\pm_{\phi}\rangle\) in Eq. (2). This avoids the use of counter-diabatic fields and retains the intrinsic robustness of traditional adiabatic process [71]. _Unambiguous wide-bandwidth robust sensing. --_ To demonstrate the concept of quantum sensing through the aforementioned geometric control, we examine a qubit coupled to its environment via the interaction Hamiltonian \[H_{\rm int}(t)=\frac{1}{2}\sigma_{z}B(t), \tag{4}\] where the Pauli operator \(\sigma_{z}=|1\rangle\langle 1|-|0\rangle\langle 0|\) and \(B(t)\) could be a classical field or a quantum operator in a rotating frame which includes possible dephasing noise. We use the control Hamiltonian \(H_{c}(t)=\Delta(t)\frac{\sigma_{z}}{2}+\Omega(t)\frac{\sigma_{z}}{2}\) and the states \(|+_{0}\rangle=|1\rangle\) and \(|-_{0}\rangle=|0\rangle\) for the geodesic in Eq. (3) for sensing. This geodesic driving around the \(y\)-axis (GD\({}_{y}\)) is sketched in Fig. 1(b). In the rotating frame of \(H_{c}(t)\), \(H_{\rm int}(t)\) becomes \(\vec{H}_{\rm int}(t)\approx\frac{1}{2}U_{c}^{\dagger}\sigma_{z}U_{c}B(t)\). For Figure 1: Quantum sensing via geodesic jumping. (a) Upper panel: Repeated application of a sequence of \(N\)\(\pi\) pulses realizes cyclic quantum adiabatic evolution along the geodesic in (b) or (c). Lower panel: Combined with synchronized readout techniques for arbitrary frequency resolution. (b) GD\({}_{y}\), where the closed geodesic is sampled by \(N\)\(\pi\) pulses. The red solid line illustrates the trajectory of the state evolution starting at \(|1\rangle\) for \(N=12\). When \(\omega_{\rm scan}\) matches the frequency \(\nu_{n}\) of the target, a resonance occurs. (c) As (b) but for GD\({}_{z}\) which uses a horizontal geodesic, e.g., for robust heterodyne sensing of high-frequency signal. The resonant condition is accurately tuned by \(T_{\rm scan}\) and the frequency \(\omega_{\rm{ctr}}\) of control field. (d) The resulting modulation function \(F(t)\) (red line for \(N=20\)) and its Fourier components (red cycles) where \(f_{k}=0\) for all \(1<k<N-1\). The gray line and squares are the corresponding ones for DD pulse sequences. the evolution Eq. 
(2), we find the approximated transformation [76] \[U_{c}^{\dagger}\sigma_{z}U_{c}\approx\cos\phi\sigma_{z};\;U_{c}^{\dagger}\sigma_ {x}U_{c}\approx\sin\phi\sigma_{z};\;U_{c}^{\dagger}\sigma_{y}U_{c}\approx 0, \tag{5}\] which have the nice property that they only depend on the geometric paramteter \(\phi(t)\). We obtain [76] \[\tilde{H}_{\rm int}(t)\approx F(t)\frac{\sigma_{z}}{2}B(t), \tag{6}\] where when \(N\) is sufficiently large the modulation function \(F(t)\approx\cos(\omega_{\rm scan}t)\) has only one Fourier component over a large frequency band, see Fig. 1(d). Conventional DD sequences [3; 4] also induce modulation factors \(F(t)\) to the \(\sigma_{z}\) operator with \(F(t)\in\{\pm 1\}\) for ideal sequences [3; 4; 77]. However, those modulation factors have multiple Fourier components that complicate the interpretation of the sensor's signal [63; 64; 65], see Fig. 1(d) for equally spaced DD sequences [3; 4; 60; 61; 62; 78; 62; 79; 80]. In Figs. 2(a)-(d) we simulate the measured spectrum of a classical AC field with \(B(t)=\sum_{j=1}^{3}b_{j}\cos(\nu_{j}t+\theta_{j})\), where \(\nu_{j}\) are the frequencies of different components. For the result of Fig. 2(a) obtained by the widely-used robust XY8 sequence with an interpulse duration \(\tau\)[62], all the frequencies (\(\{\nu_{j}/2\pi\}=\{500,1500.05,2499.88\}\) kHz) cause transitions of the sensor states at \(1/(2\tau)\approx 500\) kHz via the 1st, 3rd, and the 5th harmonics. This problem of ambiguous spectral overlap is not solved even when we improve the frequency resolution sufficiently high via the synchronized readout technique (Qdyne) [39; 40], see Figure 2: Quantum spectroscopy. (a) Population signal of XY8 sequence (blue solid line) for spectroscopy of AC fields with the frequencies \(\{\nu_{j}/2\pi\}=\{500,1500.05,2499.88\}\) kHz by varying the pulse interval \(\tau\). The 1500.05 kHz and 2499.88 kHz AC fields distort the signal centered at 500 kHz via the 3rd and 5th harmonics, respectively. Yellow dashed line is the result when there is dephasing noise (with a control-free decoherence time \(T_{2}^{*}\approx 2\)\(\mu\)s) and control errors (with about 20% drift on the amplitude of control field). (b) As (a) but by using GD\({}_{y}\). The 500 kHz signal dip is not distorted by the 1500.05 kHz and 2499.88 kHz AC fields. (c) Power spectrum for the AC signal fields in (c) by using the synchronized readout technique in Ref. [39; 40]. The resonances due to higher harmonics make the signal unidentifiable even through the expected spectral resolution is about 1 Hz. (d) As (c) but by using GD\({}_{y}\). (e) Signal of XY8 sequence for the detection of a \({}^{1}\)H spin with its frequency indicated by a red arrow. The spurious resonance (centered around the pink vertical line) produced by a \({}^{13}\)C spin in diamond distorts the \({}^{1}\)H spin signal. Yellow dashed line is the result when there are \(2\pi\times 2\) MHz detuning error and 30% of amplitude drift in the control field. (f) As (e) but by using GD\({}_{y}\), where the spurious signals disappear. See [76]for details of simulation. Figure 3: (a) Quantum heterodyne spectroscopy of a signal field with frequencies \(\{\nu_{j}\}=\omega_{q}+2\pi\times\{-84,-68,-56,-50,-42,-2,72,90\}\) kHz by using the protocol in Refs. [43; 44] with the CPMG sequences. \(\omega_{q}\) could be at the GHz range. The blue solid line (the line with yellow filling) is the simulation with (without) dephasing noise that induces a control-free decoherence time \(T_{2}^{*}\approx 2\)\(\mu\)s. 
The pulse interval \(\tau\) is varied to measure the spectrum. Because for each AC signal frequency \(\nu_{j}\) resonance dips occur whenever \(1/(2\tau)=(\nu_{j}-\omega_{\rm crt})/(2\pi k)\), \((k=\pm 1,\pm 3,\pm 5,\ldots)\), the dips at \(1/(2\tau)=(\nu_{j}-\omega_{q})/(2\pi)\) indicated by red vertical arrows are obscured by other resonance dips (vertical lines). Different number \(N^{\prime}\) of \(\pi\) pulses are used for different range of \(\tau\) to insure that the sequence times are smaller than 1 ms. (b) [(c)]: As (a) but by using Protocol with fixed \(\Delta_{\rm scan}=0\) (\(\omega_{\rm scan}=2\pi\times 80\) kHz)]. All the dips only appear at the right frequencies \(\nu_{j}\). See [76] for details of simulation. Fig. 2(c), because all the frequency components contribute to the readout signal in Qdyne. However, using GD\({}_{y}\), only the frequency \(\nu_{1}/2\pi=500\) kHz contributes to the dip at the resonance \(\omega_{\text{scan}}=\nu_{1}\), because we have the effective Hamiltonian \(\tilde{H}_{\text{int}}\approx\frac{1}{4}b_{1}\cos{\theta_{1}\sigma_{z}}\) from Eq. (6) after rotating wave approximation [76]. The phase dependence on the effective Hamiltonian (and hence the signal) allows for arbitrary frequency resolution with synchronized readout, see Figs. 2(d) and 1(a). The results also show that our protocol is more resistant to control errors and dephasing noise. The already strong robustness of GD\({}_{y}\) is enhanced further with increasing number \(N\) of pulses, see Fig. 4. It is interesting that cyclic geodesic driving also fully solve the problems of spurious response due to finite-width pulses [63], as detailed in [76]. In Fig. 2(e) and (f), we simulate the quantum sensing of a single proton spin (\({}^{1}\)H) by an NV center. A \({}^{13}\)C spin in diamond is also coupled to the NV center as a noise source. For this case \(B(t)\) is a quantum operator [48; 49; 76]. For the result Fig. 2(e) obtained by XY8 sequences, the spurious response from the \({}^{13}\)C spin disturb the target proton spin signal and lead to misidentification of \({}^{13}\)C nuclei for proton. On the contrary, GD\({}_{y}\) provides a clean signal dip in the spectrum [Fig. 2(f)], because when \(\omega_{\text{scan}}\) matches the frequency \(\nu_{1}\) of the target \({}^{1}\)H, \(\tilde{H}_{\text{int}}\approx\frac{1}{2}a_{1}^{x}\sigma_{z}I_{1}^{x}\) (where \(I_{1,z}\) is the \({}^{1}\)H spin operator and \(a_{1}^{x}\) is the hyperfine strength [76]) and the effect of the \({}^{13}\)C spin is removed. _Unambiguous heterodyne sensing. --_ The idea can be generalized to other settings. Consider the sensing of a multi-frequency signal field \(\vec{B}(t)=(B_{x},B_{y},B_{z})\) with frequencies \(\nu_{j}\) much larger than the Rabi frequency of the control field. We use GD\({}_{z}\) [see Fig. 1(c)] with \(|\pm_{0}\rangle=(|0\rangle\pm|1\rangle)/\sqrt{2}\) for the geodesic in Eq. (3) to sense the traverse part \(B_{\perp}(t)\equiv B_{x}+iB_{y}=\sum_{j}b_{j}e^{i\alpha_{j}}\cos(\nu_{j}t+ \theta_{j})\). The relevant Hamiltonian reads \[H=\frac{1}{2}(\omega_{q}+\delta_{t})\sigma_{z}+(\frac{1}{2}B_{\perp}(t)\sigma _{+}+\text{h.c.})+H_{\text{ctr}},\] where \(\omega_{q}\) is the frequency of the qubit and \(\delta_{t}\) is an unknown fluctuation. For NV qubits, \(\omega_{q}\) (e.g., \(\omega_{q}=D+\gamma_{e}B_{z}\) with \(D=2\pi\times 2.87\) GHz and \(\gamma_{e}\approx 2\pi\times 2.8\) MHz/G) can be adjusted by changing the magnetic field \(B_{z}\). 
The control Hamiltonian \(H_{\text{ctr}}=\Omega(t)\cos(\omega_{\text{ctr}}t+\phi)\sigma_{x}\) has a controllable detuning \(\Delta_{\text{scan}}=\omega_{\text{ctr}}-\omega_{q}\). In the rotating frame with respect to \(\frac{1}{2}(\omega_{q}+\delta_{t})\sigma_{z}+H_{\text{ctr}}\), we obtain the effective interaction [76] \[\tilde{H}_{\text{int}}(t)\approx\frac{1}{4}\sum_{j}b_{j}e^{i(\alpha_{j}- \theta_{j})+i(\omega_{\text{ctr}}+\omega_{\text{scan}}-\nu_{j})t}\sigma_{+}+ \text{h.c.}. \tag{7}\] Under the resonance condition for a signal frequency \(\omega_{\text{ctr}}+\omega_{\text{scan}}=\nu_{n}\) and when \(b_{j}\ll|\nu_{n}-\nu_{j}|\) for \(j\neq n\), \(\tilde{H}_{\text{int}}(t)\approx\frac{1}{4}b_{n}e^{i(\alpha_{n}-\theta_{n})} \sigma_{+}+\text{h.c.}\) picks up the signal only at the frequency \(\nu_{n}\). As exemplified in Fig. 3(a), the heterodyne sensing using the CPMG sequences produce multiple dips at \(1/(2\tau)=\pm(\nu_{j}-\omega_{q})/k\) (\(k=1,3,5,\ldots\)), which implies ambiguity sensor responses especially when \(\{\nu_{j}\}\) happen to be close to the qubit frequency \(\omega_{q}\). In contrast, our geodesic driving gives clear signal dips for accurate determination of all the signal frequencies, see Fig. 3(b). Our method is also more resilient to dephasing noise (Fig. 3) and is more robust against control errors (Fig. 4). The robustness can be further enhanced by combining composite pulse techniques [see Fig. 4 (e) for the result where each pulse is replaced by a Knill pulse [66; 78; 81]]. _Conclusion. --_ We propose a quantum sensing scheme based on a quantum adiabatic shortcut along a geodesic path. This scheme offers the capability to resolve complex broadband signals and addresses the issues of spectral overlap and spurious signals that arise in existing DD-based quantum sensing methods. Notably, our approach allows for arbitrary frequency resolution through the utilization of synchronized readout techniques. Moreover, it exhibits robustness against control errors and effectively suppresses unwanted decoherence noise. The versatility of our method extends beyond signal detection; it can also be employed for the detection and control of various quantum objects, including single nuclear spins, spin clusters, and mechanical oscillators. Furthermore, our scheme holds promise for applications aimed at mitigating crosstalk in qubit arrays. In summary, our proposed quantum sensing scheme based on a quantum adiabatic shortcut along a geodesic path provides a universal solution for accurate signal detection, offering improved performance over existing methods. Its potential applications encompass a wide range of quantum systems and address key challenges in the field. Figure 4: Control fidelity with respect to amplitude and detuning errors for (a) XY8 sequence, (b) GD\({}_{y}\) with \(N=20\), (c)GD\({}_{y}\) with \(N=80\), (d) CPMG sequence with 40 pulses for heterodyne sensing, (e) GD\({}_{z}\) with \(N=10\), (f) GD\({}_{z}\) with \(N=10\) but each pulse is replaced by a Knill pulse. All the protocols have the same sequence time length, and the ideal Rabi frequency of the control is \(2\pi\times 50\) MHz. ###### Acknowledgements. This work was supported by National Natural Science Foundation of China (Grant No. 12074131), the Natural Science Foundation of Guangdong Province (Grant No. 2021A1515012030), the ERC Synergy grant HyperQ (grant no. 856432) and the BMBF Zukunftscluster QSense: Quantensenoren fur die biomedizinische Diagnostik (QMED) (grant no 03ZU1110FF).
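To make the contrast drawn in Fig. 1(d) concrete, the following minimal Python sketch compares the Fourier content of the \(\sigma_z\) modulation function \(F(t)\) for an equally spaced DD sequence (an ideal \(\pm 1\) square wave) and for the geodesic-driving sequence, modelled here as the staircase-sampled cosine suggested by Eq. (5). The staircase form, the grid resolution, and the printed harmonic range are assumptions of this illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): Fourier content of F(t) for
# (i) an equally spaced DD sequence (ideal +/-1 square wave) and
# (ii) geodesic driving, modelled as a staircase-sampled cosine, cf. Fig. 1(d).
import numpy as np

N = 20                                    # pi pulses per period, as in Fig. 1(d)
T_scan = 1.0                              # scan period (arbitrary units)
omega_scan = 2 * np.pi / T_scan
t = (np.arange(8000) + 0.5) * T_scan / 8000                 # fine time grid
T_j = T_scan * (2 * np.arange(1, N + 1) - 1) / (2 * N)      # pulse instants

def coeff(F, omega, k):
    """Magnitude of the k-th Fourier coefficient of F over the grid t."""
    return abs(2 * np.mean(F * np.exp(-1j * k * omega * t)))

# (i) DD: F flips sign at every pi pulse -> square wave of period 2*tau, tau = T_scan/N
n_flips = np.searchsorted(T_j, t)
F_dd = (-1.0) ** n_flips
omega_dd = np.pi * N / T_scan             # fundamental of that square wave

# (ii) geodesic driving: F(t) = cos(phi), with phi jumping to phi_j = omega_scan*T_j
#      at the j-th pulse (phi = 0 before the first pulse)
idx = np.searchsorted(T_j, t)
phi = np.where(idx == 0, 0.0, omega_scan * T_j[np.maximum(idx - 1, 0)])
F_geo = np.cos(phi)

print(" k   DD (vs its own fundamental)   geodesic (vs omega_scan)")
for k in range(1, 8):
    print(f"{k:2d}   {coeff(F_dd, omega_dd, k):8.3f}"
          f"                     {coeff(F_geo, omega_scan, k):8.3f}")

# Expected output: the square wave carries all odd harmonics (|f_k| ~ 4/(pi*k)),
# which is why several signal frequencies can drive the same DD resonance; the
# geodesic staircase is dominated by k = 1, with higher harmonics strongly
# suppressed (they reappear only near k = N - 1 and N + 1), as in Fig. 1(d).
```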
2304.06605
Integral structure of the skein algebra of the 5-punctured sphere
We give an explicit presentation for the Kauffman bracket skein algebra of the $5$-punctured sphere over any commutative unitary ring.
Haimiao Chen
2023-04-13T15:16:48Z
http://arxiv.org/abs/2304.06605v2
# Integral structure of the skein algebra of the 5-punctured sphere ###### Abstract We give an explicit presentation for the Kauffman bracket skein algebra of the 5-punctured sphere over any commutative unitary ring. **Keywords:** Kauffman bracket; skein algebra; presentation; punctured sphere **MSC2020:** 57K16, 57K31 ## 1 Introduction Let \(R\) be a commutative ring with identity and a fixed invertible element \(q^{\frac{1}{2}}\). Let \(\Sigma_{g,h}\) denote the \(h\)-punctured orientable surface of genus \(g\). The _Kauffman bracket skein algebra_ of \(\Sigma_{g,h}\) over \(R\), denoted by \(\mathcal{S}(\Sigma_{g,h};R)\), is defined as the \(R\)-module generated by isotopy classes of framed links (which may be empty) embedded in \(\Sigma_{g,h}\times[0,1]\) modulo the local relations For framed links \(L_{1},L_{2}\), the product \(L_{1}L_{2}\) is defined by stacking \(L_{1}\) over \(L_{2}\) in the \([0,1]\) direction. Using the local relations, each element of \(\mathcal{S}(\Sigma;R)\) can be written as a \(R\)-linear combination of multi-curves on \(\Sigma\), with the vertical framing understood. The description of the structure of \(\mathcal{S}(\Sigma_{g,k};\mathbb{Z}[q^{\pm\frac{1}{2}}])\) was raised as [5] Problem 1.92 (J) and also [6] Problem 4.5. The structure of \(\mathcal{S}(\Sigma_{g,k};\mathbb{Z}[q^{\pm\frac{1}{2}}])\) for \(g=0,k\leq 4\) and \(g=1,k\leq 2\) was known to Bullock and Przytycki [2] early in 2000. A finite set of generators had been given by Bullock [1] in 1999. Till now it remains a difficult problem to find all relations for general \(g\) and \(k\). As weak solutions, recently Cooke and Lacabanne [4] obtained a presentation for \(\mathcal{S}(\Sigma_{0,5};\mathbb{C}(q^{\frac{1}{4}}))\), and the author [3] found a presentation for \(\mathcal{S}(\Sigma_{0,n+1};R^{\prime})\) for all \(n\), assuming \(R^{\prime}\supseteq\mathbb{Z}[q^{\pm\frac{1}{2}}]\) with \(q+q^{-1}\) invertible. In this paper we still focus on the genus \(0\) case. Let \(\mathcal{S}_{n}=\mathcal{S}(\Sigma_{0,n+1};R)\). We extend the machinery of [3] to show that for each \(n\), the ideal of defining relations of \(\mathcal{S}_{n}\) is generated by certain relations of degree at most \(2n+2\). For \(n=4\), we find an explicit set of relations to generate the ideal. The 'integral' in the title is justified by that the assumption on \(q+q^{-1}\) is removed. Actually our method can be easily extended to all \(n\geq 5\). Let \(\overline{q}\) denote \(q^{-1}\), and let \(\alpha=q+\overline{q}\). Denote the cardinality of a finite set \(A\) by \(\#A\). ## 2 Relations can be localized (sketch) Display \(\Sigma=\Sigma_{0,n+1}\) as in Figure 1. Let \(\mathsf{p}_{1},\ldots,\mathsf{p}_{n}\) denote the punctures, listed from left to right. Let \(\gamma_{i}\) denote the vertical segment connecting \(\mathsf{p}_{i}\) and a point on \(\partial\Sigma\). Let \(\gamma=\bigcup_{i=1}^{n}\gamma_{i}\), and let \(\Gamma=\bigcup_{i=1}^{n}\Gamma_{i}\), with \(\Gamma_{i}=\gamma_{i}\times[0,1]\). Let \(\pi:\Sigma\times[0,1]\to\Sigma\) be the projection. Each \(1\)-submanifold \(X\subset\Sigma\times[0,1]\) is assumed to be in generic position, and each arc (i.e. a component homeomorphic to \([0,1]\)) is equipped with an orientation. Define the _multi-degree_\(\mathrm{md}_{X}:\{1,\ldots,n\}\to\mathbb{N}\) to be the function sending \(v\) to \(\#(X\cap\Gamma_{v})\). Let \(|X|=\#(X\cap\Gamma)\), called the degree. Let \(\mathrm{Cr}(X)\) denote the set of crossings of \(X\). 
For \(\mathsf{c}\in\mathrm{Cr}(X)\), let \(\mathrm{over}(\mathsf{c})\), \(\mathrm{under}(\mathsf{c})\) respectively denote the upper and lower point that constitute \(\pi^{-1}(\mathsf{c})\). Let \(\mathrm{cn}(X)=\#\mathrm{Cr}(X)\). Define the complexity by \(\lambda(X)=(|X|,\mathrm{cn}(X))\), ordered lexicographically. Given arcs \(E,F\) with \(E\cap F=\emptyset\), call \(E\) blocked by \(F\) from above (resp. below) if there exists \(\mathsf{c}\in\mathrm{Cr}(E\sqcup F)\) such that \(\mathrm{over}(\mathsf{c})\in F,\mathrm{under}(\mathsf{c})\in E\) (resp. \(\mathrm{over}(\mathsf{c})\in E,\mathrm{under}(\mathsf{c})\in F\)). For each link \(L\), let \([L]\in\mathcal{S}_{n}\) denote the element it represents, and let \(\Theta(L)\) denote the linear combination of multi-curves obtained by resolving all crossings. Let \(\mathcal{V}_{n}\) denote the \(R\)-module generated by multi-curves on \(\Sigma\), which is free by [7]. A _stacked link_\(L\) is a union of knots \(L=K_{1}\sqcup\cdots\sqcup K_{r}\) such that \(K_{i}\subset\Sigma\times(t_{i},t_{i-1})\) for some \(t_{i}\), with \(0=t_{r}<\cdots<t_{0}=1\). In particular, if \(M=S_{1}\sqcup\cdots\sqcup S_{r}\) is a multi-curve, then each element \(\tau\) of \(\mathrm{Sym}(r)\), the group of permutations on \(\{1,\ldots,r\}\), makes \(M\) into a stacked link \(S_{\tau(1)}\cdots S_{\tau(r)}\). For an arc \(F\), let \(W(F)\) denote the unreduced word in \(\mathbf{x}_{1}^{\pm 1},\ldots,\mathbf{x}_{n}^{\pm 1}\) determined as follows: walk along \(F\) guided by the orientation, record \(\mathbf{x}_{i}\) (resp. \(\mathbf{x}_{i}^{-1}\)) whenever accessing \(\Gamma_{i}\) from left to right (resp. from right to left), and multiply the recorded words together. For \(1\leq i_{1}<\cdots<i_{r}\leq n\), fix a subsurface \(\Sigma(i_{1},\ldots,i_{r})\subset\Sigma\) homeomorphic to \(\Sigma_{0,r+1}\) and containing \(\mathsf{p}_{i_{1}},\ldots,\mathsf{p}_{i_{r}}\); let \(t_{i_{1}\cdots i_{r}}\in\mathcal{F}_{n}\) be the element represented by the outer boundary of \(\Sigma(i_{1},\ldots,i_{r})\). Let \[\mathfrak{T}_{n}=\{t_{i_{1}\cdots i_{r}}\colon 1\leq i_{1}<\cdots<i_{r}\leq n,\ 1 \leq r\leq n\}.\] Denote \(t_{1\cdots n}\) by \(t_{0}\). Note that \(t_{1},\ldots,t_{n}\) and \(t_{0}\) are central in \(\mathcal{S}_{n}\). Let \(\mathcal{T}_{n}\) be the free \(R\)-algebra generated by \(\mathfrak{T}_{n}\). Call a product of elements of \(\mathfrak{T}_{n}\) a _monomial_, and call an element of \(\mathcal{T}_{n}\) a _polynomial_. Let \(\theta_{n}:\mathcal{T}_{n}\to\mathcal{S}_{n}\) denote the canonical map. For \(\mathfrak{u}=\sum_{i}a_{i}\mathfrak{g}_{i}\in\mathcal{T}_{n}\), where \(a_{i}\in R\) and \(\mathfrak{g}_{i}\) is a monomial, put \[\mathrm{md}_{\mathfrak{u}}(v)=\max_{i}\mathrm{md}_{\mathfrak{g}_{i}}(v),\qquad \|\mathfrak{u}\|=\sum\nolimits_{v=1}^{n}\mathrm{md}_{\mathfrak{u}}(v).\] **Lemma 2.1**.: _Suppose \(C\) is a shortenable arc, by which we mean that \(W(C)=\mathbf{x}_{i_{1}}^{\epsilon_{1}}\cdots\mathbf{x}_{i_{m}}^{\epsilon_{m}}\) such that \(i_{k}=i_{\ell},k<\ell\) if and only if \(k=1,\ell=m\)._ 1. _There exists a formal sum_ \(\mathfrak{s}_{u}(C)=\sum_{i}\mathfrak{a}_{i}\mathbf{c}_{i}\)_, where_ \(\mathfrak{a}_{i}\in\mathcal{T}_{n}\) _and_ \(\mathbf{c}_{i}\) _is an arc with_ \(\partial_{\pm}\mathbf{c}_{i}=\partial_{\pm}C\)_,_ \(|\mathbf{c}_{i}|<m\)_, such that_ \(\sum_{i}\mathfrak{a}_{i}(\mathbf{c}_{i}\cup G)=C\cup G\) _in_ \(\mathcal{S}_{n}\) _for any arc_ \(G\) _that does not block_ \(C\) _from above._ 2. 
_There exists a formal sum_ \(\mathfrak{s}_{d}(C)=\sum_{j}\mathbf{d}_{j}\mathfrak{b}_{j}\)_, where_ \(\mathfrak{b}_{j}\in\mathcal{T}_{n}\) _and_ \(\mathbf{d}_{j}\) _is an arc with_ \(\partial_{\pm}\mathbf{d}_{j}=\partial_{\pm}C\)_,_ \(|\mathbf{d}_{j}|<m\)_, such that_ \(\sum_{j}(\mathbf{d}_{j}\cup G)\mathfrak{b}_{j}=C\cup G\) _in_ \(\mathcal{S}_{n}\) _for any arc_ \(G\) _that does not block_ \(C\) _from below._ Proof.: We only prove (i); the proof for (ii) is similar. Clearly \(m\leq n\). The assertion is trivial if \(m=2\). Assume \(3\leq m\leq n\). A simple observation: if \(K\) is a knot with \(\#(K\cap\Gamma_{i})\leq 1\) for all \(i\), then \(\Theta(K)\) can be represented by an element of \(\mathcal{T}_{n}\). When \(\epsilon_{1}=1\) and \(\epsilon_{m}=-1\), the assertion is clear from Figure 2. When \(\epsilon_{1}=\epsilon_{m}=1\), the assertion is clear from Figure 3. The other two cases are similar. For each knot \(K\), we can use induction on \(\lambda(K)\) to recursively define the set \(\mathcal{A}(K)\) of _admissible expressions_. When \(|K|\leq 2n+2\), put \[\mathcal{A}(K)=\big{\{}\mathfrak{a}\in\mathcal{T}_{n}\colon\mathrm{md}_{ \mathfrak{a}}=\mathrm{md}_{K},\ \theta(\mathfrak{a})=[K]\big{\}}.\] Suppose \(|K|>2n+2\) and suppose that \(\mathcal{A}(J)\) has been defined for each knot \(J\) with \(\lambda(J)\prec\lambda(K)\). Take a shortenable arc \(F\) (whose degree is at most \(n+1\)), replace \(F\) by \(\mathfrak{s}_{u}(F)\) so as to replace \(K\) by \(\sum_{i}\mathfrak{c}_{i}M_{i}\), and put \(\mathcal{A}_{u}(K,F)=\{\sum_{i}\mathfrak{c}_{i}\mathfrak{g}_{i}\colon \mathfrak{g}_{i}\in\mathcal{A}(M_{i})\}\); also, replace \(F\) by \(\mathfrak{s}_{d}(F)\) so as to replace \(K\) by \(\sum_{j}N_{j}\mathfrak{d}_{j}\) Figure 3: Figure 2: and put \(\mathcal{A}_{d}(K,F)=\{\sum_{j}\mathfrak{h}_{j}\mathfrak{d}_{j}\colon\mathfrak{h}_{ j}\in\mathcal{A}(N_{j})\}.\) (Many details are omitted here; see [3] for complete information.) Set \[\mathcal{A}(K)=\bigcup\nolimits_{F}\bigl{(}\mathcal{A}_{u}(K,F)\cup\mathcal{A}_ {d}(K,F)\bigr{)},\] where \(F\) ranges over all shortenable arcs of \(K\). For a multi-curve \(M=S_{1}\sqcup\cdots\sqcup S_{r}\), put \[\mathcal{A}(M)=\bigcup\nolimits_{\tau\in\operatorname{Sym}(r)}\mathcal{A}(S_ {\tau(1)}\cdots S_{\tau(r)}).\] By construction, \(\mathcal{A}(K)\neq\emptyset\) for any knot \(K\). Hence \(\mathcal{A}(M)\neq\emptyset\) for any multi-curve \(M\); any element of \(\mathcal{A}(M)\) is sent to \([M]\) by \(\theta_{n}\). Since \(\mathcal{S}_{n}\) is spanned by isotopy classes of multi-curves, we have established the following lemma, which is the main result of [1] in genus 0 case. **Lemma 2.2**.: _The skein algebra \(\mathcal{S}_{n}\) is generated by \(\mathfrak{T}_{n}\)._ A triple of links \((L_{\times},L_{\infty},L_{0})\) in one of the three cases illustrated in Figure 4 is called an _elementary skein triple_. Given \(\vec{e}=(e_{1},\ldots,e_{n})\) with \(\|\vec{e}\|\leq 2r(\vec{e})+2\), where \(\|\vec{e}\|=\sum_{j=1}^{n}e_{j}\) and \(r(\vec{e})=\#\{i\colon e_{i}>0\}\), let \[\mathcal{R}(\vec{e})=\{\mathfrak{u}\in\ker\theta_{n}\colon\operatorname{md}_{ \mathfrak{u}}(j)\leq e_{j},\ 1\leq j\leq n\}\subset\mathcal{T}_{n}.\] Let \(\mathcal{I}\) denote the two-sided ideal generated by elements of \(\mathcal{R}(\vec{e})\) for \(\vec{e}\) with \(\|\vec{e}\|\leq 2r(\vec{e})+2\). **Theorem 2.3**.: _The ideal of defining relations of \(\mathcal{S}_{n}\) is exactly \(\mathcal{I}\)._ The proof is outlined as follows. 
* For each knot \(K\), we can show that any two admissible expressions differ by an element of \(\mathcal{I}\); in other words, we can choose \(\mathfrak{a}(K)\in\mathcal{T}_{n}\) with \(\theta_{n}(\mathfrak{a}(K))=[K]\), and the image \(\breve{\mathfrak{a}}(K)\) under \(\mathcal{T}_{n}\twoheadrightarrow\mathcal{T}_{n}/\mathcal{I}\) depends only on \(K\). * To each stacked link \(L=K_{1}\cdots K_{r}\) we associate \(\tilde{\mathfrak{a}}(L)=\tilde{\mathfrak{a}}(K_{1})\cdots\tilde{\mathfrak{a}}(K_{ r})\in\mathcal{T}_{n}/\mathcal{I}\). For a multi-curve \(M\), it can be shown that, \(\tilde{\mathfrak{a}}(M)\) depends only on the isotopy class \([M]\), so \(\tilde{\mathfrak{a}}([M])\) is well-defined. This extends by linearity to a \(R\)-linear map \(\tilde{\mathfrak{a}}:\mathcal{V}_{n}\hookrightarrow\mathcal{T}_{n}/\mathcal{I}\). * For each elementary skein triple \((L_{\times},L_{\infty},L_{0})\), we can show \(\tilde{\mathfrak{a}}(L_{\times})=q^{\frac{1}{2}}\tilde{\mathfrak{a}}(L_{ \infty})+\overline{q}^{\frac{1}{2}}\tilde{\mathfrak{a}}(L_{0})\). * Each stacked link \(L\) can be related to \(\Theta(L)\) through elementary skein triples, so \(\tilde{\mathfrak{a}}(L)=\tilde{\mathfrak{a}}(\Theta(L))\). * In particular, for each monomial \(\mathfrak{g}\), regarded as a stacked link, we have \(\mathfrak{a}(\mathfrak{g})=\mathfrak{g}\) and \(\tilde{\mathfrak{a}}(\mathfrak{g})=\tilde{\mathfrak{a}}(\Theta(\mathfrak{g}))\). * Consequently, if a polynomial \(\mathfrak{f}\) satisfies \(\mathfrak{f}=0\) in \(\mathcal{S}_{n}\), so that \(\Theta(\mathfrak{f})=0\), then \(\tilde{\mathfrak{a}}(\mathfrak{f})=\tilde{\mathfrak{a}}(\Theta(\mathfrak{f}))=0\), hence \(\mathfrak{f}=\mathfrak{a}(\mathfrak{f})\in\mathcal{I}\). The full implement employs induction on the complexity \(\lambda(L)\) and turns out to be tortuous. Refer to [3] Section 3 for details. All arguments are still working as long as each 'degree 3 arc' is replaced by'shortenable arc'. ## 3 A presentation for \(\mathcal{S}_{4}\) If \(C\) is a simple arc with \(W(C)=\mathbf{x}_{i_{1}}^{\epsilon_{1}}\cdots\mathbf{x}_{i_{m}}^{\epsilon_{m}}\) with \(\epsilon_{s}\in\{\pm 1\}\), then we denote the simple curve obtained by connecting the ends of \(C\) via an arc in \(\Sigma\setminus\gamma\) by \(t_{i_{1}^{\prime}\cdots i_{m}^{\prime}}\), where \(i_{s}^{\prime}=i_{s}\) (resp. \(i_{s}^{\prime}=\overline{i_{s}}\)) if \(\epsilon_{s}=1\) (resp. \(\epsilon_{s}=-1\)). ### Useful identities By direct computation, \[t_{12}t_{23} =qt_{123\overline{2}}+\overline{q}t_{13}+t_{1}t_{3}+t_{2}t_{123}, \tag{1}\] \[t_{13}t_{24} =\alpha t_{0}+t_{1}t_{234}+t_{2}t_{134}+t_{3}t_{124}+t_{4}t_{123} +q^{2}t_{12}t_{34}+\overline{q}^{2}t_{14}t_{23}\] \[\quad+qt_{3}t_{4}t_{12}+\overline{q}t_{1}t_{4}t_{23}+qt_{1}t_{2} t_{34}+\overline{q}t_{2}t_{3}t_{14}+t_{1}t_{2}t_{3}t_{4},\] (2) \[t_{14}t_{234} =t_{4}t_{0}+\overline{q}t_{1}\overline{t}_{234}+qt_{123}+t_{1}t_{ 23},\] (3) \[t_{34}t_{124} =t_{4}t_{0}+qt_{12}\overline{t}_{34}+\overline{q}t_{123}+t_{3}t_{ 12},\] (4) \[t_{24}t_{134} =t_{4}t_{0}+\overline{q}t_{12}\overline{t}_{34}+qt_{1}\overline{ t}_{234}+t_{2}t_{1}\overline{t}_{34}. \tag{5}\] Illuminated by Figure 5, \[t_{123\overline{2}}t_{13} =t_{123}^{2}+(qt_{1}t_{23}+\overline{q}t_{3}t_{12}+t_{1}t_{2}t_{3 })t_{123}+q^{2}t_{23}^{2}+\overline{q}^{2}t_{12}^{2}\] \[\quad+qt_{2}t_{3}t_{23}+\overline{q}t_{1}t_{2}t_{12}+t_{1}^{2}+t _{2}^{2}+t_{3}^{2}-\alpha^{2}. 
\tag{6}\] Since \(t_{23}t_{34}=qt_{234\overline{3}}+\overline{q}t_{24}+t_{2}t_{4}+t_{3}t_{234}\), we have \[t_{12}t_{23}t_{34}=q^{2}t_{1234\overline{32}}+t_{134\overline{3}}+qt_{1}t_{4}+ qt_{2}t_{1234\overline{3}}+\overline{q}t_{12}t_{24}+t_{2}t_{4}t_{12}+t_{3}t_{12}t_{ 234}.\] Then replacing \(t_{134\overline{3}}\) and \(t_{1234\overline{3}}\) via \[t_{13}t_{34} =qt_{134\overline{3}}+\overline{q}t_{14}+t_{1}t_{4}+t_{3}t_{134},\] \[t_{34}t_{123} =\overline{q}t_{1234\overline{3}}+qt_{124}+t_{4}t_{12}+t_{3}t_{0},\] respectively, we are led to \[t_{12}t_{23}t_{34} =q^{2}t_{1234\overline{32}}+t_{3}t_{12}t_{234}+q^{2}t_{2}t_{34}t_{ 123}-q^{3}t_{2}t_{124}-\overline{q}t_{3}t_{134}+\overline{q}t_{12}t_{24}\] \[\qquad+\overline{q}t_{13}t_{34}+(1-q^{2})t_{2}t_{4}t_{12}- \overline{q}^{2}t_{14}+(q-\overline{q})t_{1}t_{4}-q^{2}t_{2}t_{3}t_{0}. \tag{7}\] ### Relations **Proposition 3.1**.: _In \(\mathcal{S}_{4}\) we have_ \[qt_{23}t_{12}-\overline{q}t_{12}t_{23} =(q^{2}-\overline{q}^{2})t_{13}+(q-\overline{q})(t_{1}t_{3}+t_{2}t _{123}), \tag{8}\] \[t_{24}t_{13}-t_{13}t_{24} =(\overline{q}^{2}-q^{2})(t_{12}t_{34}-t_{14}t_{23})\] \[\qquad+(\overline{q}-q)(t_{3}t_{4}t_{12}-t_{1}t_{4}t_{23}+t_{1}t _{2}t_{34}-t_{2}t_{3}t_{14}),\] (9) \[\overline{q}t_{234}t_{14}-qt_{14}t_{234} =(\overline{q}^{2}-q^{2})t_{123}+(\overline{q}-q)(t_{4}t_{0}+t_{1} t_{23}),\] (10) \[qt_{124}t_{34}-\overline{q}t_{34}t_{124} =(q^{2}-\overline{q}^{2})t_{123}+(q-\overline{q})(t_{4}t_{0}+t_{3} t_{12}),\] (11) \[t_{134}t_{24}-t_{24}t_{134} =(q-\overline{q})(\overline{q}t_{34}t_{124}-qt_{14}t_{234}+qt_{1} t_{23}-\overline{q}t_{3}t_{12})\] \[\qquad+(q-\overline{q})^{2}(t_{4}t_{0}+\alpha t_{123}). \tag{12}\] Proof.: The identities (8)-(11) respectively follow from (1)-(4) and their mirrors. The last one follows from (5) and its mirror, as well as (10), (11). Call (8), (9) (resp. (10), (11), (12)) as well as the identities resulting from acting the indices via the permutations \((1,2,3,4)^{i},i=1,2,3\) the _commuting relations_ of type \([2,2]\) (resp. type \([2,3]\)). They enable us to write any element of \(\mathcal{S}_{4}\) as a linear combination of monomials in prescribed forms. Similarly as in [3], we do not present commuting relations of type \([3,3]\). 
**Proposition 3.2**.: _The following relations hold in \(\mathcal{S}_{4}\):_ \[t_{13}t_{24} =\alpha t_{0}+t_{1}t_{234}+t_{2}t_{134}+t_{3}t_{124}+t_{4}t_{123}+q^ {2}t_{12}t_{34}+\overline{q}^{2}t_{14}t_{23}\] \[\quad+qt_{3}t_{4}t_{12}+\overline{q}t_{1}t_{4}t_{23}+qt_{1}t_{2} t_{34}+\overline{q}t_{2}t_{3}t_{14}+t_{1}t_{2}t_{3}t_{4}, \tag{13}\] \[t_{24}t_{134} =q^{2}t_{14}t_{234}+\overline{q}^{2}t_{34}t_{124}+(1-q^{2}- \overline{q}^{2})t_{4}t_{0}-(q^{3}+\overline{q}^{3})t_{123}-q^{2}t_{1}t_{23}\] \[\quad-\overline{q}^{2}t_{3}t_{12}+t_{2}(qt_{14}t_{34}-q^{2}t_{13} -qt_{4}t_{134})-qt_{1}t_{2}t_{3},\] (14) \[t_{123}^{2} =\overline{q}t_{12}t_{23}t_{13}-(t_{1}t_{2}t_{3}+qt_{1}t_{23}+ \overline{q}t_{2}t_{13}+\overline{q}t_{3}t_{12})t_{123}-(t_{1}^{2}+t_{2}^{2}+t _{3}^{2})\] \[\quad+\alpha^{2}-(qt_{2}t_{3}t_{23}+\overline{q}t_{1}t_{3}t_{13}+ \overline{q}t_{1}t_{2}t_{12})-(q^{2}t_{23}^{2}+\overline{q}^{2}t_{13}^{2}+ \overline{q}^{2}t_{12}^{2}),\] (15) \[t_{123}t_{234} =(t_{23}+qt_{2}t_{3})t_{0}+\overline{q}t_{12}t_{23}t_{34}- \overline{q}t_{3}t_{12}t_{234}-qt_{2}t_{34}t_{234}+q^{2}t_{2}t_{124}\] \[\quad+\overline{q}^{2}t_{3}t_{134}-\overline{q}^{2}t_{12}t_{24}- \overline{q}^{2}t_{13}t_{34}+(q-\overline{q})t_{2}t_{4}t_{12}+\overline{q}^{2} (\alpha t_{14}+t_{1}t_{4}),\] (16) \[t_{123}t_{134} =t_{13}t_{0}+t_{12}t_{14}+t_{23}t_{34}-t_{1}t_{123}-t_{3}t_{234}- \alpha t_{24}-t_{2}t_{4},\] (17) \[t_{23}t_{34}t_{124} =(t_{234}+\overline{q}t_{4}t_{23}+\overline{q}t_{2}t_{34}+t_{2}t_ {3}t_{4})t_{0}+\alpha t_{1}+t_{2}t_{12}+t_{3}t_{13}+t_{4}t_{14}\] \[\quad+(\overline{q}^{2}t_{23}+\overline{q}t_{2}t_{3})t_{123}+(q^ {2}t_{34}+qt_{3}t_{4})t_{134},\] (18) \[t_{14}t_{12}t_{23}t_{34} =q^{2}t_{0}^{2}+q^{2}(\overline{q}t_{1}t_{234}+qt_{4}t_{123}+t_{ 1}t_{4}t_{23}-t_{2}t_{3})t_{0}+t_{234}^{2}+q^{4}t_{123}^{2}\] \[\quad+t_{3}t_{14}t_{12}t_{234}+q^{2}t_{2}t_{14}t_{34}t_{123}+qt_{ 4}t_{23}t_{234}-\overline{q}t_{3}t_{14}t_{134}+q^{3}t_{1}t_{23}t_{123}\] \[\quad-q^{3}t_{2}t_{14}t_{124}+\overline{q}t_{14}t_{12}t_{24}+ \overline{q}t_{14}t_{13}t_{34}+(1-q^{2})t_{2}t_{4}t_{14}t_{12}\] \[\quad-\overline{q}^{2}t_{14}^{2}+q^{2}t_{23}^{2}+(q-\overline{q}) t_{1}t_{4}t_{14}+q^{2}(t_{1}^{2}+t_{4}^{2}-\alpha^{2}). \tag{19}\] Proof.: The first identity is a repeat of (2). The third is a known result [2]. It can be deduced from (1), (6); see [3] Example 3.10 for an elegant deduction. The second is obtained by combining (5) with (3), (4) and \[t_{14}t_{34}=\overline{q}t_{1\overline{4}34}+qt_{13}+t_{1}t_{3}+t_{4}t_{134}.\] To deduce (16), we first compute \[t_{123}t_{234}=qt_{1234\overline{32}}+\overline{q}t_{14}+t_{1}t_{4}+t_{23}t_{0},\] and then replace \(t_{1234\overline{32}}\) via (7). The identity (17) is obtained by combing \[t_{123}t_{134} =qt_{234\overline{3}}+\overline{q}t_{12\overline{1}4}+t_{2}t_{4}+t _{13}t_{0},\] \[t_{12\overline{1}4} =qt_{12}t_{14}-q^{2}t_{24}-qt_{2}t_{4}-qt_{1}t_{124},\] \[t_{234\overline{3}} =\overline{q}t_{23}t_{34}-\overline{q}^{2}t_{24}-\overline{q}t_{2 }t_{4}-\overline{q}t_{3}t_{234}.\] For (18), by (4) it suffices to compute \(t_{23}t_{12\overline{4}34}\), which is not difficult. Remember to convert \(t_{23}t_{12}\) via (1). Replacing 2, 3 in (6) respectively with 23, 4 and taking the mirror, we obtain \[t_{14}t_{1234\overline{32}} =t_{0}^{2}+(\overline{q}t_{1}t_{234}+qt_{4}t_{123}+t_{1}t_{4}t_{23} )t_{0}+q^{2}t_{123}^{2}+\overline{q}^{2}t_{234}^{2}\] \[\quad+\overline{q}t_{4}t_{23}t_{234}+qt_{1}t_{23}t_{123}+t_{23}^{2 }+t_{1}^{2}+t_{4}^{2}-\alpha^{2}.\] Then applying (7) leads to the last identity. 
Use _reduction relations_ to name these relations and the ones obtained from cyclically permuting indices in (14)-(18) as well as the mirror of (16). ### Statement and proof **Theorem 3.3**.: _The Kauffman bracket skein algebra of \(\Sigma_{0,5}\) over any commutative ring containing \(q^{\pm\frac{1}{2}}\) is generated by \(t_{1},t_{2},t_{3},t_{4}\), \(t_{0}\), \(t_{12},t_{13},t_{14},t_{23},t_{24},t_{34}\), \(t_{123},t_{124},t_{134},t_{234}\), and the ideal of defining relations is generated by the commuting relations of type \([2,2]\), \([2,3]\), the reduction relations, and the centralities of \(t_{1},t_{2},t_{3},t_{4},t_{0}\)._ Proof.: Given a monomial \(\mathfrak{g}\), call it _reduced_ if it is a product of non-central generators; call it _irreducible_ if it cannot be eliminated by the relations given in the statement. When \(\operatorname{md}_{\mathfrak{g}}(i)=0\) for some \(i\), the result on \(\mathcal{S}_{3}\) can be applied. The strategy is to show that using the known relations, any reduced monomial can be converted into a unique linear combination of irreducible ones with a fixed multi-degree, which are seen to be linearly independent from their leading multi-curves; (if it is not so, a new relation would be found). Up to symmetries, it suffices to consider the cases in the following table. \begin{tabular}{|c|c|} \hline \(\operatorname{md}\) & irreducible reduced monomials (leading multi-curves) \\ \hline \((1,1,1,1)\) & \(t_{12}t_{34}\), \(t_{14}t_{23}\) \\ \hline \((1,1,1,2)\) & \(t_{14}t_{234}\)\((t_{2341\overline{4}})\), \(t_{34}t_{124}\)\((t_{12\overline{4}34})\) \\ \hline \((1,1,2,2)\) & \(t_{12}t_{34}^{2}\), \(t_{14}t_{23}t_{34}\) \\ \hline \((1,2,1,2)\) & \(t_{12}t_{24}t_{34}\) \((t_{12\overline{4}34\overline{2}})\), \(t_{14}t_{23}t_{24}\)\((t_{1\overline{4}23\overline{2}4})\) \\ \hline \((1,1,1,3)\) & \(t_{14}t_{24}t_{34}\) \\ \hline \((1,2,2,2)\) & \(t_{12}t_{34}t_{234}\)\((t_{34}t_{1234\overline{2}})\), \(t_{14}t_{23}t_{234}\)\((t_{23}t_{1\overline{4}234})\) \\ \hline \((1,1,2,3)\) & \(t_{34}^{2}t_{124}\)\((t_{12\overline{4}34\overline{3}4})\), \(t_{14}t_{34}t_{234}\)\((t_{1\overline{4}3\overline{4}234})\) \\ \hline \((1,2,1,3)\) & \(t_{24}^{2}t_{34}\)\((t_{124\overline{2}34})\), \(t_{14}t_{24}t_{34}\)\((t_{1\overline{2}4234})\) \\ \hline \((2,2,2,2)\) & \(t_{12}t_{34}^{2}\), \(t_{14}^{2}t_{23}^{2}\) \\ \hline \((2,2,1,3)\) & \(t_{14}^{2}t_{23}t_{24}\)\((t_{14\overline{4}23\overline{2}4})\), \(t_{12}t_{14}t_{24}t_{34}\)\((t_{124\overline{2}1\overline{4}34})\) \\ \hline \((1,1,3,3)\) & \(t_{12}t_{34}^{3}\), \(t_{14}t_{23}t_{34}^{3}\)\((t_{23}t_{1\overline{4}3\overline{2}34})\) \\ \hline \((1,3,1,3)\) & \(t_{12}t_{34}t_{24}^{2}\)\((t_{124\overline{4}2\overline{4}34})\), \(t_{14}t_{23}t_{24}\)\((t_{23\overline{2}4}t_{1\overline{4}24})\) \\ \hline \((1,1,2,4)\) & \(t_{14}t_{24}^{2}t_{34}\) \\ \hline \((2,2,2,3)\) & \(t_{12}t_{34}^{2}t_{124}\)\((t_{12\overline{4}34\overline{4}})\), \(t_{14}^{2}t_{23}t_{234}\)\((t_{23}t_{1\overline{4}234})\) \\ \hline \((1,2,2,4)\) & \(t_{24}t_{34}^{2}t_{124}\)\((t_{2\overline{3}44\overline{1}2\overline{3}4})\), \(t_{14}t_{24}t_{34}\)\((t_{24}t_{1\overline{4}3\overline{4}234})\) \\ \hline \((1,2,4,2)\) & \(t_{23}t_{34}^{2}t_{123}\)\((t_{23\overline{4}3\overline{4}})\), \(t_{23}^{2}t_{34}t_{134}\)\((t_{23\overline{4}3\overline{4}}t_{1\overline{3}234})\) \\ \hline \((1,2,3,3)\) & \(t_{14}t_{23}t_{34}\)\(t_{234}\)\((t_{1\overline{4}23\overline{2}42\overline{3}4\overline{2}})\), 
\(t_{14}t_{23}t_{24}t_{234}\)\((t_{1\overline{4}2\overline{3}2\overline{2}4234})\) \\ \hline \((1,1,3,4)\) & \(t_{34}^{3}t_{124}\)\((t_{12\overline{3}434344})\), \(t_{14}t_{34}^{2}t_{234}\)\((t_{34}^{2}t_{1\overline{4}234})\) \\ \hline \((2,2,3,3)\) & \(t_{12}^{2}t_{24}t_{34}^{2}\)\((t_{12\overline{4}34\overline{4}2\overline{1}2})\), \(t_{14}^{2}t_{23}t_{24}\)\((t_{14\overline{4}23\overline{2}4})\) \\ \hline \((2,2,2,4)\) & \(t_{12}t_{14}t_{24}t_{34}^{2}\)\((t_{12\overline{4}34\overline{3}4})\), \(t_{14}^{2}t_{23}t_{24}t_{34}\)\((t_{1\overline{2}3\overline{4}}^{2})\) \\ \hline \end{tabular} As a supplement, if the multi-degree is \(\vec{e}=(1,a,b,c)\) with \(a+b+c=9\), then each relation is implied by the above ones. To see this, note that if \(S\) is a simple curve with \(\mathrm{md}_{S}=\vec{e}\), then any two of admissible expressions are equivalent through a shortenable arc. Now that all possible monomials have been checked, the proof is complete.
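As a quick illustration of the finiteness built into Theorem 2.3, the short Python enumeration below (purely illustrative, not part of the argument) lists the multi-degrees \(\vec{e}\) allowed by the bound \(\|\vec{e}\|\leq 2r(\vec{e})+2\) for \(n=4\); these are the only multi-degrees in which generating relations need to be sought, which is why the case check in the table above terminates.

```python
# Enumerate the multi-degrees e = (e_1,...,e_4) with ||e|| <= 2 r(e) + 2,
# where ||e|| = sum_j e_j and r(e) = #{i : e_i > 0}  (Theorem 2.3, n = 4).
from itertools import product

n = 4
allowed = []
for e in product(range(2 * n + 3), repeat=n):   # e_j <= 2n + 2 is enough
    r = sum(1 for x in e if x > 0)
    if 0 < sum(e) <= 2 * r + 2:
        allowed.append(e)

print("multi-degrees with ||e|| <= 2 r(e) + 2 :", len(allowed))
print("of these, with every e_i >= 1          :",
      sum(1 for e in allowed if min(e) >= 1))
print("maximal total degree reached           :",
      max(sum(e) for e in allowed))             # equals 2n + 2 = 10
```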
2305.11050
Production of $p$-nuclei from $r$-process seeds: the $\nu r$-process
We present a new nucleosynthesis process that may take place on neutron-rich ejecta experiencing an intensive neutrino flux. The nucleosynthesis proceeds similarly to the standard $r$-process, a sequence of neutron-captures and beta-decays, however with charged-current neutrino absorption reactions on nuclei operating much faster than beta-decays. Once neutron capture reactions freeze-out the produced $r$-process neutron-rich nuclei undergo a fast conversion of neutrons into protons and are pushed even beyond the $\beta$-stability line producing the neutron-deficient $p$-nuclei. This scenario, which we denote as the $\nu r$-process, provides an alternative channel for the production of $p$-nuclei and the short-lived nucleus $^{92}$Nb. We discuss the necessary conditions posed on the astrophysical site for the $\nu r$-process to be realized in nature. While these conditions are not fulfilled by current neutrino-hydrodynamic models of $r$-process sites, future models, including more complex physics and a larger variety of outflow conditions, may achieve the necessary conditions in some regions of the ejecta.
Zewei Xiong, Gabriel Martínez-Pinedo, Oliver Just, Andre Sieverding
2023-05-18T15:42:21Z
http://arxiv.org/abs/2305.11050v2
# Production of \(p\)-nuclei from \(r\)-process seeds: the \(\nu r\)-process ###### Abstract We present a _new_ nucleosynthesis process that may take place on neutron-rich ejecta experiencing an intensive neutrino flux. The nucleosynthesis proceeds similarly to the standard \(r\)-process, a sequence of neutron-captures and beta-decays, however with charged-current neutrino absorption reactions on nuclei operating much faster than beta-decays. Once neutron capture reactions freeze-out the produced \(r\)-process neutron-rich nuclei undergo a fast conversion of neutrons into protons and are pushed even beyond the \(\beta\)-stability line producing the neutron-deficient \(p\)-nuclei. This scenario, which we denote as the \(\nu r\)-process, provides an alternative channel for the production of \(p\)-nuclei and the short-lived nucleus \({}^{92}\)Nb. We discuss the necessary conditions posed on the astrophysical site for the \(\nu r\)-process to be realized in nature. While these conditions are not fulfilled by current neutrino-hydrodynamic models of \(r\)-process sites, future models, including more complex physics and a larger variety of outflow conditions, may achieve the necessary conditions in some regions of the ejecta. _Introduction_ A variety of processes have been suggested as the origin of stable nuclei heavier than iron and located at the neutron-deficient side of the \(\beta\)-stability line, the so-called \(p\)-nuclei. This includes the \(\gamma\)-process (also denoted as \(p\)-process) [1; 2], \(\nu p\)-process [3; 4; 5], and \(rp\)-process [6]. In the \(\gamma\)-process, seed nuclei present from the initial composition of the star, undergo \((\gamma,n)\) reactions followed by \((\gamma,p)\) or \((\gamma,\alpha)\) as the temperature rises to 3-5 GK in core-collapse and Type Ia supernovae [2; 7; 8; 9]. While its yields are consistent for more than half of the \(p\)-nuclei, some specific regions, such as \({}^{92,94}\)Mo and \({}^{96,98}\)Ru, are underproduced. The \(\gamma\)-process is not a primary process and depends on the preexisting \(s\)-process seeds. On the other hand, \(p\)-nuclei can also be produced in the \(\nu p\)-process through proton capture aided by neutrinos in neutrino-driven winds from core-collapse supernovae (CCSNe) [3; 10]. Long \(\beta^{+}\) decay times of waiting-point nuclei such as \({}^{64}\)Ge can be circumvented by \((n,p)\) reactions with neutrons produced by absorption of electron antineutrinos on protons. However, current three-dimensional supernova models suggest that a neutrino-driven wind may not develop except for low mass progenitors [11]. Furthermore, neutrino-wind simulations based on up-to-date set of neutrino opacities produce ejecta that are not proton-rich enough to allow for a strong \(\nu p\)-process [12]. Light \(p\)-nuclei, like \({}^{92}\)Mo, may also be produced in the inner ejecta of core-collapse supernova by explosive nucleosynthesis [13; 14; 15]. However, no substantial production occurs of other \(p\)-nuclei. Light \(p\)-nuclei can also be produced by the \(rp\)-process in accreting neutron stars [6], however it is an open questions how the produced heavy elements can become gravitationally unbound from the neutron star in order to contribute to galactic chemical evolution. In addition to \(p\)-nuclei, another element of yet unknown origin is the by now extinct nucleus \({}^{92}\)Nb that existed in the early solar system (ESS) [16; 17; 18; 19]. 
It cannot be produced by the \(\nu p\)- or \(rp\)-processes as it is shielded by \({}^{92}\)Mo [20]. The production of \({}^{92}\)Nb through the \(\gamma\)-process is viable but the yield is not sufficient for explaining the amount measured in the ESS, probably related to the underproduction of \({}^{92,94}\)Mo and \({}^{96,98}\)Ru [21]. A potentially feasible way of production could be through charge-current (CC) and neutral-current (NC) weak interactions on the preexisting nuclei \({}^{92}\)Zr and \({}^{93}\)Nb in \(\nu\)-process [22; 18; 23]. Previous studies of the \(r\)-process both in the context of core-collapse supernovae and binary neutron star mergers (BNM) have shown that neutrinos play a fundamental role. At high temperatures, when the composition consists of neutrons and protons, electron (anti)neutrino absorption and the inverse reactions determine the neutron-richness of the ejected material [24; 25; 26; 27; 28]. This aspect is fundamental to produce ejecta with a broad distribution of neutron-richness and account for the observation of Sr in the AT2017gfo kilonova [29]. Large neutrino fluxes are in general detrimental for the \(r\)-process as they drive the composition to proton-to-nucleon ratios of \(Y_{e}\approx 0.5\) due to the operation of the \(\alpha\)-effect [30]. Once nuclei form, substantial neutrino fluxes hinder the operation of the \(r\)-process by converting neutrons into protons and reducing the amount of neutrons available for captures. Furthermore, large rates of electron neutrino absorption on nuclei hinder the formation of \(r\)-process peaks associated to magic numbers [31; 32; 33]. This regime of large neutrino fluxes, when electron-neutrino absorption rates are faster than beta-decays for neutron-rich nuclei, is precisely the regime we consider in this letter. We will show that it leads to a _new_ nucleosynthesis process that produces \(p\)-nuclei and \({}^{92}\)Nb operating on seeds produced by the \(r\)-process under strong irradiation by neutrinos. We denote this process as \(\nu r\)-process. Unlike \(\gamma\)-, \(\nu p\)-, or \(rp\)-processes that occur in proton-rich conditions, the \(\nu r\)-process operates in neutron-rich conditions. The large neutrino fluxes restrict the conditions at which the process operates to \(Y_{e}\approx 0.4\)-0.5. It requires several stages for the production of \(p\)-nuclei. The nucleosynthesis starts at high temperatures with a composition of neutrons and protons. As the temperature decreases, nuclei are formed by charged particle reactions that freeze out at temperatures \(T\sim 3\) GK. This phase results in an \(\alpha\)-rich composition characteristic of the operation of the \(\alpha\)-process [34; 35]. The produced nuclei act as seeds for further neutron captures along a path determined by \((n,\gamma)\rightleftarrows(\gamma,n)\) equilibrium. In the absence of neutrino irradiation, this equilibrium continues down to \(T\sim 1\) GK until the freeze-out of neutron captures, where after nuclei undergo beta decay and migrate towards the valley of stability. This picture, however, is drastically changed if the ejecta are irradiated by a large neutrino flux. During the phase of neutron captures the flow to higher charge numbers is determined by neutrino absorption rates instead of beta-decays. This speeds up the whole scale of the process and leads to an earlier freeze-out of neutron captures at higher temperatures compared with the cases without neutrino fluxes. 
Once neutrons are exhausted, neutron-rich nuclei are transformed on timescales of 10 ms into neutron-deficient nuclei by neutrino charged-current processes and produce the \(p\)-nuclei. As the energy of neutrinos is larger than the neutron, proton and alpha separation energies, both charged-current and neutral-current neutrino-nucleus reactions produce free neutrons, protons and alpha-particles that are captured at the still relatively high temperatures as the material moves beyond the beta-stability line. We will discuss first parametrically constructed outflow conditions and then look for suitable astrophysical sites as well as possible observables. _Parametric trajectory_ We use the nuclear reaction network (employed previously in, e.g., Refs. [36; 37]) with the reaction rates based on the FRDM mass model [38] and a consistent description of neutrino reactions with nucleons (\(\nu\)-N) and nuclei (\(\nu\)-A) that considers light particle spallation induced by both NC and CC \(\nu\)-A reactions [22]. We assume zero temperature for neutrino cross sections and neglect possible finite temperature corrections that may be relevant at temperatures \(\lesssim 4\) GK (0.35 MeV). We assume adiabatic expansions starting with an initial temperature of \(T_{0}=10\) GK and density of \(\rho_{0}=4.6\times 10^{6}\) g cm\({}^{-3}\) that corresponds to an initial entropy per nucleon of \(s=84\)\(k_{b}\). We assume homologous expansion for the density evolution \(\rho(t)=\rho_{0}(1+t/\delta_{\rho})^{-3}\). The temperature is evolved accounting for the energy generation by nuclear reactions [39]. \(\delta_{\rho}\) defines the timescale for the production of nuclei. Considering that nuclei are typically produced around 5 GK, we have \(t(T=5\) GK) \(\approx\delta_{\rho}\). We fix the initial electron fraction, \(Y_{e,0}\), that is subsequently evolved under the influence of (anti)neutrino absorption and their inverse reactions. Neutrino rates are proportional to the neutrino number densities that are parametrized as \(n_{\nu_{e}}(t)=n_{\nu_{e},0}(1+t/\delta_{\nu})^{-3}\) and \(n_{\bar{\nu}_{e}}(t)=R_{n}n_{\nu_{e}}(t)\), where \(\delta_{\nu}\) is a timescale taken larger than \(\delta_{\rho}\) (because the neutrino flux usually decreases more slowly than the baryonic density), and \(R_{n}\) is a constant ratio relating the \(\bar{\nu}_{e}\) and \(\nu_{e}\) densities. Neutrino spectra are assumed to be given by Fermi-Dirac distributions with constant effective temperatures \(T_{\nu_{e}}\) and \(T_{\bar{\nu}_{e}}\). To demonstrate the dependence of the nucleosynthesis on the expansion timescale and neutrino number density, Fig. 1 compares the abundances from three parametric trajectories I, II, and III with parameter sets \([\delta_{\rho}\,(\text{ms}),n_{\nu_{e},0}\,(10^{32}\text{ cm}^{-3})]\) of \([4,2.5]\), \([1,10]\), and \([0.5,20]\), respectively. The other parameters have the values: \(Y_{e,0}=0.4\), \(R_{n}=1.2\), \(\delta_{\nu}=4\delta_{\rho}\), \(T_{\nu_{e}}=5\) MeV, and \(T_{\bar{\nu}_{e}}=6.25\) MeV. In all those three cases we find at \(T=5\) GK that \(Y_{e}\) coincidentally reaches \(\approx 0.467\) and the neutron-to-seed ratios \(n_{s}\) are 13.8, 59.2 and 177, respectively, which qualitatively follow \(n_{s}\propto 1/\tau_{\text{exp}}\)[40], with \(\tau_{\text{exp}}=[d\ln\rho(t)/dt]^{-1}=(t+\delta)/3\). _Results_ Without \(\nu\)-A reactions, case I corresponds to the \(\alpha\)-rich freeze-out with moderate neutron-rich conditions, such as considered in Refs. 
[5; 13; 42], that is known to produce \({}^{92}\)Mo but not \({}^{94}\)Mo. Once the \(\nu\)-A reactions are included, the ratio \({}^{92}\)Mo/\({}^{94}\)Mo becomes consistent with solar proportions and the yield of the radioactive species \({}^{92}\)Nb is also enhanced. Despite their much larger neutron-to-seed ratios, the abundance peak for cases II and III is only moderately shifted to higher mass numbers when compared to case I. This is due to the larger neutrino densities that lead to a late conversion of neutrons into protons while the \(r\)-process occurs, reducing the amount of neutrons for captures on heavy nuclei. When \(\nu\)-A reactions are included, slightly heavier nuclei are produced during the phase when neutron captures dominate (cyan versus green lines in Fig 1), as \(\nu_{e}\) absorption on nuclei accelerates the build-up of heavier elements competing with \(\nu_{e}\) absorption on free neutrons. After neutron-capture freeze-out there is still a substantial amount of neutrons present, produced by neutrino spallation reactions, that furthers contribute to the synthesis of \(p\)-nuclei. All those three cases show comparable amounts of \(p\)-nuclei produced relative to the final total yield, which indicates a successful conversion of those \(r\)-process seeds. The abundance pattern in case I shows a peak around \({}^{92,94}\)Mo and \({}^{96,98}\)Ru. Larger neutron-to-seed ratios in cases II and III lead to the production of \(p\)-nuclei up to \(A\sim 145\) and 180, respectively. Heavier \(p\)-nuclei up to \({}^{196}\)Hg can be produced with higher neutrino number densities. To illustrate the reaction dynamics of the \(\nu r\)-process, we compare composition average rates for \(\beta^{-}\) decay, \((n,\gamma)\), \((\gamma,n)\), \((\alpha,\gamma/n)\), CC and NC \(\nu_{e}\)-A reactions for case I in Fig. 2(a). They are computed as \(\lambda_{I}=\sum_{i}\lambda_{I,i}Y_{i}/Y_{\rm heavy}\) where \(I\) stands for a particular reaction, \(i\) sums over all heavy (\(A>4\)) nuclei, and \(Y_{\rm heavy}=\sum_{i}Y_{i}\) is the total abundance of heavy nuclei. The \(\nu_{e}\)-\(n\) rate \(\lambda_{\nu_{e}n}=n_{\nu_{e}}\sigma_{\nu_{e}n}c\) is approximately 1 ms\({}^{-1}\) for \(n_{\nu_{e}}=10^{33}\) cm\({}^{-3}\) and \(T_{\nu_{e}}=5\) MeV. The CC rates \(\lambda_{\nu_{e}A}^{\rm CC}=n_{\nu_{e}}\sigma_{\nu_{e}A}^{\rm CC}\),\(c\) for the nuclei of interest with \(A=100\)-\(200\) near the stability line are \(\sim 1\)-\(10\) times larger than \(\lambda_{\nu_{e}n}\). When the temperature is above \(\sim 4.5\) GK, there is essentially no difference between the cases with and without \(\nu\)-A reactions. As the temperature becomes lower, and the composition shifts to neutron-rich nuclei, \(\lambda_{\nu_{e}A}^{\rm CC}\) becomes comparable with the expansion rate leading to a faster depletion of neutrons because \(\nu_{e}\) absorption on nuclei speeds up the production of heavy elements. The balance between \((n,\gamma)\) and \((\gamma,n)\) is broken at \(\sim 3\) GK. The rate of \((\gamma,n)\) decreases drastically, but the rate of \((n,\gamma)\) changes gradually as neutrons are continuously supplied from the neutrino spallation by both CC and NC \(\nu_{e}\)-A reactions. The \((n,\gamma)\) rate follows \(\lambda_{\nu_{e}A}^{\rm CC}\) and the nucleosynthesis flow is characterized by an equilibrium between \((n,\gamma)\) and \(\nu_{e}\)-A reactions. 
This produces a rather broad abundance distribution that reaches from the neutron-deficient to the neutron-rich side of beta-stability and is responsible for the slight increase of the average \(\beta^{-}\) rate below \(T\sim 3\) GK. The average number of emitted neutrons from the CC \(\nu\)-A reaction is approximately given by the ratio \(\lambda_{(n,\gamma)}/\lambda_{\nu_{e}A}^{\rm CC}\). (The average rate of NC reactions, \(\lambda_{\nu_{e}A}^{\rm NC}\), is about one order of magnitude lower than \(\lambda_{\nu_{e}A}^{\rm CC}\).) We note that we do not consider heavy lepton neutrinos. They are expected to have higher mean energy and possibly larger flux and hence amplify the total rate to values comparable to \(\lambda_{\nu_{e}A}^{\rm CC}\). The \(\alpha\)-capture reactions are also greatly enhanced when \(\nu_{e}\)-A reactions are considered. The \((\alpha,\gamma)\) reaction dominates over \((\alpha,n)\) as the material moves to and beyond beta-stability. This speeds up the consumption Figure 2: Characteristic average rates (upper), abundances (middle), and number densities (lower) as functions of temperature for case I including (solid) and neglecting (dashed) \(\nu\)-A reactions. The dotted line labels the expansion rate \(1/\tau_{\rm exp}\). The abundances of electrons, \(Y_{e}\), and alpha-particles, \(Y_{\alpha}\), are shown according to the scale on the right side of the middle panel. Figure 1: Abundances of \(r\)-nuclei (black circles) [41] and \(p\)-nuclei (black squares) [1] in the solar system. The results for our parametric trajectories are shown for cases I (panel a), II (panel b), and III (panel c). All the panels show the isobaric yields at 1 Gyr (blue curves) and the abundance of \(p\)-nuclei on those yields (red dots). Green and cyan curves show the yields at \(n_{s}\sim 1\) with and without \(\nu\)-A reactions, respectively. of \(\alpha\)-particles and results in a lower \(\alpha\) abundance when \(\nu_{e}\)-A reactions are considered, see Fig. 2(b) that shows the evolution of abundances and the electron fraction \(Y_{e}\). Protons produced from neutrino spallation are hardly recaptured by heavy nuclei below \(\sim 4\) GK, causing their abundance to increase to \(\sim 10^{-3}\). The number density of neutrons is compared with that of baryons and \(\nu_{e}\) in Fig. 2(c). In the case without \(\nu\)-A reactions, \(Y_{e}\sim 0.45\), neutrons are absorbed until \(n_{n}\) reaches \(\sim 10^{13}\) cm\({}^{-3}\) at which point they are supplied by beta-delayed neutron emission. Including \(\nu\)-A reactions, we have an earlier freeze-out of neutron captures and a larger neutron density after freeze-out due to the production of neutrons by neutrino spallation reactions as discussed above. Survey of astrophysical conditionsIn addition to the previous three cases, we surveyed a larger parameter space keeping all parameters unchanged from the previous cases except for the following variations: \(\delta_{\rho}=\{0.5,1,2,4,8\}\) ms, \(n_{\nu_{e},0}=\{0.25,0.5,1,2\}\times 10^{33}\) cm\({}^{-3}\), \(R_{n}=\{0.5,0.8,1,1.2,1.5,2,4\}\), and \(\delta_{\nu}=\{1,1.5,2.5,4,6\}\times\delta_{\rho}\). Each calculation of the survey is shown in a scatter plots in Fig. 3. The results classified according to \(Y_{e}\) at 5 GK and the exposure of the ejecta to neutrinos. As a measure of the latter, we use the product of the expansion timescale and the rate for \(\nu_{e}\) absorption on neutrons, both evaluated at 3 GK. The different panels of Fig. 
3 show the abundance of \({}^{92}\)Mo (panel a), the abundance ratio \({}^{94}\)Mo/\({}^{92}\)Mo (panel b), and the ratio \({}^{92}\)Nb/\({}^{92}\)Mo at 1 Myr (panel c). For low values of the neutrino exposure all calculations are concentrated around \(Y_{e}\approx 0.4\) because of our choice of \(Y_{e,0}\). High neutrino exposure makes all calculations to converge to \(Y_{e}\approx 0.5\). We observe large variations of the abundance yields in the intermediate region. Significant amounts of \({}^{92}\)Mo (\(\sim 10^{-4}\)) are produced only when \(\tau_{\rm exp}\lambda_{\nu_{e}n}\gtrsim 0.1\). The ratios \({}^{94}\)Mo/\({}^{92}\)Mo and \({}^{92}\)Nb/\({}^{92}\)Mo are larger for the neutron-rich cases with small neutrino exposure, and are below unity when \(Y_{e}\approx 0.5\). In addition to the parametric models discussed above, Fig. 3 also shows the nucleosynthesis results for a neutrino-hydrodynamics simulation of a BNM, namely model sym-n1-a6 of Ref. [37]. In the simulation, neutrino-fluxes become negligible by the time nuclei form. Consequently, \(p\)-nuclei around \(A\sim 92\) are produced by the \(\alpha\)-process operating in a narrow window of \(Y_{e}\sim 0.45\)-\(0.48\). The maximum abundance of \({}^{92}\)Mo is \(1.5\times 10^{-6}\), with \({}^{94}\)Mo and \({}^{92}\)Nb being relatively underproduced. The abundances of \(p\)-nuclei heavier than \({}^{94}\)Mo are further suppressed and do not exceed values of \(10^{-8}\). Figure 4 shows the neutrino exposure, radius, and neutrino density at \(T=3\) GK and \(T=10\) GK for all tracer particles of model sym-n1-a6 of Ref. [37] classified by the type of ejecta component (see [37] for more details): dynamical ejecta, neutron-star torus ejecta, and black-hole torus ejecta. None of the tracers reaches high enough values of \(\tau_{\rm exp}\lambda_{\nu_{e}n}>0.1\) at \(T=3\) GK to enable the \(\nu r\)-process. Interestingly, the neutron-star torus ejecta (blue points in Fig. 4) driven mainly by neutrino heating from the surface of the neutron-star remnant show rather less favorable conditions for the \(\nu r\)-process than the prompt ejecta produced during and shortly after the merger (red points). This is connected to the fact that the neutrino-driven wind from the neutron-star remnant is powered by thermal energy, while the prompt ejecta are (at least to a larger extent) accelerated by dynamical effects. The prompt ejecta therefore tend to be colder, at lower radii and higher neutrino densities when \(T=3\) GK. The comparison suggests that, despite the need for high neutrino fluxes, neutrino-driven winds are not well suited for the \(\nu r\)-process and that outflows driven by nonthermal processes may be more promising. We speculate that the \(\nu r\) process may operate in magnetically-driven ejecta subject to strong neutrino fluxes, which are likely found in polar regions of magnetorotational supernovae [43] and collapsar engines [44], or around magnetized remnants of BNMs [45; 46; 47]. All these scenarios have been suggested as sites for the \(r\)-process. ObservablesAssuming that the \(\nu r\)-process produces \(p\)-nuclei in an astrophysical site where also the \(r\)-process Figure 3: Scatter plots for the abundance of \({}^{92}\)Mo (upper), the ratios \(Y(^{94}\)Mo)/\(Y(^{92}\)Mo) (middle), and \(Y(^{92}\)Nb)/\(Y(^{92}\)Mo) (lower) with respect to \(Y_{e}\) and neutrino exposure from our parametric survey (circles) and the simulation model sym-n1-a6 (stars) from Ref. [37]. 
The white line in the color-bar of the middle panel shows the \(Y(^{94}\)Mo)/\(Y(^{92}\)Mo) ratio in solar abundance. The middle and lower panels show the abundance ratio only for \(Y(^{92}\)Mo) \(>10^{-7}\). operates and that the yields of both follow solar proportions, we can obtain constraints on the relative contributions of the \(\nu r\)-process and the \(r\)-process to the ejecta. If the whole solar inventory of \(p\)- and \(r\)-nuclei is produced on the same site, we expect a ratio of \(\sim 90/1\) between the \(r\)-process and \(\nu r\)-process material. We notice that the observation of Sr in the kilonova transient AT2017gfo and the low lanthanide mass fraction inferred from multi-band light-curve analyses requires the production of all \(r\)-process nuclei [48], assuming solar proportions. We show in Fig. 5(a) the production factors of \(p\)-nuclei, i.e. the abundances normalized to the solar value \(Y_{i}/Y_{i,\odot}\) for case I. The grey band, which covers one order of magnitude right below the largest abundance, illustrates the isotopes that are expected to be co-produced. Case I produces mainly \(p\)-nuclei from \({}^{78}\)Kr to \({}^{102}\)Pd. Case II (not shown in the figure) co-produces \(p\)-nuclei from \({}^{98}\)Ru to \({}^{138}\)La. Given that case II requires very high neutrino fluxes, we expect that it is more rare than case I. Therefore, we combine 20% from case II with 80% from case I in Fig. 5(b), which yields a pattern that is in good agreement with the solar abundances of the \(p\)-nuclei from \({}^{78}\)Kr to \({}^{138}\)La. The \(\nu r\)-process does not only produce nuclei commonly associated to the \(\gamma\)-process, but also \({}^{138}\)La and \({}^{180}\)Ta that are often associated with the \(\nu\)-process [22]. Associating the origin of the \(p\)-nuclei with an \(r\)-process site, such as BNLs, allows to explain the observed strong correlation between \(p\)- and \(r\)-nuclei of Mo in low-mass asymptotic giant branch meteorites [50]. We can further estimate the time since the last \(\nu r\)-process addition to the solar system by considering the short-lived radioactive isotope \({}^{92}\)Nb. The \(\nu r\)-process results in an abundance ratio between \({}^{92}\)Nb and \({}^{92}\)Mo that is close to unity, as shown in Fig. 3(c), which is \(\sim 10^{3}\) larger than the ratio \(\sim\mathcal{O}(10^{-3})\) found from the \(\nu\)-process in CCSNe [18; 22]. Assuming a production ratio close to unity in a simple model of uniform production over the age of the universe [51; 52] and supposing that both, \({}^{92}\)Mo and \({}^{92}\)Nb are predominantly made from the \(\nu r\)-process, the ratio in the interstellar medium is \(\sim 5\times 10^{-3}\) after 10 Gyr of evolution. This rough estimate suggests a decay time of \(\sim 250\) Myr since the last event to match the ESS ratio, which is consistent with the expected time of \(\sim 100\) Myr since the last \(r\)-process event[53; 54]. Since the \(\nu r\)-process depends on the neutrino flux, it could be significantly affected by collective neutrino flavor phenomena [55; 56; 57; 58; 59]. In addition, the process depends on the competition between neutrino absorption and neutron captures near the stability line, which calls for further improved measurements of the reaction rates in the laboratory. In the present work, we have considered moderately neutron-rich ejecta and shown that large neutrino fluxes can drive the composition to the neutron-deficient site of the valley of stability producing \(p\)-nuclei. 
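To spell out the \({}^{92}\)Nb chronometry estimate above, a minimal numerical check follows; the \({}^{92}\)Nb half-life (taken as \(\approx 34.7\) Myr) is an external input not stated in the text and should be read as an assumption of this sketch rather than a value from the paper.

```python
# Back-of-the-envelope check (illustrative, not the paper's code) of the
# 92Nb chronometry argument: uniform production over 10 Gyr, then free decay.
import math

t_half = 34.7          # Myr, adopted 92Nb half-life (assumption, not from the text)
tau = t_half / math.log(2)            # mean life, ~50 Myr
T_gal = 10_000.0       # Myr of uniform production (10 Gyr, as in the text)
P = 1.0                # 92Nb/92Mo production ratio ~ 1 from the nu-r process

ism_ratio = P * tau / T_gal           # uniform-production ("steady state") ratio
print(f"ISM 92Nb/92Mo ~ {ism_ratio:.1e}")          # ~5e-3, as quoted in the text

dt = 250.0             # Myr since the last event (value quoted in the text)
ess_ratio = ism_ratio * math.exp(-dt / tau)
print(f"after {dt:.0f} Myr of decay ~ {ess_ratio:.1e}")   # a few 1e-5, the ESS level
```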
One can wonder if a reverse process may operate in proton-rich ejecta, driving the composition from the neutron-deficient side to the neutron-rich side by \(\bar{\nu}_{e}\)-A reactions and producing \(r\)-nuclei. We expect that this will not be the case, as neutron-deficient nuclei still have a neutron excess and hence similar or even larger cross sections for \(\nu_{e}\)-A absorption than for \(\bar{\nu}_{e}\)-A absorption. It remains to be explored whether variations of the neutrino flux can produce abundance patterns that resemble those of the \(s\)-process or \(i\)-process.

Figure 4: Neutrino exposure, radius, and neutrino density of outflow tracers at \(T=3\) GK (left panels) and 10 GK (right panels) for the dynamical ejecta (red), neutron-star-torus ejecta (blue), and black-hole-torus ejecta in the simulation model sym-n1-a6.

Figure 5: Relative ratios over the solar abundance [49] for case I (upper) and case I plus 20% of case II (lower). Nuclei in the grey bands have relative abundance exceeding 10% of the corresponding maximum value.

We thank Andreas Bauswein, Karlheinz Langanke, Ninoy Rahman, Yong-Zhong Qian, and Friedrich-Karl Thielemann for fruitful discussions. GMP and ZX acknowledge support by the European Research Coun - Project-ID 279384907 - SFB 1245, and MA 4248/3-1. GMP acknowledges support by the State of Hesse within the Cluster Project ELEMENTS and the hospitality of the Instituto de Fisica Teorica UAM-CSIC, supported by the Severo Ochoa Excellence Program No CEX2020-001007-S funded by MCIN/AEI/10.13039/501100011033, where part of this work was done. AS acknowledges funding from the European Union's Framework Programme for Research and Innovation Horizon Europe under Marie Sklodowska-Curie grant agreement No. 101065891.
2305.01243
Invertible Coarse Graining with Physics-Informed Generative Artificial Intelligence
Multiscale molecular modeling is widely applied in scientific research of molecular properties over large time and length scales. Two specific challenges are commonly present in multiscale modeling, provided that information between the coarse and fine representations of molecules needs to be properly exchanged: One is to construct coarse grained models by passing information from the fine to coarse levels; the other is to restore finer molecular details given coarse grained configurations. Although these two problems are commonly addressed independently, in this work, we present a theory connecting them, and develop a methodology called Cycle Coarse Graining (CCG) to solve both problems in a unified manner. In CCG, reconstruction can be achieved via a tractable deep generative model, allowing retrieval of fine details from coarse-grained simulations. The reconstruction in turn delivers better coarse-grained models which are informed of the fine-grained physics, and enables calculation of the free energies in a rare-event-free manner. CCG thus provides a systematic way for multiscale molecular modeling, where the finer details of coarse-grained simulations can be efficiently retrieved, and the coarse-grained models can be improved consistently.
Jun Zhang, Xiaohan Lin, Weinan E, Yi Qin Gao
2023-05-02T08:05:42Z
http://arxiv.org/abs/2305.01243v2
# Machine-Learned Invertible Coarse Graining for Multiscale Molecular Modeling ###### Abstract Multiscale molecular modeling is widely applied in scientific research of molecular properties over large time and length scales. Two specific challenges are commonly present in multiscale modeling, provided that information between the coarse and fine representations of molecules needs to be properly exchanged: One is to construct coarse grained (CG) models by passing information from the fine to coarse levels; the other is to restore finer molecular details given CG configurations. Although these two problems are commonly addressed independently, in this work, we present a theory connecting them, and develop a methodology called Cycle Coarse Graining (CCG) to solve both problems in a unified manner. In CCG, reconstruction can be achieved via a tractable optimization process, leading to a general method to retrieve fine details from CG simulations, which in turn, delivers a new solution to the CG problem, yielding an efficient way to calculate free energies in a rare-event-free manner. CCG thus provides a systematic way for multiscale molecular modeling, where the finer details of CG simulations can be efficiently retrieved, and the CG models can be improved consistently. \({}^{1}\)Changing Laboratory, Beijing 102200, China \({}^{2}\) Beijing National Laboratory for Molecular Sciences, College of Chemistry and Molecular Engineering, Peking University, Beijing 100871, China \({}^{3}\) AI for Science Institute, Beijing, China \({}^{4}\) Center for Machine Learning Research and School of Mathematical Sciences, Peking University, Beijing 100871, China Correspondence should be sent to: \(\dagger\) [email protected] (Jun Zhang) ## I Introduction Multiscale modeling is critical in various fields of scientific research including physics, chemistry, biology, materials and engineering. For many-particle Hamiltonian systems, coarse-graining of the microscopic model can drastically simplify the representation of the physical system. Specifically, in molecular simulations [1] interactions between particles are described by the potential energy \(U(\mathbf{R})\) which is a function of the positions of particles (denoted by \(\mathbf{R}\)). Since \(\mathbf{R}\) can be very high dimensional, the corresponding energy landscape \(U(\mathbf{R})\) is usually rugged with many local traps. As a result, fine grained (FG) or first-principle based simulations, including all-atom and _ab initio_ molecular dynamics (MD), are known to suffer from limited accessible time and length scales. One solution against this issue is to extract slowly changing CG variables of the system \(\mathbf{s}=s(\mathbf{R})\), and build CG models accordingly. For example, in a widely used linear coarse graining protocol, groups of atoms or particles from a fine-grained model are bundled into single beads [2, 3], thus eliminating the microscopic degrees of freedom (DoFs) that are not essential to resolve structural features above a certain length scale. CG variables can also be non-linear functions of \(\mathbf{R}\), which are more often termed as collective variables [4, 5, 6] and widely investigated in the context of enhanced sampling [7, 8, 9]. In molecular simulations, the CG potential \(F(\mathbf{s})\) should satisfy the _thermodynamic consistency principle_ (Eqs. 
(1-2)), which states that \(F(\mathbf{s})\) should reproduce the marginalized Boltzmann distribution of the CG variables \(\mathbf{s}\) given the FG potential \(U(\mathbf{R})\), \[p\left(\mathbf{s}\right)=\frac{\int e^{-\beta U(\mathbf{R})}\,\delta\left(\mathbf{s}-s\left(\mathbf{R}\right)\right)d\mathbf{R}}{\int e^{-\beta U(\mathbf{R})}\,d\mathbf{R}}=\frac{e^{-\beta F(\mathbf{s})}}{Z} \tag{1}\] \[F\left(\mathbf{s}\right)=-\frac{1}{\beta}\left[\log p\left(\mathbf{s}\right)+\log Z\right] \tag{2}\] where \(\beta\) is the inverse temperature, \(\delta\) denotes the Dirac-delta function and \(Z=\int e^{-\beta U(\mathbf{R})}d\mathbf{R}\) is the partition function. Under this setting, the CG potential \(F(\mathbf{s})\) becomes a simplified description of the original thermodynamic system and is usually much smoother than the original energy landscape \(U(\mathbf{R})\). Consequently, CG simulations performed under \(F(\mathbf{s})\) are generally much faster and can reach larger time and length scales that are inaccessible to the fine-grained models [3], even though the finer details of the system are lost. Various approaches have been developed to parametrize CG models satisfying Eqs. (1-2), such as Iterative Boltzmann inversion [10], force matching [11], and relative entropy [12], etc. Recent improvements along this line include the deployment of artificial neural networks to augment the expressivity and complexity of the CG potentials [13, 14], as well as the use of generative and reinforcement learning to boost the training efficiency of CG models when FG simulation samples are scarce [15, 16]. However, in many applications, one may need to simulate large systems with long correlation time in sufficiently fine detail. Examples include the studies of the interaction between biological macromolecules [17], and the local structure of polymers at the interface with solid surfaces [18], etc. In these cases, a reconstruction approach, which will be referred to as _fine-grained reconstruction_ (FGR) hereafter, is needed in order to retrieve or reconstruct reasonable FG structures consistent with the given CG model, and it has been noted that reconstruction is an integral part of systematic multiscale modeling [19]. Unfortunately, FGR has not been as well studied as CG, partly because FGR is ill-posed: mathematically, \(s(\mathbf{R})\) is usually a non-invertible function. In other words, for a given CG rule \(s(\mathbf{R})\), there exists one unique way to map one FG structure \(\mathbf{R}\) into one CG structure \(\mathbf{s}\). In contrast, many different FG configurations can be assigned to the same CG structure, and it usually remains _ad hoc_ to choose the "correct" one. Due to this difficulty, existing FGR methods, such as random mapping and position-restrained molecular dynamics (MD), commonly rely on system-specific knowledge and manually engineered fragment libraries [20-23], hence are limited in scope and lack generalizability. Worse still, in most studies, CG and FGR problems were treated in a disconnected manner, and a unified theory establishing the relation between these two deeply coupled problems is lacking. In this work, we formally formulate FGR as a probabilistic learning problem, and demonstrate that the FGR problem can be systematically solved by means of machine learning as an optimization problem.
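As a concrete toy illustration of the thermodynamic consistency principle in Eqs. (1)-(2), the short script below marginalizes a two-dimensional Boltzmann distribution onto a scalar CG variable \(s=x\) by direct numerical integration; the double-well potential and the grid are illustrative choices, not the model used later in the paper.

```python
import numpy as np

beta = 1.0

def U(x, y):
    # Toy fine-grained potential: double well in x, harmonic in y.
    return (x**2 - 1.0)**2 + 0.5 * y**2

# Dense grid over the fine-grained configuration space R = (x, y).
x = np.linspace(-2.5, 2.5, 401)
y = np.linspace(-4.0, 4.0, 401)
X, Y = np.meshgrid(x, y, indexing="ij")
boltz = np.exp(-beta * U(X, Y))

# Eq. (1): p(s) = Int exp(-beta U) delta(s - s(R)) dR / Z with s(R) = x,
# so the delta function reduces the numerator to an integral over y only.
Z = np.trapz(np.trapz(boltz, y, axis=1), x)
p_s = np.trapz(boltz, y, axis=1) / Z

# Eq. (2): F(s) = -(1/beta) [log p(s) + log Z], defined up to a constant.
F_s = -(np.log(p_s) + np.log(Z)) / beta

print("free-energy barrier between the wells:",
      F_s[np.argmin(np.abs(x))] - F_s.min())
```

The marginal \(F(s)\) obtained this way is the CG potential that any parametrization scheme satisfying Eqs. (1)-(2) should reproduce.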
Moreover, we also draw a mathematical relation between the CG and FGR problems, and develop Cycle Coarse Graining (CCG), a general-purposed approach to performing multiscale modeling for molecular simulations: Based on machine learning, CCG delivers a tractable solution to the FGR problem. In turn, it also provides a rare-event-free way to perform coarse graining or calculate free energies, and allows efficient equilibrium sampling of rare events governed by the CG variables. ## II Methods ### Fine-grained reconstruction with thermodynamic consistency In molecular simulations, given the coarse-graining function \(s(\mathbf{R})\), any FG structure \(\mathbf{R}\) that satisfies \(s\left(\mathbf{R}\right)\!=\!\mathbf{s}\) can be considered as a valid reconstruction for the given CG structure \(\mathbf{s}\). The distribution of all such valid reconstructions can be written in the form of a conditional probability (Eq. (3)), \[\mathbf{R}\succ p\left(\mathbf{R};\mathbf{s}\right) \tag{3}\] Equation (3) provides a general statistical viewpoint to the FGR problem, and exiting FGR methods can be categorized according to how they choose and sample from \(p(\mathbf{R};\mathbf{s})\). We then formally decompose the FGR problem into two subproblems: i) defining a reasonable reconstruction distribution \(p(\mathbf{R};\mathbf{s})\), and ii) efficiently drawing samples from \(p(\mathbf{R};\mathbf{s})\). For the first task, a natural selection of \(p(\mathbf{R};\mathbf{s})\) is given by Eq. (4), \[p\left(\mathbf{R};\mathbf{s}\right)\!=\!\frac{e^{-\beta U\left(\mathbf{R} \right)}\delta\left(\mathbf{s}\!-\!s\left(\mathbf{R}\right)\right)}{Z\left( \mathbf{s}\right)} \tag{4}\] where \(Z\left(\mathbf{s}\right)\!=\!\exp\!\left(-\beta F\left(\mathbf{s}\right)\right)\) is the marginal partition function given a reference \(\mathbf{s}\). In terms of physics, \(p(\mathbf{R};\mathbf{s})\) defined in Eq. (4) allows a CG structure to be reconstructed into thermodynamically favourable FG structures according to the Boltzmann distribution. We thus name Eq. (4) as the _thermodynamic consistency principle_ for FGR, in analogy with the thermodynamic consistency principle for CG (Eqs. (1-2)). The remaining task is to draw samples according to Eq. (4). Conventionally, this is done by restrained or targeted MD [6, 24]. However, after applying the restraints (typically in harmonic forms), computing the partition function in Eq. (4) becomes intractable. Alternatively, sampling according to Eq. (4) can be translated as a conditional generative learning problem, which has been intensively investigated by the machine learning community over topics like image super-resolution [25, 26]. Therefore, we may turn the sampling problem into a more tractable optimization task and employ proper deep learning techniques to achieve this goal [27]. ### Deep generative learning for fine-grained reconstruction In line with deep generative learning such as generative adversarial networks (GAN) [28], we start with a "generator", e.g., a neural network model parametrized by \(\theta\) which generates samples \(\mathbf{R}\) according to an optimizable generative distribution \(q_{\theta}(\mathbf{R};\mathbf{s})\). 
Usually sampling from \(q_{\theta}(\mathbf{R};\mathbf{s})\) is done via the re-parametrization trick [28, 29]: \[\mathbf{R}=f_{\theta}\left(\mathbf{z};\mathbf{s}\right),\qquad\mathbf{z}\sim q\left(\mathbf{z};\mathbf{s}\right)\]
where \(f_{\theta}\) is a function transforming a random variable \(\mathbf{z}\) into a configuration sample \(\mathbf{R}\), and \(\mathbf{z}\) usually comes from a tractable prior distribution (or base distribution) like a standard normal. We then aim to tune the model parameters \(\theta\) so that the resulting \(q_{\theta}\) is identical or close enough to the target reconstruction distribution \(p(\mathbf{R};\mathbf{s})\) defined in Eq. (4). To achieve this goal, we can minimize a strict divergence, for example, the Kullback-Leibler divergence \(D_{\text{KL}}\) (Eq. (S1); see SI for more details), between the generative distribution and the reconstruction distribution. As in variational inference [30], \(q_{\theta}\) can be optimized by minimizing \(D_{\text{KL}}\big{(}q_{\theta}(\mathbf{R};\mathbf{s})||p(\mathbf{R};\mathbf{s})\big{)}\) following the gradient in Eq. (5): \[\nabla_{\theta}D_{\text{KL}}\left(q_{\theta}\mid p\right)=\mathbb{E}_{\mathbf{R}\sim q_{\theta}}\big[\nabla_{\theta}\log q_{\theta}\left(\mathbf{R};\mathbf{s}\right)+\beta\nabla_{\theta}U\left(\mathbf{R};\mathbf{s}\right)\big] \tag{5}\] Note that computing this gradient can be done without calculating the partition function \(Z\big{(}\mathbf{s}\big{)}\), thus forgoing sampling from \(U(\mathbf{R})\). Given access to a potential energy function \(U(\mathbf{R})\) such as a force field, optimization of Eq. (5) can be done without data, so we call Eq. (5) the _energy-based_ training objective (see more details in SI). On the other hand, if samples \(\{\mathbf{R}_{i}\}_{i=1,...N}\) are drawn via FG simulations, yielding a paired dataset \(\mathcal{D}=\big{\{}\big{(}\mathbf{R}_{i};s(\mathbf{R}_{i})\big{)}\big{\}}\), we can also minimize the reversed KL divergence \(D_{\text{KL}}\big{(}p(\mathbf{R};\mathbf{s})||q_{\theta}(\mathbf{R};\mathbf{s})\big{)}\), which is called the _data-based_ training objective and is equivalent to maximum likelihood estimation (MLE), to optimize the generative distribution: \[\begin{split}\nabla_{\theta}D_{\text{KL}}\left(p\mid q_{\theta}\right)&=\mathbb{E}_{\mathbf{R}\sim p}\left[-\nabla_{\theta}\log q_{\theta}\left(\mathbf{R};\mathbf{s}\right)\right]\\ &\approx\mathbb{E}_{\mathbf{R}\in\mathcal{D}}\left[-\nabla_{\theta}\log q_{\theta}\left(\mathbf{R};\mathbf{s}\right)\right]\end{split} \tag{6}\] In order to optimize \(q_{\theta}\) according to either of the two objectives or both (Eq. (S5) in SI), we need to conveniently draw samples from \(q_{\theta}\) and express \(\log q_{\theta}\) in a closed form. Both issues can be solved if \(q_{\theta}\) is parametrized by a deep bijective or invertible model (such as the normalizing-flow models in the machine learning literature) and one makes use of the _change-of-variable_ formula [31].
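To make this concrete, here is a minimal sketch of a conditional bijector and its log-density. A single conditional affine (scale-and-shift) map is used purely for illustration, in place of the deeper flow architectures (e.g., FFJORD) employed in the paper, and the network sizes are arbitrary assumptions.

```python
import math
import torch
import torch.nn as nn

class ConditionalAffineBijector(nn.Module):
    """R = f_theta(z; s): an elementwise affine map whose log-scale and shift
    are predicted from the CG variable s, so the Jacobian is diagonal and the
    change-of-variable formula reduces to a sum of log-scales."""

    def __init__(self, dim_R, dim_s, hidden=64):
        super().__init__()
        self.dim_R = dim_R
        self.net = nn.Sequential(
            nn.Linear(dim_s, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * dim_R))

    def forward(self, z, s):
        log_scale, shift = self.net(s).chunk(2, dim=-1)
        R = z * torch.exp(log_scale) + shift
        log_det = log_scale.sum(dim=-1)          # log |det(dR/dz)|
        return R, log_det

    def log_q(self, z, log_det):
        # log q_theta(R; s) = log q(z; s) - log |det(dR/dz)|,
        # with a standard-normal base distribution for z.
        log_base = -0.5 * (z ** 2).sum(dim=-1) \
                   - 0.5 * self.dim_R * math.log(2 * math.pi)
        return log_base - log_det
```

Drawing a reconstruction then amounts to sampling \(\mathbf{z}\) from the base distribution and calling `forward(z, s)`; deeper flows simply stack such bijective blocks.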
Specifically, given a function \(f_{\theta}\) that maps a random variable \(\mathbf{z}\)_bijectively_ to \(\mathbf{R}\) given an \(\mathbf{s}\), i.e., \(\mathbf{R}=f_{\theta}\left(\mathbf{z};\mathbf{s}\right)\), and assuming that \(\mathbf{z}\) comes from a tractable Gaussian base distribution \(q\left(\mathbf{z};\mathbf{s}\right)\) conditioned on \(\mathbf{s}\), we can then define a complex distribution \(q_{\theta}(\mathbf{R};\mathbf{s})\) so that the sampling from this distribution remains straightforward according to Eq. (7), \[\log q_{\theta}\left(\mathbf{R};\mathbf{s}\right)=\log q\left(\mathbf{z}; \mathbf{s}\right)-\log\left|\det\left(\frac{\partial f_{\theta}}{\partial \mathbf{z}}\right)\right| \tag{7}\] where \(\det(\partial f/\partial\mathbf{z})\) is the determinant of the Jacobian matrix \(\partial f/\partial\mathbf{z}\). In addition to bijective models, various approximations for the likelihood in Eq. (6) exist in literature, and they can be used for data-based training as well [29, 32]. The remaining issue is how to impose the constraint \(s\big{(}\mathbf{R}\big{)}\!=\!\mathbf{s}\) when optimizing \(q_{\theta}\) (or \(f_{\theta}\)). If \(s(\mathbf{R})\) is a linear function, as in most particle-based CG models [2, 33], the constraint can be rigorously preserved by means of substitution of variables. While in a more general case where \(s(\mathbf{R})\) is non-linear, we can approximate \(\partial\big{(}\mathbf{s}-s\big{(}\mathbf{R}\big{)}\big{)}\) in Eq. (4) using smooth functions, although Lagrangian methods [34, 35] can also be used. Taking energy-based training as example, when a Gaussian kernel is used to approximate the Dirac \(\delta\) function, the gradient of the loss function becomes, \[\mathbb{E}_{\mathbf{R}\sim q_{\theta}}\Big{[}\nabla_{\theta}\log q_{\theta} \left(\mathbf{R};\mathbf{s}\right)+\beta\nabla_{\theta}U\left(\mathbf{R}; \mathbf{s}\right)+\lambda\nabla_{\theta}\left|\mathbf{s}-s\left(\mathbf{R}; \mathbf{s}\right)\right|\!\Big{]}^{2}\Big{]} \tag{8}\] where \(\lambda\) characterizes the bandwidth of the Gaussian kernel. In this way the constraint is relaxed into a restraint, and the hyper-parameter \(\lambda\) can be regarded as a regularization factor which is reminiscent of the force constant defining a harmonic restraint potential applied in the restrained MD simulations. We call this restraint term _consistency regularization_ or _cycle loss_, and remark that it has other interpretations in related machine learning literature. For example, a similar term is incorporated by Cycle-GAN for unsupervised image-to-image translation [36], and used for mutual information maximization which helps overcome the mode-dropping issue in the studies of GANs [37]. ## 3 Cycle coarse graining An important relation can be drawn between CG and FGR on the basis of the two thermodynamic consistency principles, namely, Eq. (1) and Eq. (4). Given a deterministic CG rule \(s(\mathbf{R})\), the Boltzmann distribution of the FG potential \(U(\mathbf{R})\) can be factorized in the form of Eq. (9), \[\begin{split} p\left(\mathbf{R}\right)&=\frac{e^{- \beta U\left(\mathbf{R}\right)}}{Z}\\ &=p\left(\mathbf{R},s\left(\mathbf{R}\right)\right)=\int p\left( \mathbf{s}\right)\cdot p\left(\mathbf{R};\mathbf{s}\right)d\mathbf{s}\end{split} \tag{9}\] where \(p\left(\mathbf{R},s\left(\mathbf{R}\right)\right)\) denotes the joint distribution of \(\mathbf{R}\) and \(s(\mathbf{R})\), \(p(\mathbf{s})\) is given by Eq. (1) and \(p(\mathbf{R};\mathbf{s})\) by Eq. (4), respectively. 
Equation (9) shows that, to investigate the equilibrium properties of the FG model, instead of directly sampling from \(p(\mathbf{R})\) which is typically time-consuming, one can perform _factorized sampling_: First sample at a coarse-grained scale according to \(p(\mathbf{s})\), then perform FGR according to \(p(\mathbf{R};\mathbf{s})\) and retrieve finer details. Factorized sampling thus corresponds to a general multi-scale sampling approach. Intuitively, Eq. (9) would not hold unless that the motions of CG variable \(s(\mathbf{R})\) can be decoupled with other DoFs, echoing the common practice that only the most slowly-varying DoFs shall be chosen as CG variables. One intriguing advantage of factorized sampling lies in its ability to "interpolate" or "extrapolate" along the CG variables. Consider that \(s(\mathbf{R})\) is a characteristic chemical reaction coordinate, Eq. (9) then allows us to explore the reaction coordinate in a rare-event-free manner, and can be further boosted by existing enhanced sampling methods [38-41]. Although such interpolated trajectories can do not necessarily correspond to the true reaction pathways, however, many approaches can be implemented to retrieve the correct kinetics or transition pathways given the reconstructed free energy profile [42,43]. Finally, we notice that solving the FGR problem defined in Eq. (4) simultaneously yields a novel solution to the CG problem in Eq. (1). Specifically, given a bijective generator \(f_{\theta}(\mathbf{z};\mathbf{s})\) corresponding to a generative distribution \(q_{\theta}(\mathbf{R};\mathbf{s})\) (Eq. (7)) which minimizes \(D_{\text{KL}}(q_{\theta}||p)\) or \(D_{\text{KL}}(p||q_{\theta})\), the _variational free energy_\(F_{\theta}(\mathbf{s})\) in the form of Eq. (10) is a good approximation to the ground-truth free energy \(F(\mathbf{s})\) in Eq. (2), \[-\log p_{\theta}\left(\mathbf{s}\right)\coloneqq\beta F_{\theta}\left(\mathbf{ s}\right) \tag{10}\] where \(F_{\theta}(\mathbf{s})\) is an upper bound to \(F(\mathbf{s})\) (see Supplemental Texts in SI for derivation of Eq. (10)). As a result, we can approximate the free energy or construct CG potential function of \(\mathbf{s}\) through an optimized FGR generator \(f_{\theta}\) according to Eq. (10). The assembled training and inference protocol of CCG is summarized in SI and Algorithm S1. In summary, by solving the FGR problem, we can obtain a generative model \(q_{\theta}(\mathbf{R};\mathbf{s})\) or \(f_{\theta}(\mathbf{z};\mathbf{s})\) which reconstructs FG structures according to the CG variables (Eq. (4)). On the other hand, the optimized FGR model \(q_{\theta}\) in turn gives rise to a CG potential \(F_{\theta}(\mathbf{s})\) (Eq. (10)), without calculating the mean forces or sampling the rare events explicitly. Finally, combining \(F_{\theta}\) and \(q_{\theta}\), we can perform efficient multiscale sampling over a complex \(U(\mathbf{R})\) in a factorized fashion (Eq. (9)). This workflow forms a cycle between CG and FGR, hence, is named altogether as Cycle Coarse Graining (CCG), and provides a novel and self-consistent framework for multiscale molecular modelling. ## III Results ### Benchmark cycle coarse graining on numerical potentials We first benchmarked CCG on a 2-dimensional numerical model and illustrated how it works with different settings. The potential energy surface \(U(x,y)\) of Tiwary-Berne model [44] is shown in Fig. 1a, which consists of three local minima. 
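Before going further into the benchmark, it may help to sketch what one stochastic training step on the energy-based objective (Eqs. (5) and (8)) looks like. This is only a schematic under assumed interfaces: a bijector following the sketch given earlier, and user-supplied callables `potential_U` (returning a per-sample energy) and `cg_map_s` (the CG mapping), rather than the FFJORD-based setup actually used for this model.

```python
import torch

def energy_based_step(bijector, optimizer, potential_U, cg_map_s, s_batch,
                      beta=1.0, lam=1e-2):
    """One step on the energy-based objective of Eqs. (5)/(8):
    minimize E_z[ log q_theta(R; s) + beta*U(R) + lam*||s - s(R)||^2 ]."""
    z = torch.randn(s_batch.shape[0], bijector.dim_R)
    R, log_det = bijector(z, s_batch)                 # R = f_theta(z; s)
    log_q = bijector.log_q(z, log_det)                # change-of-variable density
    cycle = ((s_batch - cg_map_s(R)) ** 2).sum(dim=-1)  # consistency regularization
    loss = (log_q + beta * potential_U(R) + lam * cycle).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the expectation is reparametrized through \(\mathbf{R}=f_{\theta}(\mathbf{z};\mathbf{s})\), no sampling from the Boltzmann distribution of \(U(\mathbf{R})\) is required, which is the rare-event-free property exploited below.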
The optimal linear reaction coordinate \(s=x\cos\alpha+y\sin\alpha\) (where \(\alpha=81.5^{\circ}\) is the skewing angle from the \(x\)-axis) for this model was derived by Tiwary et al. [44] (dashed line in Fig. 1a), and computing the potential of mean force (PMF, or free energy) along \(s\) entails time-consuming simulations via either Monte Carlo or Langevin dynamics. Although simple, this model is characteristic of many biophysical systems composed of multiple metastable states in which the interstate transitions are rare events. We thus treated \(s\) as the CG variable and tried to reconstruct \((x,y)\) given the 1-dimensional \(s\). We first performed energy-based training for FGR according to Eq. (5). We adopted FFJORD [45], a type of deep bijector, to model the generative distribution \(q_{\theta}(x,y;s)\), and randomly sampled \(s\) according to a uniform distribution on \([-0.5,1.5]\). Since \(s\) is a linear function of \((x,y)\), we resorted to substitution-of-variables instead of consistency regularization, rigorously satisfying the constraint (see more details about training and model setup in SI). Given the optimized \(q_{\theta}\), we computed the variational free energy along \(s\) according to Eq. (10). Figure 1b shows the resulting variational free energy profile learned by CCG, which was obtained without performing rare-event sampling or requiring transitions between metastable states. However, the accuracy is remarkably good compared to the reference PMF, which is computed by integrating over the true potential energy surface, showing that CCG enables free energy calculation or coarse graining in a rare-event-free manner. We also benchmarked data-based FGR. To better mimic real-world cases, we assumed that the accessible data of \((x,y)\) are distributed off equilibrium. To do so, we ran a Langevin dynamics simulation at much higher temperature and collected the samples as shown in Fig. S1b; the corresponding data distribution along \(s\) is shown by the green line in Fig. 1c, which deviates dramatically from the reference as expected. Besides, we tested the strength of the cycle loss (\(\lambda\) in Eq. (8)) over a range of values starting from \(10^{-4}\) and found that training was robust with respect to the choice of \(\lambda\). Particularly, even if the regularization strength was set to be rather weak (\(\lambda=10^{-4}\)), the consistency regularizer still converged much faster than the generative objective, and achieved negligible errors after training (Fig. S1a), indicating that a small \(\lambda\) would suffice for the relaxed FGR objective.

Figure 1: Illustration of cycle coarse graining (CCG) on a numerical model. **(a)** The potential energy surface (PES) of the model, shown in a colored contour plot. The dashed line indicates the linear reaction coordinate \(\mathbf{s}\) with a skewing angle \(\alpha\). **(b)** The potential of mean force (PMF) computed by energy-trained CCG (red line) compared to the ground-truth reference (black dashed line). **(c)** The potential of mean force computed by data-trained CCG (red line) compared to the ground-truth reference (black dashed line). Distribution of the off-equilibrium data is also shown (green line). **(d)** Generated data through factorized sampling using the energy-trained CCG model, colored according to the CG variable \(\mathbf{s}\) during FGR. The PES is shown as grey contours in the background.

Given that the training data is off-equilibrium, we optimized FFJORD according to Eq. (S16), a modified version of Eq.
(6) with a reweighting trick, and applied a correction to the variational free energy according to Zwanzig's free-energy perturbation theory [46] (Eq. (S17); see SI for more details). Figure 1c shows that the resulting free energy profile along \(s\) agrees well with the reference, demonstrating that CCG can rescue free energy calculations even if the training data is distributed far off equilibrium. To generate samples from \(U(x,y)\), we can perform factorized sampling based on the CG potential \(F_{\theta}(s)\) according to Eq. (9). We first conducted Monte Carlo sampling of \(s\) according to \(F_{\theta}(s)\), then performed FGR with an optimized \(q_{\theta}(x,y;s)\). The generated samples from an energy-trained \(q_{\theta}\) are shown in Fig. 1d and conform to the correct Boltzmann distribution. Similar results were also obtained for models trained with the data-based objective (Fig. S1c). Finally, we can generate fake trajectories connecting metastable states by interpolating or extrapolating the CG variable \(s\), and we term this technique "_trajectory interpolation_". Trajectory interpolation can be done simply by fixing the random variable \(\mathbf{z}\) of the generative model \(f_{\theta}(\mathbf{z};\mathbf{s})\) (Eq. (7)) while varying the conditional variable \(\mathbf{s}\) as desired, then recording the corresponding output of the model \(\mathbf{R}=f_{\theta}(\mathbf{z};\mathbf{s})\). By selecting different random variables \(\mathbf{z}\) and performing trajectory interpolation for each \(\mathbf{z}\), one can obtain an ensemble of fake trajectories. We showcased such a fake trajectory generated by interpolation in Fig. S1d, which connects all three metastable states and passes through the barriers between them, implying the possible application of this technique for investigating the transition states of chemical reactions.

**Fig. 2** Multiscale fine-grained reconstruction for Chignolin. **Row 1:****(a)** The PMF along the CG variable \(s\), i.e., RMSD with respect to the native structure. **(b) to (f)** Five selected RMSD values at which the FGR results are inspected; **(b), (c)** and **(d)** correspond to unfolded states, **(e)** to the folding transition state and **(f)** is the folded state. **Row 2:****(a)** Ca-structures are reconstructed given specified RMSD values. **(b) to (f)** show reconstructed C\(\alpha\) structures (opaque magenta) by trajectory interpolation at corresponding RMSD in Row 1; The best aligned MD structure is shown in transparency. **Row 3:****(a)** Structures with backbone heavy atoms (BH-structures) are reconstructed from Ca-structures. **(b) to (f)** show BH-structures reconstructed from the corresponding Ca-structures in Row 2; The best aligned MD structure is shown as transparent yellow ribbons.
One conventional route to such reconstructions is targeted MD, which generates structures under restraints of RMSD. However, targeted MD is very time-consuming, and depends severely on initial conditions and other hyperparameters like the restraint strength; worse still, it can guarantee neither the diversity of reconstructed structures, nor the consistency with respect to the RMSD values. In contrast, we will show that CCG can efficiently reconstruct diverse FG samples of high quality and consistency. We performed data-based training of a generative model \(\mathbf{r}=f_{\theta_{\mathbf{1}}}(\mathbf{z};\mathbf{s})\) (Eq. (7)) using all-atom trajectories contributed by Lindorff-Larsen et al. [48] and applied consistency regularization (see more training details in SI). After training is done, we can generate C\(\alpha\) protein structures at a certain RMSD by feeding a random variable \(\mathbf{z}\) (drawn from a standard normal distribution) and the specified \(\mathbf{s}=\text{RMSD}\) to \(f_{\theta_{\mathbf{1}}}(\mathbf{z};\mathbf{s})\). Figure 2 Row 1 shows the equilibrium free energy profile along \(\mathbf{s}\) computed from the long MD simulations (Column a). Diverse structures can be reconstructed at any specific \(\mathbf{s}\) by means of CCG, as shown in Row 2 of Figure 2. Particularly, we selected five milestones along the RMSD profile (Columns \(\mathbf{b}\) to \(\mathbf{f}\) in Fig. 2 Row 1) at which the reconstructed structures were inspected. Three of the five states (Columns \(\mathbf{b}\) to \(\mathbf{d}\)) correspond to unfolded states which exhibit large RMSD with respect to the native structure, one state (Column \(\mathbf{f}\)) corresponds to the folded structure, and one (Column \(\mathbf{e}\)) resides near the "transition state" of folding. Figure 2 Row 2 shows a fake trajectory interpolated along the five selected milestones: when RMSD is relatively large, extended conformations are generated; the hairpin-like structure forms near the transition state and a folded structure is yielded when RMSD approaches zero. We also verified that the generated structures given by CCG can be aligned reasonably with a counterpart found in real MD simulations, further validating the quality of these reconstructed structures. Next, we performed CCG at another scale, where backbone structures (denoted by \(\mathbf{R}\)) are reconstructed from a given C\(\alpha\) structure \(\mathbf{r}\), with all backbone heavy atoms being added. Since this is a linear FGR problem, we trained the generative model \(\mathbf{R}=f_{\theta_{\mathbf{2}}}(\mathbf{z};\mathbf{r})\) according to the data-based objective without applying consistency regularization. Now these two FGR models can be cascaded to generate FG structures at multiple scales given a certain RMSD: we first specified an RMSD value and generated C\(\alpha\) structures \(\mathbf{r}\) with \(f_{\theta_{\mathbf{1}}}(\mathbf{z};\mathbf{s})\), then generated backbone structures \(\mathbf{R}\) with \(f_{\theta_{\mathbf{2}}}(\mathbf{z};\mathbf{r})\). Put together, backbone structures can be generated according to a certain RMSD.
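The cascading just described can be expressed in a few lines. In the sketch below, `flow_rmsd_to_ca` and `flow_ca_to_backbone` are hypothetical handles for the two trained generators \(f_{\theta_1}\) and \(f_{\theta_2}\), assumed to follow the bijector interface sketched earlier; the dimensions are placeholders.

```python
import torch

def reconstruct_backbones(flow_rmsd_to_ca, flow_ca_to_backbone, rmsd_values):
    """Cascade the two FGR generators: RMSD -> C-alpha trace -> backbone."""
    backbones = []
    for s_val in rmsd_values:
        s = torch.tensor([[float(s_val)]])                # (1, 1): the CG variable
        z1 = torch.randn(1, flow_rmsd_to_ca.dim_R)        # stage-1 latent
        ca_trace, _ = flow_rmsd_to_ca(z1, s)
        z2 = torch.randn(1, flow_ca_to_backbone.dim_R)    # stage-2 latent
        backbone, _ = flow_ca_to_backbone(z2, ca_trace)
        backbones.append(backbone.squeeze(0))
    return backbones

# For trajectory interpolation, draw z1 and z2 once outside the loop and only
# sweep the conditioning RMSD value s.
```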
Following this way, backbone structures are generated corresponding to the selected \(\mathbf{s}\) and \(\mathbf{r}\), which are shown in Fig. 2 Row 3. In Figure S2, we presented more reconstructed structures generated through cascaded trajectory interpolation. It can be found that when RMSD approaches zero, the generated structures aligned better with each other, showing convergence of conformations in the folded state. Conversely, when RMSD is large, the entropic effect dominates, and an increment of conformational diversity and flexibility is observed. It is also appealing to examine whether multiscale factorized sampling of protein backbone structures satisfy the thermodynamic consistency principle in Eq. (4). Because conformations of proteins are of particular research interest, we compared the distribution of RMSD and radius of gyration (both are global characterizations of protein conformations) of generated structures against samples from equilibrium all-atom MD simulations, as shown in Fig. 3a and 3b. Furthermore, we are often concerned with the detailed local structures of a protein, so we showed the Ramachandra plots of all the torsional angles for backbone structures generated by factorized sampling and compared them with samples from long MD simulations (Fig. 3d and 3e, Fig. S3). As Figure 3 shows, the conformations generated via factorized sampling reproduce those sampled via long MD simulations, proving the thermodynamic consistency of CCG. This implies that by virtue of CCG, knowing the PMF along a low-dimensional CG variable or collective variable like RMSD may suffice for obtaining more detailed conformational statistics of a protein. ## 3 Rare-event-free sampling for chemical reactions Since trajectory interpolation circumvents sampling of rare events, it can be particularly useful in the studies of many biophysical processes involving rare transition events. As an important case, simulation of chemical reactions is often hindered by two major challenges: 1) chemical Figure 3: Conformational distribution of Chignolin through factorized sampling. The 2D free-energy plot for RMSD (with respect to native structure) and radius of gyration is drawn for MD samples (**a**) and factorized sampling (**b**), respectively. The Ramachandra plot for the 2\({}^{\text{nd}}\) residue is drawn for MD samples (**c**) and factorized sampling (**d**), respectively. transitions are rare events due to high reaction barriers, thus a redundant number of single-point energy calculations are needed; 2) evaluation of the single-point energy involves quantum calculations which are slow and expensive. Recent advent of machine-learned potential [49, 50] could help soothe the latter issue, by providing a surrogate potential to the quantum oracle which can be evaluated much faster. However, optimization of the surrogate potential requires training labels covering all the relevant reaction phase space with reference to the oracle [51], which entails exploration over the reaction coordinates and sampling of rare events. We will show that CCG can be employed to solve these issues effectively following an active learning workflow (Fig. 4) and the training protocol is summarized in SI and Algorithm S2. As Fig. 4 illustrates, given initial configurations \(\mathbf{R}\), we first label them by calling the oracle \(U(\mathbf{R})\), then train the surrogate function \(\vec{U}(\mathbf{R})\) using these labels and update the CCG model \(q_{\theta}(\mathbf{R};\mathbf{s})\) with respect to \(\vec{U}(\mathbf{R})\). 
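A compact sketch of this alternating loop is given below; `oracle_energy`, `fit_surrogate`, `update_ccg`, and `propose_configurations` are placeholder names for the steps shown in Fig. 4, not functions defined in the paper.

```python
def active_learning_ccg(initial_R, oracle_energy, fit_surrogate,
                        update_ccg, propose_configurations, n_rounds=10):
    """Alternate between labelling with the quantum oracle, refitting the
    machine-learned surrogate potential, and updating the CCG model."""
    dataset = [(R, oracle_energy(R)) for R in initial_R]    # initial labels
    surrogate, ccg_model = None, None
    for _ in range(n_rounds):
        surrogate = fit_surrogate(dataset)                  # train surrogate U(R)
        ccg_model = update_ccg(surrogate)                   # train q_theta(R; s)
        new_R = propose_configurations(ccg_model)           # extrapolate along s
        dataset += [(R, oracle_energy(R)) for R in new_R]   # query oracle again
    return surrogate, ccg_model
```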
This allows us to quickly obtain diverse samples along \(\mathbf{s}\) without performing rare-event sampling. These samples are further queried by the oracle and form new labels for the surrogate potential. Hence, the CCG model \(q_{\theta}\) and the surrogate potential \(\vec{U}\) can be optimized in an alternating manner. When both models converge, the free energy profile along \(\mathbf{s}\) as well as molecular configurations can be directly obtained from \(q_{\theta}\) without performing MD simulations. To examine whether CCG can help study chemical reactions in a computationally cheap and rare-event-free manner, we applied this strategy (Fig. 4) to investigate a textbook reaction, the substitution between Cl' and CH\({}_{3}\)Cl (Fig. 5a) which is known to undergo a typical S\({}_{\mathrm{N}}\)2 mechanism. The positions of three reactive atoms (Cl-, Cl and C) are selected as the CG variable \(\mathbf{s}\), and we transformed the Cartesian coordinates into three internal coordinates (Fig. 5a), namely, the lengths of two C-Cl bonds (\(d_{1}\), \(d_{2}\)) and the C-Cl-C reaction angle (\(\alpha\)), which suffice to describe the relative positions of these three atoms. We trained the surrogate potential as a function of all atom positions to approximate the reference atomic forces and energies, and employed a deep bijective model as \(q_{\theta}(\mathbf{R};\mathbf{s})\) which output all atom positions \(\mathbf{R}\) including hydrogens (see SI for more details). After training, we can draw samples at given \(\mathbf{s}\) and compute the variational free energy according to Eq. (10). The free energy surface (FES) is plotted at different reaction angles (\(\alpha\)) in Fig. 5b. Because this reaction is symmetric with respect to two Cl atoms, the FES appears symmetric as expected. At \(\alpha=180^{\lx@math@degree}\) where C-Cl-C reside on a straight line, there is a unique saddle point across the FES residing at Figure 4: Rare-event-free workflow for active learning of chemical reactions via CCG. In one iteration, the all-atom structures \(\mathbf{R}\), generated by either initialization or CCG, are fed to oracle quantum potential function and yield training labels \(U(\mathbf{R})\). The surrogate machine-learned potential \(\vec{U}(\mathbf{R})\) is updated by supervision of these labels. The CCG model \(q_{\theta}(\mathbf{R};\mathbf{s})\) is then updated based on \(\vec{U}(\mathbf{R})\), and generates new samples by extrapolating the reaction coordinates \(\mathbf{s}\), which are labeled during next iteration. Figure 5: Rare-event-free sampling of S\({}_{\mathrm{N}}\)2 reaction: the substitution between Cl’ and CH\({}_{3}\)Cl. **(a)** Scheme of the reaction, where reactant, product and transition state complex (T.S.) is shown. Important structural descriptors are also illustrated, including the lengths of two C-Cl bonds (\(d_{1}\) and \(d_{2}\), respectively), Cl-C-Cl angle (\(\alpha\)) and H-C-Cl angle (\(\gamma\)). **(b)** Free-energy surface spanned by the two C-Cl bond lengths at different Cl-C-Cl reaction angles: \(\alpha=180^{\lx@math@degree}\) (left), \(\alpha=135^{\lx@math@degree}\) (right). **(c)** Snapshots of atomic structures through trajectory interpolation, including reactant (top), transition state (middle) and product (bottom); Carbons are colored by cyan, chlorides by magenta, and hydrogens by white. **(d)** Distribution of Cl-C-Cl reaction angle \(\alpha\) at different reaction stages. 
**(e)** Distribution of H-C-Cl angle \(\gamma\) in reactant (black) and transition state (red). about \(d_{1}=d_{2}\approx 2.45\) A, indicating that bond breaking is synchronized with bond forming, consistent with a Sn2 mechanism. However, when we changed \(\alpha\) to \(135^{\lx@math@degree}\), mimicking the reaction triggered by non-straight collisions (Fig. 5b), although the FES at product or reactant region does not change significantly, the saddle point region, which is key to the reactivity, is dramatically leveled up, excluding the possibility of such reaction pathways. Next, we interpolated linearly between \(d_{1}\) and \(d_{2}\) with \(\alpha\) fixed at \(180^{\lx@math@degree}\), and Fig. 5c shows the all-atom structures, which are generated by the optimized \(q_{\theta}\), of the reactant, product, and transition state along an interpolated trajectory. Noteworthy, these structures exhibit authentic chemical details. For instance, the CH\({}_{3}\)Cl molecules in reactant or product both show a tetrahedron shape in agreement with a sp3 central carbon. However, for the transition state (TS) complex, the hydrogens become planar with respect to the central carbon, and they form trigonal bipyramid together with the two Cl atoms. To verify whether the chemical details are generally preserved as reaction proceeds, we performed factorized sampling and generated all-atom structures at different \(\mathbf{s}\). The distribution of the angle \(\alpha\) was computed based on these samples (Fig. 5d). It can be seen that, when Cl' is far away from CH\({}_{3}\)Cl (i.e., \(d_{2}>4\)A), its orientation with respect to the other C-Cl bond is relatively arbitrary and isotropic. When Cl' approaches the reactive center (characterized by a shortened \(d_{2}\) value), its orientation with respect to C-Cl bond becomes more restrained. When the TS complex is formed, the C-Cl-C angle becomes highly restrained and only \(\alpha\approx 180^{\lx@math@degree}\) is allowed (i.e., three atoms in a line). Similarly, we also showed the distribution of H-C-Cl angle (\(\gamma\)) in Fig. 5e. In reactant or product, angle \(\gamma\) stays around \(110^{\lx@math@degree}\), corresponding to a sp3-hybridized carbon. But in the TS complex (defined as \(d_{1},d_{2}\leq 2.55\) A), this angle shifts to about \(90^{\lx@math@degree}\), indicating that Cl-C bond is orthogonal to the CH\({}_{3}\) planar, which also agree well with the chemical fact. ## 4 Concluding Remarks Multiscale molecular modeling is very useful in molecular science where molecular properties at large time and lengths scales are of interest. But its application is limited by two long-standing challenges: One is to construct coarse-grained models by proper abstraction of fine-scaled models; the other is to restore finer molecular details given coarse-grained configurations. Although these two problems are commonly addressed independently, in this work, we presented a theory connecting them, and developed CCG to solve both problems in a consistent manner. In CCG, we formulated fine-grained reconstruction as a probabilistic learning problem, and delivered a tractable solution to this task by means of machine learning. Through experiments we demonstrated that CCG is a reliable strategy for high throughput and high accuracy conversion of CG structures into FG ones for complex biophysical processes like protein folding and chemical reactions. Moreover, CCG provides a rare-event-free approach to coarse graining or free energy calculations. 
Specifically, trajectory interpolation enables fast exploration of the CG space that governs the slow motions of the system, whereas conventional MD simulations would be inefficient in sampling these transitions involving rare events. On the other hand, by separating the slow coarse-grained or collected motions and fast fine-grained DoFs, factorized sampling leads to a rigorous multiscale approach to expediting the investigation of equilibrium properties of complex systems. The methodology presented in this paper can be easily extended to other biophysical systems at different coarse-graining levels, such as biomacromolecules and polymers, where tuning atomistic details via human intervention is particularly labor-intensive. Besides, trajectory interpolation as well as factorized sampling makes it easy to marry other pathway-searching techniques like milestoning [52] and string method [42]. Although the generative models employed in this paper are all bijective, various quasi-invertible models [53, 54] have been developed by the machine learning community recently, and they can help extending the application scope of CCG. In addition to thermodynamic consistency discussed throughout this paper, CG or FGR pursuing correct dynamics is also an active research field. Some recent efforts proposed machine learning approaches to extract dynamic information from molecular simulation trajectories [55, 56], where CCG may also deliver helpful solutions, and we leave this direction to further studies. ## 5 Acknowledgements This research was supported by Science & Technology Innovation 2030 - New Generation of Artificial Intelligence 2022 Major Program (No. 2022ZD0115003), National Natural Science Foundation of China (22050003, 92053202, 21821004, 21927901 to Y.Q.G.). The authors thank Dr. Yi Isaac Yang and Yifan Li for useful discussion. J.Z. also thanks the Tiger supercomputer cluster in Princeton University.
2310.05293
Wait-free Trees with Asymptotically-Efficient Range Queries
Tree data structures, such as red-black trees, quad trees, treaps, or tries, are fundamental tools in computer science. A classical problem in concurrency is to obtain expressive, efficient, and scalable versions of practical tree data structures. We are interested in concurrent trees supporting range queries, i.e., queries that involve multiple consecutive data items. Existing implementations with this capability can list keys in a specific range, but do not support aggregate range queries: for instance, if we want to calculate the number of keys in a range, the only choice is to retrieve a whole list and return its size. This is suboptimal: in the sequential setting, one can augment a balanced search tree with counters and, consequently, perform these aggregate requests in logarithmic rather than linear time. In this paper, we propose a generic approach to implement a broad class of range queries on concurrent trees in a way that is wait-free, asymptotically efficient, and practically scalable. The key idea is a new mechanism for maintaining metadata concurrently at tree nodes, which can be seen as a wait-free variant of hand-over-hand locking (which we call hand-over-hand helping). We implement, test, and benchmark a balanced binary search tree with wait-free insert, delete, contains, and count operations, returning the number of keys in a given range which validates the expected speedups because of our method in practice.
Ilya Kokorin, Dan Alistarh, Vitaly Aksenov
2023-10-08T21:42:05Z
http://arxiv.org/abs/2310.05293v1
# Wait-free Trees with Asymptotically-Efficient Range Queries ###### Abstract Tree data structures, such as red-black trees, quad trees, treaps, or tries, are fundamental tools in computer science. A classical problem in concurrency is to obtain expressive, efficient, and scalable versions of practical tree data structures. We are interested in concurrent trees supporting _range queries_, i.e., queries that involve multiple consecutive data items. Existing implementations with this capability can list keys in a specific range, but do not support _aggregate range queries_: for instance, if we want to calculate the number of keys in a range, the only choice is to retrieve a whole list and return its size. This is suboptimal: in the sequential setting, one can augment a balanced search tree with counters and, consequently, perform these aggregate requests in logarithmic rather than linear time. In this paper, we propose a generic approach to implement a broad class of range queries on concurrent trees in a way that is wait-free, asymptotically efficient, and practically scalable. The key idea is a new mechanism for maintaining metadata concurrently at tree nodes, which can be seen as a wait-free variant of hand-over-hand locking (which we call _hand-over-hand helping_). We implement, test, and benchmark a balanced binary search tree with wait-free insert, delete, contains, and count operations, returning the number of keys in a given range which validates the expected speedups because of our method in practice. Data structures, Concurrent programming, Range queries ## I Introduction Tree data structures are ubiquitous in computer science, due to their high expressive power and practical versatility. For instance, in databases, index trees allow searching for an indexed key faster than traversing through all the elements. Typically, such index is implemented as B-tree [10, 15, 18], although alternate implementations are possible, such as the red-black tree [19], or the splay tree [29]. Moreover, one could use quad trees [16] to store and retrieve a collection of points in a plane, or tries [11] for fast prefix matching in strings. In this paper, we are interested in _concurrent_ implementations of fundamental tree data structures that combine theoretical and practical efficiency, with _expressivity_ in terms of the class of queries they support efficiently. Specifically, we are interested in trees supporting the following types of operations. We call a query, retrieving or modifying a single data item, a _scalar query_; and a query, involving multiple consecutive (by value) data items, a _range query_. For example, a search tree can provide the following scalar queries: * insert(key) -- if key does not exist in the tree, inserts it to the tree, otherwise, leaves the tree unmodified; * remove(key) -- if key exists in the tree, removes it from the tree, otherwise, leaves the tree unmodified; * contains(key) -- returns true if the tree contains key, false, otherwise. Also, a search tree can provide the following range queries: * collect(min, max) -- returns all the keys from the [min; max] interval from the set; * count(min, max) -- returns the number of keys from the [min; max] interval from the set. 
In addition, we would like to support aggregate range queries: for example, in a search tree storing key-value pairs, the range query range_add(min, max, delta) adds delta to all the values corresponding to the keys in a given range [min, max], whereas the range query range_sum(min, max) calculates the sum of all values corresponding to the keys in a given range [min, max]. In this work, in addition to the extensively investigated collect query (see, e.g., [8, 12]), we require the index to perform aggregate range queries (e.g., count) in an asymptotically optimal way. For example, we can use such aggregate range queries to find the number of requests to the system in a specified time range from specified users. Currently, all existing concurrent trees answer aggregate range queries in time proportional to the number of elements in the range, i.e., a count query works as count(min, max) = collect(min, max).length(). This is clearly suboptimal: in the sequential setting, augmented search trees can perform such queries in \(O\)(height) time (where height is the height of the tree), which can be exponentially faster for balanced trees. Now, we overview how to sequentially perform the count query in \(O\)(height) time for a binary search tree; a short sequential sketch is given below. Note that other aggregate range queries can be implemented similarly. For each node, we store the number of keys in its subtree. Then, we start traversing the tree from the root downwards. When we are at a node v, we check three cases. If the range of keys in the subtree of v lies inside the required range, we add the stored size of the subtree of v to the answer. If it intersects with the required range, we recurse into the children and return the sum of the results for v.left and v.right. Finally, if it does not intersect with the required range, we stop the call and return zero. (We unroll this recursion in our sequential and concurrent implementations.) In this paper, we present a scalable approach that can make any tree data structure support wait-free operations, including asymptotically efficient aggregate range queries with logarithmic amortized time. The main idea is that the execution of an operation Op by a process P begins by inserting the descriptor of Op into the root queue root.Queue and obtaining a timestamp. Then the process P helps to perform all pending operations in the queue, applies its own operation, and proceeds recursively to the child nodes, applying the same pattern. Thus, the process traverses the tree downwards, from the root to the appropriate lower nodes, at which the operation (e.g., an insertion of a new data item, or a removal of an existing one) should be performed. This method can be seen as a _wait-free_ version of the classic _hand-over-hand locking_ technique [23], where instead of blocking we ask processes to help perform the work that precedes them in the queue. We call this method _hand-over-hand helping_. In the following, we describe this construction in detail, using a binary search tree as a running example. Finally, we provide a practical implementation of such a tree, supporting insert, delete, contains, and count operations. We validate that our design permits the efficient implementation of various types of range queries while achieving non-trivial scalability.
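To make the sequential baseline concrete, the following is a minimal Python sketch of the augmented count described above. It is our own illustration, not the authors' Kotlin code: the node representation and the propagation of key bounds are illustrative choices, and the recursion is not unrolled here.

```python
import math

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        # augmentation: number of keys stored in this subtree
        self.size = 1 + subtree_size(left) + subtree_size(right)

def subtree_size(node):
    return node.size if node is not None else 0

def count(node, lo, hi, node_lo=-math.inf, node_hi=math.inf):
    """Number of keys in node's subtree lying in [lo, hi]; visits O(height) nodes.

    node_lo/node_hi bound the keys that can appear in this subtree."""
    if node is None or hi < node_lo or node_hi < lo:
        return 0                      # subtree range misses [lo, hi] entirely
    if lo <= node_lo and node_hi <= hi:
        return node.size              # subtree range lies fully inside [lo, hi]
    here = 1 if lo <= node.key <= hi else 0
    return (here
            + count(node.left, lo, hi, node_lo, node.key)
            + count(node.right, lo, hi, node.key, node_hi))
```

The three branches correspond exactly to the three cases in the text: no intersection, full containment (where the stored subtree size is used), and partial intersection (where the recursion continues in the children).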
### _Related work_ **Lock-based solutions.** The easiest and the most obvious way to implement a concurrent data structure is to protect a sequential data structure with a _lock_ to guarantee mutual exclusion [25]. Such construction is not lock-free (it is not even obstruction-free) and suffers from starvation. Moreover, since a lock allows only one process to work with the data structure at a time, the resulting construction does not scale and its throughput remains low. **Linear-time solutions.** Several papers [8, 9, 12, 17, 32] address the issue of executing lock-free (and even wait-free, but with lock-free scalar queries) range queries on concurrent trees. However, the aforementioned papers address only the collect(min, max) query, returning the list of keys, located within a range [min; max]. All other range queries are proposed to be implemented on top of the collect query. For example, as we said before, the count query can be implemented as count(min, max) = collect(min, max).length(). This approach suffers from a major drawback: the collect query is executed in time proportional to the number of keys in the range. Thus, for wide ranges, such query takes \(O(N)\) time where \(N\) is the size of the tree: the number of keys in the range is almost equal to the size of the tree. This implementation is not asymptotically efficient: e.g., the count query can be executed in \(O(\log N)\) time in a sequential environment using balanced search trees. Therefore, despite being lock-free, these methods do not guarantee time efficiency, and thus cannot be used. **Persistent data structures.** There exists a solution for efficient aggregate range queries based on persistent data structures [3]. Each read-only operation (e.g., contains or count) takes the current version of the data structure and operates on it. Each update operation (e.g., insert or remove) creates a new version of the data structure without modifying the existing one and then tries to replace the old version with the new one using a Compare-And-Swap [1] (or CAS, for simplicity). If the CAS succeeds the operation finishes, otherwise the operation restarts from the very beginning. This approach is called Lock-free Universal Construction [23] and can be applied to any sequential persistent tree. As an interesting observation, this approach scales even on write-only workloads [5]. However, there are at least two drawbacks: 1) we cannot provide strong fairness guarantees -- one operation can restart infinitely often if we are not lucky enough; 2) for an update range query, the majority of computation time will be spent needlessly -- since unsuccessful CAS makes us retry the whole operation from the very beginning. For more information about this approach, we point to [5]. **Parallel augmented persistent trees.** Sun, Ferizovic, and Belloch [31] presented a persistent augmented tree that can serve a batch of operations in parallel using _fork-join_ parallelism. The paper does not propose a method of executing concurrent operations on augmented data structures. However, we can use various combining techniques [7, 21, 30] to form large batches of operations from individual concurrent updates. The main problem with this approach is that the combining techniques increase individual operation latency and, thus, are not acceptable in settings, where low operation latency is required. 
## II Overview of the approach ### _Timestamps invariant_ The main problem with the sequential algorithm for an aggregate range query presented in the introduction is that it will be incorrect if running as is in a concurrent environment. Indeed, each update operation (e.g. insert or remove) should modify not only the tree structure, but the augmentation values on the path (e.g., subtree sizes). By that, the augmentation values may become inconsistent with the tree structure. Therefore, the main purpose of our concurrent solution is to get rid of such situations by ensuring that all operations are executed in a particular order. We enforce a particular execution order by maintaining an operation queue in each node. Consider an arbitrary node v and its subtree vs. At v we maintain an _operations queue_, that contains descriptors of operations to be applied to vs (Fig. 1). These operations can, for example, insert a key to vs or remove a key from vs. We maintain the following invariant: operations should be applied to vs in the order, their descriptors were added to v queue. Note that the aforementioned invariant can be applied to the root node too: indeed, since the whole tree is just the subtree of the root operations should be applied to the tree in the order their descriptors were added to the operations queue in the root. Thus, the order, in which operation descriptors are added to the queue in the root, is exactly the _linearization order_. Thus, we may use the operations queue at the root to allocate timestamps for operations. A timestamp allocation mechanism should provide the following guarantee: if a descriptor of operation A was added to the root queue before a descriptor of operation B, then timestamp(A) \(<\) timestamp(B) should hold. We explain how to achieve it in Section II-D. We store the timestamp of an operation in the corresponding descriptor, i.e., descriptor.Timestamp field. As was stated before, operations should be applied to the tree in the order, their descriptors were added to the root descriptor queue. Therefore, one can wonder: how can we achieve parallelism, while linearizing all operations via the root queue? Note that there is no parallelism only in the queue in the root. Lower by the tree, two operations (even the modifying ones, e.g., two inserts) may be executed in parallel if they are executed on different subtrees, since on lower tree levels their descriptors will be placed to different operation queues (Fig. 2). ### _Operation execution: overview_ For simplicity of the overview, we consider only unbalanced trees for now. If we want to make our tree balanced, we can adapt the subtree rebuilding approach (we provide a detailed description in Section II-E). The study of other concurrent balancing strategies we leave for the future work. The execution of an operation Op by a process P (we call such process P the _initiator_ process) begins with inserting the descriptor of Op into the root queue and obtaining Op timestamp. In Section II-D, we describe, how the root queue with timestamp allocation may be implemented. After that, the initiator process starts traversing the tree downwards, from the root to the appropriate lower nodes, at which the operation (e.g., an insertion of a new data item, or a removal of an existing one) should be performed. In each visited node v some additional actions should be performed in order to execute Op properly. 
For example, during the count query the size of v's subtree can be added to the result, and during insert or remove operations pointers to v's children and v's subtree size can be changed. We call the process of performing these necessary actions -- an _execution of operation Op in node v_. As stated in the previous subsection, operations should be applied to v's subtree in the order their descriptors appear in v's operations queue. Thus, if the descriptor of Op is not located at the head of v's queue the initiator process P has to wait before executing Op in node v (Fig. 3). The execution of Op in node v cannot begin until execution of all the preceding operations in node v is finished. Fig. 1: Node v has an operations queue with descriptors of three operations: Op\({}_{1}\), Op\({}_{2}\) and Op\({}_{3}\). These three operations should be applied to vs in the order of descriptors in the queue: first Op\({}_{1}\), then Op\({}_{2}\), and, finally, Op\({}_{3}\) Fig. 2: Two operations can be executed in arbitrary order (even in parallel) if they operate on different subtrees, since on lower tree levels they are placed to different queues To make the algorithm wait-free we use the _helping_ mechanism (e.g., [20, 27]): instead of merely waiting for the Op descriptor to move to the head of v queue, P helps executing in node v the operation from the head of the queue -- D0 in the example above. Thus, even if the initiator process of D0 is suspended, the system still makes progress. As discussed later, while helping to execute operations D0,D1,... in node v the process P removes descriptors of these operations from the head of v's queue and inserts them to queues of appropriate v's children. Thus, while helping other processes execute their initiated operations in v, P moves Op descriptor closer to the head of v queue. Once P helped all preceding operations to finish their execution in node v, it can finally execute its operation Op in v (note that some other process may help executing Op in v, just like P previously helped executing D0 in v). The process of executing an operation Op in a node v consists of the following actions: 1. Determine the set of child nodes C, in which Op execution should continue. For example, an execution of the count query on a binary search tree may continue in either single child or both children, as explained in Section I. 2. For each child C from the set C: 1. Modify the state of C (e.g., a size of C's subtree), if necessary; 2. Try to insert Op descriptor to the end of C's operations queue, thus allowing Op to continue its execution at lower levels of the tree. 3. Remove Op descriptor from the head of v's queue. Note, that during the execution of operation Op in node v the said operation only modifies states of v's children, not v itself. Thus, no operation can ever modify the root state, since the root is not a child of some other node. We overcome that issue by the introduction of the _fictive root_. This fictive root does not contain any state and has only one child -- the real tree root. The only purpose of the fictive root is to allow operations to modify the state of the real root. The state of the real root can be modified by operation Op while Op is being executed in the fictive root, since the real root is the child of the fictive root. In Section II-C, we describe how an operation Op should be executed in a node v. 
Since we now force processes to help each other, an operation Op, initiated by process P, can be executed in any node v by some other helper process. Thus, we need to provide a mechanism for the process P by which it distinguishes between the two following situations: * Operation Op has not yet been executed in node v. Thus, the descriptor of Op is still located somewhere in v's queue. In that case, P needs to continue executing operations from the head of v's queue in node v. * Operation Op has already been executed in node v. In that case, P can proceed to execute Op in lower nodes of v's subtree. We use timestamps to distinguish between these two situations. We describe this usage of timestamps by formulating and proving the _timestamps increasing property_. **Theorem 1**.: _In each queue, operation timestamps form a monotonically increasing sequence. More formally, if at any moment we traverse any queue Q from the head to the tail and obtain t1, t2, ..., tn -- the sequence of timestamps of the descriptors located in Q -- then t1 < t2 < ... < tn will hold._ We prove that theorem in Appendix D. As follows from that property, the initiator process P can easily learn whether its operation Op has been executed in node v using the following simple algorithm: * if the queue is empty -- we conclude that Op has been executed in v; * if the queue is not empty, we compare the timestamp of the descriptor at the head of v's queue with the timestamp of Op: if v.Queue.Head.Timestamp > Op.Timestamp, we conclude that Op has been executed in v; otherwise, we conclude that Op has not been executed in v yet. Therefore, we can implement the algorithm of executing all operations from v's queue up to Op.Timestamp (Listing 1):

```
func execute_until_timestamp(Op, v):
    while true:
        // obtains the first descriptor in FIFO order
        head_descriptor := v.Queue.peek()
        if head_descriptor = nil:
            return
        if head_descriptor.Timestamp > Op.Timestamp:
            return
        // execute_in_node changes the states of v's children,
        // pushes head_descriptor to the child queues,
        // and removes head_descriptor from v's queue
        execute_in_node(head_descriptor, v)
```

Listing 1: The algorithm to execute all operations, up to the specified timestamp Op.Timestamp, from v's queue

Suppose the initiator process P is traversing the tree to execute operation Op and P has just finished executing Op in node v. How can P choose the next node in the traversal? It is not necessary to always continue the traversal in one of v's children, since Op can by now be finished in v's subtree by other helper processes.

Fig. 3: Process P has to wait before executing Op in node v, since only the operation D0, corresponding to the descriptor at the head of v's queue, can be executed right now in v.

To address this issue, in each operation descriptor we store a queue of nodes Op.Traverse -- the queue of nodes that must be visited during the execution of Op. The Traverse queue is maintained and used in the following way: * When any process (no matter whether it is the initiator or a helper) starts executing Op in node v, it adds to the tail of Op.Traverse all children of node v in which the execution should continue; * When the initiator process finishes the procedure execute_until_timestamp(Op, v), it removes v from the head of the Op.Traverse queue.
Note that only the initiator process can remove nodes from the Op.Traverse queue; * After the initiator process has removed the current node v from the head of Op.Traverse, it checks Op.Traverse: if it is empty, the operation is completed and the initiator returns the query result to the caller; otherwise (if Op.Traverse is not empty), the initiator continues the traversal by taking the next node from the head of Op.Traverse. Note that this queue maintenance scheme allows a node v to be inserted into Op.Traverse multiple times, since multiple helper processes may be executing Op in v's parent in parallel. However, as will be explained in Section II-C, v's state will still be modified exactly once, no matter how many times it is processed. The traversal algorithm can be implemented as in Listing 2.

```
func execute_operation(op):
    Tree.Root.Queue.push_acquire_timestamp(op)
    op.Traverse := [Tree.Root]
    while true:
        v := op.Traverse.peek()
        if v = nil:  // op is finished
            return
        execute_until_timestamp(op, v)
        op.Traverse.pop()
```

Listing 2: The algorithm for traversing the tree

Now we have to design a method that will allow the initiator process to learn the operation result when the operation is completed. The problem here is that the operation result might consist of multiple parts (e.g., a count result consists of a sum of multiple subtree sizes), and these parts (e.g., subtree sizes) may be computed by different processes, since we force processes to help each other. To allow the operation result to be assembled from these parts, in each operation descriptor we store a concurrent map Op.Processed, filled with the nodes in which the execution of Op has been finished. The size of this map is expected to be small for aggregate range queries (e.g., \(O(\log N)\)), so we can implement it in any way we want: as a wait-free queue that stores all the required nodes (maybe multiple times, which we filter out at the end of the operation), with a Wait-free Universal Construction [22], or, finally, as a wait-free map. The Op.Processed map uses tree nodes as its keys. To allow this, we augment each tree node v with an identifier, stored in the v.Id field. Each node receives its identifier at the moment of creation, and the node identifier does not change throughout the node's lifetime. The node identifiers must be unique. We can achieve that property using a UUID [4] generation procedure or by incrementing a fetch-and-add [2] counter. The values of the Op.Processed map store parts of the result: for example, for the count query we store in Op.Processed the node identifiers together with the sizes of their subtrees that should be added to the result of the query. Before removing the Op descriptor from the head of v's queue, we try to add v.Id along with a value x, corresponding to the part of the answer for the node v, into the Op.Processed map. If the key v.Id already exists in the Processed map, we leave the Op.Processed map unmodified, without changing the value associated with v.Id. We never modify the value associated with node v, since stalled processes can calculate the value incorrectly. Indeed, consider the following scenario: 1. Descriptor D, corresponding to a count operation with timestamp 42, is located at the head of v's queue; 2. Process P reads D from the head of v's queue; 3. Process P is suspended by the OS; 4. Process R reads D from the head of v's queue; 5. Process R determines that the size of v's left subtree should be added to the result; 6.
Process R reads the size of v's left subtree (say, it equals 5) and adds the key-value pair \(\langle\) v.Id, 5 \(\rangle\) to the Processed map; 7. A new key is inserted into v's left subtree by an insert operation with timestamp 43, making v's left subtree size equal to 6; 8. Process P is resumed by the OS; 9. Process P reads the size of v's left subtree (now it equals 6) and tries to add the key-value pair \(\langle\) v.Id, 6 \(\rangle\) to the Processed map. On step (9) we should not modify the value corresponding to the node v, since the value 6 reflects the modification performed by the insert operation with timestamp 43. The count operation has timestamp 42; thus, the count result should not include the key inserted by the insert operation with timestamp 43. When the operation execution is finished (i.e., Op.Traverse is empty) we traverse the Processed map, forming the query result from the partial results associated with the visited nodes. Note that it is safe to traverse the Processed map -- indeed, the Processed map can no longer be modified concurrently, since the query execution is finished.

### _Detailed description of an execution in a node_

In Section II-B, we explained how the execution of an operation works in general. Now, we go into the details of the execution in a node. The process of executing an operation Op in a node v consists of the following actions: * Determine the set of child nodes C, in which Op execution should continue. * For each child c from the set C: 1. Insert c into the Op.Traverse queue; 2. Modify the state of c (e.g., the size of c's subtree), if necessary; 3. Insert the Op descriptor into the operations queue of c, thus allowing Op to continue its execution at lower levels of the tree. * Try to add v.Id along with a value x, corresponding to the part of the answer for the node v, into the Op.Processed map. * Try to remove the Op descriptor from the head of v's queue if it is still there. The removal of the Op descriptor from the head of v's queue should be done after the insertion of the Op descriptor into the child queues and the modification of the child states are finished. Otherwise, the execution of later operations in v may start before the execution of Op in v is finished, which may break the main invariant (Section II-A). Inserting the descriptor into child queues, modifying child states, and removing the descriptor from the parent queue should happen exactly once, no matter how many processes are working on the descriptor concurrently. Exactly-once insertion into and removal from queues is handled by our implementation of concurrent queues (see Section II-D). Queues provide two procedures: * push_if inserts the descriptor at the tail of the queue only if it has not been inserted yet; otherwise, the queue is left unmodified. * pop_if removes the descriptor from the head of the queue only if it has not been removed yet; otherwise, the queue is left unmodified. The main problem in the execution of an operation Op in a node v is the proper handling of the children's states: we should be able to work with each state atomically, and we should modify each state exactly once, no matter how many processes are executing Op in v. The atomicity problem comes from the fact that the state may consist of multiple fields. To solve this problem, we do not store the state directly inside the node -- instead, we store the immutable state in the heap and the node stores the pointer S_Ptr to it. The state, located in the heap, is considered immutable and is never modified. To modify the node state, we simply do the following: 1.
create a structure corresponding to the modified state, with an arbitrary set of fields changed; 2. place the modified state somewhere in the heap; 3. change node.S_Ptr so that it points to the new state. To read the state atomically, we simply read the S_Ptr register. After that, we can safely access any field of the state structure pointed at by the fetched pointer, without worrying that the state structure is being modified concurrently by another process. Since the structure is immutable, it can never be modified by another process. Now, we return to the second problem of modifying the state exactly once. In the state we store one additional field: Ts_Mod -- the timestamp of the operation that was the last to modify the state. Thus, if the operation Op wants to modify the state of node v, we should first read the current state of v and acquire the last modification timestamp. * If Ts_Mod \(\geq\) Op.Timestamp, we conclude that v's state has already been modified according to Op. In that case, we simply do not try to modify v's state according to Op anymore. * Otherwise, we create a new state (with Ts_Mod = Op.Timestamp) and try to change the state pointer using CAS(&v.S_Ptr, cur_state, new_state). We then go to the next step, no matter what the CAS result was. If the CAS returned true, we have successfully modified the state; otherwise (if the CAS returned false), some other process has already modified the state according to Op. Thus, the state is modified by each executed operation exactly once. Indeed, even if some stalled process tries to modify node v with an already applied operation Op, the node state will not be changed, since the last modification timestamp is greater than or equal to Op.Timestamp. Therefore, the algorithm can be implemented in the following way (Listing 3); a small illustrative sketch of this copy-on-write update is also given after the queue-structure description below.

```
func execute_in_node(op, v):
    C := /* set of v's children in which execution of op should continue */
    for c in C:
        cur_state := c.S_Ptr
        op.Traverse.push(c)
        if cur_state.Ts_Mod < op.Timestamp:
            new_state := op.get_modified_state(cur_state)
            new_state.Ts_Mod := op.Timestamp
            CAS(&c.S_Ptr, cur_state, new_state)
        c.Queue.push_if(op)
    node_key := v.Id
    node_value := /* part of the result corresponding to v */
    op.Processed.try_insert(node_key, node_value)
    v.Queue.pop_if(op)
```

Listing 3: Algorithm for executing operation op in node v

### _Implementation of an operations queue_

**Queue structure.** For our purpose, we can use any practical queue algorithm as a basis for our descriptor queues, e.g., the fetch-and-add queue [33] or the practical wait-free queue [24]: the final implementation remains almost the same. However, for simplicity of presentation, we use the Michael-Scott queue. This queue is lock-free, which makes the whole algorithm lock-free. But if we make the root queue wait-free, all other queues based on the Michael-Scott queue will automatically have the same progress guarantee, due to the way we work with the descriptors. For more information about wait-freedom see Section II-F. In each node of the queue we store the descriptor in the field Data and the pointer to the next node in the field Next. Also, we have two pointers: Tail, which points to the last node of the queue, and Head, which points to the node _before_ the first node of the queue. Note that the node at the Head pointer does not store any data residing in the queue. This node is considered a dummy, and only the node at the Head.Next pointer contains the first real descriptor in the queue.
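For readers who prefer code, here is a minimal Python sketch of the copy-on-write, exactly-once state update described before Listing 3. It is ours, not the paper's Kotlin implementation; since Python has no hardware CAS on object fields, cas_state emulates it with a lock purely for illustration, and the field names are hypothetical.

```python
import threading

class NodeState:
    """Immutable snapshot of a node's metadata."""
    def __init__(self, subtree_size, ts_mod):
        self.subtree_size = subtree_size   # augmentation value
        self.ts_mod = ts_mod               # timestamp of the last operation applied

class Node:
    def __init__(self, initial_state):
        self.s_ptr = initial_state         # published pointer to the current state
        self._lock = threading.Lock()      # stand-in for an atomic pointer

    def cas_state(self, expected, new):
        """Emulates CAS(&s_ptr, expected, new); real code would use an atomic pointer."""
        with self._lock:
            if self.s_ptr is expected:
                self.s_ptr = new
                return True
            return False

def apply_at_most_once(node, op_timestamp, size_delta):
    """Apply an operation's change to a node's state at most once, keyed by timestamp."""
    cur = node.s_ptr                                  # read the immutable snapshot
    if cur.ts_mod >= op_timestamp:
        return                                        # already applied by a helper
    new = NodeState(cur.subtree_size + size_delta, op_timestamp)
    node.cas_state(cur, new)                          # a failed CAS means a helper won the race
```

The key point is that a failed CAS is never retried: it simply means another helper already installed a state whose Ts_Mod is at least the operation's timestamp.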
**Queue in the root.** As discussed in Section II-A, the operation queue in the root node should provide a timestamp allocation mechanism with the following guarantee: if the descriptor of operation A was added to the root queue before the descriptor of operation B, then timestamp(A) < timestamp(B) should hold. As stated above, we can use a slight modification of the Michael-Scott queue [27] to implement the timestamp allocation mechanism for the root queue. Each time we need to add a new descriptor to the root queue, we 1) create a new node with the descriptor; 2) take the timestamp of the tail; 3) set the timestamp of our descriptor to the incremented timestamp of the tail; 4) try to move the queue tail to the new node using CAS; 5) if the CAS is successful we stop, otherwise we repeat from step (2). In Section II-F, we show how to implement such a queue in a wait-free manner. **push_if implementation.** As discussed in Section II-C, non-root queues should provide a push_if operation that inserts a descriptor into the queue if it has not been inserted yet (otherwise, the queue should be left unmodified). The procedure is based on the Michael-Scott queue insertion algorithm [27]: we check the timestamp of the tail; if it is higher, the descriptor has already been inserted and we leave the queue unmodified; otherwise, we try to move the queue tail to the new node using CAS. **pop_if implementation.** As discussed in Section II-C, the operation queue in any node should provide a pop_if operation that tries to remove the descriptor with the specified timestamp TS from the head of the queue. If the descriptor D with timestamp TS is still located at the head of the queue, it is removed. Otherwise, the queue is left unmodified -- in this case, we assume that D was removed by some other process. We assume that at some moment D was located at the head of the queue (it may still be located at the head of the queue or it may already be removed), i.e., we never try to remove a descriptor from the middle of the queue. We can do this using the Michael-Scott queue [27].

### _Balancing strategy_

Until now, we considered unbalanced trees, which may have \(height\in\omega(\log N)\). Since most of the queries (e.g., insert, remove, contains, and count) are executed on a tree in \(\Theta(height)\) time, using unbalanced trees may result in these queries being executed in non-optimal \(\omega(\log N)\) time. Therefore, we must design an algorithm to keep the tree balanced. One possible balancing strategy is based on subtree rebuilding and is similar to the balancing strategy proposed in [6, 14, 26, 28]. The idea of this approach can be formulated in the following way: when the number of modifications in a particular subtree exceeds a threshold, we rebuild that subtree, making it perfectly balanced. For each tree node we maintain Mod_Cnt in the node state -- the number of modifications in the subtree of this node. Moreover, for each node we store an immutable number Init_Sz -- the initial size of its subtree, i.e., the number of data items in that node's subtree at the moment of node creation (a node can be created when a new data item is inserted into the tree or when the subtree where the node is located is rebuilt). We rebuild the node's subtree when Mod_Cnt > K \(\cdot\) Init_Sz, where K is a predefined constant; a small sequential sketch of the rebuild is given below. This approach makes the rebuilding take \(O(1)\) amortized time and, thus, the rebuilding does not affect the amortized total cost (see, e.g., [26]).
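The following sequential Python sketch illustrates the rebuild trigger and the construction of a perfectly balanced subtree from the collected, sorted keys. It is our own illustration: the constant value, the dictionary-based node representation, and the ts_mod parameter are illustrative choices, and the concurrent aspects of the rebuild are discussed in the text that follows.

```python
K_REBUILD = 2  # the threshold constant K from the text; the value 2 is an assumption

def needs_rebuild(mod_cnt, init_sz):
    """Rebuild once the modification count exceeds K times the initial subtree size."""
    return mod_cnt > K_REBUILD * init_sz

def build_ideal(sorted_keys, ts_mod):
    """Build a perfectly balanced subtree from sorted keys, resetting the counters."""
    if not sorted_keys:
        return None
    mid = len(sorted_keys) // 2
    return {
        "key": sorted_keys[mid],
        "left": build_ideal(sorted_keys[:mid], ts_mod),
        "right": build_ideal(sorted_keys[mid + 1:], ts_mod),
        "init_sz": len(sorted_keys),   # frozen at creation time
        "mod_cnt": 0,
        "ts_mod": ts_mod,              # timestamp marker for rebuilt nodes (see the discussion below)
    }
```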
We check whether the subtree of v needs rebuilding (and perform the rebuilding itself) before inserting an operation descriptor into v's queue and changing v's state. Therefore, we can perform v's subtree rebuilding only during the execution of some operation in v's parent. Consider a node v, its parent pv, and an operation Op that is being executed in pv and that should continue its execution in v's subtree (and, therefore, whose descriptor should be inserted into v's queue). Before inserting Op into v's queue and changing v's state, we check whether Mod_Cnt in v will exceed the threshold after applying Op to v's subtree: if so, v's subtree must be rebuilt. Note that the subtree of v can contain unfinished operations: their descriptors still reside in the queues of that subtree (Fig. 4).

Fig. 4: The subtree that needs rebuilding may contain descriptors of unfinished operations

As the first step, we should finish all these unfinished operations before rebuilding the subtree. To do so, we traverse the subtree and in each node u \(\in\) subtree(v) execute all operations residing in u's queue. After that, we again traverse the subtree of v, which no longer contains unfinished operations, and collect all the stored data items (e.g., keys or key-value pairs). Then, we build an ideally balanced subtree containing all these data items. Each node of the new subtree should be initialized with Mod_Cnt = 0 and contain a correct Init_Sz. We should set Ts_Mod of each node in the rebuilt subtree so that Op and all later operations (with timestamp \(\geq\) Op.Timestamp) can still modify the new subtree, but all the preceding operations (with timestamp < Op.Timestamp) cannot. Thus, we set Ts_Mod = Op.Timestamp - 1. After that, we take nv -- the root of the new subtree -- and try to modify the pointer that pointed at v, so that it starts to point at nv. For example, if v was the left child of pv, we execute CAS(&pv.Left, v, nv); if v was the right child of pv, we execute CAS(&pv.Right, v, nv). If the CAS returned true, we conclude that we have successfully finished the rebuilding; if the CAS returned false, we conclude that some other process completed the rebuilding before us. In either case we resume the execution of Op in pv: we read nv -- the new root of the subtree, modify nv's state, insert the Op descriptor into nv's queue (here we re-read the root of the subtree because nv can be the root of a subtree built not by our process but by some other helper process), and remove the Op descriptor from pv's queue.

### _Wait-freedom_

We now prove that our solution can be implemented efficiently with a wait-free progress guarantee. We recall that _wait-freedom_ [22] is a progress guarantee that requires all non-suspended processes to finish their execution within a bounded number of steps. **Theorem 2**.: _Each operation Op in our solution finishes within a bounded number of steps._ To prove that theorem, we recall that the execution of operation Op consists of: 1. Inserting the Op descriptor into the root queue; 2. Propagating the Op descriptor downwards, from the root to the appropriate lower nodes; 3. Executing Op in each node v on the target path. Now, we prove that each of these stages finishes within a bounded number of steps. **Lemma 1**.: _The insertion of a descriptor into the root queue finishes within a bounded number of steps._ Proof.: Our queue implementation, described in Section II-D, is lock-free, but not wait-free, since it is just a version of the Michael and Scott queue [27].
The simplest approach is to implement the wait-free root queue using the well-known Wait-free Universal Construction [22], with no implementation caveats. However, this approach has a very large overhead. We hope that some practical wait-free queue (e.g., [24, 33]) can emulate our root queue and its timestamp distribution. Unfortunately, the wait-free queue from [33] can support increasing timestamps using cell identifiers, but does not allow a simple wait-free peek function that reads the head of the queue without removing it -- this functionality is crucial for our queue in pop_if. Luckily for us, the wait-free queue from [24] supports a wait-free peek function and non-decreasing timestamps (called epochs in that paper). We can make them strictly increasing using a fetch-and-add register. To distribute the timestamps, we need a version variable and an array of size \(|P|\) that contains the current descriptors. Each descriptor has an empty timestamp variable at initialization. When performing an operation, process \(\pi\) creates a new descriptor and puts it into the corresponding cell. Then, it gets a new version from the version variable using fetch-and-add and tries to CAS the current empty timestamp in its descriptor to the obtained version. Regardless of the result of the CAS, the descriptor of \(\pi\) now has a timestamp. Then, \(\pi\) traverses the array of descriptors and replaces empty timestamps with the newly fetched version. Also, \(\pi\) saves the descriptors with timestamps smaller than the one in its own descriptor. Finally, the process tries to enqueue into the root queue all these descriptors in the sorted order of their timestamps. Thus, the algorithm works in \(O(|P|\log|P|)\) time. **Lemma 2**.: _In each tree node v on the Op traversal path, executing Op in v finishes in a finite number of steps._ Proof.: Consider an operation queue at node v (Fig. 5). Here some operations (\(X_{1}\ldots X_{K}\)) should be executed before Op, while all other operations (\(Y_{1}\ldots\)) will be executed only after the execution of Op in v is fully completed. Thus: * We help to complete only a finite number of operations in a node v, since there cannot be more than \(|P|\) operations in the queue of v before Op (where \(P\) is the set of processes executing operations); * Each operation \(X_{i}\) takes a finite number of steps to complete its execution in a node v (see Section II-C for the list of those steps). Note that in the process of executing operation Op in node v we never retry any operation (in contrast to lock-free algorithms, e.g., in [27]): for example, if the insertion of the Op descriptor into a child node cv fails, we conclude that the Op descriptor has been inserted into cv by another helper process and merely continue the execution of Op in v. Therefore, executing Op in v finishes in a finite number of steps.

Fig. 5: Operation queue structure at node v

**Lemma 3**.: _Propagating the descriptor downwards, from the root to the appropriate lower nodes, finishes within a bounded number of steps._ Proof.: Consider some operation \(Op_{2}\) such that \(Op_{2}\).Timestamp > Op.Timestamp. If both \(Op_{2}\) and Op are willing to change the very same tree node v, \(Op_{2}\) will under any conditions do it after Op, since the operations are executed in strict timestamp order (see Section II-A for details). Thus, \(Op_{2}\) cannot somehow change the structure of the tree to disrupt Op's traversal.
Therefore, Op will finish its traversal in a finite number of steps, since later operations cannot interfere with Op's traversal. Since none of the later operations can overtake Op, we note the following: * At the moment when Op begins execution, the size of the tree is \(N\) and no more than \(|P|\) concurrent processes are inserting new nodes into the tree. Thus, at the moment Op.Timestamp the size of the tree will not exceed \(O(N+|P|)\), which is definitely a finite number; * By Lemma 2, each operation takes a finite number of steps to execute in a node. Thus, the operation takes a finite number of steps to finish its traversal. Note that our rebuilding procedure does not break the wait-freedom guarantee in the proof above, since each rebuilding finishes in a bounded number of steps. **Lemma 4**.: _The rebuilding procedure finishes in a bounded number of steps._ Proof.: Indeed, the rebuilding procedure of a subtree vs consists of the following steps: * Traverse the subtree vs, collecting all unfinished operations; * Help to complete all these unfinished operations; * Collect all keys from vs; * Build an ideal tree from the collected keys. Note that only the operations that started before Op can be unfinished in vs (Fig. 6), since we execute operations in timestamp order. Therefore: 1) there is a finite set of unfinished operations in vs; 2) completing each unfinished operation takes a finite number of steps by Lemma 3; 3) vs has a finite size; thus, collecting all keys from vs and constructing a new ideal subtree also take a finite number of steps. Thus, the rebuilding completes in a finite number of steps.

### _Time cost analysis_

We now estimate the time it takes to execute an operation in our solution. **Theorem 3**.: _The amortized cost of an insert, remove, contains, or count operation on our concurrent binary search tree with rebuilding is \(O((\log N+|P|)\cdot|P|)\)._ Proof.: Suppose \(N\) is the size of the tree when the operation Op starts its execution. In a sequential setting, each of these operations takes \(O(\log N)\) time, since it visits \(O(\log N)\) nodes performing \(O(1)\) operations in each node. In the concurrent setting, up to \(|P|\) other processes can be inserting their keys into the tree concurrently with Op; thus, at the moment of Op.Timestamp the size of the tree will not exceed \(N+|P|\), and therefore the amortized number of nodes Op will traverse is \(O(\log N+|P|)\) (since the tree is balanced). In each node v, no more than \(|P|\) descriptors will be located closer to the head of v.Queue than the descriptor of our operation Op. Each operation takes \(O(1)\) amortized time to execute (the rebuilding takes \(O(1)\) amortized time, as stated, e.g., in [26]); thus, Op takes \(O(|P|)\) amortized time to finish its execution in each node. Therefore, the amortized execution cost of Op is \(O((\log N+|P|)\cdot|P|)\). **Theorem 4**.: _When the workload is uniform (i.e., each data item is equally likely to be queried), insert and remove take \(O(\log N+|P|)\) amortized time on our concurrent binary search tree with rebuilding._ Proof.: Consider the size of the root operation queue. Since there exist up to \(|P|\) processes executing operations concurrently, the size of the root operation queue is \(O(|P|)\). Let us see in which nodes these operations will continue their execution. Since each data item is equally likely to be queried, approximately half of the descriptors continue their execution in the root.Left node, and the other half continue their execution in the root.Right node.
Therefore, the expected size of the operation queue in each node of the second tree level is \(O(\frac{|P|}{2})\). Following the same reasoning, the expected size of the operation queue in each node of the third tree level is \(O\left(\frac{|P|}{2^{2}}\right)=O\left(\frac{|P|}{4}\right)\), and the expected size of the operation queue in each node of the \(k\)-th level of the tree is \(O\left(\frac{|P|}{2^{k-1}}\right)\). Since the tree is balanced, the operation traverses \(O(\log N+|P|)\) nodes. The expected amortized number of operations performed in a node at the \(k\)-th level is \(O\left(\max\left(\frac{|P|}{2^{k-1}},1\right)\right)\), since the amortized cost of executing a single operation in a node is \(O(1)\) (of course, in each node we perform at least \(O(1)\) work). Therefore, the total expected amortized cost of performing an operation is \(O\left(\sum\limits_{k=1}^{\log N+|P|}\max\left(\frac{|P|}{2^{k-1}},1\right) \right)=O(\log N+|P|)\).

Fig. 6: Unfinished operations \(O_{1},O_{2},\ldots,O_{5}\) have timestamps lower than Op.Timestamp

## III Experiments

According to the framework described in Section II, we implemented a concurrent balanced binary search tree that supports insert, remove, contains, and count queries. The code is written in Kotlin. We decided to test our data structure only against the concurrent persistent tree presented in [5], since it is the only available data structure that supports asymptotically efficient range queries (e.g., it can execute count queries in logarithmic time). We test the implementations on the following workloads: 1) a read-heavy workload that runs contains operations; 2) an insert-delete workload with half insertions and half deletions on random keys drawn from a range so that each operation is successful with a probability of approximately \(0.5\); 3) a successful-insert workload where we insert a random key from a very wide range (from \(-2^{63}\) to \(2^{63}-1\)) so that all insertions are successful with very high probability. We consider these experiments preliminary rather than exhaustive. All our experiments are performed on an Intel Gold 6240R with 24 cores. We decided to run on one socket due to the heavy memory load of our search tree. The plots show the throughput of the data structures, i.e., the number of operations completed in \(10\) seconds. Each point on the plots is obtained as an average of \(5\) separate runs. The blue lines are for our data structure, and the orange lines are for the persistent tree. **Contains Benchmark.** We fix the key range as \([1,2\cdot 10^{6}]\). At first, we initialize the data structure: each element from the range is inserted with probability \(1/2\). Then, we start \(T\) threads. Each thread, for \(10\) seconds, searches for a key taken uniformly at random from the range. As shown in Figure 7, our data structure does not have a large overhead for contains operations. **Insert-Delete Benchmark.** We fix the key range as \([1,2\cdot 10^{6}]\). At first, we initialize the data structure: each element from the range is inserted with probability \(1/2\). Then, we start \(T\) threads. Each thread, for \(10\) seconds, chooses an operation (insert/delete) uniformly at random and its argument uniformly at random from the range. As shown in Figure 8, our data structure starts off worse due to its larger overhead, but under contention it performs better than the persistent tree. **Successful-Insert Benchmark.** We initialize the data structure with \(10^{6}\) random integer elements.
Then, we start \(T\) threads. Each thread, for \(10\) seconds, inserts random integers. With very high probability each insertion is successful, which strongly penalizes the persistent tree. As shown in Figure 9, our data structure starts off worse due to its larger overhead, but under contention it performs better than the persistent tree. **Outcome.** Our experiments show that our data structure works better than the only existing solution with aggregate range queries on update-heavy workloads and has a small overhead on contains operations, while supporting efficient aggregate range queries. ## IV Conclusion We present an approach to obtain concurrent trees with efficient aggregate range queries in a wait-free manner. Our practical results validate our performance and scalability claims. We propose a number of avenues for future work. First, we can make the rebuilding collaborative [14], i.e., make different processes work together to rebuild a single subtree. Then, in order to achieve a pure \(O(\log n)\) complexity instead of the amortized one, we can use another rebuilding strategy -- the top-down rebuilding from the chromatic tree [13]. Another interesting question is how to decrease the number of allocations -- currently we use too much memory. Finally, it would be good to implement other tree data structures, e.g., quad trees or tries. Fig. 8: Insert-Delete Benchmark. Fig. 7: Contains Benchmark. Fig. 9: Successful-Insert Benchmark.
2304.00260
Gaussian Mechanism Design for Prescribed Privacy Sets in Data Releasing Systems
The data transmitted by cyber-physical systems can be intercepted and exploited by malicious individuals to infer privacy-sensitive information regarding the physical system. This motivates us to study the problem of preserving privacy in data releasing of linear dynamical system using stochastic perturbation. In this study, the privacy sensitive quantity is the initial state value of the system. For protecting its privacy, we directly design the covariance matrix of a Gaussian output noise to achieve a prescribed uncertainty set in the form of hyper-ellipsoids. This is done by correlated noise and through a convex optimization problem by considering the utility of released signals. Compared to other available methods, our proposed technique for designing the Gaussian output noise provides enhanced flexibility for system designers. As a case study, the results are applied to a heating ventilation and air conditioning system.
Teimour Hosseinalizadeh, Nima Monshizadeh
2023-04-01T08:32:16Z
http://arxiv.org/abs/2304.00260v1
# Gaussian Mechanism Design for Prescribed Privacy Sets in Data Releasing Systems ###### Abstract The data transmitted by cyber-physical systems can be intercepted and exploited by malicious individuals to infer privacy-sensitive information regarding the physical system. This motivates us to study the problem of preserving privacy in data releasing of linear dynamical system using stochastic perturbation. In this study, the privacy sensitive quantity is the initial state value of the system. For protecting its privacy, we directly design the covariance matrix of a Gaussian output noise to achieve a prescribed uncertainty set in the form of hyper-ellipsoids. This is done by correlated noise and through a convex optimization problem by considering the utility of released signals. Compared to other available methods, our proposed technique for designing the Gaussian output noise provides enhanced flexibility for system designers. As a case study, the results are applied to a heating ventilation and air conditioning system. The prescribed confusion set can be shaped to value highly privacy-sensitive state components in the Gaussian mechanism design, and practically ignore privacy-insensitive ones. This is advantageous in systems where not all state variables have similar importance in view of privacy. We prove that any confusion set described by hyper-ellipsoids can be obtained for unbiased adversaries by utilizing correlated Gaussian noise at the output. This treatment is different from other noise-based methods in Le Ny and Pappas (2013) and Murguia et al. (2021), where the confusion set is mapped to a scalar and, as we show, is predetermined by the system dynamics. The problem of finding an uncorrelated Gaussian output noise does not always admit a solution, and hence an approximation is provided. _Notation._ The set of positive and nonnegative integers and (positive) real numbers are denoted by \(\mathbb{N}\), \(\mathbb{N}_{0}\) and (\(\mathbb{R}^{+}\)) \(\mathbb{R}\), respectively. We denote the identity matrix of size \(n\) by \(I_{n}\), the zero matrix of size \(n\) by \(0_{n}\), and we drop the index whenever the dimension is clear from the context. For a square matrix \(A\), \(\operatorname{tr}(A)\) and \(\det(A)\) denote its trace and determinant; \(A^{\dagger}\) denotes its Moore-Penrose pseudoinverse; \(\operatorname{spec}(A)\) and \(\operatorname{spec}_{\neq 0}(A)\) denote the sets of its eigenvalues and nonzero eigenvalues, respectively. We denote the algebraic multiplicity of an eigenvalue \(\lambda\) of \(A\) by \(\operatorname{amult}_{A}(\lambda)\). By \(A\succ 0\) (\(\succeq 0\)), we mean that \(A\) is a positive (semi)definite matrix. By \(X\sim\mathcal{N}_{n}(\mu,\Sigma)\) we denote the random variable \(X\) that has the normal distribution with density function \(f(x)=(2\pi)^{-n/2}\det(\Sigma)^{-1/2}\exp(-\frac{1}{2}(x-\mu)^{\top}\Sigma^{-1}(x-\mu))\), where \(\mu\in\mathbb{R}^{n}\) and \(\Sigma\succ 0\) are the mean and covariance, respectively, and \(x\) is a realization of \(X\). The rest of the paper is organized as follows. In Section 2, we formulate the problem of interest; Section 3 designs the output Gaussian mechanism; in Section 4, we present an optimization that accounts for the performance of the Gaussian noise; Section 5 provides a case study; and finally, Section 6 concludes the paper.
## 2 Problem Formulation We consider linear dynamical systems described by equations of the form \[\begin{array}{c}x(k+1)=Ax(k)+Bu(k)\\ y(k)=Cx(k)+Du(k),\quad k\in\mathbb{N}_{0},\end{array} \tag{1}\] with state \(x\in\mathbb{R}^{n}\), input \(u\in\mathbb{R}^{m}\) and output \(y\in\mathbb{R}^{p}\). For this system, define \[\begin{array}{c}U_{K-1}\coloneqq\begin{bmatrix}u^{\top}(0),u^{\top}(1),\dots,u^{\top}(K-1)\end{bmatrix}^{\top}\in\mathbb{R}^{mK}\\ Y_{K-1}\coloneqq\begin{bmatrix}y^{\top}(0),y^{\top}(1),\dots,y^{\top}(K-1) \end{bmatrix}^{\top}\in\mathbb{R}^{pK}\\ \mathcal{T}_{K}\coloneqq\begin{bmatrix}D&0&\cdots&0&0\\ CB&D&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ CA^{K-2}B&CA^{K-3}B&\cdots&D&0\\ CA^{K-1}B&CA^{K-2}B&\cdots&CB&D\end{bmatrix}\\ \mathcal{O}_{K}\coloneqq\begin{bmatrix}C^{\top},(CA)^{\top},\dots,(CA^{K-1})^{ \top}\end{bmatrix}^{\top},\end{array}\] for some \(K\in\mathbb{N}\). Note that the matrices \(U_{K-1}\) and \(Y_{K-1}\) corresponds to the \(K\)-long input and output trajectories of the system. The matrix \(\mathcal{T}_{K}\) has a Toeplitz structure and contains the Markov parameters of the system and the matrix \(\mathcal{O}_{K}\) is the \(K\)-step observability matrix of the system. The above matrices satisfy \[Y_{K-1}=\mathcal{O}_{K}x_{0}+\mathcal{T}_{K}U_{K-1}, \tag{2}\] with \(x_{0}\) denoting the initial state of system (1). We consider a scenario where the input (\(U_{K-1}\)) and output (\(Y_{K-1}\)) trajectories of the system (1) are transmitted through a public channel to another party for further processing such as monitoring, safety, or control design. We are interested in the case where state variables or some of the state variables contain privacy-sensitive information. From (1), it follows that given the system matrices \((A,B)\), and the input of the system, preserving the privacy of the state variables amounts to preserving the privacy of \(x_{0}\). Furthermore, the initial state \(x_{0}\) for stable systems such as a chemical reactor can include valuable information worthy of protection. Hence, we treat \(x_{0}\)1 as a privacy-sensitive value which should remain hidden from any other party, known as _adversary_. The adversary's capabilities are specified in the following assumption. Footnote 1: The results can be applied for preserving privacy of \(x(l)\) for arbitrary \(l\in\mathbb{N}_{0}\) as long as \(x(l)\) can be estimated using a window of length \(T\) of input/output data. _Standing Assumption 1._ (Adversary's model). An adversary \(\mathcal{A}\) knows the system matrices \(\big{(}A,B,C,D\big{)}\), the released input/output of the system (1), and the exact distribution of the added noises (to be determined later). This type of adversary is also known as honest-but-curious or passive to distinguish it from an active adversary which can manipulate the system. The passive adversaries eavesdrop on communication channels, use public information, and the side knowledge (Assumption 1) to infer privacy-sensitive quantities of the system, i.e., \(x_{0}\) in the current setup. It is well-known that, if the system is observable, then its initial condition can be reconstructed from sufficiently long input-output data samples. Namely, if \(\mathcal{O}_{K}\) has full column rank, then (Antsaklis and Michel, 2006, p. 259) \[x_{0}=\mathcal{W}_{o}^{-1}\mathcal{O}_{K}^{\top}(Y_{K-1}-\mathcal{T}_{K}U_{K-1}), \tag{3}\] where \(\mathcal{W}_{o}\coloneqq\mathcal{O}_{K}^{\top}\mathcal{O}_{K}\) is called the _observability gramian_. 
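As a quick numerical illustration of the data equation (2) and the reconstruction (3), here is a NumPy sketch. It is ours: the example system, horizon, and values are illustrative and not taken from the paper.

```python
import numpy as np

# A small observable example system (illustrative values, not from the paper).
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
n, m, p, K = 2, 1, 1, 6

def observability_matrix(A, C, K):
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(K)])

def toeplitz_markov(A, B, C, D, K):
    """Block-Toeplitz matrix T_K of Markov parameters."""
    T = np.zeros((p * K, m * K))
    for i in range(K):
        for j in range(i + 1):
            blk = D if i == j else C @ np.linalg.matrix_power(A, i - j - 1) @ B
            T[i*p:(i+1)*p, j*m:(j+1)*m] = blk
    return T

O_K = observability_matrix(A, C, K)
T_K = toeplitz_markov(A, B, C, D, K)

# Simulate K steps from a "secret" initial state and stack the trajectories.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(n)
U = rng.standard_normal(m * K)
x, Y = x0.copy(), []
for k in range(K):
    u = U[k*m:(k+1)*m]
    Y.append(C @ x + D @ u)
    x = A @ x + B @ u
Y = np.concatenate(Y)

# Adversary's reconstruction (3): x0 = W_o^{-1} O_K^T (Y - T_K U).
W_o = O_K.T @ O_K
x0_hat = np.linalg.solve(W_o, O_K.T @ (Y - T_K @ U))
print(np.allclose(x0_hat, x0))   # True: without noise, x0 is recovered exactly
```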
It follows from (3) that, under the observability assumption, the adversary \(\mathcal{A}\) can uniquely determine the initial condition \(x_{0}\) and consequently the state trajectory \(x(k)\) for all \(k\). Therefore, we make the following assumption throughout the paper. _Standing Assumption 2._ The matrix \(\mathcal{O}_{K}\) has full column rank. As a solution for providing privacy for \(x_{0}\), we first consider perturbing the initial condition \(x_{0}\) itself. ### Perturbing the initial state Assume the perturbed initial condition to be \[\tilde{x}_{0}\coloneqq x_{0}+v, \tag{4}\] where \(v\) is a random variable with normal distribution \(v\sim\mathcal{N}_{n}(0,\Sigma_{v})\) and independent of \(x_{0}\). The data equation (2) then modifies to \[\tilde{Y}_{K-1}=\mathcal{O}_{K}x_{0}+\mathcal{O}_{K}v+\mathcal{T}_{K}U_{K-1}. \tag{5}\] The following well-known result based on the Gauss-Markov theorem (Kailath et al., 2000, p. 97) provides the estimation of \(x_{0}\) using an optimal approach by adversary _Lemma 1._ (Privacy by adding noise to \(x_{0}\)).: Let the perturbed initial state for the system (1) be given by (4). The optimum unbiased linear least-mean-squares estimator of \(x_{0}\) is \[\hat{x}_{0}=\mathcal{W}_{o}^{-1}\mathcal{O}_{K}^{\top}(\tilde{Y}_{K-1}- \mathcal{T}_{K}U_{K-1}), \tag{6}\] with \(\tilde{Y}_{K-1}\) in (5). The covariance of \(\hat{x}_{0}\) is \[\mathrm{Cov}(\hat{x}_{0})=\mathbb{E}\left[(\hat{x}_{0}-x_{0})(\hat{x}_{0}-x_{0 })^{\top}\right]=\Sigma_{v}.\] The estimator (6) is also known as maximum likelihood estimator which for Gaussian noise achieves the Cramer-Rao bound, the lowest possible bound that any unbiased estimator can obtain. With the estimation in (6) and the knowledge set specified in Assumption 1, the adversary can optimally estimate \(x(k)\) for any \(k\geq 1\). It follows from Lemma 1 that adding noise directly to the initial state \(x_{0}\) enables the designer to hide the true value of \(x_{0}\) within a prescribed confusion set, characterized by \(\Sigma_{v}\). Despite this advantage, perturbing the initial state is neither feasible nor desired in real-life processes such as a chemical reactor, since the method requires the system (e.g., the chemical reactor) to be operated with \(\hat{x}_{0}\) instead of \(x_{0}\), which often requires physical interventions. Motivated by this limitation, we consider instead perturbing the measurements of the system \(Y_{K-1}\), which can be implemented in the cyber part of CPSs. ### Perturbing the output measurements As an alternative for adding noise to the initial state \(x_{0}\), we perturb the measurement vector \(Y_{K-1}\) as \[\tilde{Y}_{K-1}=Y_{K-1}+N_{K-1}, \tag{7}\] where the added noise \(N_{K-1}\sim\mathcal{N}_{pK}(0,\Sigma)\) is independent of \(Y_{K-1}\). While a trusted party can remove \(N_{K-1}\) from released signals \(\tilde{Y}_{K-1}\) by agreeing with the system designer on the seed of the pseudo-random number generator, an adversary can only optimally estimate the initial condition. This estimation is presented in the following lemma which its proof is analogous to Lemma 1, and thus is dropped. _Lemma 2._ (Privacy by adding noise to \(Y_{K-1}\)).: Let the perturbed measurement vector for the system (1) be given by (7). 
The optimum unbiased linear least-mean-squares estimator of \(x_{0}\) is \[\hat{x}_{0}=\big{(}\mathcal{O}_{K}^{\top}\Sigma^{-1}\mathcal{O}_{K}\big{)}^{- 1}\mathcal{O}_{K}^{\top}\Sigma^{-1}(\tilde{Y}_{K-1}-\mathcal{T}_{K}U_{K-1}),\] and the covariance of \(\hat{x}_{0}\) is \[\mathrm{Cov}(\hat{x}_{0})=\mathbb{E}\left[(\hat{x}_{0}-x_{0})(\hat{x}_{0}-x_{0 })^{\top}\right]=\big{(}\mathcal{O}_{K}^{\top}\Sigma^{-1}\mathcal{O}_{K}\big{)} ^{-1}.\] Lemma 2 and the discussion succeeding Lemma 1 motivate us to pose the following question: Whether a Gaussian noise \(N_{K-1}\) can be found for the output mechanism (7) such that the optimal adversary in Lemma 2 encounters a prescribed confusion set \(\Sigma_{v}\) for \(x_{0}\)? More formally, we state the following problem: _Problem 1_.: Find the covariance matrix \(\Sigma\succ 0\) for the Gaussian mechanism in (7) such that \[\big{(}\mathcal{O}_{K}^{\top}\Sigma^{-1}\mathcal{O}_{K}\big{)}^{-1}=\Sigma_{v}, \tag{8}\] for a given \(\Sigma_{v}\succ 0\). The prescribed confusion set \(\Sigma_{v}\) can be shaped to value highly privacy-sensitive state components in the Gaussian mechanism design, and practically ignore privacy-insensitive ones. Working with the full matrix \(\Sigma_{v}\) rather than \(\sigma I_{n}\), with \(\sigma\in\mathbb{R}^{+}\), is particularly advantageous in systems where not all state variables have similar importance in view of privacy. The _statistical_ interpretation for the confusion set originates from the notion of confidence set (region) of an estimation; see Adkins and Hill (1990). A \((1-\alpha)\) confidence set for a parameter \(x_{0}\in\mathbb{R}^{n}\) is a set \(Q\) such that \[[\mathbb{P}(x_{0}\in Q)]\geq 1-\alpha, \tag{9}\] where \(\alpha=0.05\) is a common choice. For a point estimator with a normal distribution, i.e., \(\hat{x}_{0}\sim\mathcal{N}_{n}(x_{0},\Sigma_{v})\) a common confidence set \(Q\) is the so-called confidence ellipsoid characterized by \(\Sigma_{v}\) and is defined as \[Q(\Sigma_{v},\gamma)\coloneqq\{x_{0}|(x_{0}-\hat{x}_{0})^{\top}\Sigma_{v}^{-1 }(x_{0}-\hat{x}_{0})\leq\gamma\}, \tag{10}\] where \(\gamma\) is a function of \(n\) and \(\alpha\). This expression (9) implies that the true values of \(x_{0}\) are \(100(1-\alpha)\) percent of the time in repeated samples within the ellipsoid given in (10). Notice, by choosing nondiagonal \(\Sigma_{v}\) we can shape the orientation of the resulted ellipsoids in (10). ## 3 Output Gaussian Mechanism In this section, we solve Problem 1 by finding the set of positive definite matrices \(\Sigma\) satisfying (8). We draw on the following lemmas in answering the design problem in (8). Lemma 3 provides us with the solutions of the matrix equations of the form (8), and Lemma 4 is a classical result on detectability of linear system, which we include to make the paper self-contained. _Lemma 3._ (Laub, 2005, Theorem 13.27) Let \(E\in\mathbb{R}^{m\times n}\), \(F\in\mathbb{R}^{p\times q}\), and \(G\in\mathbb{R}^{m\times q}\). Then the equation \[EXF=G, \tag{11}\] has a solution \(X\in\mathbb{R}^{n\times p}\) if and only if \[EE^{\dagger}GF^{\dagger}F=G, \tag{12}\] in which case the general solution of (11) is of the form \[X=E^{\dagger}GF^{\dagger}+R-E^{\dagger}ERFF^{\dagger}, \tag{13}\] where \(R\in\mathbb{R}^{n\times p}\) is arbitrary. A solution, provided that it exists, is unique if \(E\) has full column rank and \(F\) has full row rank, i.e., \(E^{\dagger}E=I\) and \(FF^{\dagger}=I\). 
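A quick numerical check of Lemma 3 — the solvability test (12) and the general solution (13) — can be written as below; the matrix sizes are arbitrary illustrative choices, not tied to the system data.

```python
import numpy as np

rng = np.random.default_rng(1)

E = rng.standard_normal((3, 5))   # full row rank (generically), so E E^+ = I
F = rng.standard_normal((6, 4))   # full column rank (generically), so F^+ F = I
G = rng.standard_normal((3, 4))

E_pinv, F_pinv = np.linalg.pinv(E), np.linalg.pinv(F)

# Solvability condition (12): E E^+ G F^+ F = G.
assert np.allclose(E @ E_pinv @ G @ F_pinv @ F, G)

# General solution (13) for an arbitrary R; here neither uniqueness condition
# of Lemma 3 holds, so different R give different solutions X of E X F = G.
R = rng.standard_normal((5, 6))
X = E_pinv @ G @ F_pinv + R - E_pinv @ E @ R @ F @ F_pinv
assert np.allclose(E @ X @ F, G)
```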
_Lemma 4._ (Hespanha, 2018, p.192) For the linear system \[x(k+1)=Mx(k),\quad y(k)=Nx(k), \tag{14}\] with \(x(k)\in\mathbb{R}^{n}\) and \(y(k)\in\mathbb{R}^{p}\), the following statements are equivalent: 1. The pair \((M,N)\) is detectable. 2. For \(|\lambda|\geq 1\), with \(\lambda\in\mathrm{spec}(M)\), we have \[\mathrm{rank}\begin{bmatrix}M-\lambda I\\ N\end{bmatrix}=n.\] (15) 3. There exists \(P\succ 0\) such that \(M^{\top}PM-P-N^{\top}N\prec 0\). The following theorem addresses Problem 1 by providing results on existence and uniqueness of positive definite solutions \(\Sigma\) to (8). _Theorem 5._ Let \(\Sigma_{v}\succ 0\) be the prescribed confusion set for \(x_{0}\) in Problem 1. Consider the matrix equation \[(\mathcal{O}_{K}^{\top}X\mathcal{O}_{K})^{-1}=\Sigma_{v}, \tag{16}\] and the set \[S_{X}\coloneqq\{X\in\mathbb{R}^{pK\times pK}|\text{\eqref{eq:16 holds}}\}. \tag{17}\] Then, the following statements hold. 1. The set \(S_{X}\) is nonempty and given by \[S_{X}=\{N^{\top}N+R-MRM|\ R\in\mathbb{R}^{pK\times pK}\},\] (18) where \[M\coloneqq\mathcal{O}_{K}\mathcal{W}_{o}^{-1}\mathcal{O}_{K}^{\top},\quad N \coloneqq\Sigma_{v}^{-1/2}\mathcal{W}_{o}^{-1}\mathcal{O}_{K}^{\top}.\] (19) 2. The set \[S_{X}^{+}\coloneqq\{X\in S_{X}|\ X\succ 0\},\] (20) is nonempty. 3. If \(pK=n\), then the set \(S_{X}^{+}\) is singleton and is given by \(S_{X}^{+}=\{N^{\top}N\}\). \(\Box\) Proof.: _Statement (i):_ We resort to Lemma 3 to show the existence of a matrix \(X\) satisfying (16). By choosing \(E=\mathcal{O}_{K}^{\top}\), \(F=\mathcal{O}_{K}\), and \(G=\Sigma_{v}^{-1}\), the condition in (12) is verified as \[(\mathcal{O}_{K}^{\top})(\mathcal{O}_{K}\mathcal{W}_{o}^{-1})(\Sigma_{v}^{-1} )(\mathcal{W}_{o}^{-1}\mathcal{O}_{K}^{\top})(\mathcal{O}_{K})=\Sigma_{v}^{-1},\] where we used the facts that \(E^{\dagger}=\mathcal{O}_{K}\mathcal{W}_{o}^{-1}\), \(F^{\dagger}=\mathcal{W}_{o}^{-1}\mathcal{O}_{K}^{\top}\), and \(\mathcal{W}_{o}=\mathcal{O}_{K}^{\top}\mathcal{O}_{K}\). This proves the existence claim in Statement (_i_). It follows from (13) in Lemma 3 that every \(X\) satisfying (16) is given by \[X=\mathcal{O}_{K}\mathcal{W}_{o}^{-1}\Sigma_{v}^{-1}\mathcal{W}_{o}^{-1} \mathcal{O}_{K}^{\top}+R-\mathcal{O}_{K}\mathcal{W}_{o}^{-1}\mathcal{O}_{K}^ {\top}R\mathcal{O}_{K}\mathcal{W}_{o}^{-1}\mathcal{O}_{K}^{\top}.\] By substituting \(M\) and \(N\) from (19) in the above, we obtain the set \(S_{X}\) in (18). _Statement (ii):_ First, we observe that the set \(S_{X}^{+}\) requires the matrix \(R\) in (18) to be symmetric, i.e. \(R=R^{\top}\). Then, \(S_{X}^{+}\) is nonempty if and only if there exists a solution \(R=R^{\top}\) to the following linear matrix inequality: \[N^{\top}N+R-MRM\succ 0. \tag{21}\] To prove feasibility of the LMI (21), consider a fictitious linear system (14) given by the pair \((M,N)\) in (19). The main idea is to show that the pair \((M,N)\) is detectable, and thus from Lemma 4, the LMI (21) which can be seen as a Lyapunov inequality for detectability, equivalently holds. 
For the matrix \(M\), we have \[\begin{split}\operatorname{spec}_{\neq 0}(M)&= \operatorname{spec}_{\neq 0}(\mathcal{O}_{K}\mathcal{W}_{o}^{-1}\mathcal{O}_{K}^{ \top})\\ &=\operatorname{spec}_{\neq 0}(\mathcal{W}_{o}^{-1}\mathcal{O}_{K}^{ \top}\mathcal{O}_{K})=\operatorname{spec}(I_{n}),\end{split} \tag{22}\] where the second equality follows from the fact that for two arbitrary matrices \(A\in\mathbb{R}^{q\times r}\) and \(B\in\mathbb{R}^{r\times q}\) the nonzero eigenvalues of \(AB\) and \(BA\) are the same, with the same algebraic multiplicities (Garcia and Horn, 2017, p. 214). Hence, \(\operatorname{spec}(M)=\{0,1\}\) with \(\operatorname{annult}_{M}(0)=pK-n\) and \(\operatorname{annult}_{M}(1)=n\). Next, we draw on the second statement in Lemma 4 for the detectability of the pair \((M,N)\). Noting that \(\lambda=1\) is the only (marginally) unstable eigenvalue, the rank condition in (15) gives rise to \[\operatorname{rank}\begin{bmatrix}\mathcal{O}_{K}\mathcal{W}_{o}^{-1}\mathcal{ O}_{K}^{\top}-I_{pK}\\ \Sigma_{v}^{-1/2}\mathcal{W}_{o}^{-1}\mathcal{O}_{K}^{\top}\end{bmatrix}=pK. \tag{23}\] Assume that there exists \(\zeta\in\mathbb{R}^{pK}\) such that \[(\mathcal{O}_{K}\mathcal{W}_{o}^{-1}\mathcal{O}_{K}^{\top}-I)\zeta=0 \tag{24a}\] \[\Sigma_{v}^{-1/2}\mathcal{W}_{o}^{-1}\mathcal{O}_{K}^{\top}\zeta=0. \tag{24b}\] Since \(\Sigma_{v}\succ 0\), it follows from (24b) that \((\mathcal{W}_{o}^{-1}\mathcal{O}_{K}^{\top})\zeta=0\). This, together with (24a) and Standing Assumption 2, results in \(\zeta=0\). Hence, we conclude that (23) holds, and thus the pair \((M,N)\) is detectable. By the third statement of Lemma 4, there exists \(R\succ 0\) such that (21) holds, and consequently \(S_{X}^{+}\) is nonempty. _Statement (iii):_ It follows from Lemma 3 that \(X\) satisfying (16) is unique when \(\mathcal{O}_{K}\) has full row rank. Moreover, by Standing Assumption 2, we have \(pK\geq n\), and thus \(X\) is unique if \(pK=n\). In this case, \(R-MRM=0\) in (18) and hence \(S_{X}=\{N^{\top}N\}\). The proof is complete by noting that \(N^{\top}N\succ 0\) due to Standing Assumption 2. \(\blacksquare\) It follows from Theorem 5 that the solution to (16) is in general not unique. In fact, any solution \(R=R^{\top}\) to the LMI in (21) returns an admissible solution \(\Sigma\) to (8). In the next section, we leverage on this degree of freedom to look for solutions that are superior in terms of performance of the system (1). ## 4 Optimal Gaussian Mechanism Design We provide a performance measure for the Gaussian noise in (8), and formulate an optimization problem to derive the best performance for a given confusion set \(\Sigma_{v}\). To differentiate among the admissible solutions (\(\Sigma\)) in (8), we first need a notion of performance for the system. Noting that the amount of sensor measurements perturbations in (7) directly affects the _utility_ of the signal \(Y_{K-1}\), we define the error resulting from the perturbation \(N_{K-1}\) as2 Footnote 2: The error signal \(Z_{K-1}\) is in fact equal to the noise signal \(N_{K-1}\); however, we opt for the former since (25) can be viewed as a performance metric independent of the adopted perturbation technique. \[\mathbb{E}(Z_{K-1}^{\top}Z_{K-1}), \tag{25}\] where \(Z_{K-1}\coloneqq\tilde{Y}_{K-1}-Y_{K-1}\). By (25) we measure the average effects of the added noise on \(Y_{K-1}\). 
The expression (25) can be rewritten in terms of covariance of the added noise \(\Sigma\) as \[\begin{split}\mathbb{E}(Z_{K-1}^{\top}Z_{K-1})&= \mathbb{E}(\operatorname{tr}(Z_{K-1}Z_{K-1}^{\top}))\\ &=\operatorname{tr}\mathbb{E}(N_{K-1}N_{K-1}^{\top})= \operatorname{tr}(\Sigma).\end{split} \tag{26}\] By taking \(\operatorname{tr}(\Sigma)\) as our performance metric, we propose the following optimization problem in order to find performance-optimal solution \(X\succ 0\) to (16), and thus \(\Sigma=X^{-1}\) to (8): \[\begin{split}\min_{R\in\mathbb{R}^{pKK}\times pK,\epsilon\in \mathbb{R}^{+}}&\epsilon\\ \text{subject to}&\underbrace{N^{\top}N+R-MRM}_{ \coloneqq X}\succ 0\\ &\operatorname{tr}X^{-1}\leq\epsilon.\end{split} \tag{27}\] Observe from the last constraint in optimization (27) that we are interested in preserving the privacy of \(x_{0}\) with the minimum amount of distortion of the system output \(Y_{K-1}\). Noting that the feasibility set in optimization (27) is nonconvex in decision variable \(R\), we derive a convex approximation for it. To this end, we upper bound \(\operatorname{tr}X^{-1}\) as \[\operatorname{tr}X^{-1}\leq(pK)\lambda_{\max}(X^{-1})=\frac{pK}{\lambda_{ \min}(X)},\] where we used the fact that \(\lambda_{\max}(X^{-1})=1/\lambda_{\min}(X)\). Consequently, a sufficient condition for imposing \(\operatorname{tr}X^{-1}\leq\epsilon\) in (27) is given by \[\frac{pK}{\lambda_{\min}(X)}\leq\epsilon,\] This can be equivalently rewritten as \[X\succeq\frac{pK}{\epsilon}I.\] By defining \(\beta\coloneqq\frac{pK}{\epsilon}\), we replace (27) by the following convex optimization problem: \[\max_{R\in\mathbb{R}^{pK\times pK},\beta\in\mathbb{R}^{+}}\beta \tag{28}\] \[\text{subject to }\] \[N^{\top}N+R-MRM\succeq\beta I.\] In what follows, we provide the optimal value of \(\beta\) in the above maximization problem. For doing so, we first recap the following algebraic result: **Lemma 6**: _(Garcia and Horn, 2017, p. 284) Let \(\mathcal{F}\subseteq\mathbb{R}^{n\times n}\) be a nonempty set of matrices. Suppose that each matrix in \(\mathcal{F}\) is real and symmetric. Then \(AB=BA\) for all \(A\), \(B\in\mathcal{F}\) if and only if there is a real orthogonal \(Q\in\mathbb{R}^{n\times n}\) such that \(Q^{\top}AQ\) is diagonal for every \(A\in\mathcal{F}\)._ **Theorem 7**: _Let \(\Sigma_{v}\succ 0\) be the prescribed privacy set for \(x_{0}\) in Problem 1. Consider the convex optimization problem given in (28), with the matrices \(M\), and \(N\) in (19). The optimal value of the objective function is \(\beta_{\mathrm{opt}}=\lambda_{\min}(\Sigma_{v}^{-1}\mathcal{W}_{o}^{-1})\). \(\Box\)_ Proof: Recall from (22) that \(M=\mathcal{O}_{K}\mathcal{W}_{o}^{-1}\mathcal{O}_{K}^{\top}\) has \(\mathrm{spec}(M)=\{0,1\}\) where \(\mathrm{annult}_{M}(0)=pK-n\) and \(\mathrm{annult}_{M}(1)=n\). Similarly for \(N^{\top}N=\mathcal{O}_{K}\mathcal{W}_{o}^{-1}\Sigma_{v}^{-1}\mathcal{W}_{o}^{ -1}\mathcal{O}_{K}^{\top}\), we have \[\mathrm{spec}(N^{\top}N)=\{0\}\cup\mathrm{spec}(\Sigma_{v}^{-1}\mathcal{W}_{o }^{-1}),\] where \(\mathrm{annult}_{N^{\top}N}(0)=pK-n\). Observe that the matrix \(M\) commutes with \(N^{\top}N\), and thus they are simultaneously diagonalizable by Lemma 6. 
Namely, there exists an orthogonal matrix \(S\in\mathbb{R}^{pK\times pK}\) such that \[M=S\left[\begin{array}{c|c}I_{n}&0\\ \hline 0&0_{pK-n}\end{array}\right]S^{\top},\quad N^{\top}N=S\left[\begin{array}[ ]{c|c}\Lambda&0\\ \hline 0&0_{pK-n}\end{array}\right]S^{\top}, \tag{29}\] where \(\Lambda=\mathrm{diag}(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\) denote the nonzero eigenvalues of \(N^{\top}N\) arranged in a non-increasing order. Notice that \(\ker M=\ker N^{\top}N\), which allows to write the decomposition in (29) corresponding to zero and nonzero eigenvalues. Next, partition \(S\) and \(R\) consistently as \[S=\left[\begin{array}{c|c}S_{11}&S_{12}\\ \hline S_{21}&S_{22}\end{array}\right],\quad R=\left[\begin{array}{c|c}R_{11 }&R_{12}\\ \hline R_{12}^{\top}I&R_{22}\end{array}\right],\] where \(S_{11},R_{11}\in\mathbb{R}^{n\times n}\). Next, we apply the congruence transformation associated with \(S\) to the constraint in (28): \[(S^{\top}MS)(S^{\top}RS)(S^{\top}MS)-S^{\top}RS\\ -S^{\top}(N^{\top}N-\beta I)S\preceq 0,\] which in block partitioned form is \[\left[\begin{array}{c|c}I_{n}&0\\ \hline 0&0_{pK-n}\end{array}\right]\left[\begin{array}{c|c}\hat{R}_{11}& \hat{R}_{12}\\ \hline R_{12}^{\top}&\hat{R}_{22}\end{array}\right]\left[\begin{array}{c|c}I_{ n}&0\\ \hline 0&0_{pK-n}\end{array}\right]-\left[\begin{array}{c|c}\hat{R}_{11}& \hat{R}_{12}\\ \hline R_{12}^{\top}&\hat{R}_{22}\end{array}\right]\\ -\left[\begin{array}{c|c}\Lambda-\beta I_{n}&0\\ \hline 0&-\beta I_{pK-n}\end{array}\right]\preceq 0,\] where \[S^{\top}RS\eqqcolon\hat{R}=\left[\begin{array}{c|c}\hat{R}_{11}&\hat{R}_{12 }\\ \hline R_{12}^{\top}&\hat{R}_{22}\end{array}\right].\] The above inequality further simplifies to \[\left[\begin{array}{ccc|c}\lambda_{1}-\beta&0&\\ &\ddots&\hat{R}_{12}\\ 0&\lambda_{n}-\beta&\\ \hline R_{12}^{\top}&\hat{R}_{22}-\beta I_{pK-n}\end{array}\right]\succeq 0. \tag{30}\] A necessary condition for (30) is \[\beta\leq\lambda_{n},\] which implies that \(\beta_{\mathrm{opt}}\leq\lambda_{n}\). Moreover, by choosing \(\hat{R}_{12}=0\) and \(\hat{R}_{22}\) such that \(\hat{R}_{22}-\beta I\succeq 0\), we conclude that the choice \(\beta=\lambda_{n}\) is a feasible solution to (30). Now, since \(R=S\hat{R}S^{\top}\), we find that there exists \(R\) such that the LMI (28) holds for \(\beta=\lambda_{n}\), thereby proving \(\beta_{\mathrm{opt}}=\lambda_{n}\). The proof is complete by noting \(\lambda_{n}=\lambda_{\min}(\Sigma_{v}^{-1}\mathcal{W}_{o}^{-1})\). \(\blacksquare\) By Theorem 5, the covariance matrix can be designed as \(\Sigma^{-1}=N^{\top}N+R-MRM\) where \(R=R^{\top}\) is any solution to the LMI in (28) with \(\beta=\beta_{\mathrm{opt}}\). Recalling \(\beta=\frac{pK}{\epsilon}\), the optimal performance is given by \[\epsilon_{\mathrm{opt}}=pK\lambda_{\max}(\Sigma_{v}\mathcal{W}_{o}). \tag{31}\] It follows from (31) that the error caused by the output perturbation (7) is proportional to the total time steps \(K\) and the number of outputs of the system \(p\). Moreover, we observe that both observability degree and the desired privacy guarantees contribute to the optimal performance; namely \(\epsilon_{\mathrm{opt}}\) is proportional to the spectral norm of the product of the observability gramian and the prescribed privacy set \(\Sigma_{v}\). We close this section by a few remarks on the proposed results and their potential extensions. **Remark 8**: _(Structured output mechanism). 
An extension for the design mechanism (8) is to impose a specific structure on the covariance matrix \(\Sigma\) of the output mechanism in (7). A case of particular interest is given by the block diagonal structure_ \[\Sigma_{\mathrm{blk}}=\mathrm{blockdiag}\left(\Sigma_{1},\Sigma_{2},\ldots, \Sigma_{K}\right)\] _with \(\Sigma_{k}\in\mathbb{R}^{p\times p}\). The interest in the block diagonal structure stems from the fact that the output perturbation in (7) can be then implemented by using uncorrelated noise signals, which is favorable in online applications. Working with the block diagonal structure in (8) modifies Problem 1 to: find \(\Sigma_{blk}\succ 0\) for Gaussian mechanism in (7) such that_ \[\left(\mathcal{O}_{K}^{\top}\Sigma_{blk}^{-1}\mathcal{O}_{K}\right)^{-1}=\Sigma _{v},\] _for a prescribed \(\Sigma_{v}\succ 0\). By following analogous steps as before, we obtain a counterpart of (27) as_ \[\min_{\Sigma_{1},\ldots,\Sigma_{K}\succ 0,\epsilon_{blk}\in\mathbb{R}^{+}} \epsilon_{blk} \tag{32}\] \[\text{subject to }\] \[\mathcal{O}_{K}^{\top}\Sigma_{\mathrm{blk}}^{-1}\mathcal{O}_{K}= \Sigma_{v}^{-1}\] \[\operatorname{tr}\Sigma_{blk}\leq\epsilon_{blk}.\] _Unlike (27), feasibility of (32) depends on the choice of \(\Sigma_{v}\). A possible remedy to overcome this challenge is to relax the equality constraint and replacing it by a solution \(\Sigma_{\mathrm{blk}}\) that (approximately) results in a prescribed confusion set \(\Sigma_{v}\); namely, \[\left\|\mathcal{O}_{K}^{\top}\Sigma_{\mathrm{blk}}^{-1}\mathcal{O}_{K}-\Sigma_{v} ^{-1}\right\|_{F}^{2}\leq e_{blk},\] where \(e_{blk}>0\) determines the accuracy level of the solution and \(\left\|\left(\cdot\right)\right\|_{F}\) denotes the Frobenius norm. _Remark 9_.: (Confusion set in differential privacy). In order to preserve the privacy of \(x_{0}\), the system designer can use differential privacy to find the covariance \(\Sigma\) for the Gaussian noise in (7). Generally speaking, this method determines the covariance as \(\Sigma=\sigma I_{pK}\) with \(\sigma\geq f(\epsilon,\delta,c,s)\) where \((\epsilon,\delta)\) are predefined user's parameters, \(c\) defines the adjacency metric for \(x_{0}\), and \(s\) is the sensitivity of the output vector \(Y_{k-1}\) to changes in \(x_{0}\) (Le Ny and Pappas, 2013, Theorem 3). Based on Lemma 2, the confusion set that results from this choice of covariance is \(\Sigma_{v}=\sigma\mathcal{W}_{o}^{-1}\), i.e., a proportion of the inverse of observability gram \(\mathcal{W}_{o}\). Therefore, the shape of the confusion set in the differential privacy is predetermined by \(\mathcal{W}_{o}\), namely the dynamics of the system (1), while in our method we are interested in the case where the confusion set can be shaped by the designer. _Remark 10_.: (Confusion set for \(K\rightarrow\infty\)). While we have designed \(\Sigma\) in (8) for a finite time step \(K\), we need to consider (Schur) stability properties of system (1) to design \(\Sigma\) when \(K\rightarrow\infty\). 
If system (1) is Schur stable and \(\Sigma=\sigma I_{pK}\) in (8), the confusion set when \(K\rightarrow\infty\) is \(\Sigma_{v}(\infty)=\sigma\mathcal{W}_{o}^{-1}(\infty)\), where \(\mathcal{W}_{o}(\infty)\succ 0\) is the unique and bounded solution to the Lyapunov equation for observability (Hespanha, 2018, p.192) \[A^{\top}\mathcal{W}_{o}A-\mathcal{W}_{o}+C^{\top}C=0.\] It follows that for stable systems and the Gaussian mechanism \(\Sigma=\sigma I_{pK}\) with finite \(\sigma\in\mathbb{R}^{+}\), the adversary faces the confusion set \(\sigma\mathcal{W}_{o}^{-1}(\infty)\) when \(K\rightarrow\infty\). On the other hand, if system (1) is unstable, \(\mathcal{W}_{o}(K)\) grows unboundedly as \(K\rightarrow\infty\). In that case the confusion set \(\sigma\mathcal{W}_{o}^{-1}(K)\) shrinks, and a larger amount of noise (a higher value of \(\sigma\)) is needed to preserve the privacy of \(x_{0}\). It is worth mentioning that the noise-to-signal ratio is not necessarily increasing for unstable systems. ## 5 Comparison and Simulation We provide a privacy-preserving method from the literature for comparison purposes, and then present a case study to illustrate the concepts in this paper. ### Differential entropy measure Another approach to privacy preservation in data-releasing systems is to map the adversary's uncertainty set to a scalar metric. To prepare for the case study, we consider differential entropy as the privacy metric. The differential entropy of a random variable \(X\) with normal distribution \(X\sim\mathcal{N}_{n}(\mu,\Sigma)\) is (Cover and Thomas, 2006, p. 250) \[h(X)=\frac{1}{2}\log\det(\Sigma)+\frac{n}{2}(1+\log(2\pi)),\] where \(\log\) is base 2. It is related to the volume of the set over which \(X\) is dispersed: the higher its value, the more widely dispersed the random variable. Consistently, we consider the differential entropy of the adversary's best estimate \(\hat{x}_{0}\) given in Lemma 2, whose covariance is \(\left(\mathcal{O}_{K}^{\top}\Sigma_{\mathrm{de}}^{-1}\mathcal{O}_{K}\right)^{-1}\), as the objective function to be maximized. Analogous to Hayati et al. (2021), we consider the following optimization problem to obtain the covariance of the output noise in (7): \[\max_{\Sigma_{\mathrm{de}}\in\mathbb{R}^{pK\times pK}} \log\det\left(\mathcal{O}_{K}^{\top}\Sigma_{\mathrm{de}}^{-1} \mathcal{O}_{K}\right)^{-1}\] (33) subject to \[\mathrm{tr}\,\Sigma_{\mathrm{de}}\leq\epsilon_{p}\] \[\Sigma_{\mathrm{de}}\succ 0,\] where \(\epsilon_{p}\) is a predefined performance budget, i.e., an upper bound on the average error caused by introducing output noise (see (26)). It should be noted that \(\log\det\left(\mathcal{O}_{K}^{\top}\Sigma_{\mathrm{de}}^{-1}\mathcal{O}_{K} \right)^{-1}\) is concave in \(\Sigma_{\mathrm{de}}\); see (Bernstein, 2018, Proposition 10.6.17). Hence, the optimization problem (33) attains its global maximum. _Remark 11_.: (Comparison with differential entropy). We highlight the differences and complementarity of our proposed method (summarized in optimization problem (28)) and the approach (33). The optimization problem (33) maximizes a privacy metric for a given performance budget. On the other hand, given a prescribed _confusion set_ \(\Sigma_{v}\), the optimization problem (28) looks for the best performance that can be obtained for the Gaussian mechanism via a convex program. As the designer can shape the confusion set faced by the adversary, the latter approach is particularly useful in scenarios where different state components have different privacy sensitivity. 
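Before the case study, the complete design pipeline — forming \(M\) and \(N\) in (19), solving the convex program (28), and recovering \(\Sigma\) — can be sketched as follows. This is a minimal illustration on a small randomly generated observable system (an assumed stand-in, not the HVAC model of the next subsection), and it assumes the cvxpy package with an SDP-capable solver such as SCS is available.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)

# Illustrative observable system and horizon (assumptions, not from the paper).
n, p, K = 3, 2, 6
A = 0.8 * rng.standard_normal((n, n)) / np.sqrt(n)
C = rng.standard_normal((p, n))
O_K = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(K)])
W_o = O_K.T @ O_K
W_o_inv = np.linalg.inv(W_o)

# Prescribed (diagonal) confusion set: larger entries = more uncertainty.
Sigma_v = np.diag([1.0, 4.0, 9.0])
Sv_inv_half = np.diag(1.0 / np.sqrt(np.diag(Sigma_v)))  # Sigma_v^{-1/2}

# Matrices M and N from (19).
M = O_K @ W_o_inv @ O_K.T
N = Sv_inv_half @ W_o_inv @ O_K.T

# Convex program (28): maximize beta s.t. N^T N + R - M R M >= beta I, R = R^T.
R = cp.Variable((p * K, p * K), symmetric=True)
beta = cp.Variable()
X_expr = N.T @ N + R - M @ R @ M
# X_expr is symmetric in exact arithmetic; symmetrizing keeps the LMI explicit.
lmi = 0.5 * (X_expr + X_expr.T) >> beta * np.eye(p * K)
cp.Problem(cp.Maximize(beta), [lmi]).solve(solver=cp.SCS)

# Theorem 7 predicts beta_opt = lambda_min(Sigma_v^{-1} W_o^{-1}).
beta_thm = np.linalg.eigvals(np.linalg.inv(Sigma_v) @ W_o_inv).real.min()
print("beta (solver / Theorem 7):", beta.value, beta_thm)

# Recover the noise covariance Sigma = X^{-1} and check the confusion set (8).
X = N.T @ N + R.value - M @ R.value @ M
Sigma = np.linalg.inv(X)
confusion = np.linalg.inv(O_K.T @ np.linalg.inv(Sigma) @ O_K)
print("recovered confusion set:\n", np.round(confusion, 3))   # ~ Sigma_v

# (31): eps_opt = pK * lambda_max(Sigma_v W_o) upper-bounds tr(Sigma) here.
eps_opt = p * K * np.linalg.eigvals(Sigma_v @ W_o).real.max()
print("tr(Sigma) and its bound eps_opt:", np.trace(Sigma), eps_opt)
```

Note that the recovered confusion set matches the prescribed \(\Sigma_{v}\) for any feasible \(R\), which is exactly the content of Theorem 5; the optimization only selects, among those feasible choices, one with favorable output utility.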
We further illustrate the comparison mentioned in Remark 11 in the next subsection. ### Case study As an instance of releasing data in a linear dynamical system, we consider a Heating, Ventilation, and Air Conditioning (HVAC) system with the following model from Kelman and Borrelli (2011) \[M\frac{d}{dt}T_{z}=RT_{z}+Q_{\mathrm{offset}}(t)+c_{p}m_{z}(t)\circ(T_{s}-T_{z}), \tag{34}\] where \(M=\mathrm{diag}\left((mc)_{1},\ldots,(mc)_{n}\right)\), \(T_{z}=\left[T_{z1},\ldots,T_{zn}\right]^{\top}\), \(m_{z}(t)=\left[m_{z1}(t),\ldots,m_{zn}(t)\right]^{\top}\), \(T_{s}=\left[T_{s1},\ldots,T_{sn}\right]^{\top}\), \(R=\left[R_{ij}\right]\), and \(\circ\) denotes the Hadamard product. The system (34) is known as the zone temperature dynamics, which relate the temperature of each zone (\(T_{zi}\)) in an area with \(n\) zones to physical parameters such as the heat capacity of air (\(c_{p}\)), the thermal capacitance of zone \(i\) denoted as \((mc)_{i}\), the thermal resistance of heat transfer between zones \(i\) and \(j\) denoted as \(R_{ij}\), the mass flow rate \(m_{zi}(t)\) and supply temperature \(T_{si}\) to each zone \(i\), and the varying load for each zone \(Q_{\text{offset}}(t)\). Figure 1: Simplified floor plan of the zones and monitoring systems. The measured temperatures \(T_{z1}\) and \(T_{z4}\) are perturbed using Gaussian noise and transmitted to another party. _Privacy concerns:_ We study the case when, in (34), the input of the system and the load are zero, i.e., \(Q_{\text{offset}}(t),m_{z}(t)=0\). This scenario corresponds, for instance, to the HVAC system being switched off at the end of a working day. The initial temperatures of the zones in this case can be _privacy-sensitive_ since, for instance, they can be used to infer the presence or absence of an employee. We consider four zones with the structure given in Figure 1 and assume that the temperatures of zones 1 and 4 are measured for monitoring reasons. The parameters \(R_{ij}\) and \((mc)_{i}\) are picked uniformly at random from \([0.4,0.6]\) and \([950,1050]\), respectively, where the mean values are from the study by Ma et al. (2011), and Euler discretization with \(\Delta t=360\) seconds is used to discretize the system (34). To compare our proposed method with the differential entropy case (33), we set the prescribed confusion set as \(\Sigma_{v}=\text{diag}(16,16,100,100)\) in the design problem (8), which basically means the temperatures in zones 3 and 4 are more privacy-sensitive compared to zones 1 and 2. We solve (28) with the given \(\Sigma_{v}\) and obtain \(\Sigma\). Next, we solve the optimization (33) to find \(\Sigma_{\text{de}}\), where we set \(\epsilon_{p}=\operatorname{tr}\Sigma\) with the \(\Sigma\) found for our proposed method. The confusion set defined in (10) for the adversary can be seen in Figure 2, where we have projected the obtained hyper-ellipsoids onto the \(T_{z1}\)-\(T_{z4}\) plane, and the true values of the measured temperatures \(T_{z1}\) and \(T_{z4}\) along with their perturbed versions are shown in Figure 3. As can be seen, with the proposed method the adversary's confusion set is prescribed by the system designer. On the other hand, the confusion set emerging from the differential entropy design, while "larger" in the \(T_{z1}\) direction, is "smaller" in the \(T_{z4}\) direction, which is in contrast with the desired privacy specifications concerning the privacy-wise importance of the zones. 
## 6 Conclusion We have considered the problem of preserving the privacy of state trajectories in data-releasing dynamical systems, where we have optimally designed output Gaussian noise to create a prescribed confusion set against worst-case adversaries. We have proved that a system designer can create any prescribed confusion set described by hyper-ellipsoids using correlated Gaussian noise. Furthermore, we have provided an approximate solution for the case of uncorrelated Gaussian noise. The proposed method can be pursued for preserving the privacy of the input in left-invertible linear dynamical systems and can also be combined with a controller design for the system.
2302.02921
Holistic Deep-Reinforcement-Learning-based Training of Autonomous Navigation Systems
In recent years, Deep Reinforcement Learning emerged as a promising approach for autonomous navigation of ground vehicles and has been utilized in various areas of navigation such as cruise control, lane changing, or obstacle avoidance. However, most research works either focus on providing an end-to-end solution training the whole system using Deep Reinforcement Learning or focus on one specific aspect such as local motion planning. This however, comes along with a number of problems such as catastrophic forgetfulness, inefficient navigation behavior, and non-optimal synchronization between different entities of the navigation stack. In this paper, we propose a holistic Deep Reinforcement Learning training approach in which the training procedure is involving all entities of the navigation stack. This should enhance the synchronization between- and understanding of all entities of the navigation stack and as a result, improve navigational performance. We trained several agents with a number of different observation spaces to study the impact of different input on the navigation behavior of the agent. In profound evaluations against multiple learning-based and classic model-based navigation approaches, our proposed agent could outperform the baselines in terms of efficiency and safety attaining shorter path lengths, less roundabout paths, and less collisions.
Linh KΓ€stner, Marvin Meusel, Teham Bhuiyan, Jens Lambrecht
2023-02-06T16:52:15Z
http://arxiv.org/abs/2302.02921v1
# Holistic Deep-Reinforcement-Learning-based Training of Autonomous Navigation Systems ###### Abstract In recent years, Deep Reinforcement Learning emerged as a promising approach for autonomous navigation of ground vehicles and has been utilized in various areas of navigation such as cruise control, lane changing, or obstacle avoidance. However, most research works either focus on providing an end-to-end solution training the whole system using Deep Reinforcement Learning or focus on one specific aspect such as local motion planning. This however, comes along with a number of problems such as catastrophic forgetfulness, inefficient navigation behavior, and non-optimal synchronization between different entities of the navigation stack. In this paper, we propose a holistic Deep Reinforcement Learning training approach in which the training procedure is involving all entities of the navigation stack. This should enhance the synchronization between- and understanding of all entities of the navigation stack and as a result, improve navigational performance. We trained several agents with a number of different observation spaces to study the impact of different input on the navigation behavior of the agent. In profound evaluations against multiple learning-based and classic model-based navigation approaches, our proposed agent could outperform the baselines in terms of efficiency and safety attaining shorter path lengths, less roundabout paths, and less collisions. ## I Introduction As human-machine-collaboration becomes essential, mobile robot navigation in crowded environments is increasingly becoming an important aspect to consider. Traditional navigation stacks of ground vehicles used within industrial setups such as AGVs utilize the ROS navigation stack [1], which consists of a global planner, which, given a global map, calculates an optimal path from a start point to a goal position and a local planner, which executes the global plan by utilizing sensor observations to avoid dynamic obstacles that were not present in the map. While navigation in static or slightly dynamic environments can be solved with currently employed navigation stacks, navigation in highly dynamic environments remains a challenging task [2]. It requires the agent to efficiently generate safe actions in proximity to unpredictably moving obstacles in order to avoid collisions. Traditional model-based motion planning approaches often employ hand-engineered safety rules to avoid dynamic obstacles. However, hand-designing the navigation behavior in dense environments is difficult since the future motion of the obstacles is unpredictable [3]. In recent years, Deep Reinforcement Learning (DRL) has emerged as an end-to-end method that demonstrated superiority for obstacle avoidance in dynamic environments and for learning complex behavior rules. Thus, a variety research publications incorporated DRL to solve high-level tasks such as grasping, navigation or simulation [4, 5, 6, 7]. However, DRL-based navigation approaches come along with issues such as difficult training, the myopic nature of the DRL agent, or catastrophic forgetfulness [8, 9]. Recent approaches either handled this problem by shortening the planning horizon using waypoints [10],[11], employing hybrid approaches [8][12], or switch between classic model-based navigation and DRL planners. 
However, regarding the parts of the navigation system separately can lead to synchronization issues and non-optimal behavior such as jerky motions or the agent moving too far away from the initially planned global path. On that account, this paper proposes a holistic training approach incorporating the global planner and the waypoint generator into the DRL training pipeline. Therefore, classic global planners such as RRT or A*, and the waypoint generators presented in our previous work will be utilized to provide the agent with more information about the higher-level planning directly within its training procedure. Thus, the understanding of the agent for decisions made by other components of the navigation stack should be improved, which makes navigation smoother and more consistent. We compare different agent inputs and evaluate all agents against classic baseline approaches within the simulation platform arena-rosnav [5] in terms of various navigational metrics. The main contributions of this work are the following: Fig. 1: Information about the global planner and waypoint generator will be given as input for the DRL agent in order to enhance understanding of the DRL agent for high-level planners and thus improve synchronization and navigation efficiency. * Proposal of an holistic training approach utilizing the whole navigation stack instead of an isolated training procedure * Incorporation of global planning and waypoint information into the reward system of the agent to improve synchronization between the entities and as a result improve navigational performance * Qualitative and quantitative evaluation on different highly dynamic environments and comparison against a baseline DRL and classic model-based navigation approaches The paper is structured as follows. Sec. II begins with related works. Subsequently, the methodology is presented in Sec. III. Sec. IV presents the results and discussion. Finally, Sec. V will provides a conclusion and outlook. ## II Related Works DRL-based navigation approaches proved to be a promising alternative that has been successfully applied into various approaches for navigation of vehicles and robots with remarkable results. Various works demonstrated the superiority of DRL-based OA approaches due to more flexiblility in the handling of obstacles, generalization new problem instances, and ability to learn more complex tasks without manually designing the functionality. Thus, various research works incorporated DRL into their navigation systems for tasks such as lane changing [13], cruise control [14, 15, 16], cooperative behavior planning [17], or and obstacle avoidance [6, 7, 5]. Atouri et al. [18] proposed a DRL-based control switch for lateral control of autonomous vehicles. Similarily, Kastner et al. proposed a DRL-based control switch to choose between different navigation policies [2]. Liu et al. proposed a DRl approach for autonomous driving of vehicles in urban environments using expert demonstrations. Other works incorporated DRL for dynamic obstacle avoidance. Works from Everet et al. [19] and Chen et al. [7] require the exact obstacle positions and perform a DRL-based obstacle avoidance approach. Dugas et al. relied solely on DRL for navigation [6]. The authors remarked that this could lead to jerky motions and failed navigation for long ranges. Since the reward that a DRL agent can obtain in long-range navigation over large-scale maps is usually sparse, agents are only suitable for short-range navigation due to local minima. 
Thus, a variety of research works combine DRL-based local planning with traditional methods such as RRT [20] or A-Star [21]. Faust et al. utilized DRL to assist an PRM-based global planner [8]. Similarily, Chiang et al. utilized DRL in combination with the RRT global planner. Other works utilize waypoints, as an interface for communication between global and local planner. These are points sampled from the global path to be given as input to the DRL agent in order to shorten its planning horizon. Gundelring et al. [22] integrated a DRL-based local planner with a conventional global planner employing a simple subsampling of the global path given a static lookahead distance to create waypoints for the DRL-local planner. Similarly, Regler et al. [23] propose a hand-designed sub-sampling to deploy a DRL-based local planner with conventional navigation stacks. A limitation of these works is that the simple sub-sampling of the global path is inflexible and could lead to hindrance in complex situations, e.g. when multiple humans are blocking the way. Hence, other works employed a more intelligent way to generate waypoints. Brito et al. [10] proposed a DRL-based waypoint generation where the agent is trained to learn a cost-to-go model to directly generate subgoals, which an MPC planner follows. The better estimated cost-to-go value enables MPC to solve a long-term optimal trajectory. Similarly, Bansal et al. [24] proposed a method called LB-WayPtNav, in which a supervised learning-based perception module is used to process RGB image data and output a waypoint. With the waypoint and robot current state, a spline-based smooth trajectory is generated and tracked by a traditional model-based, linear feedback controller to navigate to the waypoint. However, the proposed supervised training approach, requires a tedious data acquisition stage to provide annotated training data. In our previous work, we proposed various waypoint generation approaches that are more flexible [11, 5, 25] and could show improved navigation performance in long-range navigation within crowded environments. Of note, all previously mentioned works train the DRL agent as a separate entity and later incorporated it into the navigation stack, which could result in a number of issues such as synchronization problems and inefficient navigation behavior. The DRL agent is almost always trained for short-range obstacle avoidance and produce failures over long-ranges. Furthermore, navigation performance of the DRL agent is also heavily dependent on the efficiency of the global planner or the waypoint generator. On that account, this work incorporates all entities of the navigation stack into the training pipeline. More specifically, the whole navigation stack consisting of the global planner, the waypoint generator, and the DRL agent is deployed in the training pipeline. The DRL agent should still be responsible for local obstacle avoidance but receive high level input of the other two entities as input to improve its understanding of their decisions. Thus, a better synchronization and inter-operation between the three entities should be attained. ## III Methodology In this chapter, we present the methodology of our proposed framework. In total six agents with different inputs are trained. ### _System Design and training procedure_ Fig. 2 illustrates the system design of our approach. 
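In code form, the holistic structure of Fig. 2 amounts to a training loop in which the global planner and the waypoint generator are queried inside every episode rather than only at deployment time. The following Python skeleton is a schematic sketch of that loop; all component names and interfaces are placeholders for illustration, not the arena-rosnav implementation.

```python
import numpy as np

class GlobalPlanner:
    """Placeholder for a classic global planner (e.g., A* or RRT)."""
    def plan(self, start, goal, static_map):
        # Straight-line dummy plan; a real planner would search static_map.
        return np.linspace(start, goal, num=50)

class WaypointGenerator:
    """Placeholder for the waypoint generator selecting a subgoal on the plan."""
    def subgoal(self, global_plan, robot_pose, lookahead=5):
        dists = np.linalg.norm(global_plan - robot_pose, axis=1)
        idx = min(int(np.argmin(dists)) + lookahead, len(global_plan) - 1)
        return global_plan[idx]

def train_episode(env, agent, planner, wp_gen):
    """One episode of the holistic training loop: the DRL agent observes the
    scan, the goal, the subgoal and the global plan, and is rewarded both for
    avoiding obstacles and for following the plan."""
    obs = env.reset()
    plan = planner.plan(obs["pose"], obs["goal"], obs["static_map"])
    done, episode_return = False, 0.0
    while not done:
        subgoal = wp_gen.subgoal(plan, obs["pose"])
        agent_input = np.concatenate([obs["scan"], obs["goal_polar"],
                                      subgoal, plan.ravel()])
        action = agent.act(agent_input)          # (linear, angular) velocity
        obs, reward, done, _ = env.step(action)  # reward includes plan terms
        agent.store(agent_input, action, reward, done)
        episode_return += reward
    agent.update()                               # e.g., a policy-gradient update
    return episode_return
```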
The DRL agents are trained within our 2D simulation environment arena-rosnav [26] and trained using the staged training curriculum, that is whenever the agent reaches a succes rate of 80 percent the next stage with increasing difficulty will be loaded. The stages contain dynamic and static obstacles spawned randomly. The stages are illustrated in Fig. 3. Stage 1 is an outdoor map of size 100x100 pixels without any obstacles. Stage two is a mixed map of size 150x150 cells with static obstacles, which the agent knows. Stage 3 is an outdoor map of size 200x200 cells with known and additionally unknown static obstacles. Stage 4 is an indoor map of size 200x200 cells with known and unknown static obstacles. Stage 5 is an outdoor map of size 200x200 cells with known and unknown static obstacles and additionally unknown dynamic obstacles. Stage 6 is an indoor map of size 200x200 cells with known static obstacles and unknown static and dynamic obstacles. Stage 7 is almost the same as Stage 5 but with more unknown static and dynamic obstacles. The observations are processed by the DRL agent, which produces an action in the environment. Compared to our previous work [5] training the DRL agent is not separated from the navigation system. Rather the full navigation stack is included inside the training loop. Although this might increase the overhead and extend the training due to more complex input, the agent should learn to synchronize better with the global planner and waypoint generator. While it is common to train only the local planner for local obstacle avoidance with DRL and integrate it as part of the full navigation stack, the proposed system already involves the global and waypoint generator and uses its input while in the training stage. Thereby, 6 different inputs where developed to test the extend to which these input will influence the behavior of the agent. The input of the different agents are listed in Tab. [2]. ### _Agent and Neural Network Design_ In total, we propose six different agents, each with different observation spaces to study the effect of different inputs. The agent's input is listed in Tab. I. The internal architecture and output layer are equal for all of them. They differ only in the input layer. All agents get as primary input the Lidar scan data and the global goal, represented as two values, the linear and angular distance from the odometry to the goal. All points are represented the same way as the global goal. Likewise, the optional subgoal is represented as a single point, whereas the global plan is represented in 2 different ways: as a representation of waypoints. From the global plan, every 5th point is extracted as a waypoint until 50 points are extracted in total. If the plan is not long enough to extract 50 points, the last extracted waypoint is used to fill up the waypoints list. Way 2 simplifies of the whole plan as a summed-up length of the plan. The internal architecture is illustrated in Figure 2. For the body network, CNNs are used while the actor-critic network is designed using LSTM cells as [2] showed the benefits of using memory-aided networks for navigation. 
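The observation layouts of Table I can be made concrete with a small helper that assembles the input vector of, for example, Agent 1 (360 laser beams, the goal, the subgoal, and 50 waypoints encoded as distance/angle pairs). The sketch below follows the description in the text (every 5th point of the global plan, padded with the last waypoint); the variable names and the dummy data are illustrative assumptions, not taken from the released code.

```python
import numpy as np

def to_polar(point, robot_pose):
    """Encode a 2D point as (linear distance, angular distance) from the robot."""
    dx, dy = point[0] - robot_pose[0], point[1] - robot_pose[1]
    rho = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx) - robot_pose[2]     # robot_pose = (x, y, yaw)
    return np.array([rho, np.arctan2(np.sin(phi), np.cos(phi))])

def plan_waypoints(global_plan, robot_pose, step=5, n_wp=50):
    """Every 5th plan point, padded with the last waypoint up to 50 entries."""
    wps = [to_polar(p, robot_pose) for p in global_plan[::step][:n_wp]]
    while len(wps) < n_wp:
        wps.append(wps[-1])
    return np.concatenate(wps)                   # 100 values

def agent1_observation(scan, goal, subgoal, global_plan, robot_pose):
    """Indices 0-359: scan, 360-361: goal, 362-363: subgoal, 364-463: waypoints."""
    return np.concatenate([
        scan,                                     # 360 laser ranges
        to_polar(goal, robot_pose),               # 2
        to_polar(subgoal, robot_pose),            # 2
        plan_waypoints(global_plan, robot_pose),  # 100
    ])

# Example with dummy data: the observation has 464 entries, as in Table I.
rng = np.random.default_rng(3)
obs = agent1_observation(
    scan=rng.uniform(0.1, 3.5, size=360),
    goal=np.array([8.0, 2.0]),
    subgoal=np.array([3.0, 1.0]),
    global_plan=rng.uniform(0, 8, size=(120, 2)),
    robot_pose=np.array([0.0, 0.0, 0.0]),
)
assert obs.shape == (464,)
```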
The agent might be able to recognize movement directions \begin{table} \begin{tabular}{l c c c c c} \hline \hline Agent & Scan & Global Goal & Subgoal & Waypoints & Length \\ \hline Agent 1 & 0 - 359 & 360, 361 & 362, 363 & 364 - 463 & – \\ Agent 2 & 0 - 359 & 360, 361 & – & – & – \\ Agent 3 & 0 - 359 & 360, 361 & 362, 363 & – & 364 \\ Agent 4 & 0 - 359 & 360, 361 & 362, 363 & – & – \\ Agent 5 & 0 - 359 & 360, 361 & – & – & 362 \\ Agent 6 & 0 - 359 & 360, 361 & – & 362 - 461 & – \\ \hline \hline \end{tabular} \end{table} TABLE I: Input of the different agents Fig. 2: System Design and Training Pipeline. The input is exemplary for Agent 4. The specific parameters and tensor sizes for each of the agents are specified in Table I and memorize older scan data for a better exploration of the area. The output layer consists of 2 scalar values. Both are used to create a _Twist_ message for a 2D space. It consists of a linear velocity and an angular velocity. ### _Reward Functions_ Since sparse rewards do not lead to fast convergence of the agent, we design our reward function to be dense and return a reward after each transition. Negative rewards are only given for collisions or if the agent gets too close to a static or dynamic obstacle. Positive rewards are given when the agent moves toward or reaches the target with a reasonable number of steps: the fewer steps required, the higher the reward. Equation 1 states the reward system of our agents. The reward function is the sum of all sub rewards \[r^{\prime}=r^{\prime}_{gr}+r^{\prime}_{c}+r^{\prime}_{ga}+r^{\prime}_{sd}+r^{ \prime}_{fgp}+r^{\prime}_{dgp}+r^{\prime}_{tc}+r^{\prime}_{adc} \tag{1}\] Where \(r^{\prime}_{s}\) is the success reward for reaching the goal, \(r^{\prime}_{c}\) is the punishment for a collision and both lead to episode ends. \(r^{\prime}_{d}\) describes the reward for approaching the goal. Additionally, we introduce two safety rewards \(r^{\prime}_{ss}\) to help avoid static obstacles and \(r^{\prime}_{sd}\) is meant for dynamic obstacles. \[r^{\prime}_{gr}=\left\{\begin{array}{ll}45&\text{, if goal reached}\\ 0&\text{, otherwise}\end{array}\right. \tag{2}\] \[r^{\prime}_{c}=\left\{\begin{array}{ll}-50&\text{, if collided}\\ 0&\text{, otherwise}\end{array}\right. \tag{3}\] \[r^{\prime}_{ga}=\left\{\begin{array}{ll}0.8*diff^{\prime}_{robot,goal}&\text {, if }diff^{\prime}_{robot,goal}>0\\ 0.6*diff^{\prime}_{robot,goal}&\text{, otherwise}\end{array}\right. \tag{4}\] \[r^{\prime}_{sd}=\left\{\begin{array}{ll}-1.25&\text{, if }\exists\in O:d(p^{ \prime}_{robot},p^{\prime}_{o}<D_{s})\\ 0&\text{, otherwise}\end{array}\right. \tag{5}\] Reaching the goal gives a vast positive reward for the agent. This is the overall purpose of the agent. The reward for achieving this is set to 45. A collision results in a negative reward of -50. Getting closer to the goal seems good behavior, even though it is not like that in every case, such dead ends. That is why the reward should not be too high. Another is to keep a certain safe distance to obstacles. The agent should avoid driving just one mm away from obstacles. The calculation depends on \(D_{s}\), the safe distance the agent should keep. It is set to 0.345m. The distance is calculated based on the center of the agent. As the agent has a radius of 0.3m, the safe distance between the agent surface to the obstacle surface is 4.5cm. A negative reward is given as soon as the agent is closer to an obstacle than the safe distance. 
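As a compact sketch, the reward terms (2)-(5) introduced so far can be collected into a single function; the global-plan-related terms (6)-(8) are added in the next paragraph. The distance helpers and the obstacle list are assumed inputs, and the function signature is illustrative rather than the implementation used for training.

```python
import numpy as np

GOAL_REWARD, COLLISION_REWARD = 45.0, -50.0
SAFE_DIST = 0.345          # D_s, measured from the robot center (radius 0.3 m)

def base_reward(d_goal_prev, d_goal, obstacle_dists, goal_reached, collided):
    """Sum of r_gr (2), r_c (3), r_ga (4) and r_sd (5) for one transition.

    d_goal_prev / d_goal : robot-goal distance before and after the step
    obstacle_dists       : distances from the robot center to all obstacles
    """
    r_gr = GOAL_REWARD if goal_reached else 0.0
    r_c = COLLISION_REWARD if collided else 0.0

    # (4): approaching the goal is rewarded more strongly than moving away.
    diff = d_goal_prev - d_goal
    r_ga = 0.8 * diff if diff > 0 else 0.6 * diff

    # (5): penalty if any obstacle is closer than the safe distance D_s.
    r_sd = -1.25 if np.any(np.asarray(obstacle_dists) < SAFE_DIST) else 0.0

    return r_gr + r_c + r_ga + r_sd
```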
Furthermore the rewards incorporating information about the global planner are defined as following: \[r^{\prime}_{fgp}=\left\{\begin{array}{ll}0.1*vel^{\prime}_{linear}&\text{, if }\min_{wp\in G}d(p^{\prime}_{wp},p^{\prime}_{robot})<0.5m\\ 0&\text{, otherwise}\end{array}\right. \tag{6}\] \[r^{\prime}_{dgp}=\left\{\begin{array}{ll}0.2*diff^{\prime}_{robot,wp}&\text {, if }\frac{\min_{wp\in G}d(p^{\prime}_{wp},p^{\prime}_{robot})}{diff_{robot,wp}}>0 \\ 0&\text{, otherwise}\end{array}\right. \tag{7}\] \[r^{\prime}_{adc}=-\frac{\left|vel^{\prime-1}_{angular}-vel^{\prime}_{angular} \right|^{4}}{1000} \tag{8}\] with \(diff^{\prime}_{x,y}=d(p^{\prime-1}_{x},p^{\prime-1}_{y})-d(p^{\prime}_{x},p^{ \prime}_{y})\). The goal following reward \(r^{\prime}_{fgp}\) is weighted based on the linear velocity and is given if the agent is closer than 0.5m to the next point in the global plan. The distance to goal reward is another component of rewarding the agent for following the global plan is to reduce the distance to the closest point in the global plan. Furthermore, agent navigation aims to drive smooth paths. That is why abrupt changes in the angular velocity are penalized with the reward \(r^{\prime}_{adc}\). ### _Training Hardware Setup_ Every agent is trained separately on one of two different systems. Table II shows the hardware specifications of the systems. A docker image was created to perform the training on the systems. ## IV Evaluation In this chapter, we present the evaluation of our agents. The experiments are split into two categories. In the first part, we assess the training performance of all agents to assess the overhead and complexity of the training compared to a baseline agent with no additional input about the global planner and waypoint generator (Agent 2). In the second part, we compare our agents against baseline navigation \begin{table} \begin{tabular}{l l l} \hline \hline **Component** & **System 1** & **System 2** \\ \hline CPU & Ryzen Threadripper 1950X & Ryzen R7 2700X \\ GPU & 2x NVIDIA RTX 2080TI & 1x NVIDIA RTX 2080TI \\ RAM & 64GB & 40GB \\ \hline \hline \end{tabular} \end{table} TABLE II: Training System Specifications This figure displays the specifications of the used computer systems for the training. Fig. 3: Stage one to seven. approaches in terms of navigational metrics such as path length time to reach the goal etc. Therefore, we compared our agents against the classic local path planners Timed Elastic Bands (TEB) [27], Dynamic Window Approach (DWA) [28], and Model Predictive Control (MPC) [29] as well as our proposed All-in-One Planner, which is able to switch between classic TEB and DRL planning [2]. ### _Training Performance_ In order to evaluate the training process, the success rate of successfully completed tasks without collisions is investigated. The success rates are illustrated in Fig. 4. Stage transitions are also indicated within that figure. Generally, the time points for reaching certain stages differ significantly among the agents. Not all agents reached the last stage 7. Only Agent 3 was capable of reaching that stage, rather late in the training. Agent 5 was able to reach stage 6 and all other agents only reached stage 5. Stage 5 was the first stage containing moving obstacles. Agent 1 and Agent 4 reached stage 5 earliest around training step 7M, and Agent 2 was latest around step 15.5M. The additional input seems to increase the learning speed in the lower stages. 
Namely, the agents with additional inputs can reach stage 5 much earlier than the baseline Agent 2 without additional input. Surprisingly, agents with similar input like Agent 1 and Agent 6 differ more than expected. As expected, the success rates of all agents include drops after reaching a new stage, indicating greater difficulty. Furthermore, the rate fluctuates for all agents, with some outlier drops in both directions. Although the agents have many similarities, some minor differences are observable. For example, Agent 2 seems to have a higher fluctuation than the other agents. Furthermore, its rate seems to stay at a level of around 60% from step 25M onwards. In contrast, Agent 1 still has a slight improvement trend at the end of the training. Furthermore, its rate lies around 70% in the last 5M steps. Agent 3 is the only agent that reached stage 7. However, some outlier runs might have caused those stage transitions. In summary, Agent 1 has the highest level of success rate and reached stage 5 earliest and thus might be the best agent alongside Agent 3, which was able to reach the last stage 7. On the other hand, the baseline Agent 2 reached stage 5 the latest and has the lowest level of success rate. Furthermore, this agent does not seem to improve any further. These observations on the training metrics indicate a beneficial impact of additional input on training speed and also hint at a better performance. Fig. 4: Success rates of the agents over the course of training; stage transitions are indicated within the figure. ### _Navigational Performance_ After investigating the training metrics, the navigational performance of the agents is compared against baseline approaches. These include the model-based planners DWA [28], TEB [30], and MPC [31], as well as the AIO planner presented in our previous works, which is able to switch between TEB and DRL [2]. The evaluation is done by running 150 episodes for each agent in the same random scenarios and tasks. For each scenario, 150 episodes were performed. The comparison concentrates on success rates, path lengths, episode time, and collisions. For the qualitative evaluations of the navigational performance, we tested all approaches in three different scenarios: a) with 20 obstacles, b) with obstacle clusters, and c) with running obstacles with an obstacle velocity of up to 1 m/s. The scenarios have a fixed start and goal position, and the obstacles are moving according to the Pedsim social model [32]. The qualitative trajectories of Agents 3 and 5 are exemplarily illustrated in Fig. 6. The timesteps are sampled every 100 ms and visualized within the trajectory of all approaches. The trajectories of the obstacles are marked with the start and end time in seconds. The episode ended once a collision occurred. Four metrics are considered for the base comparison: the success rate of reaching the goal without a collision, the mean number of collisions, the mean path length, and the mean time. An episode is considered unsuccessful when the agent collides with an obstacle or a timeout happens. Figure 5 illustrates the results of all planners. 
It is observed that Agent 1 performed best according to the success rate of 97.3% and mean collisions of 0.02, whereas Agent 2 has the lowest success rate with 48.6%. Surprisingly, Agent 3 performed rather poorly, with a success rate of just 70.6% and mean collisions of 0.06. The other 4 agents performed similarly and can compete well with the AIO and TEB planner. All agents outperform the classic planners DWA and MPC except for the baseline Agent 2 and Agent 3. The path length of Agent 1 is slightly higher than that of Agents 2, 3, and 4, but in general, all path lengths are higher than those of the classic planners. For scenarios with 15 obstacles, the difference between the DRL-based agents and the classic baselines becomes even more noticeable. Whereas the success rates of the 6 agents did not change much, the rates of the baseline planners decreased more noticeably. The mean number of collisions for Agent 2 increased significantly from 0.94 to 5.18. The mean collisions of the other agents and planners have also increased, but not more than the drop in the success rate would imply. The path length, mean time, and speed did not change substantially. Fig. 5: Quantitative navigation results of all agents and baseline planners. Fig. 6: Exemplary trajectories of the agents and baseline planners in the test scenarios. In general, Agent 1 performed best among the 6 agents, and an increase in obstacles did not significantly impact its performance. The performance is similar to the AIO planner on maps with 5 obstacles and slightly better on maps with 15 obstacles. As expected, Agent 2 performed worst of the 6 agents. Surprisingly, Agent 3 performed rather poorly, although this agent was the only one which reached the last training stage. In particular, Agent 2 and Agent 3 have comparably high path lengths. As they have the most timeouts, those agents sometimes might wander around the map without finding the goal. In some cases, the path lengths for scenarios with 15 obstacles are lower than for scenarios with 5 obstacles. Fig. 6 exemplarily depicts the paths of Agent 1 compared to the baseline Agent 2. It is noticed that the agent with additional global information produces much more straightforward trajectories towards the goal, whereas the baseline agent with only Lidar information produces many roundabout paths. This indicates the better interoperation between global and local planner, which also results in smoother trajectories. The navigation behavior of our planners is demonstrated visually in the supplementary video. ## V Conclusion In this paper, we proposed a holistic DRL training pipeline incorporating all components and entities of the ROS navigation stack typically used in industrial ground vehicles such as AGVs to improve synchronization between its entities. 
Rather than considering each entity of the navigation stack - global planner, waypoint generator, and local planner - separately, the training involves all entities and provides the DRL agent with an enhanced understanding of them. Therefore, we integrated information about the global plan and the waypoint generator into the observation spaces of our trained agents. In total, we proposed six agents with different observation combinations to explore the effect of different inputs on the agents' training and navigational performance. The additional information about the other entities improved navigational performance, resulting in higher success rates, fewer collisions, and shorter path lengths compared to classic and learning-based baseline approaches. Future work aspires to explore the effect of additional input parameters, such as semantic information about pedestrians or vehicles, on the training and navigational performance. Furthermore, the approaches should be deployed on real robots.
2306.01707
Learning Multi-Step Reasoning by Solving Arithmetic Tasks
Mathematical reasoning is regarded as a necessary ability for Language Models (LMs). Recent works demonstrate large LMs' impressive performance in solving math problems. The success is attributed to their Chain-of-Thought (CoT) reasoning abilities, i.e., the ability to decompose complex questions into step-by-step reasoning chains, but such ability seems only to emerge from models with abundant parameters. This work investigates how to incorporate relatively small LMs with the capabilities of multi-step reasoning. We propose to inject such abilities by continually pre-training LMs on a synthetic dataset MsAT which is composed of Multi-step Arithmetic Tasks. Our experiments on four math word problem datasets show the effectiveness of the proposed method in enhancing LMs' math reasoning abilities.
Tianduo Wang, Wei Lu
2023-06-02T17:29:22Z
http://arxiv.org/abs/2306.01707v3
# Learning Multi-Step Reasoning by Solving Arithmetic Tasks ###### Abstract Mathematical reasoning is regarded as a necessary ability for Language Models (LMs). Recent works demonstrate large LMs' impressive performance in solving math problems. The success is attributed to their Chain-of-Thought (CoT) reasoning abilities, i.e., the ability to decompose complex questions into step-by-step reasoning chains, but such ability seems only to emerge from models with abundant parameters. This work investigates how to incorporate relatively small LMs with the capabilities of multi-step reasoning. We propose to inject such abilities by continually pre-training LMs on a synthetic dataset **M**s**AT which is composed of **M**ulti-**step** Arithmetic **T**asks. Our experiments on four math word problem datasets show the effectiveness of the proposed method in enhancing LMs' math reasoning abilities.1 Footnote 1: Our code and data are released at [https://github.com/TianduoWang/MsAT](https://github.com/TianduoWang/MsAT). ## 1 Introduction Making Language Models (LMs) perform mathematical reasoning is a valuable, yet challenging research objective Hendrycks et al. (2021); Cobbe et al. (2021). Recently, we have witnessed large-scale LMs' impressive performance on a series of reasoning tasks via _chain-of-thought_ prompting Wei et al. (2022). This method elicits large LM's ability to decompose a complex problem into several intermediate steps. However, it is believed that such ability only emerges from sufficiently large models (empirically more than 100B parameters) Wei et al. (2022). In this paper, we examine how to incorporate moderate-sized LMs, e.g., RoBERTa Liu et al. (2019), with such multi-step reasoning ability via continual pre-training to improve the performance on math problems. Correctly understanding numbers is a prerequisite of mathematical reasoning abilities. But Wallace et al. (2019) shows that medium-sized LMs have a deficiency in numerical comprehension. To overcome this issue, previous works inject numerical reasoning skills into LMs following two approaches. The first is masking numbers with special tokens, and generating symbolic expressions with a structured neural decoder Xie and Sun (2019); Jie et al. (2022). An example of such expression is provided in Figure 1. The second strategy continually pre-trains LMs on synthetic numerical tasks, which requires models to learn how to perform computation involving numbers Geva et al. (2020); Pi et al. (2022). However, both approaches suffer from critical limitations. For symbolic methods, they neglect the information carried by the numbers, which could provide crucial hints for solving math problems Wu et al. (2021); Liang et al. (2022). As for continual pre-training methods, LMs' arithmetic skills are not reliable. Previous works indicate that such skills are highly influenced by the training data Razeghi et al. (2022) and hard for extrapolation Wallace et al. (2019). Motivated by these shortcomings, we propose to first pre-train moderate-sized LMs on a synthetic dataset called MsAT (Multi-step Arithmetic Tasks) Figure 1: A math word problem example with different kinds of answers. In **Question**, <Num>, <Num>, and <Num> are special tokens used for masking numbers. before downstream task fine-tuning. To make sure LMs capture the information carried by the numbers, we keep the numbers in the questions instead of masking them during both pre-training and fine-tuning. 
Instead of making LMs conduct computation internally, MsAT encourages LMs to generate a series of intermediate steps leading to the answer. Experiments on four math word problem datasets with two backbone models demonstrate the effectiveness of our method in enhancing LMs' math reasoning performance. ## 2 Method Our method essentially appends a continual pre-training stage before fine-tuning LMs on downstream tasks. The continual pre-training serves two purposes: first, we tokenize numbers digit-by-digit to improve LMs' numerical comprehension; second, we make LMs learn multi-step reasoning skills from the proposed synthetic task. ### Digit tokenization for numbers Sub-word tokenization, e.g., byte pair encoding (BPE) (Sennrich et al., 2016), is one of the reasons why moderate-sized LMs poorly understand numbers (Wallace et al., 2019). BPE-based tokenizers split text based on the token frequency in the training corpus, which can be counter-intuitive when dealing with numbers. For example, the numbers "52" and "521" will be tokenized into ["52"] and ["5", "21"] respectively by the RoBERTaTokenizer2 of the Transformers library (Wolf et al., 2020). Such an inconsistent tokenization strategy for numbers undermines the LM's numerical understanding ability. Hence, we tokenize numbers digit-by-digit for both pre-training and fine-tuning. Footnote 2: [https://huggingface.co/docs/transformers/model_doc/roberta](https://huggingface.co/docs/transformers/model_doc/roberta) ### Multi-step Arithmetic Tasks (MsAT) The core of our method is the synthetic task MsAT where LMs can learn multi-step reasoning skills. Like MWP tasks, MsAT can be formulated as a Seq2Seq task: the input of an MsAT example describes an arithmetic question, while the output is a reasoning chain leading to the answer. Specifically, each input sequence is composed of three components: _question context_, _equation_, and _question variable_. The equation is a sequence of symbols and operators (\(+\), \(-\), \(\times\), \(\div\), \(=\)) that builds equality relationships between symbols. Given an equation, only one of the symbols is set as the question variable, while the other symbols are listed in the question context with their numerical values. The output sequence of MsAT is constructed in a code-style multi-step reasoning format. Each step consists of two sub-steps: _variable assignment_ and _calculation_. In variable assignment, numbers appearing in the input sequence are assigned to variable names that are exclusive to the decoder. In calculation, a new variable is generated from the calculation of the existing variables. This makes our outputs executable Python code, so that the numerical answer can be calculated by an external Python interpreter. Both inputs and outputs of MsAT are generated purely automatically. Details about the construction of MsAT are provided in Appendix A.1. ### Pre-training via adapter-tuning Directly training on synthetic data that are largely different from the natural language corpus harms LMs' language prowess (Geva et al., 2020). Therefore, we adopt a two-stage tuning strategy (Wang and Lu, 2022) to inject reasoning skills into LMs. Specifically, we perform adapter-tuning (Houlsby et al., 2019) on MsAT and then jointly fine-tune the adapter and LM backbone on downstream tasks. This mitigates catastrophic forgetting because the LM's original parameters are largely preserved during adapter-tuning (Houlsby et al., 2019). We consider two backbone models to verify the effectiveness of our method.
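Before turning to the backbone models, the following is a minimal sketch of what a single MsAT-style training pair could look like and how its code-style output can be checked with a Python interpreter. The exact construction is given in Appendix A.1; the question format and the symbol and variable names used here (A, B, C, D, N_0, V_0, Ans) are illustrative assumptions.

```python
# Illustrative sketch of one MsAT-style training pair; the real construction
# (Appendix A.1) varies the operators, number of steps, and surface form.
import random

def make_example(rng: random.Random):
    a, b, c = rng.randint(1, 9), rng.randint(1, 9), rng.randint(1, 9)
    # Input: question context (symbols with values), equation, question variable.
    context = f"A = {a} . B = {b} . C = {c}"
    equation = "D = A + B * C"          # D is chosen as the question variable
    src = f"{context} . {equation} . question : D"
    # Output: code-style steps -- variable assignment, then calculation.
    tgt = (f"N_0 = {a} ; N_1 = {b} ; N_2 = {c} ; "
           f"V_0 = N_1 * N_2 ; Ans = N_0 + V_0")
    return src, tgt

def execute(tgt: str) -> int:
    # The output is executable Python, so an external interpreter can
    # recover the numerical answer from the generated reasoning chain.
    env = {}
    for stmt in tgt.split(";"):
        exec(stmt.strip(), env)
    return env["Ans"]

src, tgt = make_example(random.Random(0))
print(src)           # the arithmetic question in text form
print(execute(tgt))  # the numerical answer, equal to a + b * c
```

In the actual dataset, the number of reasoning steps and the operators are varied automatically, which is what allows the difficulty of the task to be controlled (Section 4.3).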
In particular, we select a sequence-to-sequence (Seq2Seq) model (Lan et al., 2021) and a directed acyclic graph (DAG) structured model (Jie et al., 2022) that both adopt RoBERTa\({}_{\text{base}}\) to encode the input questions. More details of these models are provided in SS3.1. Figure 2 shows an overview of the proposed pre-training method. Figure 2: An illustration of the continual pre-training process on our Seq2Seq model. We attach adapter modules to each layer of LM encoder and fix LM’s parameters (shaded area) during pre-training. Tokens \(\mathsf{N_{n}}\), \(\mathsf{N_{i}}\), and \(\mathsf{Ans}\) in the output are the variable names only used by the decoder. Our DAG structured model is similarly pre-trained with the only difference on the decoder part. ## 3 Experiments Now we investigate whether our pre-training method facilitates models on Math Word Problem (MWP) solving tasks. All results are averaged over three different runs. ### Experimental setup Existing datasetsWe consider three commonly-used MWP datasets: MAWPS (Koncel-Kedziorski et al., 2016), ASDiv-A (Miao et al., 2020), and SVAMP (Patel et al., 2021). The statistics of these datasets is provided in Table 2. More details can be found in Appendix A.2. We report five-fold cross-validation results for both MAWPS and ASDiv-A and test set accuracy for SVAMP following previous practice (Lan et al., 2021; Jie et al., 2022). SVAMP (hard)We find more than 85% of the numbers in the above datasets are smaller than \(10^{2}\). To investigate the extrapolation performance of the models trained with MsAT, we create SVAMP (hard) from the original SVAMP dataset by replacing the numbers with much larger ones inspired by Gao et al. (2022). More details about SVAMP (hard) and number distribution of the existing datasets are provided in Appendix A.3. ModelsWe consider both sequence-to-sequence (Seq2Seq) models and directed acyclic graph (DAG) structured models as our backbone models. For Seq2Seq model, we choose RoBERTaGen(Lan et al., 2021), an encoder-decoder model with RoBERTabase as the encoder combined with a Transformer decoder. For DAG structured model, we choose DeductReasoner(Jie et al., 2022) that combines RoBERTabase with a DAG decoder. In their original implementation, both models replace numbers with symbolic mask tokens. Hence, we additionally consider a baseline for each backbone model that uses actual numbers with digit tokenization. We name the models that are based on these two backbone models and pre-trained with our method as MsAT-RoBERTaGen and MsAT-DeductReasoner respectively. We also compare our models to large LMs, e.g., PaLM (Chowdhery et al., 2022) and Codex (Chen et al., 2021), with chain-of-thought prompting (Wei et al., 2022). All models are evaluated via greedy decoding. More implementation details, e.g., training hyper-parameters, are provided in Appendix B. ### Main results Table 1 compares our models with backbone model baselines and large LMs. On all datasets, digit tokenization baselines consistently perform worse than their symbolic mask counterparts, indicating the deficiency of the numeracy comprehension of the original RoBERTa model. However, the models trained with MsAT surpass both baselines by a large margin, which demonstrates the effectiveness of our pre-training method. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & **MAWPS** & **ASDiv-A** & **SVAMP** & **SVAMP** & **SVAMP** (hard) \\ \cline{2-7} & Acc. & \(\Delta\) & Acc. & \(\Delta\) & Acc. 
& \(\Delta\) \\ \hline _Large language models_ & (pair 5480) & (code-direct-082) & (pair 5480) & & \\ w/ Chain-of-Thought prompting & 93.3 & 80.4 & **79.0** & - & \\ \hline _Seq2Seq models_ & & & & & \\ RoBERTaGen(Lan et al., 2021) & & & & & \\ w/ symbolic masks & 88.4 & 72.1 & 30.3 & & 30.3\({}^{\heartsuit}\) & \\ w/ digit tokenization & 84.1 & (-4.3) & 71.9 & (-0.2) & 27.6 & (-2.7) & 19.6 & (-10.7) \\ MsAT-RoBERTaGen (Ours) & **91.6** & (+3.2) & **81.8** & (+9.7) & **39.8** & (+9.5) & **36.2** & (+5.9) \\ \hline _DAG structured models_ & & & & & & \\ DebertaEsonget (Jie et al., 2022) & & 85.0 & 45.0 & & 45.0\({}^{\heartsuit}\) & \\ w/ symbolic masks & 92.0 & 84.1 & (-0.9) & 44.4 & (-0.6) & 42.8 & (-2.2) \\ w/ digit tokenization & 91.6 & (-0.4) & 84.1 & (-0.9) & 44.4 & (-0.6) & 42.8 & (-2.2) \\ MsAT-DeductReasoner(Ours) & **94.3** & (+2.3) & **87.5** & (+2.5) & **48.9** & (+3.9) & **48.2** & (+3.2) \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy (%) comparison between large language models (LLMs), backbone model baselines, and our method. \(\Delta\): performance gap compared with the symbolic mask baselines. \(\heartsuit\): For baselines with symbolic masks, performance on SVAMP (hard) is the same as SVAMP because the actual numbers are replaced by symbolic tokens. The results of LLMs with chain-of-thought prompting are from Wei et al. (2022). \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**\# Data**} & **Avg. input length** & **Avg. output reasoning steps** \\ \hline MAWPS & 1,987 & 30.3 & 1.4 \\ ASDiv-A & 1,217 & 32.3 & 1.2 \\ SVAMP & 1,000 & 34.7 & 1.2 \\ \hline \hline \end{tabular} \end{table} Table 2: Existing dataset statistics. SVAMP (hard)We can observe that, on SVAMP (hard), the accuracies of digital tokenization baselines decrease dramatically (10.7 points drop for RoBERTaGen and 2.2 points drop for DeductReasoner) compared with baselines with symbolic masks, while the models trained with MsAT still outperforms symbolic mask baselines by 5.9 and 3.2 points respectively. This shows that not only does our models obtain better results than the baselines on the existing tasks, but it is also more robust in handling out-of-distribution numbers. Compare with large language modelsWe also observe that, on relatively simple tasks, i.e., MAWPS and ASDiv-A, RoBERTa-based models can outperform large LMs. But for the more challenging task SVAMP, there is still a large performance gap. We believe this is because SVAMP requires models to have a better understanding of natural languages. Jie et al. (2022) also reports that varying LM encoders results in significant performance disparities on SVAMP, indicating that SVAMP performance is closely tied to model's natural language capabilities. ## 4 Pre-training analysis In this section, we provide a careful analysis of our pre-training method from various perspectives to understand why it works. ### Pre-training task performance We visualize how the performance of pre-training task MsAT and one of the MWP tasks SVAMP changes with pre-training steps in Figure 3. It can be observed that the performance on both synthetic and natural language tasks tends to improve gradually as the number of pre-training steps increases. Figure 3 demonstrates that LMs are capable of learning multi-step reasoning gradually from the synthetic task MsAT. 
The acquired multi-step reasoning ability can subsequently be transferred to the downstream MWP solving tasks, enhancing performance during the fine-tuning phase. ### Reasoning format of MsAT The reasoning format of MsAT dictates the specific reasoning skills that LMs will acquire during pre-training. We demonstrate the superiority of our code-style multi-step reasoning format by comparing it with two different reasoning expressions. Effect of producing intermediate stepsWhile it is a common practice to train LMs towards directly producing the numerical answers of the arithmetic questions (Geva et al., 2020; Pi et al., 2022), a recent work shows that LMs' arithmetic skills are not reliable (Razeghi et al., 2022). To explore whether LMs can learn reasoning skills from MsAT without intermediate steps, we pre-train LMs on a variant of MsAT by replacing step-by-step output sequences with only numerical answers. Figure 4 compares this model (answer only) with our model (code-style). Its poor performance on both MsAT and SVAMP confirms the necessity of producing intermediate reasoning steps during pre-training. Structured code-style expressionWe next investigate the importance of applying the structured code-style reasoning expressions by comparing it with the less formatted math expressions. We argue that, compared with math expressions that only contain numbers and operators, our code-style expressions are more suitable for multi-step reasoning due to the structure information in the output sequences. Our experiments in Figure 4 demonstrate the superiority of the code-style output expressions. We can see that models with math expressions perform consistently worse than models with code-style multi-step reasoning format on both pre-training task MsAT and MWP solving task SVAMP. Figure 4: Comparison between different output expression formats. Results are obtained from our Seq2Seq model (with code-style expressions) and its variants. Figure 3: Performance on MsAT and SVAMP with respect to the pre-training steps. Results are obtained from 3 different runs. ### Difficulty level of MsAT Leveraging synthetic data for pre-training provides the advantage of enabling highly customizable difficulty levels for the training data. Here we define the difficulty level of a reasoning task as the averaged reasoning steps that are required to solve the problems. From Figure 5, we see that pre-training LMs on MsATs that are harder than downstream tasks generally leads to better results. It's important to note that, broadly speaking, the difficulty level of a reasoning task, particularly those involving natural language, is not solely determined by the number of reasoning steps. One example is that, though both ASDiv-A and SVAMP have an averaged reasoning steps of 1.2 (see Table 2), SVAMP is considered more difficult as it requires high-level natural language understanding Patel et al. (2021). ### Perform adapter-tuning on MsAT Tuning all parameters of LM encoders on synthetic data that are largely different from the pre-training corpus may lead to catastrophic forgetting Geva et al. (2020). To explore the importance of performing adapter-tuning on MsAT, we create a variant of our method in which we perform full fine-tuning on MsAT. We compare this variant with our models in Figure 6. 
It can be observed that both full fine-tuning and adapter-tuning can achieve good performance on MsAT, but adapter-tuning outperforms fine-tuning on all downstream MWP datasets, which demonstrates the benefits of performing adapter-tuning on MsAT. ## 5 Related Work In this work, we focus on improving moderate-sized LM's MWP performance by injecting multi-step reasoning ability. Hence, our work closely relates to both reasoning ability injection Geva et al. (2020); Pi et al. (2022) and MWP solving Xie and Sun (2019); Patel et al. (2021); Jie et al. (2022). **Reasoning skills injection.** This technique refers to continually pre-training LMs on certain intentionally-crafted tasks to enhance their reasoning abilities. GenBERT Geva et al. (2020) pre-trains LMs on template-based synthetic data to inject numerical skills into the LMs. PoET Pi et al. (2022) improves LMs' reasoning ability by pre-training them on tabular data towards imitating program executors. Both methods involve training LMs to produce numerical answers directly, which can be unreliable Razeghi et al. (2022). Our work focuses on injecting into LMs the capability for solving complex arithmetic problems step-by-step. **Solving MWP with specialized architectures.** One of the research lines of MWP solving focuses on designing specialized architectures for math reasoning Xie and Sun (2019); Lan et al. (2021); Jie et al. (2022). For example, Lan et al. (2021) combines RoBERTa Liu et al. (2019) with a Transformer Vaswani et al. (2017) decoder, and Jie et al. (2022) augments encoder-only LMs with a directed acyclic graph decoder. One of the shortcomings of such models is the information loss caused by masking the actual numbers in the questions with symbolic tokens Wu et al. (2021). In this work, we propose to represent actual numbers with digit tokenization, and improve models' multi-step reasoning ability by pre-training them on a synthetic task MsAT. ## 6 Conclusion We propose a novel synthetic pre-training task, MsAT, to incorporate LMs with multi-step reasoning skills that improve performance on MWP tasks. This pre-training task encourages LMs to generate intermediate reasoning steps instead of predicting final numerical answers directly. Our experiments show that the proposed method is effective in improving the moderate-sized LM's performance on MWP solving tasks. Figure 5: Performance on MAWPS and ASDiv-A with respect to pre-training difficulty. The difficulty levels of two MWP tasks are also added for reference. Figure 6: MsAT and downstream task performance comparison between full fine-tuning and adapter-tuning during pre-training. ## Limitations **Limited number of operators considered.** Following previous methods Lan et al. (2021), we only consider binary operators (\(+\), \(-\), \(\times\), and \(\div\)). As we adopt a code-style output format, it is possible to introduce other non-binary operators supported by the Python interpreter, e.g., sum() and max(). However, obtaining labeled data with such operators may require laborious efforts. We believe it is an interesting research question to explore how to teach models to solve practical questions, e.g., math word problems, by writing code in a low-resource setting Jie and Lu (2023). **Limited performance due to greedy decoding.** All the results we report in this work are produced via greedy decoding. A recent work Wang et al. (2023) reports that making large LMs generate multiple answers and selecting the answer with the most votes can boost performance by a large margin.
However, performing beam search for symbolic neural reasoners, e.g., DeductReasoner, can be challenging in that searching space increases exponentially with the number of variables in the question Jie et al. (2022). Designing effective beam search strategies for symbolic neural reasoners is a promising direction. ## Acknowledgements We would like to thank the anonymous reviewers, our meta-reviewer, and senior area chairs for their insightful comments and support with this work. We would also like to thank members of our StatNLP research group for helpful discussions. This research/project is supported by the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Program (AISG Award No: AISG2-RP-2020-016), and Ministry of Education, Singapore, under its Academic Research Fund (AcRF) Tier 2 Programme (MOE AcRF Tier 2 Award No: MOE-T2EP20122-0011)
2307.09072
Real-time Inference and Extrapolation via a Diffusion-inspired Temporal Transformer Operator (DiTTO)
Extrapolation remains a grand challenge in deep neural networks across all application domains. We propose an operator learning method to solve time-dependent partial differential equations (PDEs) continuously and with extrapolation in time without any temporal discretization. The proposed method, named Diffusion-inspired Temporal Transformer Operator (DiTTO), is inspired by latent diffusion models and their conditioning mechanism, which we use to incorporate the temporal evolution of the PDE, in combination with elements from the transformer architecture to improve its capabilities. Upon training, DiTTO can make inferences in real-time. We demonstrate its extrapolation capability on a climate problem by estimating the temperature around the globe for several years, and also in modeling hypersonic flows around a double-cone. We propose different training strategies involving temporal-bundling and sub-sampling and demonstrate performance improvements for several benchmarks, performing extrapolation for long time intervals as well as zero-shot super-resolution in time.
Oded Ovadia, Vivek Oommen, Adar Kahana, Ahmad Peyvan, Eli Turkel, George Em Karniadakis
2023-07-18T08:45:54Z
http://arxiv.org/abs/2307.09072v2
# DiTTO: Diffusion-inspired Temporal Transformer Operator ###### Abstract Solving partial differential equations (PDEs) using a data-driven approach has become increasingly common. The recent development of the operator learning paradigm has enabled the solution of a broader range of PDE-related problems. We propose an operator learning method to solve time-dependent PDEs continuously in time without needing any temporal discretization. The proposed approach, named DiTTO, is inspired by latent diffusion models. While diffusion models are usually used in generative artificial intelligence tasks, their time-conditioning mechanism is extremely useful for PDEs. The diffusion-inspired framework is combined with elements from the Transformer architecture to improve its capabilities. We demonstrate the effectiveness of the new approach on a wide variety of PDEs in multiple dimensions, namely the 1-D Burgers' equation, 2-D Navier-Stokes equations, and the acoustic wave equation in 2-D and 3-D. DiTTO achieves state-of-the-art results in terms of accuracy for these problems. We also present a method to improve the performance of DiTTO by using fast sampling concepts from diffusion models. Finally, we show that DiTTO can accurately perform zero-shot super-resolution in time. Scientific machine learning Diffusion models Transformers Partial differential equations ## 1 Introduction The field of scientific machine learning (SciML) has been growing rapidly in recent years, successfully modeling [1, 2, 3] and discovering [4, 5] scientific problems and applications using machine learning (ML) methods. This is a result of the significant advances in the field of ML, with state-of-the-art methods being developed daily. Appropriately used, many tools designed for standard ML and data science problems can also perform well on SciML tasks. These ML methods come from various domains, such as natural language processing, image classification, time-series analysis, etc. This work aims to use elements from a recently proposed method called _diffusion models_ and adapt them for solving forward partial differential equations (PDEs). Solving PDEs is an essential topic for the scientific community. This centuries-old research involves formulating a problem from physical domains, biological research, chemical reactions, etc., and using numerical tools to evaluate or approximate the phenomena. Therefore, the applications of this research area can be found in many fields, such as acoustic wave propagation [6, 7], computational mechanics [8, 9], fluid dynamics [10, 11], seismic imaging [12], and so on. However, as the problems become more complex, the difficulty in solving them using classical numerical methods is greater. Consequently, SciML methods are often helpful in such scenarios. There are two main types of PDE-related problems: forward and inverse problems. Forward problems focus on solving or approximating the solution of a physical process from a certain point in time to a later point. Inverse problems discuss using measurements of the solution to recover information about the problem itself. While many SciML methods were shown to be valid for inverse problems [1; 3; 13], the focus of this work is forward problems. When using computer simulations, one usually cannot compute the exact solution, as the problem is continuous, but the simulation is discrete. Therefore, a simulation is created to find an approximate solution. 
In this case, there exists a trade-off between the computational demands and the accuracy of the method, which is directly influenced by the spatial and temporal discretization of the problem. Finding an accurate continuous numerical solution for a forward problem is challenging and is the main focus of this work. Solving PDE-related problems involves several challenges. Two such challenges are generalizations for different problem conditions and dependency on the physical domain's discretization. Tackling the first, we utilize tools from the growing field of operator learning, where we attempt to use learning techniques to map a function space to another one. Thus, we are able to learn a family of solutions of PDEs corresponding to a family of initial conditions. For the latter challenge, we develop a method that is meshfree in time. While it is dependent on the spatial discretization, the temporal aspect of the solution, which is a prominent challenge for solvers of dynamical systems, is continuous. The structure of this paper is as follows. In Section 2, we give an overview of key components of the proposed method, such as operator learning, diffusion models, and transformers. In Section 3, we describe the proposed method and its architecture in detail. In Section 4, we demonstrate the effectiveness of the proposed method on a number of different datasets. Finally, in Appendix A, we show additional results and experiments. ## 2 Background and related work ### Operator learning The standard use of ML models for scientific computations involves fitting a function to map numerical inputs to outputs. These inputs are ordinarily coordinates, materials, boundary conditions, etc., and the outputs are usually solutions of forward PDEs. An example is physics-informed neural networks (PINNs) [1], which use a deep neural network to solve PDEs by embedding elements from the PDE problem into the loss function. In this way, the network trains on the given data while using prior information about the problem it is solving. One major drawback is that for each problem, one needs to re-train the network, which is computationally expensive. This includes any changes to the parameters defining the problem, such as the inputs mentioned above. The growing field of operator learning seeks to overcome this problem. Instead of fitting a function, one attempts to fit a mapping between two families of functions. Mathematically, let us consider a generic family of \(d\)-dimensional time-dependent PDE problems of the form: \[\begin{cases}\mathcal{L}u(\textbf{x},t)=f(\textbf{x},t),&\textbf{x}\in D,t\in [0,t_{final}]\\ \mathcal{B}u(\textbf{x},t)=g(\textbf{x},t),&\textbf{x}\in\partial D,t\in[0,t_{ final}]\\ u(\textbf{x},0)=I(\textbf{x}),&\textbf{x}\in D\end{cases}, \tag{1}\] where the differential operator \(\mathcal{L}\) and forcing term \(f\) define the PDE, the boundary operator \(\mathcal{B}\) and boundary condition \(g\) define the solution on the boundary, \(t_{final}\) is the final physical time, \(I\) is the initial condition, and \(D\) is a Euclidean domain in \(\mathbb{R}^{d}\) with boundary \(\partial D\). We assume that the problem (1) is well-posed [14], so a unique solution exists. Let \(\mathcal{I}\) be a function space containing initial conditions of (1). Then there exists another space \(\mathcal{U}\) that contains their respective solutions. 
We can define an operator \(\mathcal{G}:\mathcal{I}\longrightarrow\mathcal{U}\) as follows: \[\mathcal{G}(I)(\textbf{x},t)=u(\textbf{x},t), \tag{2}\] where \(I\in\mathcal{I},\textbf{x}\in D,\) and \(t\in[0,t_{final}]\). So, each initial condition \(I\in\mathcal{I}\) is mapped into its corresponding solution \(u\in\mathcal{U}\). The goal is to approximate the operator \(\mathcal{G}\) using a neural network. The first SciML operator learning method, called DeepONet, was proposed by Lu et al. [2]. The main components of a DeepONet are two neural networks: the branch and the trunk. Each network can be a fully-connected neural network, convolutional, or any other architecture. Usually, the branch inputs are functions, and the trunk inputs are coordinates. The DeepONet learns projections from the functions to a vector space, so it can map input functions to output functions at specific points. Another operator learning approach is the Fourier neural operator (FNO) [3; 15]. FNOs, similarly to DeepONets, learn mappings between function spaces using projections. Specifically, FNOs utilize the Fourier transform. They are effective and easy to implement, gaining traction in the SciML community. FNOs are accurate, especially for smooth and periodic problems [16]. We note that while the Fourier kernel is continuous, it is necessary to use discrete versions for operator learning in practice. Consequently, FNOs can be computationally costly when working with high-dimensional problems requiring many Fourier modes. ### Transformers and attention First presented by Vaswani et al. [17], transformers have been widely used in the ML community. Transformers introduce a new type of mechanism called the _scaled dot-product attention_. The attention module attempts to gather context from the given input. It does so by operating on a discrete embedding of the data composed of discrete tokens. The original architecture was proposed for natural language processing purposes, where one can encode sentences using their enumerated locations in the vocabulary. Since then, their usage has been extended to many other domains, and they outperform many different deep learning architectures in a wide variety of tasks. These domains include time series analysis [18] and computer vision [19]. For example, Vision Transformers (ViT) [20] split images into small patches, tokenize them, and apply the attention mechanism. In addition, they are significantly lighter than other alternatives and can be easily parallelized. Transformers are becoming increasingly popular across the SciML community as well. Transformers have been used for operator learning in many different ways [21; 22; 23; 24]. These methods show much promise by using the attention mechanism to find connections between points in the physical domain to function values. Some methods emphasize the attention mechanism itself [25; 26] and adapt it to PDE-related problems. Others utilize existing transformer models to help the SciML community solve problems more easily [27]. In this work, we employ elements from the original Transformer architecture as part of the proposed neural network architecture. ### Diffusion models A diffusion model is a generative deep learning model that uses a Markov chain to produce samples that match a given dataset [28]. These models essentially aim to learn the underlying distribution of a given dataset. After learning this distribution, they are used to generate new samples of similar properties to those found in the training set. 
In [29], Ho et al. introduced a new type of diffusion model called denoising diffusion probabilistic models (DDPM). It consists of a forward diffusion process and an inverse one. In the forward case, Gaussian noise is incrementally added to the original sample for a given number of iterations. For a sufficiently large number of iterations, the noise completely destroys the original signal. Then, in the reverse diffusion process, the goal is to reconstruct the original signal by performing iterative denoising steps using a neural network. Diffusion models have been used for SciML purposes, especially for generative artificial intelligence purposes [30; 31; 32]. While we are not using their generative capabilities in this work, we briefly explain their standard training procedure. We present a mathematical formulation mostly based on the works of Ho et al. [29] and Nichol et al. [33]. Given a data distribution \(x_{0}\sim q(x_{0})\), we define a forward noising process \(q\) which produces steps \(x_{1},\ldots,x_{T}\) by adding Gaussian noise at time \(t\) with variance \(\beta_{t}\in(0,1)\) as follows: \[q(x_{1},\ldots,x_{T}|x_{0}):=\prod_{t=1}^{T}q(x_{t}|x_{t-1}), \tag{3}\] \[q(x_{t}|x_{t-1}):=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}\textbf {I}). \tag{4}\] Given a sufficiently large \(T\) and a well-behaved schedule \(\beta_{t}\), the latent \(x_{T}\) is nearly an isotropic Gaussian distribution. From (4), we see that \(x_{t}\) is drawn from a conditional Gaussian distribution with mean \(\mu_{t}=\sqrt{1-\beta_{t}}x_{t-1}\) and variance \(\sigma_{t}^{2}=\beta_{t}\). In practice, this is done by randomly sampling a noise level parameter \(\varepsilon\sim\mathcal{N}(\textbf{0},\textbf{I})\), and setting: \[x_{t}=\sqrt{1-\beta_{t}}x_{t-1}+\sqrt{\beta_{t}}\varepsilon. \tag{5}\] Thus, if we know the exact reverse distribution \(q(x_{t-1}|x_{t})\), we can sample \(x_{T}\sim\mathcal{N}(0,\mathbf{I})\) and run the process in reverse to get a sample from \(q(x_{0})\). However, since \(q(x_{t-1}|x_{t})\) depends on the entire data distribution, we approximate it using a neural network with hyperparameters \(\theta\) as follows: \[p_{\theta}(x_{t-1}|x_{t}):=\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\Sigma_{ \theta}(x_{t},t)). \tag{6}\] The neural network needs to learn the mean and variance to complete the backward diffusion process. Importantly for this case, using the formulation in (5), in each step, it is sufficient to know \(\beta_{t},x_{t}\), and \(\varepsilon\) to approximate \(x_{t-1}\). Then, the network is used autoregressively to reconstruct \(x_{0}\). Assuming we know the schedules \(\{\beta_{t}\}_{t=1}^{T}\) beforehand, we can view the neural network as the following mapping: \[(x_{t},\varepsilon)\longrightarrow x_{t-1}. \tag{7}\] In each step, the neural network performs a denoising operation, mapping \(x_{t}\) to a slightly less noisy signal \(x_{t-1}\). Including the noise level parameter \(\varepsilon\) is essential for the denoising operation. During training, various noise levels are sampled, and knowing the specific noise level that distinguishes between consecutive states \(x_{t}\) and \(x_{t-1}\), is crucial for effective denoising. Without this explicit knowledge of the noise level, the denoising process would become significantly more complicated, and the network may not converge. This means we have a conditional denoising operation, conditioned on \(\varepsilon\) (or equivalently on the timestep with \(\beta_{t}\)). 
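To make the conditioning mechanism that DiTTO later reuses concrete, the following is a minimal sketch of the forward noising step of Eq. (5) and of the conditional mapping of Eq. (7). The linear \(\beta_t\) schedule shown at the end is only a commonly used example and not a choice made in this work.

```python
# Schematic sketch of the DDPM forward process (Eq. (5)) and of the
# conditional denoising call (Eq. (7)); this is not a full DDPM implementation.
import torch

def forward_noise_step(x_prev: torch.Tensor, beta_t: float) -> torch.Tensor:
    # x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * eps,   eps ~ N(0, I)
    eps = torch.randn_like(x_prev)
    return (1.0 - beta_t) ** 0.5 * x_prev + beta_t ** 0.5 * eps

def forward_process(x0: torch.Tensor, betas: torch.Tensor):
    # Iteratively apply Eq. (5); returns x_0, x_1, ..., x_T.
    xs = [x0]
    for beta_t in betas.tolist():
        xs.append(forward_noise_step(xs[-1], beta_t))
    return xs

# A commonly used linear schedule (an assumption, not this paper's choice):
betas = torch.linspace(1e-4, 0.02, steps=1000)

# During training, the denoising network receives the noisy sample together
# with the noise level (equivalently, the timestep) as a condition,
#     x_{t-1} ~ network(x_t, noise_level),
# and it is exactly this conditioning mechanism that DiTTO repurposes,
# replacing the noise level with the physical time of the PDE solution.
```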
## 3 Methodology ### Diffusion-inspired operator learning We combine the formulations in Section 2.1 and Section 2.3 to define a new data-driven approach for operator learning. In this approach, the time evolution of the PDE solution is viewed as a forward process. Instead of incrementally adding noise to the inputs, we incrementally evolve the PDE solution over time. We replace the noise level parameter \(\varepsilon\) with the temporal variable \(t\). Then, we use the conditioning capabilities of diffusion models to learn the relations between the initial condition, the PDE solution, and the time domain. After training is complete, the model can interpolate between the initial and final time, creating a numerical solution that is continuous in time. Mathematically, given a PDE solution \(u\), we define a continuous forward process: \[\{x_{t}|\;t\in[0,t_{final}],\;x_{t}:=u(\textbf{x},t)\}, \tag{8}\] where \(x_{t}\) is the initial condition for \(t=0\), and for \(t=t_{final}\), \(x_{t}\) is the solution at the final time. To agree with the notations commonly used in the ML community, we use \(x\) to mark a sample or data point. To avoid confusion, the spatial element **x** is not necessarily related to the data point \(x\). Using this notation, the operator learning problem (2) becomes: \[x_{0}\longrightarrow x_{t},\;\forall t\in[0,t_{final}]. \tag{9}\] We further observe that while this formulation is inspired by diffusion models, it differs from the diffusion process described in Section 2.3 in many aspects. Most importantly, we do not view \(x_{0},x_{1},\ldots,x_{T}\) as an iterative process. Rather, the goal is to directly approximate \(u(\textbf{x},t)\) continuously without any prior information on the temporal evolution of the solution. Using similar notations to Section 2.3, the ultimate goal is to estimate \(p_{\theta}(x_{t}|x_{0})\) using a neural network, and not \(p_{\theta}(x_{t}|x_{t-1})\). We also note that this process is not necessarily a Markovian process. Moreover, we are not interested in a reverse process since we are solving a forward problem. The diffusion process discussed in Section 2.3 is discrete, while (8) is continuous. We discretize \(\{x_{t}\}\) by taking a partitioning \(\{t_{n}\}_{n=0}^{T}\) of \([0,t_{final}]\), where \(0=t_{0}<t_{1}<\ldots<t_{T-1}<t_{T}=t_{final}\). The discrete process is then defined as \(\{x_{n}\}_{n=0}^{T}\), where \(x_{n}:=u(\textbf{x},t_{n})\). Then, the discrete operator learning problem is given by: \[x_{0}\longrightarrow x_{n},\;n=1,\ldots,T. \tag{10}\] In PDE terms, given an initial condition \(x_{0}\), we approximate the analytic solution at a set of specific future time steps \(\{t_{n}\}_{n=1}^{T}\). In operator learning terms, we map a family of functions of the form \(x_{0}=u(\textbf{x},0)\) to another family of functions of the form \(u(\textbf{x},t)\). As outlined before, the role of the neural network in diffusion models is to perform conditional denoising in each step. We repurpose this exact network structure to solve a PDE-related problem. Since \(x_{0},x_{1},\ldots,x_{T}\) are directly taken from the analytical solution, we have no noise in this process. Therefore, there is no need for denoising. Thus, we replace the conditional denoising operation with a conditional temporal evolution. Mathematically, the mapping (10) becomes: \[(x_{0},t_{n})\longrightarrow x_{n}. 
\tag{11}\] So, each sample the neural network encounters during the training stage is composed of three elements: an initial condition \(x_{0}\), a desired time \(t_{n}\), and the analytic solution corresponding to \(x_{0}\) at time \(t_{n}\), i.e., \(x_{n}=u(\textbf{x},t_{n})\). At inference, only an initial condition and a desired time are given, and the network infers the solution at the desired time. We point out that even though the described process does not have any noise or other perturbations, from an application point of view, it is possible to include noise in various ways to create a more realistic scenario. To better understand the approach, we present the generic formulation without addressing noise. ### Training dataset To train a neural network using the formulation presented in Section 3.1, we require a large set of initial conditions (inputs) and corresponding solutions (outputs). Let \(\{I^{m}(\textbf{x})\}_{m=1}^{M}\) be a set of initial conditions with corresponding analytic solutions \(\{u^{m}(\textbf{x},t)\}_{m=1}^{M}\), where \(M\) is the desired number of training samples. Each sample consists of an initial condition and a PDE solution at the relevant timesteps. We note that in practice, \(\{u^{m}(\textbf{x},t)\}_{m=1}^{M}\) are accurate numerical approximations of the analytic solutions and not analytic solutions which are often unavailable. Furthermore, the solutions are discretized in space using a grid that partitions the domain \(D\). We emphasize that for all \(m=1,\ldots,M\) and \(t=0,\ldots,T\), \(u^{m}(\textbf{x},t)\) is a matrix, and its dimensions depend on the spatial discretization parameters, i.e., the number of nodes along each axis. We denote the forward process corresponding to the \(m\)-th initial condition and solution by \(\{x_{n}^{m}\}_{n=0}^{T}\), where \(x_{n}^{m}:=u^{m}(\textbf{x},t_{n})\). We define the following datasets: \[\textbf{X}=\{(x_{0}^{m},t_{n})|\ n=1,\ldots,T,\ \ \ m=1, \ldots,M\} \tag{12}\] \[\textbf{Y}=\{x_{n}^{m}|\ n=1,\ldots,T,\ \ \ m=1,\ldots,M\}\cdot\] So, each solution of the PDE is transformed into \(T\) pairs of samples that correspond to the mapping described in (11). ### DiTTO architecture Using the formulation in Section 3.1, we present DiTTO: a diffusion-inspired temporal transformer operator, and describe its architecture. The network receives two main inputs: the initial condition \(x_{0}=u(\textbf{x},0)\) and a time step \(t=t_{n}\). Recall, that \(x_{0}\) is a \(d\)-dimensional matrix, and \(t\) is a nonnegative scalar. The spatial and temporal inputs are treated differently throughout the forward pass of the network. For the temporal input \(t\), we use an embedding mechanism based on the original Transformer positional encoding [17]: \[\begin{split} PE_{(pos,2i)}&=\sin\Big{(}\frac{pos}{1 0000^{2i/d_{emb}}}\Big{)},\\ PE_{(pos,2i+1)}&=\cos\Big{(}\frac{pos}{10000^{2i/d_{emb }}}\Big{)},\end{split} \tag{13}\] where \(d_{emb}\) is the desired embedding dimension. Each scalar \(t\) is mapped into a vector of size \(d_{emb}\). Then this vector is passed through a simple feedforward neural network (FFN) with two linear layers and a GELU [34] activation function. For the spatial input \(x_{0}\), we begin by concatenating it with the spatial grid on which it was discretized. The result is then passed through a U-Net [35]. We use a similar U-Net variant common in many diffusion models, such as DDPM [29]. It follows the backbone of PixelCNN++ [36], a U-Net based on a Wide ResNet [37; 38]. 
It comprises three downsampling convolutional blocks, one block in the latent space, and three upsampling blocks. During downsampling, we double the number of filters in each block, starting from 16, until we reach 128 filters in the latent space. When upsampling, we halve the number of filters in each block until we reach 16 again. Each block contains four convolutional layers with a filter size of 3. Each layer is followed by a group normalization [39] and a SiLU activation [34; 40]. Skip connections and attention [41; 17] layers are added after the second and fourth layers. For the attention layers, we use 4 attention heads of size 32. The spatial and temporal inputs are connected in each block using a conditioning mechanism similar to Feature-wise Linear Modulation (FiLM) [42]. We begin by applying another FFN to the temporal embedding (13). The output of this FFN is two vectors, which we use to perform a scale and shift transformation. Specifically, the outputs of the first and third layers in each block are scaled by the first vector, and the second vector is then added to this result. This powerful conditioning mechanism enables the network to map the same initial condition into different values depending on the value of \(t\). Moreover, it allows the network to receive continuous values for the temporal input \(t\), making it entirely meshfree in time. The entire architecture is shown in Figure 1. This architecture is not limited to a specific dimension. The same mechanism can be implemented for \(d\)-dimensional problems, where \(d\in\{1,2,3\}\). The only major difference is the usage of \(d\)-dimensional convolutions for the relevant problem. The implementation relies on the following DDPM implementations: [https://github.com/lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch) and [https://github.com/lucidrains/video-diffusion-pytorch](https://github.com/lucidrains/video-diffusion-pytorch). We refer to the original implementations for more details regarding the architecture. ### Loss function Let \(\mathcal{O}_{\theta}\) be the neural network described in Section 3.3 with hyperparameters \(\theta\). The goal of \(\mathcal{O}_{\theta}\) is to learn the mapping described in (11), using the dataset (12). We split this dataset into training, validation, and testing sets. We split them in a way that makes sure that no initial conditions from the validation and testing sets appear in the training set. Diffusion models are often trained with a probabilistic loss function. However, since we learn a PDE operator, other loss functions commonly used for SciML applications are more fitting. Consequently, we train the network with a mean relative \(L^{2}\) loss: \[loss:=\frac{1}{MT}\sum_{m=1}^{M}\sum_{n=1}^{T}\frac{||\mathcal{O}_{\theta}(x_{0}^{m},t_{n})-x_{n}^{m}||_{2}}{\varepsilon+||x_{n}^{m}||_{2}}, \tag{14}\] where \(\varepsilon\) is a small number used to prevent a zero denominator and stabilize the loss. Figure 1: DiTTO architecture. The inputs are the discretized initial condition \(u(\textbf{x},0)\), its corresponding spatial grid, and the desired time \(t\in\mathbb{R}^{+}\). The initial condition and the grid are concatenated and then inserted into a U-Net with convolutional blocks. The time step \(t\) first passes through an embedding layer. Then, the resulting embedding is used to condition the network in each block.
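To make the time conditioning and the training objective concrete, the following is a simplified one-dimensional sketch of a FiLM-style conditioned block built on the sinusoidal embedding of Eq. (13), together with the mean relative \(L^{2}\) loss of Eq. (14). The channel counts, layer structure, and module names are illustrative assumptions; the actual model follows the DDPM implementations referenced above.

```python
# Simplified 1-D sketch of DiTTO's time conditioning and training loss.
# Channel counts and layer structure are illustrative, not the actual model.
import math
import torch
import torch.nn as nn

def time_embedding(t: torch.Tensor, d_emb: int) -> torch.Tensor:
    # Sinusoidal embedding of a batch of (continuous) times t, as in Eq. (13).
    half = d_emb // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half) / half).to(t)
    args = t[:, None] * freqs[None, :]
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)

class ConditionedBlock(nn.Module):
    """One convolutional block with FiLM-style scale-and-shift conditioning."""
    def __init__(self, channels: int, d_emb: int):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.norm1 = nn.GroupNorm(4, channels)
        self.norm2 = nn.GroupNorm(4, channels)
        self.act = nn.SiLU()
        # FFN mapping the time embedding to a scale vector and a shift vector.
        self.to_scale_shift = nn.Sequential(
            nn.Linear(d_emb, 2 * channels), nn.SiLU(),
            nn.Linear(2 * channels, 2 * channels))

    def forward(self, h: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        scale, shift = self.to_scale_shift(t_emb).chunk(2, dim=-1)
        h = self.act(self.norm1(self.conv1(h)))
        h = h * (1 + scale[..., None]) + shift[..., None]   # condition on t
        return self.act(self.norm2(self.conv2(h)))

def relative_l2_loss(pred: torch.Tensor, target: torch.Tensor, eps=1e-8):
    # Mean relative L2 loss of Eq. (14); samples are flattened before the norm.
    diff = (pred - target).flatten(1).norm(dim=1)
    ref = target.flatten(1).norm(dim=1)
    return (diff / (eps + ref)).mean()
```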
The inputs and outputs of the model are \(d\)-dimensional, so they are converted into a one-dimensional array by column stacking (flattening) inside the loss function when needed. We describe the loss for the entire dataset for simplicity, but in practice, we divide it into batches. ### Faster sampling Iterating over the entire dataset (12) can be time-consuming. For \(M_{train}\) initial conditions in the training set, we have \(M_{train}\cdot T\) samples. So, the number of training steps scales linearly with \(T\). This means the number of training samples is very large for fine temporal discretizations. A similar problem occurs in generative diffusion models. The original DDPM [29] requires hundreds of forward passes to produce good results. Later works suggested ways to improve the performance aspect of DDPMs. For example, Song et al. [43] suggest using non-Markovian processes to improve the computational cost. Nichol et al. [33] present a way to significantly reduce the number of necessary steps by subsampling the original diffusion process. Both methods focus primarily on the inference speed. However, in the case of DiTTO, inference is immediate. In Section 3.1, we explained that we do not view \(x_{0},x_{1},\ldots,x_{T}\) as an iterative process. Instead, we treat each sample individually, significantly increasing the inference speed compared to generative models such as DDPM. Hence, we focus on speeding up the training process. We propose DiTTO-s, a faster variant of DiTTO that relies on a subsampling mechanism similar to [33]. Instead of iterating over the entire process, we iterate over a random subsequence. Recall that for the \(m\)-th initial condition in the training set, the full process is \(\{x_{n}^{m}\}_{n=1}^{T}\). Instead, we take a set of random subsequences \(S_{m}\subset\{0,1,\ldots,T\}\), such that \(\sum_{m=1}^{M}|S_{m}|=\alpha MT\) for some \(\alpha<1\). For example, choosing \(\alpha=0.05\) means we only use \(5\%\) of the given samples in each epoch. The new DiTTO-s loss is given by: \[loss_{\alpha}:=\frac{1}{\alpha MT}\sum_{m=1}^{M}\sum_{n\in S_{m}}\frac{|| \mathcal{O}_{\theta}(x_{0}^{m},t_{n})-x_{n}^{m}||_{2}}{\varepsilon+||x_{n}^{ m}||_{2}}, \tag{15}\] We note that after each epoch, we randomly sample \(S_{m}\) again using a uniform distribution. That way, given a sufficiently large number of epochs, we expect to cover a significant portion of samples in the dataset. ## 4 Results We test the proposed DiTTO method on a wide variety of time-dependent PDEs in multiple dimensions. Specifically, we use the following PDEs: 1D Burgers' equation, 2D incompressible Navier-Stokes, 2D acoustic wave equation, and 3D acoustic wave equation. We compare the DiTTO method and its variant DiTTO-s in Section 3.5 to two other methods. First, we compare it to the popular FNO method [3]. We also conduct a comparison to a standard U-Net model with attention. We use the same U-Net architecture as in Section 3.3, except we remove all temporal conditioning. We compute the relative \(L^{2}\) error for each method and compare it to the ground truth data. We generate datasets containing solutions of the PDEs. We randomly sample \(1,000\) initial conditions and then use numerical solvers to find their corresponding solutions. Details regarding the data generation process and the solvers used for each problem are presented in the following sections. All models were trained on these datasets. In each case, the spatiotemporal resolution of the numerical solution is kept fixed. 
The spatial grid size, denoted by \(N_{x},N_{y}\), and \(N_{z}\), is determined by the dimensionality of the problem. We use the same spatial grid for training and inference. For the number of timesteps \(N_{t}\) (following standard PDE notation), we use different resolutions for training and inference to test the temporal interpolation and super-resolution capabilities of DiTTO. We train all models with \(N_{t}^{train}=50\) time steps, and test on \(N_{t}^{test}\in\{10,20,50,100,200\}\). These choices of \(N_{t}^{test}\) allow us to examine the results in three different regimes. First, when \(N_{t}^{test}=50\), we test the model on the same temporal resolution it was trained on. We see how well the model handles coarser temporal grids for \(N_{t}^{test}\in\{10,20\}\). Finally, we consider a zero-shot super-resolution scenario in time for \(N_{t}^{test}\in\{100,200\}\), exploring the interpolation capabilities of the model on unseen temporal discretizations. Note that DiTTO and DiTTO-s are entirely meshfree in time due to the time conditioning mechanism. Consequently, they do not require any temporal discretization. However, the ground-truth reference solutions for the above PDEs are generated using standard time-marching numerical solvers with fixed timestep sizes. When training the FNOs, for comparison purposes, we encounter different behavior. In [3], Li et al. suggest two ways to add time dependency to FNOs. The first method takes a \(d\)-dimensional (in space) problem and uses a \(d\)-dimensional FNO in an auto-regressive manner to evolve the solution over time. This approach is not computationally viable due to the large number of timesteps required for training. The second approach transforms a \(d\)-dimensional problem in space to a \((d+1)\)-dimensional spatiotemporal problem and uses a \((d+1)\)-dimensional FNO to solve it. We use the latter approach due to computational and scalability aspects. We train all models using the relative \(L^{2}\) loss in Equation (14) for 500 epochs. We monitor the loss during training and save the models with the lowest validation loss. The optimizer we use is the Adam/AdamW optimizer [44; 45] with an initial learning rate \(10^{-3}\) and weight decay \(10^{-4}\). The learning rate is updated throughout the training process using cosine annealing [46]. For DiTTO-s, we use a subsampling rate of \(\alpha=0.1\) as described in Section 3.5. For more details regarding the effect of \(\alpha\), we refer to Appendix A.3. In all examples, we provide a quantitative performance comparison using the relative \(L^{2}\) error (shown in the tables) and a qualitative performance comparison for a randomly selected sample from the test set (shown in the figures). ### 1D Burgers' equation The one-dimensional Burgers' equation for a viscous fluid is given by: \[\begin{cases}\partial_{t}u(x,t)+\partial_{x}(u^{2}(x,t)/2)=\nu\partial_{xx}u( x,t),&x\in(0,1),t\in(0,t_{final}]\\ u(x,0)=u_{0},&x\in(0,1)\end{cases}, \tag{16}\] where \(\nu\in\mathbb{R}^{+}\) is the viscosity coefficient, and the PDE is subject to periodic boundary conditions. Note that this is a nonlinear equation that can develop shocks even for smooth initial conditions. The initial condition \(u_{0}(x)\) is sampled from a Gaussian random field according to the following distribution: \(\mathcal{N}(0,625(-\Delta+25I)^{-2})\), where \(\mathcal{N}\) is the normal distribution, and \(\Delta,I\) are the Laplacian and identity operators, respectively. 
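As an illustration of what such initial conditions look like, the sketch below draws a periodic one-dimensional Gaussian random field spectrally, using the fact that the Fourier modes diagonalize the periodic Laplacian. The datasets themselves are generated with the publicly available solver mentioned below, and the exact normalization convention of that solver is not reproduced here.

```python
# Illustrative sketch of sampling a periodic 1-D Gaussian random field with
# covariance 625 (-Laplacian + 25 I)^{-2} on [0, 1] in Fourier space; the
# overall scaling convention may differ from the reference data generator.
import numpy as np

def sample_initial_condition(n_points=128, seed=None):
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n_points, d=1.0 / n_points)       # integer wavenumbers
    lam = 625.0 / ((2.0 * np.pi * k) ** 2 + 25.0) ** 2    # spectral eigenvalues
    # Complex white noise shaped by the square root of the eigenvalues; the
    # inverse FFT sums over the Fourier modes, and taking the real part
    # yields a real-valued sample of the field on the grid.
    xi = rng.standard_normal(n_points) + 1j * rng.standard_normal(n_points)
    u0 = (np.fft.ifft(np.sqrt(lam) * xi) * n_points).real
    return u0

u0 = sample_initial_condition(n_points=128, seed=0)   # one initial condition
```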
We use the publicly available MATLAB [47] solver given in [3] to create three separate datasets with different parameters. The first dataset is created with \(\nu=0.01,t_{final}=1,\) and \(N_{x}=128\). This is a relatively simple scenario since these parameters produce smooth solutions without shocks. In the second dataset, we decrease the viscosity coefficient to \(\nu=0.001\), which increases the shock behavior of the Burgers' equation. For this reason, we also increase the spatial discretization to \(N_{x}=256\). Finally, the third dataset is the same as the second one, except we increase the final simulation time to \(t_{final}=2\), which causes the shocks to be more pronounced. The results for the three scenarios are shown in Tables 1 to 3. DiTTO and DiTTO-s achieve the lowest errors for the three datasets. When \(N_{t}^{test}=N_{t}^{train}=50\), all methods have similar errors, with DiTTO and DiTTO-s having a slight advantage. However, when \(N_{t}^{test}\neq N_{t}^{train}\), DiTTO and DiTTO-s significantly outperform the FNO and the U-Net. Moreover, we observe that the DiTTO and DiTTO-s results do not depend on the temporal discretization, as the errors stay roughly the same for all values of \(N_{t}^{test}\). We also note that DiTTO-s has a slightly lower error than the full DiTTO. This demonstrates that the subsampling mechanism does not only require fewer training steps but also improves the model. The reason for this is that the subsampling mechanism acts as a form of regularization that helps decrease the error. In Figure 2, we present a visual comparison of the various methods for the third dataset. We refer to Appendix A.1 for similar figures regarding the other cases. Figure 2: Results for the Burgers’ problem described in Section 4.1 with \(\nu=0.001,t_{final}=2,N_{x}=256,N_{t}^{train}=50,\) and \(N_{t}^{test}=200\), for a random initial condition from the test set. In Figures 1(a) to 1(d), we see a comparison between the models at different times. We plot the predictions of the models alongside the reference solution (ground truth). ### 2D Navier-Stokes equation The time-dependent two-dimensional Navier-Stokes equation for the viscous, incompressible fluid in vorticity form is given by: \[\begin{cases}\partial_{t}\omega(x,y,t)+u(x,y,t)\cdot\nabla\omega(x,y,t)=\nu \Delta\omega(x,y,t)+f(x,y),&x,y\in(0,1)^{2},t\in(0,t_{final}]\\ \nabla\cdot u(x,y,t)=0,&(x,y)\in(0,1)^{2},t\in(0,t_{final}]\\ \omega(x,y,0)=\omega_{0},&(x,y)\in(0,1)^{2}\end{cases} \tag{17}\] where \(\omega\) is the vorticity, \(u\) is the velocity field, \(\nu\) is the viscosity, and \(\Delta\) is the two-dimensional Laplacian operator. We consider periodic boundary conditions here as well. The source term \(f\) is chosen as \(f(x,y)=0.1(sin(2\pi(x+y))+cos(2\pi(x+y)))\), and the initial condition \(\omega_{0}(x)\) is sampled from a Gaussian random field according to the distribution \(\mathcal{N}(0,7^{3/2}(-\Delta+49I)^{-5/2})\). We use the publicly available Python solver given in [3] to generate two datasets with a spatial resolution of \(N_{x}=N_{\eta}=64\). The first dataset is created with \(\nu=10^{-3}\) and \(t_{final}=50\), resulting in a Reynolds number \(Re\approx 20\). For the second dataset we use \(\nu=10^{-5}\) and \(t_{final}=20\), resulting in a Reynolds number \(Re\approx 2,000\). The error comparison for the two datasets is shown in Table 4 and Table 5, respectively. 
In Table 4, we see that for \(Re\approx 20\), both DiTTO and DiTTO-s outperform the other models across all temporal discretizations while keeping similar error values. In Table 5, we see the results for \(Re\approx 2,000\). It is clear that increasing the Reynolds number also increases the difficulty of the problem, as evidenced by the noticeably higher errors for all models. We also see that the FNO errors are closer to DiTTO-s compared to other cases, even having a slight advantage when \(N_{t}^{test}\in\{50,100\}\). In Figures 3 and 4, we present a visual comparison of the various methods for both of the Navier-Stokes scenarios using one of the initial conditions from the test set. In both figures we see that the standard U-Net cannot interpolate in time, unlike the other methods. \begin{table} \begin{tabular}{c c c c c c} \hline \hline & \(N_{t}^{test}=10\) & \(N_{t}^{test}=20\) & \(N_{t}^{test}=50\) & \(N_{t}^{test}=100\) & \(N_{t}^{test}=200\) \\ \hline DiTTO & 0.0344 & 0.0334 & 0.0328 & 0.0327 & 0.0328 \\ DiTTO-s & **0.0334** & **0.0323** & **0.0318** & **0.0316** & **0.0316** \\ FNO & 0.3731 & 0.1430 & 0.0511 & 0.0646 & 0.0788 \\ U-Net & 0.6289 & 0.7379 & 0.0595 & 0.6416 & 0.5884 \\ \hline \hline \end{tabular} \end{table} Table 4: Relative \(L^{2}\) test set errors for the Navier-Stokes scenario in Section 4.2 with \(\nu=10^{-3},\ t_{final}=50,\ Re\approx 20,\ N_{x}=N_{y}=64\). Figure 3: Results for the Navier-Stokes scenario described in Section 4.2 with \(\nu=10^{-3},t_{final}=50\), and \(Re\approx 20\). The results are evaluated at different times for a random initial condition. In Figure 2(a), we see the reference solution obtained via a numerical solver. In Figures 2(b) to 2(e), we see the predictions of the models and their errors for the case \(N_{t}^{train}=50\) and \(N_{t}^{test}=200\). Figure 4: Results for the Navier-Stokes scenario described in Section 4.2 with \(\nu=10^{-5},t_{final}=20\), and \(Re\approx 2,000\). The results are evaluated at different times for a random initial condition. In Figure 3(a), we see the reference solution obtained via a numerical solver. In Figures 3(b) to 3(e), we see the predictions of the models and their errors for the case \(N_{t}^{train}=50\) and \(N_{t}^{test}=200\). ### 2D Wave equation We consider the following formulation of the acoustic wave equation in two dimensions [48; 49]: \[\begin{cases}u_{tt}(x,y,t)=c^{2}(x,y)(u_{xx}(x,y,t)+u_{yy}(x,y,t))&(x,y)\in(0,L)^{2};0\leq t\leq t_{final},\\ u(x,y,0)=u_{0}(x,y)&(x,y)\in(0,L)^{2},\\ u_{t}(x,y,0)=0&(x,y)\in(0,L)^{2},\\ u(0,y,t)=u(L,y,t)=0&y\in(0,L),\ 0\leq t\leq t_{final},\\ u(x,0,t)=u(x,L,t)=0&x\in(0,L),\ 0\leq t\leq t_{final}.\end{cases} \tag{18}\] where \(u(x,y,t)\) is the wave amplitude or acoustic pressure, \(c(x,y)=(1+\sin{(x)}\sin{(y)})\) is the wave propagation speed, \(t_{final}=2\) is the final propagation time, and \(L=\pi\) is the size of the physical domain. The initial condition \(u_{0}\) is chosen to be a Gaussian source of the form: \[u(x,y,0)=e^{-\left(\frac{(x-x_{c})^{2}+(y-y_{c})^{2}}{10}\right)}. \tag{19}\] To create the dataset, we generate several initial conditions of the same type, randomly varying the spatial location \((x_{c},y_{c})\) of the center of the source. The locations are sampled using a discrete random uniform distribution on the indices of the grid. We generate the numerical solutions using a finite-difference numerical scheme with a grid of \(N_{x}=N_{y}=64\) spatial nodes. The results are shown in Table 6.
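The data-generation step just described can be sketched with a standard leapfrog finite-difference scheme; the timestep choice and snapshot handling below are illustrative assumptions, since the exact solver settings are not specified here.

```python
import numpy as np

def solve_wave_2d(x_c, y_c, L=np.pi, N=64, t_final=2.0, n_snapshots=50):
    # Grid and spatially varying wave speed c(x, y) = 1 + sin(x) sin(y).
    x = np.linspace(0.0, L, N)
    X, Y = np.meshgrid(x, x, indexing="ij")
    c = 1.0 + np.sin(X) * np.sin(Y)
    dx = x[1] - x[0]

    # CFL-stable timestep for the 2D leapfrog scheme, refitted so n_steps * dt = t_final.
    dt = 0.5 * dx / (c.max() * np.sqrt(2.0))
    n_steps = int(np.ceil(t_final / dt))
    dt = t_final / n_steps

    # Gaussian source initial condition (Eq. (19)) with zero initial velocity.
    u_prev = np.exp(-((X - x_c) ** 2 + (Y - y_c) ** 2) / 10.0)

    def laplacian(u):
        lap = np.zeros_like(u)
        lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                           - 4.0 * u[1:-1, 1:-1]) / dx**2
        return lap

    # First step uses u_t(x, y, 0) = 0:  u^1 = u^0 + (dt^2 / 2) c^2 Lap(u^0).
    u = u_prev + 0.5 * (c * dt) ** 2 * laplacian(u_prev)
    u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0          # Dirichlet boundaries

    snapshots = [u_prev.copy()]                            # state at t = 0
    snap_every = max(n_steps // n_snapshots, 1)
    for step in range(2, n_steps + 1):                     # u already holds the state at t = dt
        u_next = 2.0 * u - u_prev + (c * dt) ** 2 * laplacian(u)
        u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0
        u_prev, u = u, u_next
        if step % snap_every == 0:                         # store states every snap_every steps
            snapshots.append(u.copy())
    return np.stack(snapshots)
```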
DiTTO and DiTTO-s both achieve the lowest errors. We note that the FNO and the U-Net degrade considerably more here than in the other PDE experiments when the temporal discretization changes. Specifically, the errors of DiTTO and DiTTO-s are an order of magnitude lower than those of the FNO. We hypothesize that this might be related to the use of Dirichlet boundary conditions instead of periodic ones. Another reason might be the sparsity of the data in this case, which causes the solution to change rapidly over time and have sharp features (see Figure 5). ### 3D Wave equation Similarly to (18), the formulation of the acoustic wave equation in three dimensions is given by: \[\begin{cases}u_{tt}(x,y,z,t)=c^{2}(x,y,z)(u_{xx}(x,y,z,t)+u_{yy}(x,y,z,t)+u_{zz}(x,y,z,t))&(x,y,z)\in(0,L)^{3};0\leq t\leq t_{final},\\ u(x,y,z,0)=u_{0}(x,y,z)&(x,y,z)\in(0,L)^{3},\\ u_{t}(x,y,z,0)=0&(x,y,z)\in(0,L)^{3},\\ u(0,y,z,t)=u(L,y,z,t)=0&(y,z)\in(0,L)^{2},\ 0\leq t\leq t_{final},\\ u(x,0,z,t)=u(x,L,z,t)=0&(x,z)\in(0,L)^{2},\ 0\leq t\leq t_{final},\\ u(x,y,0,t)=u(x,y,L,t)=0&(x,y)\in(0,L)^{2},\ 0\leq t\leq t_{final}.\end{cases} \tag{20}\] \begin{table} \begin{tabular}{c c c c c c} \hline \hline & \(N_{t}^{test}=10\) & \(N_{t}^{test}=20\) & \(N_{t}^{test}=50\) & \(N_{t}^{test}=100\) & \(N_{t}^{test}=200\) \\ \hline DiTTO & **0.0159** & **0.0150** & **0.0137** & **0.0392** & 0.0460 \\ DiTTO-s & 0.0239 & 0.0250 & 0.0204 & 0.0396 & **0.0407** \\ FNO & 1.2238 & 0.4186 & 0.0818 & 0.1728 & 0.2348 \\ U-Net & 1.6576 & 1.3890 & 0.0702 & 1.2300 & 1.2896 \\ \hline \hline \end{tabular} \end{table} Table 6: Relative \(L^{2}\) test set errors for the 2D wave equation described in Section 4.3. \begin{table} \begin{tabular}{c c c c c c} \hline \hline & \(N_{t}^{test}=10\) & \(N_{t}^{test}=20\) & \(N_{t}^{test}=50\) & \(N_{t}^{test}=100\) & \(N_{t}^{test}=200\) \\ \hline DiTTO & 0.2030 & 0.1887 & 0.1800 & 0.1770 & 0.1755 \\ DiTTO-s & **0.1874** & **0.1749** & 0.1668 & 0.1641 & **0.1628** \\ FNO & 0.3200 & 0.2034 & **0.1582** & **0.1635** & 0.1700 \\ U-Net & 0.7287 & 0.7184 & 0.2028 & 0.7046 & 0.6802 \\ \hline \hline \end{tabular} \end{table} Table 5: Relative \(L^{2}\) test set errors for the Navier-Stokes scenario in Section 4.2 with \(\nu=10^{-5},\ t_{final}=20,\ Re\approx 2,000,\ N_{x}=N_{y}=64\). Figure 5: Results for the 2D wave equation described in Section 4.3 at different times for a random initial condition from the test set. In Figure 4(a), we see the reference solution obtained via a numerical solver. In Figures 4(b) to 4(e), we see the predictions of the models and their errors for the case \(N_{t}^{train}=50\) and \(N_{t}^{test}=200\). We generate the initial conditions \(u_{0}\) in the same way as the two-dimensional case in Equation (19), except we now define them on a three-dimensional grid: \[u(x,y,z,0)=e^{-\left(\frac{(x-x_{c})^{2}+(y-y_{c})^{2}+(z-z_{c})^{2}}{10}\right)}. \tag{21}\] We keep \(t_{final}=2,L=\pi\), and choose \(c(x,y,z)=(1+\sin{(2x)}\sin{(y)}\sin{(z)})\) to be the velocity. We generate the corresponding numerical solutions using a finite-difference numerical scheme with a grid of \(N_{x}=N_{y}=N_{z}=32\) spatial nodes. Note that for the three-dimensional case, we only use DiTTO and DiTTO-s. Since this is a time-dependent problem, we would need a four-dimensional U-Net and a four-dimensional FNO, which are prohibitively expensive and not easily implemented. The results are shown in Table 7. Similarly to the other experiments, DiTTO and DiTTO-s maintain similar error values over the different temporal discretizations.
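A short sketch of the corresponding initial-condition sampling (Eq. (21)), with source centers drawn from a discrete uniform distribution over the grid indices; the random seed and the batched evaluation are illustrative choices.

```python
import numpy as np

def sample_gaussian_sources_3d(n_samples, N=32, L=np.pi, seed=0):
    # Centers (x_c, y_c, z_c) drawn uniformly from the grid indices, then the
    # Gaussian source of Eq. (21) is evaluated on the full 3D grid.
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, L, N)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    idx = rng.integers(0, N, size=(n_samples, 3))
    centers = x[idx]                                   # (n_samples, 3) physical coordinates
    u0 = np.exp(-((X[None] - centers[:, 0, None, None, None]) ** 2
                  + (Y[None] - centers[:, 1, None, None, None]) ** 2
                  + (Z[None] - centers[:, 2, None, None, None]) ** 2) / 10.0)
    return u0, centers                                 # u0 has shape (n_samples, N, N, N)
```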
A visual example using a random initial condition is shown in Figure 6. We note that the computational grid in this scenario is significantly larger. It has \(32,768\) nodes in space, with a maximum \(N_{t}^{test}=200\), effectively increasing the number of spatio-temporal nodes to approximately \(6.5\) million. We observe the low error values of the DiTTO models, showing excellent performance even for a large-scale three-dimensional problem, as seen in Figure 6. Figure 6: DiTTO results for the 3D wave equation described in Section 4.4 at different times for a random initial condition from the test set. In each figure, the predicted and reference solutions (ground-truth) are shown side-by-side, along with the relative error between them for the case \(N_{t}^{train}=50\) and \(N_{t}^{test}=200\). ## 5 Discussion and future work We have presented a novel approach to solving PDEs in a data-driven way. This method, named DiTTO, combines elements from diffusion models and the Transformer architecture. We have shown that DiTTO achieves state-of-the-art results for a wide variety of PDEs in multiple dimensions. Moreover, the proposed method was shown to have strong grid independence in the time domain. Consequently, DiTTO can make accurate predictions at arbitrary timesteps without retraining. We believe that the capabilities of DiTTO can be further expanded upon. Since this approach has shown excellent time interpolation capabilities, extending it to extrapolation problems is a natural course of action. Furthermore, adding physics-informed elements to the method could further enhance its potential. Finally, utilizing theoretical results from the fields of diffusion models and conditioning in deep learning could help us provide a theoretical background for DiTTO.
2306.06068
DeepStay: Stay Region Extraction from Location Trajectories using Weak Supervision
Nowadays, mobile devices enable constant tracking of the user's position and location trajectories can be used to infer personal points of interest (POIs) like homes, workplaces, or stores. A common way to extract POIs is to first identify spatio-temporal regions where a user spends a significant amount of time, known as stay regions (SRs). Common approaches to SR extraction are evaluated either solely unsupervised or on a small-scale private dataset, as popular public datasets are unlabeled. Most of these methods rely on hand-crafted features or thresholds and do not learn beyond hyperparameter optimization. Therefore, we propose a weakly and self-supervised transformer-based model called DeepStay, which is trained on location trajectories to predict stay regions. To the best of our knowledge, this is the first approach based on deep learning and the first approach that is evaluated on a public, labeled dataset. Our SR extraction method outperforms state-of-the-art methods. In addition, we conducted a limited experiment on the task of transportation mode detection from GPS trajectories using the same architecture and achieved significantly higher scores than the state-of-the-art. Our code is available at https://github.com/christianll9/deepstay.
Christian LΓΆwens, Daniela Thyssens, Emma Andersson, Christina Jenkins, Lars Schmidt-Thieme
2023-06-05T11:16:47Z
http://arxiv.org/abs/2306.06068v1
# DeepStay: Stay Region Extraction from Location Trajectories using Weak Supervision ###### Abstract Nowadays, mobile devices enable constant tracking of the user's position and location trajectories can be used to infer personal points of interest (POIs) like homes, workplaces, or stores. A common way to extract POIs is to first identify spatio-temporal regions where a user spends a significant amount of time, known as stay regions (SRs). Common approaches to SR extraction are evaluated either solely unsupervised or on a small-scale private dataset, as popular public datasets are unlabeled. Most of these methods rely on hand-crafted features or thresholds and do not learn beyond hyperparameter optimization. Therefore, we propose a weakly and self-supervised transformer-based model called DeepStay, which is trained on location trajectories to predict stay regions. To the best of our knowledge, this is the first approach based on deep learning and the first approach that is evaluated on a public, labeled dataset. Our SR extraction method outperforms state-of-the-art methods. In addition, we conducted a limited experiment on the task of transportation mode detection from GPS trajectories using the same architecture and achieved significantly higher scores than the state-of-the-art. Our code is available at [https://github.com/christianll9/deepstay](https://github.com/christianll9/deepstay). ## I Introduction Extracting stay regions (SR) from location trajectories identifies segments where a subject stays in the same place. It supports fine-grained spatio-temporal analysis of human and animal behavior and is often an intermediate step in point of interest (POI) mapping or POI extraction. Common SR extraction approaches apply unsupervised clustering algorithms and use thresholds for time, distance, and velocity, among others. These thresholds are determined by a qualitative analysis or a quantitative hyperparameter optimization. Typically, all experiments are performed either on small private datasets, in some cases with manually annotated labels, or on large datasets without any labels. This makes it difficult to compare different approaches and makes the problem less suited for supervised learning that requires a large amount of labeled data. Even though most trajectories do not contain ground truth SR labels, it is still possible to derive so-called "weak labels" from OpenStreetMap (OSM). For example, we can classify any location point lying within a building as part of a stay and a point near a road as part of a "non-stay" (see Figure 1). Given the large number of weak labels available, we assume that this data contains enough signal to learn useful latent representations. To this end, we apply a transformer model [1] that takes a trajectory as a time series of location points and classifies each point as either part of a stay or a non-stay. To our knowledge, this is the first approach to extract SRs from trajectory data using deep learning. Furthermore, we use publicly available data for training and evaluation to ensure reproducibility. We derived a ground truth dataset from the field of activity recognition and use it to compare our model with baselines from related work. ## II Problem Statement We define a location trajectory \(\mathcal{X}=\{g_{1},g_{2},\ldots,g_{|\mathcal{X}|}\}\) as a time series of consecutive location points \(g_{i}=(t_{i},x_{i},y_{i})\), where \(x,y\in\mathbb{R}\) denote the 2D coordinates and \(t\in\mathbb{R}^{\geq 0}\) the ascending timestamp. 
The sample rate \(\Delta t_{i}=t_{i}-t_{i-1}\) is defined as the time difference between two consecutive points and is either constant or fluctuating, depending on the dataset. SR extraction can be viewed as a time series segmentation task, where the trajectory \(\mathcal{X}\) is split in a set of segments \(\mathcal{TS}=\{ts_{1},\ldots,ts_{q}\}\). Each segment \(ts_{j}=(t_{\mathrm{start}_{j}},t_{\mathrm{end}_{j}},c_{j})\) is defined by its start \(t_{\mathrm{start}}\) and end time \(t_{\mathrm{end}}\) and the binary class \(c\in\{0,1\}\) indicating whether the user is staying at one place (\(c=1\)) or is moving around (\(c=0\)) within the time window \(t_{\mathrm{start}}\leq t<t_{\mathrm{end}}\). Moreover, we define \[t_{\mathrm{start}_{1}} =t_{1},\] \[t_{\mathrm{end}_{q}} =\infty,\] \[t_{\mathrm{start}_{j}} <t_{\mathrm{end}_{j}} \forall j\in\{1,\ldots,q\},\] \[t_{\mathrm{end}_{j}} =t_{\mathrm{start}_{j+1}} \forall j\in\{1,\ldots,q-1\},\] \[c_{j} \neq c_{j+1} \forall j\in\{1,\ldots,q-1\}.\] Fig. 1: Weak supervision of trajectories using OSM data and additional heuristics. Blue points indicate weakly labeled non-stays, yellow indicate stays. The set of stay regions \(\mathcal{SR}\) is a subset of all segments, where \[\mathcal{SR}=\{ts_{j}|ts_{j}\in\mathcal{TS}\wedge c_{j}=1\}. \tag{1}\] The task of SR extraction is now to predict \(\mathcal{SR}\) (and therefore \(\mathcal{TS}\)) solely from the trajectory data \(\mathcal{X}\). ## III Related Work Trajectory segmentation is an important research topic with many examples such as activity recognition, transportation mode detection (TMD) and SR extraction. In TMD, each segment is assigned to a mode, e.g. walking, car, bus, etc. [2]. A special binary case of this task is SR extraction with only two possible modes: stay and non-stay. In most cases, it functions as a preprocessing step for tasks such as POI mapping/extraction/prediction. In POI mapping, each SR is assigned to a visit to one of several POIs [3]. SR extraction identifies segments of a user's trajectory where the subject remains at the same place. A virtual location, usually the centroid of an SR, is called a stay point. So this task is also called stay point extraction/recognition/identification/detection. ### _Threshold-based Clustering_ The vast majority of published work uses threshold-based spatio-temporal clustering methods, where the clusters represent stay segments of the trajectory. Commonly used thresholds are a minimum duration \(T_{\min}\) and a maximum distance \(D_{\max}\)[4, 5, 6, 7, 8]. Here, the task is to find the maximum sets of consecutive location points \(\mathcal{P}=\{g_{m},g_{m+1},\ldots,g_{n}\}\) in the trajectory \(\mathcal{X}\), such that: \[t_{n}-t_{m} \geq T_{\min} \tag{2}\] \[\mathrm{dist}(g_{i},g_{j}) \leq D_{\max}\qquad\forall\quad g_{i},g_{j}\in\mathcal{P} \tag{3}\] Others apply additional thresholds for velocity, acceleration, and heading change [9, 10, 11, 12, 13]. ### _Adapted Density-based Clustering_ Other approaches adapt density-based clustering methods such as DBSCAN [14] and OPTICS [15]. If the trajectory is sampled at a constant rate, prolonged stays will result in dense spatial data and thus can be detected. Unlike k-means, they do not require a predetermined number of clusters, which is crucial for SR extraction. These approaches define SR extraction more as spatial clustering rather than time series segmentation. Therefore, the constraint that the clustered points must be consecutive is not always enforced. 
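As a concrete reference point, the sketch below implements a minimal threshold-based stay-region extractor in the spirit of the criteria in (2)-(3): consecutive points are accumulated while they all remain within \(D_{\max}\) of their running centroid (a common proxy for the pairwise distance criterion), and a group is emitted as a stay once it spans at least \(T_{\min}\). The threshold values are placeholders, not values from any particular publication.

```python
import numpy as np

def threshold_stay_regions(t, x, y, d_max=50.0, t_min=300.0):
    """Minimal threshold-based SR extraction in the spirit of criteria (2)-(3).

    t, x, y : 1D arrays of timestamps [s] and projected coordinates [m].
    Returns a list of (t_start, t_end) stay segments.
    """
    stays, start = [], 0
    n = len(t)
    while start < n:
        end = start + 1
        while end < n:
            # Centroid of the candidate group including the next point.
            cx, cy = x[start:end + 1].mean(), y[start:end + 1].mean()
            dists = np.hypot(x[start:end + 1] - cx, y[start:end + 1] - cy)
            if dists.max() > d_max:          # spatial threshold violated
                break
            end += 1
        if t[end - 1] - t[start] >= t_min:   # temporal threshold satisfied
            stays.append((t[start], t[end - 1]))
            start = end                      # continue after the stay
        else:
            start += 1                       # slide the window forward
    return stays
```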
Many extensions have been proposed to utilize the temporal information as well [16, 17]. ### _Others_ The authors in [18] and [19] classify single location points as stay points when a GPS connection loss is detected. The algorithm proposed by [20] extracts SRs by searching for local minima of speed and zero crossings in acceleration within the trajectory. ## IV Methodology ### _Architectural Overview_ Figure 2 shows the overall architecture of our model DeepStay and the intermediate results of the processing pipelines. First, the raw trajectories are standardized and split into sequences of equal size. Furthermore, additional features are extracted to improve the performance of the subsequent transformer encoder. This encoder receives a sequence of constant length and outputs an embedding vector for each point comprising latent features about the point within its sequence. The following feedforward layer acts as a decoder and predicts a probability for each vector to be part of a stay. In the next step, all consecutive points with a predicted probability above a certain threshold are grouped as SRs. #### Iv-A1 Preprocessing of raw GNSS trajectories All datasets in this work contain GNSS coordinates, such as GPS. In the first step, we project all coordinates into a 2D Cartesian system \((x,y)\) using an appropriate UTM zone [21]. Since trajectories may have varying sample rates, we use the time difference \(\Delta t\) between each point and its predecessor as an additional feature. Our preliminary experiments indicate that this approach leads to better results than using linear interpolation as proposed by [22]. We also add the current velocity \(v\) as the ratio of the Euclidean distance and \(\Delta t\) between two consecutive points as another feature. All trajectories are chunked into sequences of equal length \(n=256\). This allows the transformer encoder to be trained with multiple sequences in a single batch of size \(B=64\). Furthermore, we standardize the features \(\Delta t\) and \(v\) separately based on their distribution in the training set to obtain a mean of 0 and a standard deviation of 1. The standardization of the location features \(x\) and \(y\) is done jointly. Each sequence is subtracted from its mean \((\overline{x}_{\mathrm{seq}},\overline{y}_{\mathrm{seq}})\) and divided by the common standard deviation of the entire training set \(\sigma_{x,y_{\mathrm{train}}}\) to prevent the model from memorizing specific regions. To further reduce overfitting, we rotate every sequence uniformly at random with respect to its origin \((0,0)\) before feeding it to the model. The final features of the \(i\)-th data point in sequence \(seq\) are shown in 4. \[seq_{i}=\left[\begin{array}{cccc}x_{i}&y_{i}&\Delta t_{i}&v_{i}\end{array} \right],\quad\;seq\in\mathbb{R}^{n\times 4} \tag{4}\] #### Iv-A2 Transformer Encoder We choose the encoder of the transformer model [1] to learn latent embeddings \(emb_{i}\) for each sequence point \(seq_{i}\). This allows us to predict the class probabilities pointwise instead of segmentwise. Thus, by design, segmentation and classification are performed jointly. We stick with the original setting of the base encoder Fig. 2: Overall architecture of DeepStay. Light brown colored boxes indicate trainable models. including the projection and positional encoding as described in [1] to get the final embeddings \(\mathit{emb}\in\mathbb{R}^{n\times d_{\mathrm{model}}}\). 
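A sketch of the preprocessing just described (time-gap and velocity features, chunking into sequences of length \(n=256\), standardization, and random rotation); the UTM projection is assumed to have been applied beforehand, and the exact ordering of steps in the released code may differ.

```python
import numpy as np

def make_sequences(t, x, y, n=256):
    """Build (n, 4) feature sequences [x, y, dt, v] from one projected trajectory."""
    dt = np.diff(t, prepend=t[0])                        # time gap to the previous point
    dist = np.hypot(np.diff(x, prepend=x[0]), np.diff(y, prepend=y[0]))
    v = np.divide(dist, dt, out=np.zeros_like(dist), where=dt > 0)
    feats = np.stack([x, y, dt, v], axis=1)
    n_seq = len(t) // n
    return feats[: n_seq * n].reshape(n_seq, n, 4)

def standardize_and_rotate(seqs, dt_stats, v_stats, sigma_xy, rng):
    """Per-sequence centering of (x, y), global scaling, and random rotation augmentation.

    dt_stats, v_stats : (mean, std) computed on the training split
    sigma_xy          : common standard deviation of x and y over the training split
    """
    out = seqs.copy()
    out[..., 2] = (out[..., 2] - dt_stats[0]) / dt_stats[1]       # standardize dt
    out[..., 3] = (out[..., 3] - v_stats[0]) / v_stats[1]         # standardize v
    xy = out[..., :2] - out[..., :2].mean(axis=1, keepdims=True)  # subtract sequence mean
    xy = xy / sigma_xy
    theta = rng.uniform(0.0, 2.0 * np.pi, size=len(out))          # one random angle per sequence
    c, s = np.cos(theta)[:, None], np.sin(theta)[:, None]
    out[..., 0] = c * xy[..., 0] - s * xy[..., 1]
    out[..., 1] = s * xy[..., 0] + c * xy[..., 1]
    return out
```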
#### Iii-A3 Decoder A feedforward layer with sigmoid activation decodes the embeddings and predicts the probability for each point \(\mathit{emb}_{i}\) to be part of a stay: \[\hat{c}_{i}=\sigma(\mathit{emb}_{i}\ {W_{\mathrm{d}}}^{T}+b_{\mathrm{d}}), \hat{c}\in[0,1]^{n} \tag{5}\] Now the segmentation can be done by simply grouping consecutive points where \(\hat{c}_{i}<0.5\) for non-SRs and \(\hat{c}_{i}>0.5\) for SRs, respectively. #### Iii-A4 Supervision In the case of available SR labels, we can compute the pointwise binary cross entropy (BCE) between the prediction \(\hat{c}\) and the ground truth \(c\): \[\mathrm{BCE}(\hat{c}_{i},c_{i})=-c_{i}\log\hat{c}_{i}-(1-c_{i})\log(1-\hat{c}_{ i}) \tag{6}\] The distribution of the binary labels can be highly imbalanced. To prevent the model leaning towards one of the classes, we apply class weighting based on the mean \(\overline{c}_{\mathrm{train}}\) within the training set: \[\mathrm{BCE}_{\mathrm{w}}(\hat{c}_{i},c_{i},\overline{c}_{\mathrm{train}})= \left(\frac{c_{i}}{\overline{c}_{\mathrm{train}}}+\frac{1-c_{i}}{1-\overline{ c}_{\mathrm{train}}}\right)\mathrm{BCE}(\hat{c}_{i},c_{i}) \tag{7}\] Now the total loss \(\mathcal{L}_{\mathrm{super}}\) is the average weighted BCE over all points in all \(N_{\mathrm{train}}\) training sequences: \[\mathcal{L}_{\mathrm{super}}=\frac{1}{N_{\mathrm{train}}\cdot n}\sum_{j=1}^{N_{ \mathrm{train}}}\sum_{i=1}^{n}\mathrm{BCE}_{\mathrm{w}}(\hat{c}_{i}^{(j)},c_{ i}^{(j)},\overline{c}_{\mathrm{train}}) \tag{8}\] ### _Weakly Supervised SR Extraction_ Since the vast amount of publicly available location trajectories does not contain SR labels, we apply _programmatic_ weak supervision [23] by generating weak labels based on other data sources. These labels are often inaccurate. However, since we can generate them on a large scale, and since the error generally does not correlate with the input, we expect our model to still learn useful latent representations. For that, we define a function \(f_{\mathrm{weak}}\) that returns the estimated probability \(c_{i_{\mathrm{weak}}}\) that the location point \(g_{i}\) is part of a stay, and a confidence score \(w_{i_{\mathrm{weak}}}\) for that prediction: \[f_{\mathrm{weak}}(g_{i})=(c_{i_{\mathrm{weak}}},w_{i_{\mathrm{weak}}}) \tag{9}\] Here, \(c_{i_{\mathrm{weak}}}\) replaces the ground truth value \(c_{i}\), while \(w_{i_{\mathrm{weak}}}\) is used to weight the influence of the weak label on the total loss. Thus, the model learns more from weak labels, where the labeling function is more certain. The total loss is then: \[\mathcal{L}_{\mathrm{weak}}=\sum_{j=1}^{N_{\mathrm{train}}}\sum_{i=1}^{n} \frac{w_{i_{\mathrm{weak}}}^{(j)}}{N_{\mathrm{train}}\cdot n}\mathrm{BCE}_{ \mathrm{w}}(\hat{c}_{i}^{(j)},c_{i_{\mathrm{weak}}}^{(j)},\overline{c}_{ \mathrm{train}_{\mathrm{weak}}}) \tag{10}\] Furthermore, the mean label \(\overline{c}_{\mathrm{train}_{\mathrm{weak}}}\) is also weighted by the confidence score: \[\overline{c}_{\mathrm{train}_{\mathrm{weak}}}=\frac{\sum_{j=1}^{N_{\mathrm{ train}}}\sum_{i=1}^{n}c_{i_{\mathrm{weak}}}^{(j)}\cdot w_{i_{\mathrm{weak}}}^{(j)}}{ \sum_{j=1}^{N_{\mathrm{train}}}\sum_{i=1}^{n}w_{i_{\mathrm{weak}}}^{(j)}}\] \(f_{\mathrm{weak}}\) works as an ensemble of separate labeling functions that predict whether a location point is part of a stay or not. All labeling functions implement simple heuristics and may conflict with each other. 
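A compact PyTorch sketch of the class-weighted BCE in (7) and its confidence-weighted form in (10); tensor names are illustrative.

```python
import torch

def weighted_bce(c_hat, c_weak, w_weak, c_bar, eps=1e-7):
    """Pointwise loss of Eq. (10): class-weighted BCE (Eq. (7)) scaled by confidence weights.

    c_hat  : predicted stay probabilities in (0, 1)
    c_weak : weak labels in [0, 1] (ground-truth labels in the supervised case)
    w_weak : per-point confidence weights (0 for points covered by no heuristic, 1 if supervised)
    c_bar  : confidence-weighted mean of the labels over the training set
    """
    c_hat = c_hat.clamp(eps, 1.0 - eps)
    bce = -(c_weak * torch.log(c_hat) + (1.0 - c_weak) * torch.log(1.0 - c_hat))
    class_w = c_weak / c_bar + (1.0 - c_weak) / (1.0 - c_bar)     # class weighting of Eq. (7)
    return (w_weak * class_w * bce).sum() / w_weak.numel()        # average over all points
```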
Here, they predict a pair \((c_{\mathrm{weak}},w_{\mathrm{weak}}(g))\) with a constant value for \(c_{\mathrm{weak}}\) and a confidence weight \(w_{\mathrm{weak}}\) depending on the input data \(g\). In total, four different functions are defined: * \(f_{\mathrm{build}}\) predicts a stay with high confidence if a location lies within a building. * \(f_{\mathrm{am}}\) predicts a stay with high confidence if a location lies within small amenities. * \(f_{\mathrm{street}}\) predicts a non-stay with high confidence if a location is close to a street. * \(f_{\mathrm{transport}}\) predicts a non-stay based on available transportation mode labels. The data source for the first three functions is OSM. Similar to [19], the coordinates of the location \(g_{i}\) are used to query additional information from the map service. #### Iii-B1 Stay Labeling Functions \(f_{\mathrm{build}}\) checks, if \(g_{i}\) lies within a building \(b\in\mathcal{B}_{\mathrm{OSM}}\). If so, it returns a confidence weight of 1, since points that fall inside a building have a high chance of being part of a stay: \[w_{\mathrm{build}}(g_{i})=\begin{cases}1,&\text{if}\ \ \exists\ b\in\mathcal{B}_{\mathrm{OSM}}\ \ |\ \ \ b\cap g_{i}\neq\{\}\\ 0,&\text{otherwise}\end{cases} \tag{11}\] Similarly, \(f_{\mathrm{am}}\) returns \(w_{\mathrm{am}}>0\), if \(g_{i}\) lies within an amenity \(a\in\mathcal{A}_{\mathrm{OSM}}\). This is an OSM category for facilities like hospitals or airports that can encapsulate multiple buildings. For larger amenities, since it is less certain that people will stay in a single location, we model the confidence weight as a function of their geographic area: \[w_{\mathrm{am}}(g_{i})=\begin{cases}\max\limits_{a\in\mathcal{A}_{\mathrm{ combined}}\cap\exists\alpha\neq\{\}}\exp\left(-\frac{\alpha\alpha\alpha}{\mathrm{ channel}\ \sum_{j}\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha} \right)&\text{if}\ \exists\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha \alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha\alpha \alpha except walking, running, and biking) as a heuristic for non-stays. The confidence weight is formalized as: \[w_{\mathrm{transport}}(g_{i})=\begin{cases}1,&\text{if }label(g_{i})\in modes _{\mathrm{motorized}}\\ 0,&\text{otherwise}\end{cases} \tag{14}\] #### Iv-B3 Combining the Labeling Functions We combine the results of all heuristics \(\mathcal{H}\) by averaging the predicted probabilities and adding up all confidence weights as follows: \[\begin{split} f_{\mathrm{weak}}(g_{i})&=(c_{i_{\text{weak }}},w_{i_{\text{weak}}})\\ &=\left(\frac{\sum_{j\in\mathcal{H}}c_{j}\cdot w_{j}(g_{i})}{\sum_ {j\in\mathcal{H}}w_{j}(g_{i})},\quad\sum_{j\in\mathcal{H}}w_{j}(g_{i})\right), \\ \text{where }\mathcal{H}=\{build,am,street,transport\}.\end{split} \tag{15}\] Thus, each labeling function \(f_{j}\) has a linear influence on the total confidence weight \(w_{i_{\text{weak}}}\), independent of the output of other labeling functions. On the one hand, this combination is similar to an ensemble with model averaging, whereas, on the other hand, this resembles also a Mixture of Experts, where the weights depend on the input \(g_{i}\). ### _Self-Supervised Encoder_ Many points are not captured by any heuristic and receive a total confidence weight of 0. 
Self-supervised learning (SSL) could still leverage those data in a (weakly) semi-supervised manner and further strengthen the model's robustness to inaccurate training data [24]. Since [25, 26] show good results by using forecasting as a pretext task for time series data, we adopted their approach. #### Iv-C1 Forecasting Task We choose the velocity as one forecast target, which is less dependent on the sample rate compared to the location. Additionally, the bearing angle is forecasted as a second target because it is not directly included in the input and requires the model to encode more informative embeddings. More specifically, we predict the sine and cosine values of the angle to capture the periodicity. Given the encoder output \(emb\), we concatenate its sequence mean \(\overline{emb}\) and last embedding vector \(emb_{n}\) as an aggregated embedding vector \(emb_{\mathrm{agg}}\in\mathbb{R}^{2d_{\mathrm{model}}}\) for the whole sequence. This vector is then passed to two separate feedforward layers. No activation function is used for the velocity, while for the sine and cosine prediction we apply tanh to bind the output between -1 and 1: \[\hat{v}_{n+1}=emb_{\mathrm{agg}}\ W_{\mathrm{vel}}^{T}+b_{\mathrm{vel}} \tag{16}\] \[\begin{bmatrix}\hat{s}\hat{n}_{\alpha_{n+1}}\\ c\hat{\alpha}s_{\alpha_{n+1}}\end{bmatrix}=\tanh\left(emb_{\mathrm{agg}}\ W_{\mathrm{ang}}^{T}+b_{\mathrm{ang}}\right) \tag{17}\] #### Iv-C2 Multitask Loss The loss for each pretext task is defined by the Mean Squared Error (MSE) between the prediction and the ground truth: \[\mathrm{MSE}(\hat{y},y)=\frac{1}{N_{\mathrm{train}}}\sum_{j=1}^{N_{\mathrm{ train}}}\|\hat{y}_{n+1}^{(j)}-y_{n+1}^{(j)}\| \tag{18}\] \[\mathcal{L}_{\mathrm{vel}}=\mathrm{MSE}(\hat{v},v) \tag{19}\] \[\mathcal{L}_{\mathrm{ang}}=\mathrm{MSE}(\hat{s}\hat{n}_{\alpha},\sin\alpha)+ \mathrm{MSE}(c\hat{\alpha}s_{\alpha},\cos\alpha) \tag{20}\] We follow [25] and approach SSL as multitask learning with the sum of the downstream loss \(\mathcal{L}_{\mathrm{weak}}\) and the pretext losses: \[\mathcal{L}_{\mathrm{final}}=\mathcal{L}_{\mathrm{weak}}+\lambda_{\mathrm{vel} }\mathcal{L}_{\mathrm{vel}}+\lambda_{\mathrm{ang}}\mathcal{L}_{\mathrm{ang}}, \tag{21}\] where \(\lambda\) denotes tunable hyperparameters. If ground truth labels are available, \(\mathcal{L}_{\mathrm{weak}}\) is replaced with \(\mathcal{L}_{\mathrm{super}}\). ## V Data For this study, we select two datasets: GeoLife (GL) by [6, 12, 27] and ExtraSensory (ES) by [28]. GL contains two orders of magnitude more location points than ES but lacks proper SR labels. ES is chosen because of its activity labels, from which we can infer ground truth SR labels. Similar to [29] and [30], we remove outliers based on unrealistic velocity values and split a user's trajectory if the time difference between two consecutive points exceeds 20 minutes or if an unrealistic location jump is detected. ### _GeoLife_ Instead of ground truth SR labels, the GL dataset contain time-segmented transportation mode labels from 69 of all 182 participants. These labels are used to derive weak labels and for our experiment on TMD. To reduce network traffic and memory, we gather OSM data for points that fall within the \(15\%\)/\(85\%\) percentile of longitude and latitude, which covers about \(62\%\) of the total dataset. An overview of the total sum of confidence weights used for weak supervision can be found in Table I. The remaining unlabeled data is still used for SSL instead. 
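To make the weak-labeling pipeline concrete, the sketch below implements the building heuristic (11) as a point-in-polygon test and the ensemble combination (15). It assumes the OSM building footprints have already been downloaded and projected into the trajectory's coordinate system; the amenity, street, and transport heuristics would plug into the same `(c_weak, w_weak)` interface.

```python
from shapely.geometry import Point

def make_building_heuristic(building_polygons):
    """Heuristic f_build of Eq. (11): stay (c_weak = 1) with confidence 1 inside a building.

    building_polygons : list of shapely Polygons (projected OSM footprints); a spatial
    index would be used in practice instead of this linear scan.
    """
    def f_build(x, y):
        p = Point(x, y)
        w = 1.0 if any(b.contains(p) for b in building_polygons) else 0.0
        return 1.0, w                          # (c_weak, w_weak)
    return f_build

def combine_heuristics(heuristics, x, y):
    """Ensemble combination of Eq. (15): confidence-weighted average of the heuristic outputs."""
    pairs = [h(x, y) for h in heuristics]      # each heuristic returns (c_weak, w_weak)
    total_w = sum(w for _, w in pairs)
    if total_w == 0.0:
        return 0.0, 0.0                        # uncovered point: zero confidence, label unused
    c = sum(c * w for c, w in pairs) / total_w
    return c, total_w
```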
The sample rate of GL is non-constant and varies between 1 and 6 seconds. For the UTM projection, we choose zone 50N. ### _ExtraSensory_ We use the ES dataset to fine-tune and evaluate DeepStay. Besides GNSS points, this dataset contains other sensor data, which we ignore. It was collected for the task of activity recognition. Participants should self-report their current activities such as "biking" or "watching TV". Some activity modes clearly indicate stays and non-stays. Thus, we define a function that maps these modes to SR labels. In the second step, we remove suspicious stays, where the velocity is higher than the average velocity of non-stays. The final number of points and derived labels are listed in Table II. The sample rate of ES is nearly constant at \(\frac{1}{\mathrm{min}}\). To achieve reasonable results with an encoder pre-trained \begin{table} \begin{tabular}{c c c} \hline \hline **weak label** & **heuristic** & \begin{tabular}{c} **total sum of** \\ **confidence weights** \\ \end{tabular} \\ \hline stays (\(c_{\mathrm{weak}}=1\)) & \begin{tabular}{c} building \\ amenity \\ \end{tabular} & \begin{tabular}{c} 1.0 M \\ 0.2 M \\ \end{tabular} \\ \hline non-stays (\(c_{\mathrm{weak}}=0\)) & \begin{tabular}{c} street \\ transport \\ \end{tabular} & \begin{tabular}{c} 5.0 M \\ 2.7 M \\ \end{tabular} \\ \hline \hline \end{tabular} \end{table} TABLE I: Summary of weak labels derived from GL dataset. on GL, we linearly interpolate the location trajectory at a rate of \(0.5\,\mathrm{Hz}\). However, for the final test results, only the predictions for the real, non-interpolated labels are evaluated. The prediction value is taken from the nearest interpolation point. For the map projection, we use the UTM zone 11N. ## VI Experiments In the first experiment, we train and test DeepStay on SR labels. The second experiment shows the ability of our architecture to be used for the more general task of TMD. ### _Experiment 1: Stay Region Extraction_ For this experiment, DeepStay is pre-trained on weak labels from the GL dataset and then fine-tuned and tested together with traditional baselines on the ES dataset, where it achieves the best overall results among all methods. #### Vi-A1 Baselines We implement the following algorithms as baselines and test them on the ES dataset: * **Kang et al. [4]**: Threshold-based clustering. It collects consecutive points until a distance threshold to the points' centroid is exceeded. Then the time criterion 2 is checked and if the minimum duration is reached, the collected points form a SR. Although the authors only proposed a POI extraction algorithm, it also implicitly incorporates SR extraction, which can be outsourced. * **D-Star [17]**: Density-based clustering. It is based on DBSCAN, but instead of solely clustering the location points spatially, it considers only neighboring points along the trajectory and tries to exclude outliers. D-Star seems to be state-of-the-art. * **CB-SMoT [16]**: Density-based clustering. While the algorithm is similar to D-Star, the resulting SRs contain only consecutive points, which is more in line with our definition. It can incorporate prior known POIs. However, for a fair comparison, we exclude this data. We optimize the hyperparameters of Kang et al. and CB-SMoT using a \(3\times 3\) grid search based on the values reported in the original publications. D-Star has 4 parameters to adjust, hence we perform a random search with 10 different constellations. 
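A minimal sketch of such a threshold-tuning loop: a small grid of \((T_{\min},D_{\max})\) candidates scored by cross-validation on the pointwise \(F_{1}\) score, with non-stays treated as the positive class (see the metric discussion below). The candidate values and the extractor interface are placeholders, not the values used for the actual baselines.

```python
import itertools
import numpy as np
from sklearn.metrics import f1_score

def tune_thresholds(folds, extract_fn, t_min_grid=(120, 300, 600), d_max_grid=(20, 50, 100)):
    """Grid search over (T_min, D_max), scored by cross-validated pointwise F1.

    folds      : list of (trajectories, label_arrays) validation folds, split by participant
    extract_fn : callable(traj, t_min, d_max) -> pointwise 0/1 stay predictions for one trajectory
    """
    best_params, best_f1 = None, -1.0
    for t_min, d_max in itertools.product(t_min_grid, d_max_grid):
        fold_scores = []
        for trajs, labels in folds:
            y_pred = np.concatenate([extract_fn(tr, t_min, d_max) for tr in trajs])
            y_true = np.concatenate(labels)
            # Non-stays are the positive class, hence pos_label=0.
            fold_scores.append(f1_score(y_true, y_pred, pos_label=0))
        if np.mean(fold_scores) > best_f1:
            best_params, best_f1 = (t_min, d_max), float(np.mean(fold_scores))
    return best_params, best_f1
```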
Each parameter search is incorporated in a 5-fold cross-validation based on the \(F_{1}\) score. We split the ES data in the same way as for DeepStay. #### Vi-A2 Training, Validation, and Test The training and testing pipeline for DeepStay can be summarized in three steps: 1. **Hyperparameter optimization**: Training on about 80 % of the GL dataset with weak labels and optimization of hyperparameters on the remaining \(20\,\%\) in respect to the loss \(\mathcal{L}_{\mathrm{weak}}\). These hyperparameters are the number of training epochs, the weight decay, the learning rate, and the SSL weights \(\lambda_{\mathrm{vel}}\) and \(\lambda_{\mathrm{ang}}\). 2. **Pre-training**: Creating a pre-trained DeepStay model by reinitiating the training on the full GL dataset and using the best-known hyperparameters. 3. **Fine-tuning and test**: Fine-tuning the decoder of the pre-trained model on the ES dataset and freezing all other model weights including the encoder layers. We apply 5-fold cross-validation, i.e. each iteration about 80 % of the data is used for training, and validation and 20 % for testing. Of this 80 %, 10 % is used for a second hyperparameter optimization. We follow [31] and split the data by the participants of the respective study, to avoid leakage between training, validation and test set. During both the pre-training and the fine-tuning, we apply an Adam optimizer [32] and SSL. #### Vi-A3 Metrics A common metric in time series segmentation is the pointwise accuracy, i.e. the ratio of correctly classified points to the total number of labels. In addition, we measure the pointwise calculated recall and precision. The definition of the positive class is crucial for both metrics. Since the final test dataset, i.e. ES, is highly imbalanced and contains many more stays than non-stays (see Table II), it is more important to detect a non-stay than a stay. This also resembles everyday life, where people mostly stay in one place and only move from time to time. Therefore, we choose non-stays as the positive class. The derived \(F_{1}\) score is used as the main metric to evaluate all SR extraction algorithms. #### Vi-A4 Results The final results are shown in Table III. All reported values are calculated over all 5 ES test data splits. In addition to the three baselines, two simplistic baselines predict a constant value (either always non-stay \(\hat{c}_{i}=0\) or always stay \(\hat{c}_{i}=1\)). DeepStay achieves higher overall scores than all implemented baselines, while the results for D-Star are comparable in terms of accuracy. Kang et al. use an approach with hard thresholds, which seems to be disadvantageous compared to a density-based approach. Even though CB-SMoT achieves relatively high accuracy, its \(F_{1}\) score is significantly worse than the similar D-Star algorithm. This may be due to the missing outlier detection in CB-SMoT. #### Vi-A5 Ablation Study We compare the contribution of different training components in Table IV, where we analyze the effect of training DeepStay first without any SSL and second without any pre-training, i.e. solely trained on the ES dataset. For the latter, the original sample rate of \(\frac{1}{\min}\) was used instead of interpolation. In addition to the previous metrics, we also measure the area under the PR curve (PR-AUC). 
It can be seen that the effect of SSL is relatively \begin{table} \begin{tabular}{l c} \hline \multicolumn{3}{c}{**total number**} \\ \hline GNSS points & 306 k \\ stays (\(c_{i}=1\)) & 223 k \\ non-stays (\(c_{i}=0\)) & 28 k \\ \hline \end{tabular} \end{table} TABLE II: Summary of the cleaned ES dataset. \begin{table} \begin{tabular}{l c c c c} \hline **Method** & \(F_{1}\) & **Acc.** & **Precision** & **Recall** \\ \hline DeepStay (ours) & **0.788** & **0.954** & 0.822 & 0.757 \\ \hline D-Star [17] & 0.753 & 0.951 & **0.877** & 0.660 \\ CB-SMoT [16] & 0.548 & 0.909 & 0.619 & 0.491 \\ Kang et al. [4] & 0.453 & 0.796 & 0.325 & 0.748 \\ \hline constant \(\hat{c}_{i}=0\) & 0.203 & 0.113 & 0.113 & **1.000** \\ constant \(\hat{c}_{i}=1\) & 0.000 & 0.887 & - & 0.000 \\ \hline \end{tabular} \end{table} TABLE III: Final results on the ES dataset. small. However, the pre-training has a significant impact on the performance, showing that the model correctly handles the noise coming from the weak labels and learns reasonable latent representations of the SRs. ### _Experiment 2: Transportation Mode Detection_ To further demonstrate the broader applicability of DeepStay and to contribute our findings to a broader field of research, we apply the same encoder for TMD. There has been some work on transformer-based TMD for data other than trajectories, such as accelerometer, gyroscope, and magnetometer data [33]. However, these sensors are sampled at a much higher rate (\(>20\,\mathrm{Hz}\)) and thus the input sequences cover only a few seconds. In this case, the transportation mode is mostly constant, so the segmentation part is dropped from the TMD task and only the classification part remains. For TMD _from location trajectories_, sequences typically cover several minutes and therefore mode changes are likely to occur. Nevertheless, most of the related work presupposes a correct segmentation and simply classifies each of the segments as one of the available modes [22]. This is problematic for real-world applications, where a correct segmentation is never given in advance. Here, the advantage of using the transformer encoder is the joint segmentation and classification of transportation modes by simply predicting the pointwise class probabilities and grouping consecutive predicted points of the same modes together. The baseline model SECA [29] may be the state-of-the-art approach of those models that segment _and_ classify from raw GNSS trajectories. Although their published code lacks the segmentation part, we compare their self-reported results on the GL dataset with our own results and show that our approach significantly outperforms SECA. #### Iv-B1 Model Adaptations The only adaptations made to DeepStay are in the decoder and the supervision. The decoder's weights are expanded and a softmax activation predicts the pointwise probability \(\hat{c}_{i,m}\) for each of the \(M=5\) transportation modes: \[\hat{c}_{i,m}=\mathrm{softmax}(emb_{i}~{}{W_{\mathrm{d}^{\prime}}}^{T}+b_{ \mathrm{d}^{\prime}})_{m},\ \ \ \ \hat{c}\in[0,1]^{n\times M} \tag{22}\] Now \(\mathrm{BCE_{w}}\) in 8 is replaced by the weighted cross entropy (\(\mathrm{CE_{w}}\)) in 23 between prediction and ground truth \(c_{i,m}\), where \(\overline{c}_{\mathrm{train}_{m}}\) denotes the percentage of labels of the \(m\)-th class within the training set. For segmentation, we can simply group consecutive points with the same most probable class. 
\[\mathrm{CE_{w}}(\hat{c}_{i},c_{i},\overline{c}_{\mathrm{train}})=-\sum_{m=1}^{ M}c_{i,m}\frac{\log(\hat{c}_{i,m})}{M\cdot\overline{c}_{\mathrm{train}_{m}}} \tag{23}\] #### Iv-B2 Baseline and Comparable Datasets The SECA model [29] is used as the only baseline. The authors perform a change point search by using the PELT method [34] to first segment the trajectory. Second, they use a convolutional neural network (CNN) to predict the mode of each segment and integrate an autoencoder for semi-supervision. We compare the size of the dataset after our own preprocessing with that of SECA in Table V. It shows that we train DeepStay with significantly fewer labels compared to SECA. However, in total more unlabeled data is available. Overall, the test sets are quite similar, which allows us to compare the final results of DeepStay and SECA. #### Iv-B3 Training, Validation, and Test Both SECA and DeepStay are trained semi-supervised. While SECA uses an autoencoder, DeepStay applies SSL (see Section IV-C). Unlike in Experiment 1, we randomly assign each sequence \(seq\) to one of the training or test sets, regardless of the including participants, to match the setup of SECA. We also apply 5-fold cross-validation. In addition, 20% of the training data is used to adjust the same hyperparameters as in Experiment 1. We optimize our model using Adam [32]. #### Iv-B4 Results We report the weighted \(F_{1}\) score and the accuracy. This \(F_{1}\) score is the average of the per-class \(F_{1}\) scores weighted by the number of labels per class. SECA is performing segmentwise classification and DeepStay point-wise classification, thus the following results are not fully comparable. Nevertheless, the final results in Table VI clearly demonstrate the significant performance improvement of DeepStay. A major reason may be the pointwise predictions, which do not require a prior segmentation, but intrinsically segment the data for the classification task. However, even when SECA is given ground truth segments, DeepStay still achieves better results. One reason may be that, unlike SECA, the input sequence for our model is not limited to a single transportation mode, i.e., it can also learn the transition between modes. E.g., it is intuitively more likely to see a transition from bus to train than from bus to car. Furthermore, the autoencoder in SECA only tries to reconstruct the trajectory, while SSL can provide proxy labels for DeepStay, which may be more informative. In addition, the transformer model with its attention mechanism seems to be superior in comparison to the CNN layers for this task. \begin{table} \begin{tabular}{l c c} \hline \hline & **unlabeled** & **labeled** & **labeled** \\ & **(Training)** & **(Training)** & **(Test)** \\ \hline This work & 16.29 M & 3.74 M & 4.76 M \\ SECA [29] & 15.43 M & 4.76 M & 4.76 M \\ \hline \hline \end{tabular} \end{table} TABLE V: Total number of GNSS points after preprocessing. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & \(F_{1}\) & **Acc.** & **PR-AUC** & **Precision** & **Recall** \\ \hline DeepStay (full) & **0.788** & **0.954** & **0.821** & 0.822 & 0.757 \\ w/o SSL & 0.780 & 0.953 & 0.809 & **0.837** & 0.729 \\ w/o pre-training & 0.557 & 0.850 & 0.787 & 0.418 & **0.838** \\ \hline \hline \end{tabular} \end{table} TABLE IV: Ablation study of DeepStay tested on ES. 
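A minimal sketch of the TMD adaptation: the pointwise softmax head of (22), the class-weighted cross entropy of (23), and the grouping of consecutive points that share the same most probable mode; layer names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class ModeDecoder(nn.Module):
    """Pointwise transportation-mode head (Eq. (22)): one linear layer + softmax over M modes."""
    def __init__(self, d_model, n_modes=5):
        super().__init__()
        self.proj = nn.Linear(d_model, n_modes)

    def forward(self, emb):                    # emb: (batch, n, d_model)
        return torch.softmax(self.proj(emb), dim=-1)

def weighted_ce(c_hat, c_true, class_freq, eps=1e-7):
    """Class-weighted cross entropy of Eq. (23); c_true is one-hot, class_freq the label shares."""
    m = c_hat.shape[-1]
    w = 1.0 / (m * class_freq)                 # weights 1 / (M * c̄_train_m), shape (M,)
    return -(c_true * torch.log(c_hat.clamp_min(eps)) * w).sum(dim=-1).mean()

def group_modes(probs):
    """Merge consecutive points of one sequence sharing the same argmax mode.

    probs : (n, M) pointwise mode probabilities; returns (start_idx, end_idx, mode) triples.
    """
    modes = probs.argmax(dim=-1).tolist()
    segments, start = [], 0
    for i in range(1, len(modes) + 1):
        if i == len(modes) or modes[i] != modes[start]:
            segments.append((start, i, modes[start]))
            start = i
    return segments
```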
\begin{table} \begin{tabular}{l c c} \hline \hline & **\(F_{1}\)** & **Acc.** \\ \hline DeepStay (ours) & **0.830** & **0.831** \\ SECA with ground truth segments & 0.764 & 0.768 \\ SECA with predicted segments & 0.717 & 0.721 \\ \hline \hline \end{tabular} \end{table} TABLE VI: Final results for TMD tested on the GL dataset. ## VII Conclusion and Future Work In this work, we show, how to derive programmatically weak labels for SR extraction and how to successfully train a transformer encoder with these data. We demonstrate the effectiveness of this model on ground truth data for SR extraction and TMD, where it outperforms state-of-the-art methods. This work should be seen as a starting point for new data-driven approaches to SR extraction and provides useful training and test data. Ideas for future work are: #### Vii-1 More data augmentation Instead of always training on the same sequences, all trajectories could be shifted by a number of points in each epoch. This results in slightly different sequences and SSL targets and reduces overfitting. #### Vii-2 Modeling dependencies We treat all labeling functions independently, although there are clear dependencies. E.g., \(w_{\mathrm{build}}\) correlates strongly with \(w_{\mathrm{am}}\), because buildings are often part of amenities. Other work suggests that the performance benefits significantly from incorporating these dependencies [35]. #### Vii-3 Pre-training on multiple datasets In this study, we stick with the GL dataset for pre-training. However, there are many public unlabeled GNSS trajectory datasets. All of them could be weakly labeled with our approach and carefully combined to have an even larger training set.
2301.04800
Minimum Weight Random Graphs with Edge Constraints
In this paper, we study two examples of minimum weight random graphs with edge constraints. First we consider the complete graph on ${n}$ vertices equipped with uniformly heavy edge weights and use iteration methods to obtain deviation estimates for the minimum weight of subtrees with a given number of edges. Next we analyze edge constrained minimum weight paths in the integer lattice ${\mathbb{Z}^d}$ and employ martingale difference techniques to describe the behaviour of the scaled minimum weight in terms of the edge constraint.
Ghurumuruhan Ganesan
2023-01-12T04:08:07Z
http://arxiv.org/abs/2301.04800v1
# Minimum Weight Random Graphs with Edge Constraints # Minimum Weight Random Graphs with Edge Constraints **Ghurumuruhan Ganesan\({}^{1}\)** \({}^{1}\)IISER, Bhopal E-mail: [email protected] **Abstract:** In this paper, we study two examples of minimum weight random graphs with edge constraints. First we consider the complete graph on \(n\) vertices equipped with uniformly heavy edge weights and use iteration methods to obtain deviation estimates for the minimum weight of subtrees with a given number of edges. Next we analyze edge constrained minimum weight paths in the integer lattice \(\mathbb{Z}^{d}\) and employ martingale difference techniques to describe the behaviour of the scaled minimum weight in terms of the edge constraint. **Keywords:** Minimum spanning tree, heavy weights, minimum passage time paths, edge constraints. **2010 Mathematics Subject Classification:** Primary: 60J10, 60K35; Secondary: 60C05, 62E10, 90B15, 91D30. ## 1 Introduction Trees of complete graphs with random edge weights are important from both theoretical and practical perspectives. For independent and identically distributed (i.i.d.) edge weights with a common cumulative distribution function (cdf) \(F(.)\) that varies linearly close to zero, [5] studied convergence of the weight of the minimum spanning tree (MST) of the complete graph \(K_{n}\) on \(n\) vertices. Later [2] studied asymptotics for the _expected_ value of the MST weight, when the edge weight distributions follow a power law distribution. The paper [6] studied central limit theorems for a scaled and centred version of the MST weight and more recently [1] studied bounds on the diameter of the MST. The methods involve a combination of graph evolution via Kruskal's algorithm along with a component analysis of random graphs. For MSTs with nonidentical edge weight distributions, [10] use the Tutte polynomial approach [11] to compute expressions for the expected value of \(MST_{n}\). In Section 2 of our paper, we study "approximate" MSTs containing \(O(n)\) edges, obtained by placing random heavy weights in each edge of \(K_{n}\) that are not necessarily identically distributed but are uniformly heavy. We use stochastic domination to obtain deviation type estimates for the minimum weight and use the martingale method to bound the variance (see Theorem 2.1). Next we study constrained paths in the integer lattice. Consider the following scenario where each edge in the square lattice \(\mathbb{Z}^{d}\) is associated with a random passage time and it is of interest to determine the minimum passage time \(T_{n}\) between the origin and \((n,0,\ldots,0).\) The case of independent and identically distributed (i.i.d.) passage times has been well-studied and detailed results are known regarding the almost sure convergence and convergence in mean of the scaled passage time \(\frac{T_{n}}{n}\) (see [8]). Later [4] studied central limit theorems for first passage across thin cylinders and recently [7] have studied critical first passage percolation in the triangular lattice. In many applications, the passage times may not be i.i.d. For example, if we model vertices of \(\mathbb{Z}^{d}\) as mobiles stations and the edge passage times as the delay in sending a packet between two adjacent stations, then depending on external conditions, the edges may have different passage time distributions. In such cases, it is of interest to study convergence properties of the minimum passage time \(T_{n},\) with appropriate centering and scaling. 
In Section 3 our paper, we state and prove our result (Theorem 3.1) regarding the behaviour of the constrained minimum passage times as a function of the edge constraint. The paper is organized as follows. In Section 2, we state and prove our result regarding the asymptotic behaviour of weighted trees of the complete graph, with edge constraints. Finally, in Section 3, we describe the behaviour of constrained minimum passage time paths in the integer lattice \(\mathbb{Z}^{d}.\) ## 2 Edge Constrained Minimum Weight Trees For \(n\geq 1,\) let \(K_{n}\) be the complete graph with vertex set \(\{1,2,\ldots,n\}.\) Let \(\{w(i,j)\}_{1\leq i<j\leq n}\) be independent random variables with corresponding cumulative distribution functions (cdfs) \(\{F_{i,j}\}_{1\leq i<j\leq n}\) and for \(1\leq j<i\leq n,\) set \(w(i,j):=w(j,i).\) We define \(w(e):=w(i,j)\) to be the _weight_ of the edge \(e=(i,j)\in K_{n}\) and assume throughout that \(w(e)\leq 1,\) for simplicity. A tree in \(K_{n}\) is a connected acyclic subgraph. For a tree \(\mathcal{T}\) with vertex set \(\{v_{1},\ldots,v_{t}\},\) the weight of \(\mathcal{T}\) is the sum of the weights of the edges in \(\mathcal{T};\) i.e., \(W(\mathcal{T}):=\sum_{e\in\mathcal{T}}w(e).\) For \(1\leq\tau\leq n-1\) we define \[M_{n}=M_{n}(\tau):=\min_{\mathcal{T}}W(\mathcal{T}), \tag{2.1}\] where the minimum is taken over all trees \(\mathcal{T}\subset K_{n}\) having at least \(\tau\) edges. The following is the main result of this section. Constants throughout do not depend on \(n.\) **Theorem 2.1**.: _Suppose \(\tau\geq\rho n\) for some \(0<\rho\leq 1\) and also suppose there are positive constants \(D_{1}\leq D_{2}\) and \(0<\alpha<1\) such that_ \[D_{1}x^{\frac{1}{\alpha}}\leq F_{i,j}(x)\leq D_{2}x^{\frac{1}{\alpha}} \tag{2.2}\] _for \(0\leq x\leq 1.\) There are positive constants \(C_{i},1\leq i\leq 3\) such that_ \[\mathbb{P}\left(C_{1}n^{1-\alpha}\leq M_{n}\leq C_{2}n^{1-\alpha}\right)\geq 1-e ^{-C_{3}n^{1-\alpha}} \tag{2.3}\] _and \(C_{1}n^{1-\alpha}\leq\mathbb{E}M_{n}\leq C_{2}n^{1-\alpha}.\) Moreover, \(var(M_{n})\leq 2n.\)_ To prove Theorem 2.1, we use the following preliminary Lemma regarding the behaviour of the exponential moments of small edge weights. For \(1\leq j\leq n\) and distinct deterministic integers \(1\leq a_{1},\ldots,a_{j}\leq n\) let \[Y_{j}=Y_{j}(a_{1},\ldots,a_{j}):=\min_{a\notin\{a_{1},\ldots,a_{j-1}\}}w(a_{j},a). \tag{2.4}\] **Lemma 2.2**.: _There are positive constants \(C_{1}\) and \(C_{2}\) not depending on the choice of \(\{a_{i}\}\) or \(j\) such that for any \(1\leq j\leq n-1,\)_ \[\frac{C_{1}}{(n-j)^{\alpha}}\leq\mathbb{E}Y_{j}\leq\frac{C_{2}}{(n-j)^{\alpha }}. \tag{2.5}\] _Moreover for every \(s>1\) there are constants \(K,C\geq 1\) not depending on the choice of \(\{a_{i}\}\) or \(j\) such that for all \(1\leq j\leq n-K\)_ \[\mathbb{E}e^{sY_{j}}\leq\exp\left(\frac{C}{(n-j)^{\alpha}}\right). \tag{2.6}\] _Proof of Lemma 2.2_: In what follows, we use the following standard deviation estimate. Suppose \(W_{i},1\leq i\leq m\) are independent Bernoulli random variables satisfying \(\mathbb{P}(W_{1}=1)=1-\mathbb{P}(W_{1}\ =\ 0)\leq\mu_{2}.\) For any \(0<\epsilon<\frac{1}{2},\) \[\mathbb{P}\left(\sum_{i=1}^{m}W_{i}>m\mu_{2}(1+\epsilon)\right)\leq\exp\left(- \frac{\epsilon^{2}}{4}m\mu_{2}\right). \tag{2.7}\] For a proof of (2.7, we refer to Corollary A.1.14, pp. 312 of Alon and Spencer (2008). 
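As a quick illustration of condition (2.2) and of the scaling in (2.5) (a worked special case, not needed for the argument), suppose \(w=U^{\alpha}\) with \(U\) uniform on \([0,1].\) Then \(F(x)=\mathbb{P}(U^{\alpha}\leq x)=x^{\frac{1}{\alpha}}\) for \(0\leq x\leq 1,\) so (2.2) holds with \(D_{1}=D_{2}=1.\) For \(m\) independent such weights, \(\min_{1\leq i\leq m}w_{i}=\left(\min_{1\leq i\leq m}U_{i}\right)^{\alpha}\) and \(\min_{i}U_{i}\) has the \(\mathrm{Beta}(1,m)\) distribution, so that \[\mathbb{E}\min_{1\leq i\leq m}w_{i}=m\int_{0}^{1}x^{\alpha}(1-x)^{m-1}dx=\frac{\Gamma(1+\alpha)\,\Gamma(m+1)}{\Gamma(m+1+\alpha)}\sim\frac{\Gamma(1+\alpha)}{m^{\alpha}}\] as \(m\rightarrow\infty,\) which matches the \((n-j)^{-\alpha}\) scaling of (2.5) with \(m=n-j.\)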
We first find the lower bound for \(\mathbb{E}Y_{j}\) and then upper bound \(\mathbb{E}Y_{j}\) and \(\mathbb{E}e^{sY_{j}}\) for constant \(s>1\) in that order. The term \(Y_{j}\) is the minimum of \(n-j\) edge weights and so for \(0<x<(n-j)^{\alpha}\) we use the upper bound for the cdfs in (2.2) to get that \(\mathbb{P}\left(Y_{j}>\frac{x}{(n-j)^{\alpha}}\right)\geq\left(1-D_{2}\frac{ x^{\frac{1}{\alpha}}}{n-j}\right)^{n-j},\) where \(D_{2}\geq 1\) is as in (2.2). Thus \[(n-j)^{\alpha}\mathbb{E}Y_{j}=\int_{0}^{(n-j)^{\alpha}}\mathbb{P}\left(Y_{j}> \frac{x}{(n-j)^{\alpha}}\right)dx\geq\int_{0}^{(n-j)^{\alpha}}\left(1-D_{2} \frac{x^{\frac{1}{\alpha}}}{n-j}\right)^{n-j}dx. \tag{2.8}\] To evaluate the integral in (2.8), we use \(1-y\geq e^{-2y}\) for all \(0<y<\frac{1}{2}.\) Letting \(y=\frac{D_{2}x^{\frac{1}{\alpha}}}{n-j}\) for \(0<x<\left(\frac{n-j}{2D_{2}}\right)^{\alpha}\) we then have \((1-y)^{n-j}\geq e^{-2D_{2}x^{\frac{1}{\alpha}}}\) and substituting this in (2.8) and using \(D_{2}\geq 1\), we get \[(n-j)^{\alpha}\mathbb{E}Y_{j}\geq\int_{0}^{\left(\frac{n-j}{2D_{2}}\right)^{ \alpha}}e^{-2D_{2}x^{\frac{1}{\alpha}}}dx\geq\int_{0}^{\left(\frac{1}{2D_{2}} \right)^{\frac{1}{\alpha}}}e^{-2D_{2}x^{\frac{1}{\alpha}}}dx=:C_{1}\] for all \(1\leq j\leq n-1\). For upper bounding \(\mathbb{E}Y_{j}\), we again use the fact that the term \(Y_{j}\) is the minimum of \(n-j\) edge weights and so for \(0<x<(n-j)^{\alpha}\) we use the lower bound for the cdfs in (2.2) to get \(\mathbb{P}\left(Y_{j}>\frac{x}{(n-j)^{\alpha}}\right)\leq\left(1-D_{1}\frac{x ^{\frac{1}{\alpha}}}{n-j}\right)^{n-j}\leq e^{-D_{1}x^{\frac{1}{\alpha}}}.\) Thus \[(n-j)^{\alpha}\mathbb{E}Y_{j}=\int_{0}^{(n-j)^{\alpha}}\mathbb{P}\left(Y_{j}> \frac{x}{(n-j)^{\alpha}}\right)dx\leq\int_{0}^{\infty}e^{-D_{1}x^{\frac{1}{ \alpha}}}dx=:C_{2}, \tag{2.9}\] a finite positive constant not depending on the choice of \(\{a_{i}\}\). To compute \(\mathbb{E}e^{sY_{j}}\) we split \(\mathbb{E}e^{sY_{j}}=I_{1}+I_{2}\), where \(I_{1}=\mathbb{E}e^{sY_{j}}\mathbf{1}\left(Y_{j}\leq\frac{1}{2s}\right)\) and \(I_{2}=\mathbb{E}e^{sY_{j}}\mathbf{1}\left(Y_{j}>\frac{1}{2s}\right)\) and estimate each term separately. To evaluate \(I_{1}\), we bound \(e^{x}\leq 1+2x\) for \(x\leq\frac{1}{2}\) and set \(x=sY_{j}\leq\frac{1}{2}\) to get that \(e^{sY_{j}}\leq 1+2sY_{j}.\) Thus \[I_{1}\leq 1+2s\mathbb{E}Y_{j}\leq 1+\frac{2sC_{2}}{(n-j)^{\alpha}}, \tag{2.10}\] using (2.9). To evaluate \(I_{2}\), we recall that \(Y_{j}=\min_{a\notin\{a_{1},\ldots,a_{j-1}\}}w(a_{j},a)\leq 1\) is the minimum of \(n-j\) independent edge weights. Using the upper bound for the cdfs in (2.2) we have \(\mathbb{P}\left(w(a_{j},a)>\frac{1}{2s}\right)\leq 1-\frac{D_{1}}{(2s)^{ \frac{1}{\alpha}}}<1\), since \(D_{1}\leq 1\) and \(s\ >\ 1.\) Setting \(e^{-\theta}=1-\frac{D_{1}}{(2s)^{\frac{1}{\alpha}}}\), we therefore have \(\theta>0\) and that \(\mathbb{P}\left(Y_{j}>\frac{1}{2s}\right)\leq e^{-\theta(n-j)}.\) Finally using \(Y_{j}\leq 1\) (since all edge weights are at most one) we get \[I_{2}=\mathbb{E}e^{sY_{j}}\mathbf{1}\left(Y_{j}>\frac{1}{2s}\right)\leq e^{s} \mathbb{P}\left(Y_{j}>\frac{1}{2s}\right)\leq e^{s}e^{-\theta(n-j)}\leq\frac {e^{s}C_{2}}{(n-j)^{\alpha}}, \tag{2.11}\] where \(C_{2}>0\) is as in (2.10), provided \(n-j\geq K+1\) and \(K=K(s,C_{2})\) is large. The final estimate in (2.11) is obtained using \(x^{\alpha}e^{-\theta x}\longrightarrow 0\) as \(x\rightarrow\infty\). 
From (2.10) and (2.11), we therefore get for \(1\leq j\leq n-K\) that \[\mathbb{E}e^{sY_{j}}\leq 1+\frac{2sC_{2}}{(n-j)^{\alpha}}+\frac{e^{s}C_{2}}{(n-j)^ {\alpha}}\leq\exp\left(\frac{2sC_{2}+e^{s}C_{2}}{(n-j)^{\alpha}}\right),\] proving (2.6). _Proof of Theorem 2.1_: We obtain the lower deviation bound by counting the number of edges with large enough weight and the upper deviation bound by constructing a spanning path with low weight analogous to Aldous (1990). The expectation bounds then follow from (2.3). To compute the variance, we use the martingale difference method. In fact, from the variance bound, we get that \(var\left(\frac{M_{n}}{n^{1-\alpha}}\right)\leq\frac{C}{n^{1-2\alpha}}\) and so if \(\alpha<\frac{1}{2},\) then \(\frac{M_{n}-\mathbb{E}M_{n}}{n^{1-\alpha}}\) converges to zero in probability. Details follow. We begin with the proof of the lower deviation bound. For \(\gamma>0\) a small constant, let \(R_{tot}:=\sum_{e\in K_{n}}\mathbf{1}\left(w(e)<\left(\frac{\gamma}{n}\right)^{ \alpha}\right)\) be the number of edges of weight at most \(\left(\frac{\gamma}{n}\right)^{\alpha}.\) We estimate \(R_{tot}\) using the standard deviation bound (2.7). First, we have from the bounds for the cdfs in (2.2) that \(\mathbb{P}\left(w(e)<\left(\frac{\gamma}{n}\right)^{\alpha}\right)<\frac{D_{2} \gamma}{n}\) and since the edge weights are independent, we use (2.7) with \(m=\binom{n}{2},\mu_{2}=\frac{D_{2}\gamma}{n}\) and \(\epsilon=\frac{1}{4}\) to get that \[\mathbb{P}\left(R_{tot}>\frac{5mD_{2}\gamma}{4n}\right)\leq\exp\left(-m\frac{ D_{2}\gamma}{64n}\right)\leq\exp\left(-\frac{nD_{2}\gamma}{256}\right),\] since \(m=\frac{n(n-1)}{2}>\frac{n^{2}}{4}.\) Let \(\mathcal{T}_{n}\) be any tree with weight \(M_{n}\) and containing at least \(\tau\geq\rho n\) edges. Using \(m<\frac{n^{2}}{2},\) we get that with probability at least \(1-e^{-\frac{nD_{2}\gamma}{256}},\) the weight of \(\mathcal{T}_{n}\) is at least \[\left(\rho n-\frac{5mD_{2}\gamma}{4n}\right)\cdot\left(\frac{\gamma}{n}\right) ^{\alpha}\geq\left(\rho n-\frac{5nD_{2}\gamma}{8}\right)\cdot\left(\frac{ \gamma}{n}\right)^{\alpha}\geq Cn^{1-\alpha}\] for some constant \(C>0,\) provided \(\gamma>0\) is small. This completes the proof of the lower deviation bound in (2.3). For the upper deviation bound, we consider the spanning path obtained by an incremental approach similar to Aldous (1990). Let \(i_{1}=1\) and among all edges with endvertex \(i_{1},\) let \(i_{2}\) be the index such that \(w(i_{1},i_{2})\) has the least weight. Similarly, among all edges with endvertex in \(i_{2}\setminus\{i_{1}\},\) let \(i_{3}\) be such that \(w(i_{2},i_{3})\) has the least weight. Continuing this way, the path \(\mathcal{P}_{iter}:=(i_{1},\ldots,i_{n})\) is a spanning path containing all the nodes and so letting \(Z_{j}=w(X_{i_{j-1}},X_{i_{j}})\) be the weight of the \(j^{th}\) edge in \(\mathcal{P}_{iter},\) we have \(M_{n}\leq W(\mathcal{P}_{iter})=\sum_{j=1}^{n}Z_{j}.\) For \(s>0\) we therefore have \[\mathbb{E}e^{sM_{n}}\leq\mathbb{E}e^{\sum_{j=1}^{n-1}sZ_{j}} \tag{2.12}\] and in what follows, we find an upper bound for the right hand side of (2.12). Let \(a_{1}:=1\) and \(a_{l},2\leq l\leq j-1\) be deterministic numbers and suppose the event \(\{i_{1}=a_{1},\ldots,i_{j-1}=a_{j-1}\}\) occurs so that \(Z_{l}=w(a_{l},a_{l+1})\) for \(1\leq l\leq j-1\) and \(Z_{j}=Y_{j}=Y_{j}(a_{1},\ldots,a_{j})=\min_{a\notin\{a_{1},\ldots,a_{j-1}\}}w (a_{j},a)\) is as in (2.4). 
The event \(\{i_{1}=a_{1},\ldots,i_{j-1}=a_{j-1}\}\) and the random variables \(w(a_{l},a_{l+1}),1\leq l\leq j-1\) depend only on the state of edges having at least one endvertex in \(\{a_{1},\ldots,a_{j-1}\}.\) On the other hand, the random variable \(Y_{j}\) depends only on the state of edges having both endvertices outside \(\{a_{1},\ldots,a_{j-1}\},\) and is therefore independent of them. Thus \[\mathbb{E}e^{s\sum_{l=1}^{j}Z_{l}}\mathbf{1}\left(i_{1}=a_{1},\ldots,i_{j-1}=a_{j-1}\right)\] \[=\mathbb{E}e^{sY_{j}}e^{\sum_{l=1}^{j-1}sZ_{l}}\mathbf{1}\left(i_{1}=a_{1},\ldots,i_{j-1}=a_{j-1}\right)\] \[=\mathbb{E}e^{sY_{j}}\mathbb{E}e^{\sum_{l=1}^{j-1}sZ_{l}}\mathbf{1}\left(i_{1}=a_{1},\ldots,i_{j-1}=a_{j-1}\right). \tag{2.13}\] Using (2.6) we have \(\mathbb{E}e^{sY_{j}}\leq\exp\left(\frac{C}{(n-j)^{\alpha}}\right)\) for all \(1\leq j\leq n-K,\) where \(K\) and \(C\) do not depend on the choice of \(\{a_{i}\}.\) Thus summing (2.13) over all possible \(a_{1},\ldots,a_{j-1},\) we get \(\mathbb{E}e^{s\sum_{l=1}^{j}Z_{l}}\leq\exp\left(\frac{C}{(n-j)^{\alpha}}\right)\mathbb{E}e^{s\sum_{l=1}^{j-1}Z_{l}}\) and continuing iteratively, we get \(\mathbb{E}e^{s\sum_{l=1}^{j}Z_{l}}\leq\exp\left(C\sum_{l=1}^{j}\frac{1}{(n-l)^{\alpha}}\right)\) for \(n-j\geq K.\) For \(n-j<K,\) we use the bound \(\mathbb{E}e^{sY_{j}}\leq e^{s}\) since \(Y_{j}\leq 1\) (all the edge weights are at most one) and argue as before to get that \[\mathbb{E}e^{s\sum_{l=1}^{n-1}Z_{l}}\leq\exp\left(C\sum_{l=1}^{n-K}\frac{1}{(n-l)^{\alpha}}\right)e^{sK}. \tag{2.14}\] Comparing with integrals, the term \[\sum_{l=1}^{n-K}\frac{1}{(n-l)^{\alpha}}=\sum_{j=K}^{n-1}\frac{1}{j^{\alpha}}\leq C_{3}\int_{K}^{n-1}\frac{1}{x^{\alpha}}dx\leq C_{4}n^{1-\alpha}\] for some positive constants \(C_{3},C_{4}.\) We therefore get from (2.14) and (2.12) that \(\mathbb{E}e^{sM_{n}}\leq e^{C_{5}n^{1-\alpha}}\) for some constant \(C_{5}=C_{5}(s).\) Therefore, by the Chernoff estimate, we have \(\mathbb{P}(M_{n}\geq C_{6}n^{1-\alpha})\leq e^{-C_{7}n^{1-\alpha}}\) for some positive constants \(C_{6},C_{7}.\) This completes the proof of the upper deviation bound in (2.3). Finally, the lower bound on the expectation \(\mathbb{E}M_{n}\) follows directly from the lower deviation bound in (2.3). For the expectation upper bound, we use the fact that the edge weights are at most one and so the total weight of any tree containing at least \(\rho n\) edges is at most \(n.\) Consequently, from the upper deviation bound in (2.3), we get that \(\mathbb{E}M_{n}\leq C_{2}n^{1-\alpha}+n\cdot e^{-Cn^{1-\alpha}}\leq 2C_{2}n^{1-\alpha}.\) The proof of the variance bound is analogous to the pivotal edge argument in Kesten (1993) together with the fact that the number of edges in a spanning tree is at most \(n.\) This completes the proof of the Theorem. ## 3 Edge Constrained Minimum Passage Time Paths Consider the integer lattice \(\mathbb{Z}^{d},\) where two vertices \(w_{1}=(w_{1,1},\ldots,w_{1,d})\) and \(w_{2}=(w_{2,1},\ldots,w_{2,d})\) are _adjacent_ if \(\sum_{i=1}^{d}|w_{1,i}-w_{2,i}|=1\) and adjacent vertices are joined together by an edge. Let \(\{q_{i}\}_{i\geq 1}\) denote the set of edges. Each edge \(q_{i}\) is equipped with a random passage time \(t(q_{i})\) and we define the random sequence \((t(q_{1}),t(q_{2}),\ldots)\) on the probability space \((\Omega,\mathcal{F},\mathbb{P})\). 
A _path_\(\pi\) is a sequence of distinct adjacent vertices \((w_{1},\ldots,w_{r+1}).\) If \(e_{i},1\leq i\leq r\) is the edge with endvertices \(w_{i}\) and \(w_{i+1},\) then we denote \(\pi=(e_{1},...,e_{r}).\) By definition, \(\pi\) is self-avoiding and \(w_{1}\) and \(w_{r+1}\) are said to be the _endvertices_ of \(\pi.\) The length of \(\pi\) is the number of edges in \(\pi\) and the passage time of \(\pi\) is defined as \(T(\pi):=\sum_{i=1}^{r}t(e_{i}).\) **Definition 3.1**.: For \(k\geq 1\) we define the \(k-\)_constrained_ minimum passage time between the origin and the vertex \((n,\mathbf{0})\) as \(T_{n}(k):=\min_{\pi}T(\pi),\) where the minimum is over all paths \(\pi\) of length at most \(k\) and with endvertices \((0,\mathbf{0})\) and \((n,\mathbf{0}).\) We define the _unconstrained_ minimum passage time as \(T_{n}:=\inf_{k\geq 1}T_{n}(k).\) By definition \(T_{n}(k)\downarrow T_{n}\) a.s. as \(k\rightarrow\infty.\) In this section, we are primarily interested in studying how \(T_{n}(k)\) varies as the constraint parameter \(k\) increases and also how fast \(T_{n}(k)\) converges to \(T_{n}.\) The following are the main results of this section. Throughout constants do not depend on \(n.\) **Theorem 3.1**.: _Suppose_ \[\sup_{i\geq 1}\mathbb{P}(t(q_{i})\leq\epsilon)\longrightarrow 0\text{ as } \epsilon\downarrow 0\text{ and }\mu_{2}:=\sup_{i\geq 1}\mathbb{E}t^{2}(q_{i})<\infty. \tag{3.1}\] \((a)\) _There are constants \(C_{1},C_{2}>0\) such that for every \(k\geq n:\)_ \[\mathbb{P}\left(C_{1}n\leq T_{n}\leq T_{n}(k)\leq C_{2}n\right)\geq 1-\frac{C_{2} }{n},\qquad var(T_{n}(k))\leq C_{2}n \tag{3.2}\] _and \(C_{1}n\leq\mathbb{E}T_{n}\leq\mathbb{E}T_{n}(k)\leq C_{2}n.\)_ \((b)\) _There exists a constant \(C_{3}>0\) such that if \(k\geq C_{3}n\) then_ \(\mathbb{P}(T_{n}\neq T_{n}(k))\leq\frac{C_{3}}{k}.\) _If \(k\geq n^{1+\epsilon}\) for some \(\epsilon>0,\) then both \(\frac{T_{n}(k)-\mathbb{E}T_{n}(k)}{n}\) and \(\frac{T_{n}-\mathbb{E}T_{n}}{n}\) converge to zero a.s. as \(n\rightarrow\infty.\)_ \((c)\) _If the edge weights are uniformly square integrable in the sense that_ \(\sup_{i\geq 1}\mathbb{E}t^{2}(q_{i})\mathbf{1}(t(q_{i})\geq M)\longrightarrow 0\) _as \(M\rightarrow\infty,\) then \(var\left(\frac{T_{n}}{n}\right)\longrightarrow 0\) as \(n\rightarrow\infty.\)_ _If \(\sup_{i\geq 1}\mathbb{E}t^{p}(q_{i})<\infty\) for some \(p>2,\) then \(var(T_{n})\leq Cn\) for some constant \(C>0.\)_ _Proof of Theorem 3.1\((a)\)_: Let \(\mu:=\sup_{f}\mathbb{E}t(f)\) be the maximum expected passage time of an edge. We begin by showing that there exists a constant \(0<\beta\leq\mu\) such for any integer \(m\geq 1\) and any path \(\pi\) containing \(m\) edges, \[\mathbb{P}\left(T(\pi)\leq\beta m\right)\leq e^{-dm} \tag{3.3}\] for some positive constant \(\beta=\beta(d)\leq\mu,\) not depending on \(m\) or \(\pi.\) Here \(d\) is the dimension of the integer lattice under consideration. Indeed, let \(\pi=(e_{1},\ldots,e_{m})\) so that \(T(\pi)=\sum_{i=1}^{m}t(e_{i}).\) Using the Chernoff bound we obtain for \(\delta,s>0\) that \[\mathbb{P}(T(\pi)\leq\delta m)=\mathbb{P}\left(\sum_{i=1}^{m}t(e_{i})\leq \delta m\right)\leq e^{s\delta m}\prod_{i=1}^{m}\mathbb{E}\left(e^{-st(e_{i})} \right). 
\tag{3.4}\] For a fixed \(\eta>0,\) we write \(\mathbb{E}e^{-st(e_{i})}=\int_{t(e_{i})<\eta}e^{-st(e_{i})}d\mathbb{P}+\int_{t(e_{i})\geq\eta}e^{-st(e_{i})}d\mathbb{P}\) and use \[\int_{t(e_{i})<\eta}e^{-st(e_{i})}d\mathbb{P}\leq\mathbb{P}(t(e_{i})<\eta)\text{ and }\int_{t(e_{i})\geq\eta}e^{-st(e_{i})}d\mathbb{P}\leq e^{-s\eta}\] to get that \[\mathbb{E}e^{-st(e_{i})}\leq\mathbb{P}(t(e_{i})<\eta)+e^{-s\eta}.\] Since \(\sup_{i\geq 1}\mathbb{P}(t(q_{i})\leq\epsilon)\longrightarrow 0\) as \(\epsilon\downarrow 0\) by (3.1), we can choose \(\eta>0\) small so that \(\mathbb{P}(t(e_{i})<\eta)\leq\frac{e^{-6d}}{2}\) for every edge \(e_{i}.\) Fixing such an \(\eta\) we choose \(s=s(\eta,d)>0\) large so that the second term \(e^{-s\eta}<\frac{e^{-6d}}{2}.\) This implies that \(\mathbb{E}e^{-st(e_{i})}\leq e^{-6d}\) and so from (3.4) we then get that \(\mathbb{P}(T(\pi)\leq\delta m)\leq e^{s\delta m}e^{-6dm}\leq e^{-2dm}\leq e^{-dm}\) for all \(m\geq 1,\) provided \(\delta=\delta(s,d)>0\) is small. Taking \(\beta=\delta,\) this completes the proof of (3.3). Next, for integer \(m\geq 1\) define the event \(E_{m}\) as \[E_{m}:=\bigcap_{r\geq\frac{3\mu}{\beta}m}\ \bigcap_{\pi}\{T(\pi)\geq\beta r\} \tag{3.5}\] where the second intersection is over all paths with origin as an endvertex and consisting of \(r\) edges. Thus \(E_{m}\) is the event that every path \(\pi\) with origin as an endvertex and consisting of \(r\geq\frac{3\mu}{\beta}m\) edges has passage time \(T(\pi)\geq\beta r.\) Since there are at most \((2d)^{r}\) paths of length \(r\) starting from the origin, the estimate (3.3) gives \[\mathbb{P}(E_{m}^{c})\leq\sum_{r\geq 3\mu\beta^{-1}m}(2d)^{r}e^{-dr}\leq\sum_{r\geq 3\mu\beta^{-1}m}(2e^{-1})^{r}\leq\frac{e^{-\delta_{0}m}}{1-2e^{-1}} \tag{3.6}\] for all \(m\geq 1\) and some positive constant \(\delta_{0}=\delta_{0}(d,\mu),\) not depending on \(m.\) Here, the second inequality in (3.6) is obtained using the fact that the function \(xe^{-x}\) attains its maximum at \(x=1\) and so \(2de^{-d}\leq 2e^{-1}<1\) for all \(d\geq 2.\) Let \(F_{m}\) be the event that \(\sum_{i=1}^{m}t(f_{i})\leq 2\mu m\) where \(f_{i}\) is the horizontal edge with endvertices \((i-1,\mathbf{0})\) and \((i,\mathbf{0}).\) Letting \(X_{i}:=t(f_{i})-\mathbb{E}t(f_{i})\) and using the fact that \(\{X_{i}\}\) are independent, we then get from Chebychev's inequality that \[\mathbb{P}(F_{m}^{c})\leq\mathbb{P}\left(\sum_{i=1}^{m}X_{i}\geq\mu m\right)\leq\frac{\sum_{i=1}^{m}var(X_{i})}{\mu^{2}m^{2}}\leq\frac{C}{m} \tag{3.7}\] for some constant \(C>0.\) Now set \(m=\frac{\beta n}{3\mu}<n\) and suppose \(E_{m}\cap F_{n}\) occurs. From (3.6), (3.7) and the union bound, we get that \(\mathbb{P}(E_{m}\cap F_{n})\geq 1-\frac{C}{n}\) for some constant \(C>0.\) Since \(F_{n}\) occurs, we get that \(T_{n}(k)\leq 2\mu n\) and since \(E_{m}\) occurs, we get that any path starting from the origin and containing \(r\geq\frac{3\mu}{\beta}m=n\) edges has passage time at least \(\beta r\geq 3\mu m=\beta n.\) This obtains the first estimate in (3.2). 
Next, using the bounded second moment assumption in (3.1) and arguing as in the variance estimate in Theorem 1, Kesten (1993) we get that \(var(T_{n}(k))\leq C\mathbb{E}N_{n}(k),\) where \(N_{n}(k)\) is the number of edges in the path with passage time \(T_{n}(k).\) If \(E_{m}\cap F_{n}\) occurs, then from the discussion in the previous paragraph, we get that \(N_{n}(k)\leq\frac{3\mu}{\beta}n.\) For \(x\geq\frac{3\mu}{\beta}n\) we assume for simplicity that \(y=\frac{\beta x}{3\mu}\) is an integer and write \[\mathbb{P}(N_{n}(k)\geq x) \leq \mathbb{P}\left(\left\{N_{n}(k)\geq x\right\}\cap E_{y}\right)+ \mathbb{P}\left(E_{y}^{c}\right) \tag{3.8}\] \[\leq \mathbb{P}\left(\left\{N_{n}(k)\geq x\right\}\cap E_{m}\right)+D _{1}e^{-D_{2}x}\] for some constants \(D_{1},D_{2}>0\) by (3.6). If \(N_{n}(k)\geq x\) and the event \(E_{y}\) occurs, then every path containing \(r\geq\frac{3\mu}{\beta}y=x\) edges has weight at least \(\beta r\geq\beta x.\) Thus \(T_{n}(k)\geq\beta x\) and so \[\mathbb{P}\left(\left\{N_{n}(k)\geq x\right\}\cap E_{y}\right)\leq\mathbb{P}( T_{n}(k)\geq\beta x)\leq\mathbb{P}\left(\sum_{i=1}^{n}t(f_{i})\geq\beta x \right).\] Since \(\beta x\geq 3\mu n\) we have that \(\beta x-\sum_{i=1}^{n}\mathbb{E}t(f_{i})\geq\beta x-\mu n\geq\frac{2\beta x}{3}.\) Consequently, recalling that \(X_{i}=t(f_{i})-\mathbb{E}t(f_{i})\) and using the fact that \(var(X_{i})\leq\mathbb{E}t^{2}(f_{i})\leq C\) for some constant \(C>0,\) we get from Chebychev's inequality that \[\mathbb{P}\left(\left\{N_{n}(k)\geq x\right\}\cap E_{y}\right)\leq\mathbb{P} \left(\sum_{i=1}^{n}X_{i}\geq\frac{2\beta x}{3}\right)\leq\frac{D_{3}\sum_{i= 1}^{n}var(X_{i})}{x^{2}}\leq\frac{D_{4}n}{x^{2}} \tag{3.9}\] for some constants \(D_{3},D_{4}>0.\) Combining (3.8) and (3.9), we get that \(\mathbb{P}(N_{n}(k)\geq x)\leq\frac{D_{5}}{x^{2}}\) for \(x\geq\frac{3\mu}{\beta}n\) and so \(\mathbb{E}N_{n}(k)\leq D_{6}n\) for some constant \(D_{6}>0.\) Plugging this into the variance estimate for \(T_{n}(k)\) obtained in the previous paragraph, we get the second estimate in (3.2). The lower expectation bound follows directly from the lower deviation bound in (3.2). The upper expectation bound follows from the fact that \(\mathbb{E}T_{n}\leq\sum_{i=1}^{n}\mathbb{E}t(f_{i})\leq\mu n.\) This completes the proof of part \((a).\) _Proof of Theorem 3.1\((b)\)_: Let \(\beta,\mu\) be as in part \((a)\) and for \(k\geq n\) suppose that the event \(E_{k}\cap F_{k}\) occurs. From the discussion following (3.7), we know that \(\mathbb{P}(E_{k}\cap F_{k})\geq 1-\frac{C}{k}\) for some constant \(C>0.\) The minimum passage time between the origin and \((n,\mathbf{0})\) is at most \(2\mu k\) and any path starting from the origin and containing \(r\geq\frac{3\mu}{\beta}k\) edges has passage time at least \(\beta r\geq 3\mu k.\) Thus \(T_{n}\left(\frac{3\mu k}{\beta}\right)=T_{n}\) and this obtains the probability estimate in \((b)\) with \(C_{3}\ =\ \frac{3\mu}{\beta}.\) We prove the a.s. convergence in two steps. In the first step, we use a subsequence argument to show that \(\frac{T_{n}(k)-\mathbb{E}T_{n}(k)}{n}\) converges to zero a.s. In the second step we show that the _difference_\(\frac{T_{n}(k)-T_{n}}{n}\) converges to zero a.s. and in \(L^{1}\), provided \(k\) is sufficiently large. This then obtains the a.s. convergence for \(\frac{T_{n}-\mathbb{E}T_{n}}{n}\). 
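The objects in Definition 3.1 and the estimates in parts \((a)\) and \((b)\) can also be illustrated numerically. The sketch below (ours, not from the paper) computes the \(k\)-constrained passage time from the origin to \((n,0)\) in \(\mathbb{Z}^{2}\) with i.i.d. Exp(1) passage times, which satisfy (3.1). It relaxes over walks of at most \(k\) edges; since passage times are nonnegative, removing loops never increases the total time, so the minimum over such walks equals the minimum over paths of length at most \(k\). Restricting the walks to a finite window only gives an upper bound on \(T_{n}(k)\), but with a generous margin the value is expected to decrease in \(k\) and then freeze once \(k\) is a moderate multiple of \(n\), in line with part \((b)\). The same random environment (fixed seed) is reused for every \(k\).

```python
import numpy as np

def constrained_passage_time(n, k, margin=30, seed=0):
    """Upper bound on T_n(k) in Z^2: minimum over walks of at most k edges from
    (0, 0) to (n, 0) that stay inside the window [-margin, n + margin]^2."""
    rng = np.random.default_rng(seed)
    L = n + 2 * margin + 1                       # window side; array index = coordinate + margin
    tx = rng.exponential(1.0, size=(L - 1, L))   # passage time of edge (x, y)-(x + 1, y)
    ty = rng.exponential(1.0, size=(L, L - 1))   # passage time of edge (x, y)-(x, y + 1)
    d = np.full((L, L), np.inf)
    d[margin, margin] = 0.0                      # the origin
    for _ in range(k):                           # one relaxation round per allowed edge
        nd = d.copy()
        nd[1:, :] = np.minimum(nd[1:, :], d[:-1, :] + tx)    # step in +x
        nd[:-1, :] = np.minimum(nd[:-1, :], d[1:, :] + tx)   # step in -x
        nd[:, 1:] = np.minimum(nd[:, 1:], d[:, :-1] + ty)    # step in +y
        nd[:, :-1] = np.minimum(nd[:, :-1], d[:, 1:] + ty)   # step in -y
        d = nd
    return d[n + margin, margin]                 # the target vertex (n, 0)

n = 40
for k in (n, 2 * n, 4 * n, 8 * n):
    print(k, constrained_passage_time(n, k))     # decreasing in k, then essentially constant
```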
_Step 1_: We begin with a description of the sub-additivity property of the unconstrained passage time \(T_{n}.\) If \(T_{n,m}\) is the minimum passage time between \((n,\mathbf{0})\) and \((m,\mathbf{0}),\) then \(T_{n}\leq T_{m}+T_{n,m}.\) This is because the concatenation of the minimum passage time path with endvertices \((0,\mathbf{0})\) and \((m,\mathbf{0})\) and the minimum passage time path with endvertices \((m,\mathbf{0})\) and \((n,\mathbf{0})\) contains a path with endvertices \((0,\mathbf{0})\) and \((n,\mathbf{0}).\) Switching the roles of \(m\) and \(n\) we therefore have that \(|T_{n}-T_{m}|\leq T_{n,m}\) and we refer to this estimate as the _sub-additive property_ of \(T_{n}.\) Letting \(k\geq n^{1+\epsilon}\) with \(\epsilon>0,\) we now perform the subsequence argument. Setting \(U_{n}:=T_{n}(k)\) and \(S_{n}:=U_{n}-\mathbb{E}U_{n},\) we first show that \(\frac{S_{n}}{n}\longrightarrow 0\) a.s. as \(n\rightarrow\infty.\) Indeed, from the variance estimate for \(U_{n}\) in (3.2), we know that \(\mathbb{E}S_{n}^{2}\leq Cn\) for some constant \(C>0\) and so for a fixed \(\delta>0,\) the sum \[\sum_{n\geq 1}\mathbb{P}(|S_{n^{2}}|>n^{2}\delta)\leq\sum_{n\geq 1}\frac{\mathbb{E}S_{n^{2}}^{2}}{\delta^{2}n^{4}}\leq\sum_{n\geq 1}\frac{C}{\delta^{2}n^{2}}<\infty.\] Since this is true for all \(\delta>0,\) the Borel-Cantelli Lemma implies that \(\frac{S_{n^{2}}}{n^{2}}\) converges to zero a.s. as \(n\rightarrow\infty.\) To estimate the intermediate values of \(S_{j},\) we let \(n^{2}\leq j<(n+1)^{2}\) and set \(R_{n}:=\max_{n^{2}\leq j<(n+1)^{2}}|S_{j}-S_{n^{2}}|\) and show below that \(\frac{R_{n}}{n^{2}}\longrightarrow 0\) a.s. as \(n\rightarrow\infty.\) This would imply that for \(n^{2}\leq j<(n+1)^{2},\) \[\frac{|S_{j}|}{j}\leq\frac{|S_{j}-S_{n^{2}}|}{j}+\frac{|S_{n^{2}}|}{j}\leq\frac{|S_{j}-S_{n^{2}}|}{n^{2}}+\frac{|S_{n^{2}}|}{n^{2}}\leq\frac{R_{n}}{n^{2}}+\frac{|S_{n^{2}}|}{n^{2}}\] and so \(\frac{S_{j}}{j}\) converges to zero a.s. as \(j\rightarrow\infty.\) To estimate \(R_{n},\) we first use the triangle inequality to get that \[|S_{j}-S_{n^{2}}|\leq|U_{j}-U_{n^{2}}|+\mathbb{E}|U_{j}-U_{n^{2}}|. \tag{3.10}\] We know that \(\mathbb{P}(T_{j}\neq U_{j})\leq\frac{D_{1}}{k(j)}\leq\frac{D_{2}}{j^{1+\epsilon}}\) for some constants \(D_{1},D_{2}>0\) and so setting \(E_{tot}:=\bigcap_{j=n^{2}}^{(n+1)^{2}}\{T_{j}=U_{j}\},\) we get from the union bound that \[\mathbb{P}(E_{tot})\geq 1-\sum_{j=n^{2}}^{(n+1)^{2}}\frac{D_{2}}{j^{1+\epsilon}}\geq 1-\frac{D_{3}n}{n^{2+2\epsilon}}=1-\frac{D_{3}}{n^{1+2\epsilon}} \tag{3.11}\] for some constant \(D_{3}>0.\) We now write \(|U_{j}-U_{n^{2}}|=|T_{j}-T_{n^{2}}|\mathbf{1}(E_{tot})+|U_{j}-U_{n^{2}}|\mathbf{1}(E_{tot}^{c})\) and evaluate each term separately. For \(n^{2}\leq j<(n+1)^{2}\) we know by the subadditivity property that \(|T_{j}-T_{n^{2}}|\leq T_{j,n^{2}}\leq\sum_{i=n^{2}}^{(n+1)^{2}}t(f_{i})=:A_{n}\) and by definition, we have that \(U_{j}\leq\sum_{i=1}^{(n+1)^{2}}t(f_{i})=:J_{n}.\) Thus \(|U_{j}-U_{n^{2}}|\leq A_{n}+2J_{n}\mathbf{1}(E_{tot}^{c})\) for all \(n^{2}\leq j\leq(n+1)^{2}\) and so from (3.10), we see that \[R_{n}\leq A_{n}+2J_{n}\mathbf{1}(E_{tot}^{c})+\mathbb{E}A_{n}+2\mathbb{E}J_{n}\mathbf{1}(E_{tot}^{c}). \tag{3.12}\] Based on (3.12), it suffices to show that both the terms \(\frac{A_{n}}{n^{2}}\) and \(\frac{J_{n}\mathbf{1}(E_{tot}^{c})}{n^{2}}\) converge to zero a.s. 
and in \(L^{1}\) as \(n\to\infty.\) First, from the estimate \(\mathbb{E}A_{n}\leq\sum_{i=n^{2}}^{(n+1)^{2}}\mathbb{E}t(f_{i})\leq Cn\) for some constant \(C>0,\) we get that \(\frac{\mathbb{E}A_{n}}{n^{2}}\longrightarrow 0\) as \(n\ \to\ \infty.\) Next, let \(0<\theta<1\) be any constant. Using Chebychev's inequality and arguing as in (3.7), we get that \(\mathbb{P}(A_{n}\geq 2\mu n^{1+\theta})\leq\frac{C}{n^{1+2\theta}}\) for all \(n\) large and so by the Borel-Cantelli Lemma we get that \(\mathbb{P}\left(A_{n}\leq 2\mu n^{1+\theta}\mbox{ for all large }n\right)=1.\) Since \(\theta<1,\) we get that \(\frac{A_{n}}{n^{2}}\longrightarrow 0\) a.s. as \(n\to\infty.\) To evaluate \(J_{n}{\bf 1}(E_{tot}^{c}),\) we use (3.11) and the Borel-Cantelli Lemma to get \({\bf 1}\!\!1(E_{tot}^{c})\longrightarrow 0\) a.s. as \(n\to\infty.\) Thus \(\frac{J_{n}{\bf\mathbb{I}}(E_{tot}^{c})}{n^{2}}\longrightarrow 0\) a.s. as \(n\to\infty.\) Using the Cauchy-Schwartz inequality, we also get that \(\mathbb{E}J_{n}{\bf 1}(E_{tot}^{c})\leq\left(\mathbb{E}J_{n}^{2}\right)^{1/2} \left(\mathbb{P}(E_{tot}^{c})\right)^{1/2}.\) By the AM-GM inequality and the bounded second moment assumption in (3.1), we get that \[\mathbb{E}J_{n}^{2}\leq(n+1)^{2}\sum_{i=1}^{(n+1)^{2}}\mathbb{E}t^{2}(f_{i}) \leq C_{2}n^{4}\] for some constant \(C_{2}>0\) and so using (3.11), we get that \[\mathbb{E}J_{n}{\bf 1}\!\!1(E_{tot}^{c})\leq\sqrt{C_{2}}n^{2}\cdot\left(\frac{D} {n^{1+2\epsilon}}\right)^{1/2}.\] Thus \(\frac{\mathbb{E}J_{n}{\bf\mathbb{I}}(E_{tot}^{c})}{n^{2}}\ \longrightarrow\ 0\) as \(n\to\infty\) as well and this completes the proof of \(\frac{S_{n}}{n}\longrightarrow 0\) a.s. as \(n\to\infty.\) _Step 2_: We now set \(k=n^{1+\epsilon}\) and show that \(\frac{T_{n}(k)-T_{n}}{n}\) converges to zero a.s. and in \(L^{1}.\) First using \(\mathbb{P}(T_{n}(k)\neq T_{n})\leq\frac{D_{1}}{n^{1+\epsilon}}\) for some constant \(D_{1}>0,\) we get from the Borel-Cantelli Lemma that \(\frac{T_{n}(k)-T_{n}}{n}\longrightarrow 0\) a.s. as \(n\to\infty.\) Next using the fact that \(T_{n}(k)\) and \(T_{n}\) are both bounded above by \(\sum_{i=1}^{n}t(f_{i})\) we have that \(\mathbb{E}|T_{n}(k)-T_{n}|=\mathbb{E}|T_{n}(k)-T_{n}|{\bf 1}\!\!1(T_{n}(k) \neq T_{n})\) is bounded above by \[\mathbb{E}\sum_{i=1}^{n}t(f_{i}){\bf 1}\!\!1(T_{n}(k)\neq T_{n}) \leq \mathbb{E}^{1/2}\left(\sum_{i=1}^{n}t(f_{i})\right)^{2}\mathbb{P} ^{1/2}(T_{n}(k)\neq T_{n})\] \[\leq \mathbb{E}^{1/2}\left(\sum_{i=1}^{n}t(f_{i})\right)^{2}\left( \frac{D_{1}}{n^{1+\epsilon}}\right)^{1/2}.\] Using \((\sum_{i=1}^{l}a_{i})^{2}\leq l\sum_{i}a_{i}^{2}\) we have \(\mathbb{E}\left(\sum_{i=1}^{n}t(f_{i})\right)^{2}\leq n\sum_{i=1}^{n}\mathbb{ E}t^{2}(f_{i})\leq D_{2}n^{2}\) for some constant \(D_{2}>0.\) Thus \(\mathbb{E}|T_{n}(k)-T_{n}|\leq D_{3}(n^{1-\epsilon})^{1/2}=o(n)\) and so \(\frac{\mathbb{E}[T_{n}(k)-T_{n}]}{n}\longrightarrow 0\) as \(n\to\infty.\) This completes the proof of a.s. convergence in part \((b).\) _Proof of Theorem 3.1\((c)\)_: Using \((a+b)^{2}\leq 2(a^{2}+b^{2})\) for any two real numbers \(a\) and \(b\) we have that the variance of the sum of any two random variables \(X\) and \(Y\) satisfies \[var(X+Y)=\mathbb{E}\left((X-\mathbb{E}X)+(Y-\mathbb{E}Y)\right)^{2}\leq 2( var(X)+var(Y)). \tag{3.13}\] Setting \(T=T_{n},U=T_{n}(k),X=T-U\) and \(Y=U\) we get that \[var(T)\leq 2var(T-U)+2var(U)\leq 2var(T-U)+D_{1}n \tag{3.14}\] for all \(n\) large and some constant \(D_{1}>0,\) by the variance estimate for \(U\) in part \((a)\) of this Theorem. 
We estimate \(var(T-U)\) as follows. Using \(T\leq U\) we write \[\mathbb{E}(T-U)^{2}=\mathbb{E}(T-U)^{2}\mathbf{1}(T\neq U)\leq 2\mathbb{E}(T^{2}+U^{2})\mathbf{1}(T\neq U)\leq 4\mathbb{E}U^{2}\mathbf{1}(T\neq U).\] Since \(U\leq\sum_{i=1}^{n}t(f_{i}),\) we have that \(U^{2}\leq n\sum_{i=1}^{n}t^{2}(f_{i})\) and so \(var(T-U)\) is bounded above by \[\mathbb{E}(T-U)^{2}\leq 4n\sum_{i=1}^{n}\mathbb{E}t^{2}(f_{i})\mathbf{1}(T\neq U)\leq 4n^{2}\sup_{i}\mathbb{E}t^{2}(f_{i})\mathbf{1}(T\neq U). \tag{3.15}\] Let \(\theta>0\) be a constant and split \[\mathbb{E}t^{2}(f_{i})\mathbf{1}(T\neq U) = \mathbb{E}t^{2}(f_{i})\mathbf{1}(\{T\neq U\}\cap\{t(f_{i})<n^{\theta}\}) \tag{3.16}\] \[\qquad+\ \ \mathbb{E}t^{2}(f_{i})\mathbf{1}(\{T\neq U\}\cap\{t(f_{i})\geq n^{\theta}\}).\] From part \((b)\) of the Theorem, we know that there are constants \(D_{2},D_{3}>0\) such that if \(k\geq D_{2}n\) then \(\mathbb{P}(T\neq U)\leq\frac{D_{3}}{k}.\) With this choice of \(k,\) the first term in (3.16) is bounded above by \[\mathbb{E}t^{2}(f_{i})\mathbf{1}(\{T\neq U\}\cap\{t(f_{i})<n^{\theta}\})\leq n^{2\theta}\mathbb{P}(T\neq U)\leq\frac{D_{3}n^{2\theta}}{k}\leq\frac{1}{n^{3}} \tag{3.17}\] provided we choose \(k\) larger if necessary so that \(k\geq D_{3}n^{2\theta+3}.\) We now consider the case where the uniform square integrability condition holds. For any \(\eta>0\) and all \(n\) large, the final term in (3.16) is then at most \(\mathbb{E}t^{2}(f_{i})\mathbf{1}(t(f_{i})\geq n^{\theta})\leq\eta.\) Combining this estimate with (3.17), we get that \(\mathbb{E}t^{2}(f_{i})\mathbf{1}(T\neq U)\leq\frac{1}{n^{3}}+\eta\leq 2\eta\) for all \(n\) large and so from (3.15) we get that \(\mathbb{E}(T-U)^{2}\leq 8n^{2}\eta.\) Plugging this into (3.14) we get that \(var(T)\leq D_{1}n+16n^{2}\eta\) and since \(\eta>0\) is arbitrary, this implies that \(var\left(\frac{T}{n}\right)=o(1).\) Suppose now that the bounded \(p^{th}\) moment condition holds for some \(p>2.\) Using Hölder's inequality and Markov's inequality in succession, the final term in (3.16) is at most \[\mathbb{E}t^{2}(f_{i})\mathbf{1}(t(f_{i})\geq n^{\theta}) \leq \left(\mathbb{E}t^{p}(f_{i})\right)^{2/p}\mathbb{P}\left(t(f_{i})\geq n^{\theta}\right)^{1-2/p} \tag{3.18}\] \[\leq \left(\mathbb{E}t^{p}(f_{i})\right)^{2/p}\left(\frac{\mathbb{E}t^{p}(f_{i})}{n^{\theta p}}\right)^{1-2/p}\] \[\leq \frac{D_{4}}{n^{\theta(p-2)}}\] for some constant \(D_{4}>0,\) by the bounded \(p^{th}\) moment assumption. We choose \(\theta>0\) large so that the final term in (3.18) is at most \(\frac{1}{n^{3}}.\) With this choice of \(\theta\) we get from (3.17) that \(\mathbb{E}t^{2}(f_{i})\mathbf{1}(T\neq U)\leq\frac{2}{n^{3}}\) and plugging this into (3.15), we get that \(\mathbb{E}(T-U)^{2}\leq\frac{D_{5}}{n}\) for some constant \(D_{5}>0.\) From (3.14), we then get that \(var(T)\leq D_{6}n\) for some constant \(D_{6}>0.\) This completes the proof of part \((c).\) _Remark_: For \(p>2,\) the bounded \(p^{th}\) moment condition in Theorem 3.1\((c)\) is stronger than the uniformly square integrable condition which in turn is stronger than the bounded second moment condition in (3.1). For the particular case of i.i.d. passage times, uniform square integrability is implied by the bounded second moment condition and the first condition in (3.1) above simply states that the passage times are a.s. positive. ### Acknowledgement I thank Professors Rahul Roy, C. R. Subramanian and the referee for crucial comments that led to an improvement of the paper. I also thank IMSc and IISER Bhopal for my fellowships.
2310.12923
Atomistic Study of Irradiation-Induced Plastic and Lattice Strain in Tungsten
We demonstrate a practical way to perform decomposition of the elasto-plastic deformation directly from atomistic simulation snapshots. Through molecular dynamics simulations on a large single crystal, we elucidate the intricate process of converting plastic strain, atomic strain, and rigid rotation during irradiation. Our study highlights how prismatic dislocation loops act as initiators of plastic strain effects in heavily irradiated metals, resulting in experimentally measurable alterations in lattice strain. We show the onset of plastic strain starts to emerge at high dose, leading to the spontaneous emergence of dislocation creep and irradiation-induced lattice swelling. This phenomenon arises from the agglomeration of dislocation loops into a dislocation network. Furthermore, our numerical framework enables us to categorize the plastic transformation into two distinct types: pure slip events and slip events accompanied by lattice swelling. The latter type is particularly responsible for the observed divergence in interstitial and vacancy counts, and also impacts the behavior of dislocations, potentially activating non-conventional slip systems.
Jintong Wu, Daniel R. Mason, Fredric Granberg
2023-10-19T17:19:12Z
http://arxiv.org/abs/2310.12923v1
# Atomistic Study of Irradiation-Induced Plastic and Lattice Strain in Tungsten ###### Abstract We demonstrate a practical way to perform decomposition of the elasto-plastic deformation directly from atomistic simulation snapshots. Through molecular dynamics simulations on a large single crystal, we elucidate the intricate process of converting plastic strain, atomic strain, and rigid rotation during irradiation. Our study highlights how prismatic dislocation loops act as initiators of plastic strain effects in heavily irradiated metals, resulting in experimentally measurable alterations in lattice strain. We show the onset of plastic strain starts to emerge at high dose, leading to the spontaneous emergence of dislocation creep and irradiation-induced lattice swelling. This phenomenon arises from the agglomeration of dislocation loops into a dislocation network. Furthermore, our numerical framework enables us to categorize the plastic transformation into two distinct types: pure slip events and slip events accompanied by lattice swelling. The latter type is particularly responsible for the observed divergence in interstitial and vacancy counts, and also impacts the behavior of dislocations, potentially activating non-conventional slip systems. Tungsten; Irradiation; Molecular dynamics; Strain evolution; Lattice swelling ## I Introduction The consequences of irradiation in materials, mainly seen as microstructural evolution, originates from the interaction between the incident particles and the lattice atoms of the material. Among them, radiation damage produced during the collision cascade process leads to swelling, embrittlement, and creep. These mechanical properties determine the lifetime of nuclear plant parts. Tungsten (W) has been chosen to play a pivotal role in fusion test reactors under construction, such as the International Thermonuclear Experimental Reactor (ITER) [1]. Here it will be used as the material facing the plasma inside the reactor, owing to its high atomic mass, leading to a low sputtering yield [2], high melting temperature and high thermal conductivity [3; 4; 5]. Irradiation induces significant changes in both the microstructure and mechanical properties of materials [6; 7; 8; 9]. The response of tungsten to irradiation has been extensively studied both experimentally and computationally. On the experimental side, Reza et al. [10] used transient grating spectroscopy (TGS) and found that 20 MeV self-ion irradiation showed defect saturation at doses between 0.06 and 0.1 dpa. Another positron annihilation spectroscopy study by Hollingsworth et al. [11] showed that vacancies appear as small clusters at low doses at room temperature. Recent research suggests that the accumulation of small vacancy clusters due to irradiation can eventually lead to the formation of macroscopic voids, resulting in observable swelling of the material [12]. Another study elucidates that this swelling phenomenon primarily arises from the agglomeration of self-interstitial defects into dislocation loops and extended dislocation networks. These dislocations are produced during collision cascades initiated by, for example, neutron irradiation, leading to the accumulation of uncompensated relaxation volumes [13]. Radiation promotes the glide of dislocations at high doses, thereby further extending the dislocation network. This process involves the thermally activated motion of screw dislocations, particularly evident in BCC metals [14]. 
Understanding the underlying mechanism of plastic flow at high-dose irradiation is crucial to gaining insights into the macroscopic swelling mechanism. On the simulation side, high-dose simulations are computationally heavy and more efficient methods have been proposed to generate characteristic high-dose microstructures. Derlet et al. [15] proposed the creation relaxation algorithm (CRA) to quickly reach high dose levels without full MD cascades. Defect densities obtained by the CRA are found to be higher than experimental values due to the lack of thermal effects [16]. To address this, combination of CRA and cascade simulation, known as cascade annealing (CA), have been used and showed good agreement with full MD simulations at similar doses [17; 18]. In the high-dose irradiation simulation scenario, particularly in the cell following CRA for sped-up dose accumulation, the atomic structure becomes highly disordered, and the complex defect network hinders our ability to accurately characterize the deformation process within the atomic system during the CA process. Nevertheless, understanding the intricate and nonlinear dynamics of the damage microstructure at high irradiation doses is a multifaceted task, encompassing various length and timescales, while also being influenced by exposure and environmental conditions [19]. Consequently, creating a concise atomistic model that accurately encompasses all relevant factors at the appropriate scales remains a formidable challenge. The detectable fluctuations in strains and stresses within irradiated materials provide an avenue for directly confirming the accuracy of real-space simulations. Elasticity equations serve as the cornerstone for connecting atomic-scale defects to macroscopic strains. In classical MD simulations, elasto-plastic deformation is inherently coupled, and quantifying the magnitude of elasto-plastic deformation to gain a deeper understanding of the underlying defect evolution mechanism remains challenging. Earlier works [20; 21; 22; 23; 24; 25; 26; 27; 28; 29] explored the plastic deformation of nanoscale metal crystals at low temperatures using MD simulations, but these qualitative studies lacked quantitative magnitude of plastic deformation. Vo et al. [30] proposed a decomposition algorithm primarily based on dislocation motion-induced plastic deformation. However, it is only applicable to one-dimensional configurations. In cases of complex dislocation motion, such as high defect density cases, the interaction between dislocations often renders the decomposition results inaccurate. Another previous study provided a decomposition method [31], based on the virtual intermediate configuration concept. However, if the identification algorithm fails to accurately determine the local atomic structure, it cannot perform elasto-plastic decomposition. For instance, a cell exposed to high-dose irradiation can cause the algorithm to fail. In addition to advances in simulation methodology, there have been recent advances in microstructural characterization. The Wigner-Seitz (W-S) method [32] has been used extensively to identify and characterize isolated defects embedded in a perfect reference crystal, but fails when the reference lattice itself can evolve, or when large local displacements comparable with the lattice spacing exist [33]. Recently Mason et al. [17] developed a method based on detecting the Wigner-Seitz cell _locally_, and using this to detect void isosurfaces. We refer to this method as Void Isosurface Detection (VID). 
Machine Learning analyses have also been developed to detect defect features related to vacancies and interstitials [34; 35]. In summary, achieving a clear, unambiguous, and consistent interpretation of data from ion-irradiated materials, particularly in the high-dose regime, remains a significant challenge. Existing experimental interpretation models rely on kinetic equations that involve numerous parameters but fail to account for the microscopic, fluctuating stresses and strains driving defect interactions at the nanoscale, as evidenced by prior studies [36; 37; 38]. Therefore, leveraging recent algorithmic advancements, we attempted to answer the following key research inquiries in this paper: 1. How can we construct a quantitative explanatory model for irradiation effects at high doses? Can we effectively employ the microscopic fluctuating strain model as a bridge connecting atomic-scale defects to macroscopic strains [13]? Additionally, can various defect detection methods accurately signify the accumulation or abrupt release of internal loading resulting from high-dose irradiation at the atomic level? 2. In simulations, lattice strain measurements can be derived from atomic positions via diffraction pattern peak positions. However, performing this calculation for overlapping cascade MD simulations involving tens of thousands of frames is prohibitively resource-intensive. Are there alternative approaches to acquire plastic and lattice strain information, directly from real-space simulations, even in the high dose regimes? 3. Under different dosage conditions, how do dislocation loops behave, and do they induce plastic deformation? If so, does the slip only occurs on certain planes? Furthermore, can we further categorize the specific type of plastic transformation involved? In this article we provide a quantitative interpretation of the role played by prismatic dislocation loops as initiators of plastic strain effects in heavily irradiated metals. This phenomenon leads to experimentally measurable alterations in lattice strain. First, by employing different defect characterization techniques, we introduce the "anomalies" observed in high-dose conditions (as detailed in section III.1). To find the mechanisms behind these phenomena at high doses and provide answers to pertinent questions, we show how atomic and plastic strain can be identified from snapshots of molecular dynamics simulations (section II.2). Importantly, we do this without tracking the relative positions of the atoms, as irradiation mixing makes this difficult. In section III.2.1 we show how the accumulation of lattice defects at low dose leads to an increasing homogeneous lattice strain, and how individual defects can reorient themselves spontaneously to reduce elastic energy. Finally in section III.2.2 we show how plastic strain starts to emerge at high dose, and so dislocation creep and irradiation-induced swelling both arise spontaneously through the agglomeration of dislocation loops into a dislocation network. We conclude that simulated irradiation shows both lattice and plastic strain responding to the evolution of defects on molecular dynamics timescales. This demonstrates unambiguously that a high density of irradiation induced defects self-organises almost instantaneously to reduce elastic energy density by dislocation-induced plastic slip. Simulation methods ### Irradiation simulations All simulations were conducted with LAMMPS [39], implemented with an adaptive timestep [40; 41; 32]. 
The dimensions of the cell were \(120\times 120\times 120\) conventional BCC unit cells, resulting in 3.456 million atoms. Periodic boundary conditions were applied in all dimensions. A friction force was applied to atoms according to the energy loss table provided by "Stopping and Range of Ions in Matter" (SRIM) [42] code to include effects of electronic stopping power, when the kinetic energy of an atom exceeds 10 eV [43]. We used the embedded atom method (EAM) potential by Ackland and Thetford [44] with the short-range modified by Zhong et al. [45], AT-ZN. This potential has previously shown a good agreement with experimental results [46]. For full MD cascade simulations, a 30 keV PKA was introduced at the centre of the system, in a random direction. Most of the cell was able to evolve without any constraints, except a two nm thick layer at the borders which were thermally controlled to 300 K. No pressure control was active during this simulation. Each cascade was followed for 30 ps, followed by a relaxation period of 10 ps with temperature control at 300 K on all atoms, and stress-strain boundary conditions \(\sigma_{x}=\sigma_{y}=\sigma_{z}=\varepsilon_{xy}=\varepsilon_{yz}=\varepsilon_ {zx}=0\). Thereafter, the cell was shifted before the next cascade event, in order to obtain a homogeneous irradiation. This procedure was repeated 4000 times to achieve the dose of \(\sim\)0.11 dpa (with a threshold displacement energy of 90 eV) according to the classical Norgett-Robinson-Torrens displacements per atom (NRT-dpa) model [47; 48; 49]. Following Ref. [17], we find the canonical DPA level, cdpa, defined as the expected atomic fraction of vacancies produced. This value was obtained from the first 30 overlapping cascades to be cdpa \(=4.2\times 10^{-6}N_{\text{casc}}\). The CRA method [15] directly inserts FPs to quickly construct a damaged system. For this work, we randomly inserted 1000 interstitials and deleted 1000 atoms per step, and relaxed the system with the conjugate gradient method (CG), similarly to previous studies [15; 50]. This was repeated enough times to achieve doses of 0.01, 0.0143, 0.03, 0.1 and 0.2 cdpa. As the CRA method does not include either thermal annealing or cascade annealing (CA), to produce a high dose microstructure comparable with full MD simulations we subsequently performed CA with 1600 30 keV cascades (0.0078 cdpa) on the CRA simulation boxes. This combination of CRA+MD has been shown to produce simulated irradiation microstructures almost indistinguishable from MD alone [17; 18]. A long thermal annealing was also investigated, however, this method did not produce comparable results to full MD, like the CA did. More details about the results and confirmation of convergence in the combination of CRA+MD used in this work can be found in the Supplementary Information (SI). All quantitative simulation results presented are the average of three different simulation runs carried out. ### Atomic vs plastic strain in a single crystal In this section we derive the relationship between lattice strain and plastic strain in a single crystal simulation. The _total_ strain, \(\underline{\underline{\varepsilon}}\), is well-defined, being the change in dimensions of the supercell. But this strain can be decomposed into different contributing factors. 
The _elastic_ strain, \(\underline{\underline{\mathbf{e}}}\), is related by Hooke's law to the stress on the boundary via the elastic compliance tensor, ie \(\underline{\underline{\mathbf{e}}}=\underline{\underline{\mathbf{C}}}^{-1} \underline{\underline{\mathbf{\sigma}}}\). In Mura's formulation [51], the total strain \(\underline{\underline{\varepsilon}}=\underline{\underline{\mathbf{e}}}+ \underline{\underline{\varepsilon}}^{\star}\), where the second term \(\underline{\underline{\varepsilon}}^{\star}\), known as the _eigenstrain_, is the _stress-free_ strain in the body due to defects. Here we are treating the atoms explicitly, and so can make a further distinction. A part of the strain is visible to the atoms, visible in the location of the x-ray diffraction peaks, comes from the eigenstrains arising from the irradiation induced defects. This strain affects the evolution of small irradiation-induced defects, as they have some freedom to rotate to compensate the elastic energy density. But a second part of the strain comes from plastic events, such as dislocation slip or creep, which after having occurred leave the crystal lattice essentially unaffected. These _plastic_ strains appear in the diffraction pattern in the detailed fine-structure of each individual diffraction peak, not in its location. Consider a periodic supercell with repeat vectors \(\mathbf{A}_{1}\), \(\mathbf{A}_{2}\), \(\mathbf{A}_{3}\), containing a single crystal with primitive lattice vectors \(\mathbf{b}_{1}\), \(\mathbf{b}_{2}\), \(\mathbf{b}_{3}\). We can compactly write these vectors as the matrix \(\underline{\underline{\mathbf{A}}}\), whose columns are \(\mathbf{A}_{i}\), and the matrix \(\underline{\underline{\mathbf{B}}}\) whose columns are \(\mathbf{b}_{i}\). Then for the lattice to be continuous across the supercell periodic boundary requires [17] \[\underline{\underline{\mathbf{A}}}=\underline{\underline{\mathbf{B}}}\, \underline{\underline{\mathbf{N}}}, \tag{1}\] where \(\underline{\underline{\mathbf{N}}}\) is a \(3\times 3\) matrix of integers representing the number of lattice repeats. The number of lattice sites in the periodic supercell is \(N_{\text{latt}}=\text{det}[\underline{\underline{\mathbf{N}}}]\times N_{ \text{motif}}\), where \(N_{\text{motif}}\) is the number of atoms in the motif of the primitive lattice. The two matrices \(\underline{\underline{\mathbf{B}}}\) and \(\underline{\underline{\mathbf{N}}}\) define a _reference lattice_ for the simulation, compatible with the simulation box size and shape. Now consider the box of atoms after some simulated irradiation. If we are running with a general set of stress- and strain-boundary conditions, the periodic supercell now has repeat vectors \(\underline{\underline{\mathbf{A}}}^{\prime}\). The change in the box can be written \(\underline{\underline{\mathbf{A}}}^{\prime}=\underline{\underline{\mathbf{F}}} \underline{\underline{\mathbf{A}}}\), where the deformation is given by the product of linear atomic and plastic deformations, denoted with subscripts \({}_{a}\) and \({}_{p}\), respectively, i.e. \(\underline{\underline{\mathbf{F}}}=\underline{\underline{\mathbf{F}}}_{a} \underline{\underline{\mathbf{F}}}_{p}\). The primitive cell can be atomically strained, and can also be rotated [52], so the lattice vectors can also change. We assume here that no second phase emerges, and a single grain is present after irradiation. 
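As a concrete check of Eq. (1) and the lattice-site count, the short sketch below (ours; the lattice parameter is an assumed illustrative value, not taken from the text) builds the BCC reference lattice for a \(120^{3}\)-unit-cell supercell and recovers the 3.456 million lattice sites quoted for the simulation cell.

```python
import numpy as np

a0 = 3.165   # Angstrom; assumed BCC tungsten lattice parameter (illustrative only)
m = 120      # conventional unit cells per side, as in the simulation cell
A = m * a0 * np.eye(3)                          # supercell repeat vectors as columns
B = 0.5 * a0 * np.array([[-1.0,  1.0,  1.0],
                         [ 1.0, -1.0,  1.0],
                         [ 1.0,  1.0, -1.0]])   # BCC primitive lattice vectors as columns
N = np.linalg.solve(B, A)                       # matrix of lattice repeats, Eq. (1): A = B N
N_motif = 1                                     # one atom in the BCC primitive motif
N_latt = round(float(np.linalg.det(N))) * N_motif
print(N_latt)                                   # 3456000 = 2 * 120**3 lattice sites
```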
If we can compute the best fit \(\underline{\underline{\mathbf{B}}}^{\prime}\) to the atomic positions after irradiation, then we can write \(\underline{\underline{\mathbf{B}}}^{\prime}=\underline{\underline{\mathbf{F}}} _{a}\,\underline{\underline{\mathbf{R}}}\,\underline{\underline{\mathbf{B}}}\), where \(\underline{\underline{\mathbf{R}}}\) is a rotation matrix. We can use the method of polar decomposition to find the atomic strain \[\underline{\underline{\mathbf{F}}}_{a}\,\underline{\underline{ \mathbf{F}}}_{a}^{T} = \left(1+\underline{\underline{\mathbf{\varepsilon}}}_{a}\right) \left(1+\underline{\underline{\mathbf{\varepsilon}}}_{a}\right)^{T}\] \[= \left(\underline{\underline{\mathbf{B}}}^{\prime}\underline{ \underline{\mathbf{B}}}^{-1}\underline{\underline{\mathbf{R}}}^{T}\right) \left(\underline{\underline{\mathbf{B}}}^{\prime}\underline{\underline{ \mathbf{B}}}^{-1}\underline{\underline{\mathbf{R}}}^{T}\right)^{T},\] \[\underline{\underline{\mathbf{\varepsilon}}}_{a} \approx \frac{1}{2}\left(\left(\underline{\underline{\mathbf{B}}}^{ \prime}\underline{\underline{\mathbf{B}}}^{-1}\right)\left(\underline{ \underline{\mathbf{B}}}^{\prime}\underline{\underline{\mathbf{B}}}^{-1} \right)^{T}-\underline{\underline{1}}\right). \tag{2}\] where to get the last line we have taken a linear approximation. From the arguments above, the matrix of lattice repeats after irradiation must solve \(\underline{\underline{\mathbf{A}}}^{\prime}=\underline{\underline{\mathbf{B}} }^{\prime}\underline{\underline{\mathbf{N}}}^{\prime}\). As \(\underline{\underline{\mathbf{A}}}^{\prime}\) is read from the size of the MD simulation box, we can rearrange to give the plastic strain \[\underline{\underline{\mathbf{F}}}_{p}=\underline{\underline{\mathbf{R}}} \,\underline{\underline{\mathbf{B}}}\,\underline{\underline{\mathbf{B}}}^{ \prime-1}\underline{\underline{\mathbf{A}}}^{\prime}\,\underline{\underline{ \mathbf{A}}}^{-1}, \tag{3}\] or as a function of the lattice repeats, \[\underline{\underline{\mathbf{F}}}_{p}=\underline{\underline{\mathbf{R}}}\, \underline{\underline{\mathbf{B}}}\,\underline{\underline{\mathbf{N}}}^{ \prime}\,\underline{\underline{\mathbf{N}}}^{-1}\underline{\underline{\mathbf{ B}}}^{-1}. \tag{4}\] Using polar decomposition we can find the plastic strain. As that the matrix product \(\underline{\underline{\mathbf{N}}}^{\prime}\,\underline{\underline{\mathbf{N}} }^{-1}\) in Eq. 4 is not symmetric, we write \(\underline{\underline{\mathbf{F}}}_{p}=\underline{\underline{\mathbf{R}}}^{ \prime}\left(1+\underline{\underline{\mathbf{\varepsilon}}}_{p}\right)\), where \(\underline{\underline{\mathbf{\varepsilon}}}_{p}\) is the (symmetric) plastic strain tensor. 
Then \[\underline{\underline{\mathbf{F}}}^{T}\,\underline{\underline{ \mathbf{F}}}_{p} = \left(1+\underline{\underline{\mathbf{\varepsilon}}}_{p}\right)^{T} \left(1+\underline{\underline{\mathbf{\varepsilon}}}_{p}\right)\] \[= \left(\underline{\underline{\mathbf{R}}}\,\underline{\underline{ \mathbf{B}}}\,\underline{\underline{\mathbf{B}}}^{\prime-1}\underline{ \underline{\mathbf{A}}}^{\prime}\,\underline{\underline{\mathbf{A}}}^{-1} \right)^{T}\left(\underline{\underline{\mathbf{R}}}\,\underline{\underline{ \mathbf{B}}}\,\underline{\underline{\mathbf{B}}}^{\prime-1}\underline{ \underline{\mathbf{A}}}^{\prime}\,\underline{\underline{\mathbf{A}}}^{-1} \right),\] \[\underline{\underline{\mathbf{\varepsilon}}}_{p} \approx \frac{1}{2}\left(\left(\underline{\underline{\mathbf{B}}}\, \underline{\underline{\mathbf{B}}}^{\prime-1}\underline{\underline{\mathbf{A} }}^{\prime}\,\underline{\underline{\mathbf{A}}}^{-1}\right)^{T}\left( \underline{\underline{\mathbf{B}}}\,\underline{\underline{\mathbf{B}}}^{\prime -1}\underline{\underline{\mathbf{A}}}^{\prime}\,\underline{\underline{ \mathbf{A}}}^{-1}\right)-\underline{\underline{1}}\right),\] where the last line is a linear approximation. Alternatively we can express Eq. 5 in terms of the number of lattice repeats. \[\underline{\underline{\mathbf{\varepsilon}}}_{p}\approx\frac{1}{2}\left(\left( \underline{\underline{\mathbf{B}}}\,\underline{\underline{\mathbf{N}}}^{ \prime}\,\underline{\underline{\mathbf{N}}}^{-1}\underline{\underline{\mathbf{ B}}}^{-1}\right)+\left(\underline{\underline{\mathbf{B}}}\,\underline{\underline{ \mathbf{N}}}^{\prime}\,\underline{\underline{\mathbf{N}}}^{-1}\underline{ \underline{\mathbf{B}}}^{-1}\right)^{T}\right)-\underline{\underline{1}} \tag{6}\] We conclude that the deformation corresponding to plastic strain is directly related to the change in the reference lattice for the simulation, and can be simply computed from the change in number of lattice repeats. This implies that both transformations which shear the reference lattice, but preserve the number of lattice sites (\(\det[\underline{\underline{\mathbf{N}}}^{\prime}]=\det[\underline{\underline{ \mathbf{N}}}]\)) _and_ transformations which change the number of lattice sites (\(\det[\underline{\underline{\mathbf{N}}}^{\prime}]\neq\det[\underline{ \underline{\mathbf{N}}}]\)) can be represented within the same plastic deformation \(\underline{\underline{\mathbf{F}}}_{p}\). Hence in this work, the formalism for the plastic deformation gradient \(\underline{\underline{\mathbf{F}}}_{p}\) contains both radiation-induced slip and swelling. Now consider a general plastic transformation defined by a Burgers vector change \(\mathbf{b}\) each time we move along the line \(\mathbf{n}\), with no associated homogeneous lattice strain. The magnitude of the Burgers vector is \(b=|\mathbf{b}|\). We can define the degree of the transformation with a single number \(\gamma=b/|\mathbf{n}|\). \(\gamma\) is simply the angle of shear if \(\mathbf{b}\) is in the plane with normal \(\mathbf{n}\). 
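To make the decomposition concrete, the sketch below (ours, not the paper's analysis code) evaluates the linearised expressions in Eqs. (2) and (5) for a synthetic case in which the lattice carries a small homogeneous strain while the supercell additionally carries a simple shear; the two contributions are recovered separately.

```python
import numpy as np

def atomic_and_plastic_strain(B, Bp, A, Ap):
    """Linearised atomic and plastic strain from Eqs. (2) and (5), given the
    reference lattice (B, A) and the fitted lattice / supercell (Bp, Ap)."""
    I = np.eye(3)
    G = Bp @ np.linalg.inv(B)                           # B' B^-1
    eps_a = 0.5 * (G @ G.T - I)                         # Eq. (2)
    H = B @ np.linalg.inv(Bp) @ Ap @ np.linalg.inv(A)   # B B'^-1 A' A^-1
    eps_p = 0.5 * (H.T @ H - I)                         # Eq. (5)
    return eps_a, eps_p

# synthetic test: small homogeneous lattice strain F_a plus a plastic simple shear F_p
a0, m = 1.0, 10
B = 0.5 * a0 * np.array([[-1.0, 1.0, 1.0], [1.0, -1.0, 1.0], [1.0, 1.0, -1.0]])
A = m * a0 * np.eye(3)
F_a = np.eye(3) + np.diag([1e-3, -2e-4, 5e-4])   # strain seen by the lattice
F_p = np.eye(3); F_p[0, 1] = 2e-3                # plastic shear, invisible to the lattice
Bp = F_a @ B                                     # lattice vectors carry only the atomic part
Ap = F_a @ F_p @ A                               # supercell carries both parts
eps_a, eps_p = atomic_and_plastic_strain(B, Bp, A, Ap)
print(np.round(eps_a, 6))                        # ~ diag(1e-3, -2e-4, 5e-4)
print(np.round(eps_p, 6))                        # ~ 1e-3 in the xy off-diagonal entries
```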
Under such a transformation, the periodic repeat vector \(\mathbf{A}_{1}\) changes to \(\mathbf{A}_{1}+(\mathbf{A}_{1}\cdot\hat{\mathbf{n}}/|\mathbf{n}|)\mathbf{b}\), where the caret denotes a normal vector, and so the supercell changes to \[\underline{\underline{\mathbf{A}}}^{\prime}=\underline{\underline{\mathbf{A}}}+\frac{\gamma}{b}\left(\begin{array}{ccc}\left(\mathbf{A}_{1}\cdot\hat{\mathbf{n}}\right)\mathbf{b}&\left(\mathbf{A}_{2}\cdot\hat{\mathbf{n}}\right)\mathbf{b}&\left(\mathbf{A}_{3}\cdot\hat{\mathbf{n}}\right)\mathbf{b}\end{array}\right), \tag{7}\] where the columns are the vectors \(\left(\mathbf{A}_{i}\cdot\hat{\mathbf{n}}\right)\mathbf{b}\). It is convenient at this point to consider as an illustrative example a cubic simulation cell, of side \(ma_{0}\), i.e. \(\underline{\mathbf{A}}=m\,a_{0}\underline{\mathbf{1}}\). Then the change in such a supercell due to a slip event with Burgers vector \(\mathbf{b}\) on plane \(\hat{\mathbf{n}}\) is given by the outer product \[\underline{\mathbf{A}}^{\prime}=\underline{\mathbf{A}}+\frac{m\gamma}{b}\mathbf{b}\otimes\hat{\mathbf{n}}, \tag{8}\] and the linearised plastic strain is recognised as the Schmid tensor, \[\underline{\underline{\varepsilon}}_{p}=\frac{\gamma}{2b}\left(\mathbf{b}\otimes\hat{\mathbf{n}}+\hat{\mathbf{n}}\otimes\mathbf{b}\right). \tag{9}\] The new number of lattice repeats, \(\underline{\mathbf{N}}^{\prime}=\underline{\mathbf{B}}^{-1}\underline{\mathbf{A}}^{\prime}=\underline{\mathbf{N}}+(\gamma/b)\underline{\mathbf{N}}\left(\mathbf{b}\otimes\hat{\mathbf{n}}\right)\). For BCC metals, \(N_{\mathrm{motif}}=1\), so \[\underline{\mathbf{B}}^{\mathrm{(BCC)}} = \frac{a_{0}}{2}\left(\begin{array}{ccc}-1&1&1\\ 1&-1&1\\ 1&1&-1\end{array}\right),\] \[\underline{\mathbf{N}}^{\mathrm{(BCC)}} = \left(\begin{array}{ccc}0&m&m\\ m&0&m\\ m&m&0\end{array}\right), \tag{10}\] and the new number of lattice repeats in the BCC crystal due to a plastic event is \[\underline{\mathbf{N}}^{\mathrm{(BCC)}^{\prime}}=\underline{\mathbf{N}}^{\mathrm{(BCC)}}+\frac{m\gamma}{b}\left(\begin{array}{ccc}(b_{2}+b_{3})n_{1}&(b_{2}+b_{3})n_{2}&(b_{2}+b_{3})n_{3}\\ (b_{3}+b_{1})n_{1}&(b_{3}+b_{1})n_{2}&(b_{3}+b_{1})n_{3}\\ (b_{1}+b_{2})n_{1}&(b_{1}+b_{2})n_{2}&(b_{1}+b_{2})n_{3}\end{array}\right), \tag{11}\] while a similar process using an FCC primitive cell gives \[\underline{\mathbf{N}}^{\mathrm{(FCC)}^{\prime}}=\underline{\mathbf{N}}^{\mathrm{(FCC)}}+\frac{m\gamma}{b}\left(\begin{array}{ccc}(-b_{1}+b_{2}+b_{3})n_{1}&(-b_{1}+b_{2}+b_{3})n_{2}&(-b_{1}+b_{2}+b_{3})n_{3}\\ (b_{1}-b_{2}+b_{3})n_{1}&(b_{1}-b_{2}+b_{3})n_{2}&(b_{1}-b_{2}+b_{3})n_{3}\\ (b_{1}+b_{2}-b_{3})n_{1}&(b_{1}+b_{2}-b_{3})n_{2}&(b_{1}+b_{2}-b_{3})n_{3}\end{array}\right). \tag{12}\] Note that, if we impose the condition that the reference lattice is a single crystal, then the number of lattice repeats must be integer after a plastic event, and this constrains the permissible values of \(\gamma\) within the simulation box. In particular, we see that the smallest \(\gamma\) must scale as \(\sim 1/m\), so small plastic deformations are not permitted in a small simulation box. 
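A quick numerical check of this bookkeeping (ours; lengths in units of \(a_{0}=1\), and the integer-repeat constraint is ignored purely for illustration): applying Eq. (8) to the cubic BCC supercell of Eq. (10) and recomputing \(\underline{\mathbf{N}}^{\prime}=\underline{\mathbf{B}}^{-1}\underline{\mathbf{A}}^{\prime}\) shows that the determinant of the repeat matrix, and hence the number of lattice sites, is unchanged for pure slip (\(\mathbf{b}\cdot\hat{\mathbf{n}}=0\)) but changes when \(\mathbf{b}\cdot\hat{\mathbf{n}}\neq 0\).

```python
import numpy as np

def repeats_after_slip(m, b, n_hat, gamma, a0=1.0):
    """Lattice repeats N' of a cubic BCC supercell after one slip event, Eq. (8)."""
    A = m * a0 * np.eye(3)
    B = 0.5 * a0 * np.array([[-1.0, 1.0, 1.0], [1.0, -1.0, 1.0], [1.0, 1.0, -1.0]])
    N = np.linalg.solve(B, A)
    Ap = A + (m * gamma / np.linalg.norm(b)) * np.outer(b, n_hat)   # Eq. (8)
    Np = np.linalg.solve(B, Ap)
    dN_latt = np.linalg.det(Np) - np.linalg.det(N)   # change in lattice sites (N_motif = 1)
    return Np, dN_latt

m, gamma = 20, 0.01
b = 0.5 * np.array([1.0, 1.0, 1.0])                  # 1/2<111> Burgers vector, units of a0
n_pure = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)   # b . n = 0: pure slip
n_swell = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # b . n != 0: slip plus swelling
for n_hat in (n_pure, n_swell):
    _, dN = repeats_after_slip(m, b, n_hat, gamma)
    print(round(float(np.dot(b, n_hat)), 3), round(float(dN), 3))   # 0.0 -> 0 sites; 0.866 -> 160 sites
```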
For the BCC/FCC cases, we find the change in lattice sites, \(\Delta N_{\mathrm{latt}}=\mathrm{Det}[\underline{\mathbf{N}}^{\prime}]-\mathrm{Det}[\underline{\mathbf{N}}]\), is given by \[\Delta N_{\mathrm{latt}}^{\mathrm{(BCC)}} = 2\frac{m^{3}\gamma}{b}\mathbf{b}\cdot\hat{\mathbf{n}},\] \[\Delta N_{\mathrm{latt}}^{\mathrm{(FCC)}} = 4\frac{m^{3}\gamma}{b}\mathbf{b}\cdot\hat{\mathbf{n}}, \tag{13}\] from which we confirm that pure slip events (\(\mathbf{b}\cdot\hat{\mathbf{n}}=0\)) are associated with no change in lattice sites, but other plastic events change the number of lattice sites and so cause swelling. In Fig. 1, we illustrate the relationship between the slip direction normal and the Burgers vector. Establishing a connection between atomistic observations and continuum slip events offers the potential to develop a comprehensive interpretative model for high-dose irradiation scenarios. In section III.2.2, we outline both the similarities and distinctions between atomic-level slip phenomena and macroscopic models of plastic deformation. ### Computing the elasto-plastic strain decomposition from atomic positions The reference primitive lattice vectors making up the matrix \(\underline{\mathbf{B}}\) can be found directly from the position of peaks in the diffraction pattern. But this is an expensive calculation, so below we describe an order-N approximation for \(\underline{\mathbf{B}}\), which we find works well in the high-dose irradiation cases described here. Given a reference lattice \(\underline{\mathbf{B}}\), the reference atom positions are \[\mathbf{y}=\underline{\mathbf{B}}[u,v,w]^{T}+\mathbf{m}_{t}, \tag{14}\] where \([u,v,w]\) is a triplet of integers and \(\mathbf{m}_{t}\) is the position of the \(t^{th}\) motif point in the reference unit cell. Given the observed atom positions, \(\{\mathbf{x}_{j}\}\), \(j\in\{1,\ldots N\}\), the Wigner-Seitz occupation, \(o_{i}\), is the number of atoms closest to reference atom position \(i\), i.e. the number of atoms for which \(|\mathbf{x}_{j}-\mathbf{y}_{i}|<|\mathbf{x}_{j}-\mathbf{y}_{k}|\ \forall\,k\neq i\). The index \(i\in\{1,\ldots N_{\text{latt}}\}\), as defined above. Note that the sum of the occupation, \(\sum_{i}o_{i}=N_{\text{atoms}}\), but that \(N_{\text{atoms}}\) is not necessarily equal to \(N_{\text{latt}}\). The number of point defects is defined by \(N_{\text{PD}}=1/2\,\sum_{i}|o_{i}-1|\). We can make the _ansatz_ that there exists a set of reference atom positions \(\{\mathbf{y}_{i}\}\) for which \(N_{\text{PD}}\) is minimised. For a single crystal, we can in principle find this reference by minimising \(N_{\text{PD}}\) with respect to a vector offset common to the motif points \(\{\mathbf{m}_{t}\}\) and the elements of \(\underline{\mathbf{N}}\). We can find a new reference lattice \((\underline{\mathbf{B}}^{\prime},\underline{\mathbf{N}}^{\prime})\) from atom positions as follows. We compute the fit function on atom \(i\): \[S_{i}=\sum_{j}\left|\mathbf{x}_{j}-\mathbf{x}_{i}-\underline{\mathbf{T}}_{i}\,\delta\mathbf{y}_{k}-\delta_{i}\right|^{2}, \tag{15}\] where \(\delta\mathbf{y}=\underline{\mathbf{B}}[u^{\prime},v^{\prime},w^{\prime}]^{T}+\mathbf{m}_{t}-\mathbf{m}_{1}\) is an expected separation between reference atom sites. This we minimise with respect to the nine matrix elements of \(\underline{\mathbf{T}}_{i}\), the displacement vector \(\delta_{i}\), and the matching of neighbours \(j\) to expected separations \(\{\delta\mathbf{y}_{k}\}\). We take the sum over neighbours with cutoff between 3rd and 4th nearest neighbours. 
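The occupancy count itself is straightforward once a candidate reference lattice is in hand; the following minimal sketch (ours, using a periodic k-d tree rather than the paper's implementation) assigns each atom to its nearest reference site and evaluates \(N_{\text{PD}}\).

```python
import numpy as np
from scipy.spatial import cKDTree

def wigner_seitz_occupation(ref_sites, atoms, box):
    """Occupation o_i of each reference site and N_PD = (1/2) sum_i |o_i - 1|,
    with periodic boundary conditions applied through the k-d tree."""
    tree = cKDTree(np.mod(ref_sites, box), boxsize=box)
    _, nearest = tree.query(np.mod(atoms, box))          # nearest reference site of each atom
    o = np.bincount(nearest, minlength=len(ref_sites))
    return o, 0.5 * np.abs(o - 1).sum()

# tiny illustration: a 4x4x4 BCC cell (a0 = 1) with one vacancy and one interstitial
m, a0 = 4, 1.0
grid = np.arange(m) * a0
corner = np.array(np.meshgrid(grid, grid, grid, indexing="ij")).reshape(3, -1).T
ref = np.vstack([corner, corner + 0.5 * a0])             # corner + body-centre sites
atoms = np.delete(ref, 3, axis=0)                        # remove one atom -> a vacancy
atoms = np.vstack([atoms, [[0.2 * a0, 0.1 * a0, 0.7 * a0]]])   # and insert an off-site atom
o, n_pd = wigner_seitz_occupation(ref, atoms, box=m * a0)
print((o == 0).sum(), (o > 1).sum(), n_pd)               # 1 empty site, 1 doubly occupied site, N_PD = 1.0
```

The demanding step, as described above, is obtaining the fitted reference lattice \((\underline{\mathbf{B}}^{\prime},\underline{\mathbf{N}}^{\prime})\) itself from the local neighbour fit of Eq. (15); the occupancies are then recomputed against that best-fit lattice.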
This fitting procedure is slower than polyhedral template matching [53], but robust when point defects are present. The matrix \(\underline{\mathbf{T}}_{i}=\underline{\mathbf{R}}_{i}(1+\underline{\underline{\varepsilon}}_{i})\) is the deformation gradient, a product of atomic strain and rotation at atom \(i\). The rotation matrix and strain can be separated using polar decomposition. To compute an approximate homogeneous lattice strain and orientation, we need an appropriate average of the spatially varying deformation gradient. To do this, we compute an arithmetic mean atomic deformation gradient \(\langle\underline{\mathbf{T}}\rangle=1/N\sum_{i}\underline{\mathbf{T}}_{i}\) and a mean strain \(\langle\underline{\underline{\varepsilon}}\rangle=1/N\sum_{i}\underline{\underline{\varepsilon}}_{i}\) separately. Then we compute an appropriate rotation matrix, \(\underline{\mathbf{R}}\), from the arithmetic mean using polar decomposition. Finally we compute the homogeneous atomic deformation gradient compatible with the unitary rotation matrix and mean strain, as \(\underline{\mathbf{T}}=\underline{\mathbf{R}}(1+\langle\underline{\underline{\varepsilon}}\rangle)\). The method above finds new reference lattice vectors \(\underline{\mathbf{B}}^{\prime}=\underline{\mathbf{T}}\underline{\mathbf{B}}\) in \(\mathcal{O}(N)\) time, and from this we find \(\underline{\mathbf{N}}^{\prime}=\underline{\mathbf{A}}^{\prime}\underline{\mathbf{B}}^{\prime-1}\) to complete the reference lattice. We then search for an optimal vector offset in \(\{\mathbf{m}_{t}\}\). The Wigner-Seitz occupations \(\{o_{i}\}\) reported here therefore minimise \(N_{\text{PD}}\) subject to an \(\mathcal{O}(N)\) estimation of the atomic deformation gradient. Figure 1: Schematic representation depicting single crystal slip and corresponding rigid rotation. (a) Prior to irradiation, (b) during irradiation, as deformation initiates, (c) a pure slip event, and (d) spontaneous atomic system rotation, within the constraint of a simulation box disallowing shear. ### Further analysis methods The "Open Visualization Tool" (OVITO) [54] was used for visualization, and the dislocation extraction algorithm (DXA) [55] implemented therein was used for dislocation identification. For the results obtained with our W-S analysis, a cluster size distribution analysis was performed. In cases where a cluster contains both vacancies and interstitials, we determined the net content of defects to accurately assess the defect size. Such mixed clusters were infrequent in our study because of the process of finding the reference lattice described above, and the disparity in the clusters' overall content before and after calculating the net defects was approximately 1%, even at the higher doses. The cutoff values for vacancies and interstitials are intermediate values between the nearest neighbor (1NN) and 2NN distances, and between the 3NN and 4NN distances, respectively [56; 57]. The shorter cutoff for vacancies compared to previous studies was chosen to give an estimate of possible deuterium retention, since for vacancies in a cluster to collectively add to the empty volume they need to be at nearest-neighbour positions to another vacancy. 
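A minimal sketch of this cluster-size analysis is given below: defects closer than the chosen cutoff are linked and connected components are taken as clusters. The cutoff shown is the vacancy cutoff (between 1NN and 2NN for a tungsten-like lattice parameter); the interstitial analysis is identical with a cutoff between 3NN and 4NN. The function name and the orthorhombic-box assumption are ours.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def cluster_sizes(positions, cutoff, box):
    """Group defect positions into clusters: defects within `cutoff` of each
    other belong to the same cluster.  Returns the size of each cluster."""
    pos = np.mod(positions, box)
    tree = cKDTree(pos, boxsize=box)
    pairs = np.array(sorted(tree.query_pairs(cutoff)))
    n = len(pos)
    if pairs.size == 0:
        return np.ones(n, dtype=int)                    # only monodefects present
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    return np.bincount(labels)

# Vacancy cutoff between the 1NN and 2NN distances of BCC tungsten (a0 ~ 3.165 A)
a0 = 3.165
vac_cutoff = 0.5 * (np.sqrt(3.0) / 2.0 + 1.0) * a0      # ~2.95 A
```

For the rare mixed clusters, the net defect content is then taken as the difference between the interstitial and vacancy counts within the cluster, as described above.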
## III Results and discussion ### Defect evolution during irradiation In this section, we highlight the core question of this paper by contrasting the similarities and differences in the accumulation of radiation-induced damage as seen through various characterization methods. Fig. 2(a) shows the evolution of point defects as a function of dose. The atomic fraction of vacancies, computed from void isosurfaces using VID, is similar to the atomic fraction of unoccupied lattice sites (\(o_{i}=0\)) computed with Wigner-Seitz below \(10^{-3}\) cdpa, but the levels diverge above this dose. Fig. 2(b) shows the difference between these two methods. Unoccupied lattice sites which do not correspond to an open volume are associated most commonly with vacancy loops. This can be seen by comparing Fig. 2(c) and Fig. 2(d). The former, computed with W-S, clearly shows vacancy loops bounded by dislocation lines in addition to monovacancies and vacancy clusters, whereas the latter, computed with VID, shows only monovacancies and vacancy clusters. Multiply occupied Wigner-Seitz lattice sites (\(o_{i}>1\)) are associated with interstitials, either individually as crowdions or in interstitial clusters and dislocation loops. We see in Fig. 2(a) that the count of Wigner-Seitz unoccupied and multiply occupied lattice sites diverges above \(10^{-2}\) cdpa. Previous studies have shown that similar apparent artefacts of W-S analysis might appear if the perfect lattice structure of a whole layer is identified as interstitials [58]. In Fig. 2(c), the alignment of vacancies by W-S and dislocations by DXA confirms that the significant increase in vacancy concentration observed in our work is not an artefact. Hence, we pose the following question: What factors contribute to the emergence of these vacancy loops and the divergence in the evolution of vacancies and interstitials? This question will be answered in detail in section III.2.2. Moreover, a full description of defect evolution and clustering under our simulation conditions and a convergence study for the CA can be found in the SI (see Table 1 and Figs. S1-S8). ### Microscopic deformation processes contributing to atomic and plastic strain In this section, we illustrate the swelling and dislocation creep phenomena, and their associated atomic and plastic strain responses, that take place during the irradiation process. First we consider the average atomic strain, computed as \(1/3\,\mathrm{Tr}[\varepsilon_{a}]\). The evolution as a function of dose is shown in Fig. 3. These results are qualitatively similar to the experimental and simulation results reported in Ref. [50]. The atomic lattice strain is tensile at low dose, during the accumulation of interstitial clusters and vacancies. Interstitials have a large positive relaxation volume, while the vacancies have a small negative relaxation volume, so if the numbers of each balance, the strain is positive. At high dose, the interstitials have coalesced into a network, leaving behind a vacancy-dominated microstructure with compressive atomic strain. Here we show an improved quantitative match to the experimental strain compared to Ref. [50]. This improvement is largely because we use MD cascade annealing here, whereas the previous work used CRA only. We should, however, note that the experimental boundary conditions were different: the experimental study considered self-ion irradiation into a slab, and so expansion was only possible in the \(z\)-direction. The micro-Laue x-ray measurement detected \(\varepsilon_{a}\) in the \(z\)-direction. As noted above, in this work the simulation cell was allowed to expand in the \(x\)-, \(y\)-, and \(z\)-directions. The change of sign in the strain at a dose of 0.01 cdpa corresponds to the point in Fig. 2(a) where the number of lattice sites is changing, and the count of unoccupied and multiply-occupied W-S sites diverges. 
In Ref. [50] this was attributed to interstitial plane formation, and here we consider the change in atomic and plastic deformation more generally. The full evolution of both the atomic and plastic deformation, the rotation, and the other measurables is given in the SI for all our investigated samples. In the following subsections we go through the main mechanisms and their implications for selected examples. #### iii.2.1 Atomic deformation We first consider the atomic strain in one of the simulations at a dose around 0.01 cdpa. At this point, the interstitial loops have grown large, but a network has not formed. Fig. 4 shows the atomic strain \(\varepsilon_{a}\) in the \(x\)-, \(y\)-, and \(z\)-directions as a function of dose. In a small simulation cell, it is to be expected that the components of the strain are different, as they are determined by relatively few dislocation loops. In Fig. 4 we see the \(x\)- and \(z\)-components are large below 0.01 cdpa, with the \(y\)-component smaller. This is associated with the growth of a \(1/2\langle 1\,1\,1\rangle\) interstitial loop with habit plane (\(1\,0\,1\)), which can be seen in the top panel of Fig. 4, i.e. snapshot **A**. Also visible in the snapshot is a smaller loop with habit plane (\(\overline{1}\,0\,1\)). Over the course of 20 cascades (0.0001 cdpa), these two loops join together to produce a mixed-habit-plane object seen in snapshot **B**, and then coalesce to form a single large loop with habit plane (\(1\,1\,0\)) in snapshot **C**. The dislocation line length is shown in the bottom panel of Fig. 4. There is no significant change in the fraction of W-S or VID defects during this event (as can be seen from Figs. S2 and S3 in the SI). This habit plane rotation from (\(1\,0\,1\)) to (\(1\,1\,0\)) has a dramatic effect on the _atomic_ strain components. In the middle panel of Fig. 4 we see the \(y\)-component rapidly increase while the \(z\)-component decreases. However, this event is not linked to a change in _plastic_ strain, as per the definition provided in this paper. This lack of association stems from the unaltered matrix of lattice repeats \(\underline{\textbf{N}}\), as evidenced by the horizontal dotted line, coupled with the fact that all plastic components remain at a constant value of 0. In the SI we provide several other examples of loop coalescence and habit plane rotation leading to significant and rapid changes in elastic stress, while not changing the plastic strain at all (see Figs. S9-S15 and S17-S19). Figure 2: (a) Point defect evolution as a function of dose characterized by different methods. (b) The concentration of vacancies in loops, calculated as the difference between the total unoccupied sites by the W-S method and the vacancies using the VID method. Solid lines: full MD simulations. Red squares: after 1600 CA steps. (c) Vacancy distribution obtained by W-S and (d) the void distribution obtained by VID after CA, cdpa = 0.2047. These rapid fluctuations in local stress lead to rapid local changes in the microstructure. It is unclear from the current simulations whether such rapid local changes in the atomic stress will be so pronounced at greater length scales comparable to grain sizes. These results therefore show the ease with which the elastic dipole tensor of individual defects can be changed in response to stress [59]. At low dose, the habit planes of defects can shift, leading to a reorientation of their contribution to swelling [13]. The key role of Eq. 
(13) lies in correlating atomistic observations with continuum slip events, making clear that rotation of the habit plane or a localized slip event does not affect the global cell, so the plastic response remains zero. Nonetheless, local slips occur in the simulations, for instance during the glide of dislocation lines/loops, as depicted in Fig. 4. We formulated a straightforward slip-tracing model to gain deeper insight into the dislocation movement; we emphasize that this method is an estimate, and does not meticulously convert each dislocation line network into a continuous field that accurately represents the network's geometry and Burgers vector changes [60]. Notably, the small slip amplitude produced by cascade events allows dislocation line movements to be tracked without employing the sweep-tracing algorithm (STA) detailed in another study [60]. Fig. 5(a) schematically illustrates the slip-tracing model. The core concept of the model is to identify the relevant slip facet by extracting dislocation lines exhibiting glide. This is achieved by pairing segments one-to-one before and after the slip event. The cross product of the slip vector and the unit tangent vector of the dislocation line gives the normal \(\hat{\mathbf{n}}_{i}\) of each slip facet. Fig. 5(b) displays the integrated statistics of slip facet normals calculated on each pair of dislocation loops extracted from 20 full MD cascades, when the habit plane rotates from state \(\mathbf{A}\) to \(\mathbf{C}\) in Fig. 4. The distribution of slip normals is plotted in projection as a function of the azimuthal (\(\theta\)) and elevation (\(\phi\)) angles, by color mapping the density of the slip facet distributions. The high-density slip normals are identified as \((1\,1\,\overline{2})\) and \((0\,1\,\overline{1})\). In BCC metals, the planes with the largest interplanar spacing are the \(\{1\,1\,0\}\) planes, followed by the \(\{1\,1\,2\}\) and the \(\{1\,2\,3\}\) planes [61]. These planes often translate to lower energy barriers for slip compared to other planes, especially under irradiation, which can enhance point defect mobility and dislocation line interactions. Other peaks are also observed (e.g. \((0\,0\,1)\)), which can be attributed to dislocation loops growing and rotating. Figure 3: Lattice strain derived from simulations. The gray shaded area represents the error fluctuation range of the full MD section, showing the range of three simulations. The error bars for the CRA+CA points are also associated with the three different runs. The experimental strain data is from Ref. [50]. #### iii.2.2 Elasto-plastic deformation To quantify the plastic strain, we proceed to examine the high-dose cases through CRA+CA simulations, which result in highly deformed microstructures featuring abundant vacancies, offering the potential for plastic transformation. To begin with, the strain response at high doses is investigated. As an example, we consider the 1st CA case with the highest starting dose of 0.2 cdpa. As shown in Fig. 6(a), the evolution of the atomic strain \(\varepsilon_{a}\), with both normal and shear components, is observed. Initially, due to unstable isolated defects introduced by the CRA, the cell exhibits an ill-defined structure, leading to intense fluctuations in different indicators when the dose is below 0.203 cdpa. Subsequently, sudden increases or decreases in excess lattice sites are seen, accompanied by abrupt changes in atomic strain in specific directions. 
These corresponding changes in the shear components signify the occurrence of plastic transformation events, as shown in Fig. 6(b). The lattice strain changes instantly when the excess lattice sites form or disappear, indicating that the plastic transformation happens immediately after a single cascade event. Figure 4: Bottom row: the overall dislocation density evolution, with a zoomed-in view of the period during which the strain drop occurs. The middle row displays the evolution of atomic strain throughout the full MD simulation, with a closer look at the strain drop during 20 rounds of irradiation. The number of lattice sites does not change during the whole process, which indicates that no plastic transformation happens. To distinguish between three different states of the evolution, a semi-transparent blue, yellow, and red background is used in the zoomed-in figures. Upper row: to illustrate the strain transition process, three representative frames are selected (**A**, **B** and **C**), namely before the release of \(\epsilon_{a}(z)\), during the release, and after the release. The figures show the migration of interstitial loops at these three different stages. Red arrows indicate the direction of loop movements, while red and blue atoms represent vacancies and interstitials, respectively. Dislocation lines with \(1/2\langle 1\,1\,1\rangle\) and \(\langle 1\,0\,0\rangle\) Burgers vectors are shown as green and pink lines, respectively. We select two representative regions for a detailed look, denoted as the blue and orange highlighted regions. These particular sections are chosen due to abrupt, observable alterations in atomic strain. However, they exhibit different behavior: the region highlighted in blue undergoes a more significant change, coinciding with a global plastic slip event (Fig. 6(b)), which is not observed in the region highlighted in orange. Fig. 6(c) and (d) show the atomic responses and cumulative slip normal distributions for the two distinct cases, respectively. In Fig. 6(c), the instant change in atomic strain components is apparent immediately following a single cascade event, accompanying the onset of global plastic deformation. An anomalous slip occurs in this instance (in addition to the conventional \(\{1\,2\,3\}\)), as depicted by the cumulative distribution of slip normals in Fig. 6(c). Throughout this transformation, non-conventional slip systems are activated: the \(\{1\,1\,1\}\) slip normal prevails upon the occurrence of global plastic deformation. The growth of dislocation networks provides additional slip channels and influences the mobility and interaction of dislocations with \(\{0\,0\,1\}\) and \(\{1\,1\,1\}\) planes. In contrast, Fig. 6(d) shows a \(\{1\,1\,0\}\)-dominated slip period when there is no global plastic response but there is a change of atomic strain. Subsequent examination unveils an additional instance of habit plane rotation. However, the difference here compared to the low-dose case shown in Fig. 4 is that the rotated dislocation loop is vacancy-type. Consequently, because the relaxation volume of a vacancy loop is negative [62], vacancies distributed in a material produce negative lattice strain, which can be readily observed in experiments using x-ray diffraction [62, 63, 50, 64]. Therefore, an evident opposite change occurs in the _atomic_ strain components compared to the interstitial loop case shown in Fig. 4. 
Analyzing the local slip events, we find that, even though the habit plane rotation provides some additional slip channels, slip is still dominated by the planes with the largest interplanar spacings. When global plastic deformation happens, this constraint no longer holds true. For the region highlighted in blue, this process is associated with a change in the number of lattice points, causing a plastic transformation characterized by swelling events. Consequently, the count of interstitials and vacancies diverges. This behavior corresponds to slip with Burgers vector \(\mathbf{b}\) across a plane \(\hat{\mathbf{n}}\) satisfying \(\mathbf{b}\cdot\hat{\mathbf{n}}\neq 0\), as indicated by Eq. (13). It is important to highlight that this type of plastic event predominates in our high-dose simulations, which might be a consequence of the orthorhombic constraint imposed during the supercell relaxation. This observation suggests that swelling tends to occur spontaneously when a plastic transformation is triggered. The prevalence of this phenomenon underscores the strong correlation between plasticity and the occurrence of swelling in our simulations at high doses. The other cases experiencing plastic deformation are shown in the SI (see Figs. S16 and S20-S26). Furthermore, our observations reveal the occurrence of rigid rotation during plastic transformation. Notably, this rotation remains at zero during instances of purely elastic deformation, such as in the full MD cases. However, it is important to highlight that once the microstructure stabilizes, typically at a dose value above 0.205 cdpa, rigid rotation persists even in the absence of additional plastic slip events. Fig. 7 illustrates this phenomenon, particularly in the zoomed-in detail shown in Fig. 7(a), where vacancy loop habit plane rotation induces symmetric evolution trends in the \(xy\)- and \(xz\)-shear components. Clearly, the \(xz\) components exhibit a more pronounced shear angle compared to the \(xy\) components. Figure 5: Slip behavior of dislocations during the full MD case. (a) Schematic illustration of slip tracing extraction. The gray and black spheres symbolize dislocation segments before and after a plastic slip event (with the interval denoting a single cascade) extracted by the DXA method. The unit slip normal is shown as \(\hat{\mathbf{n}}\). (b) The cumulative distribution of \(\hat{\mathbf{n}}\) extracted over the course of 20 cascades from state \(\mathbf{A}\) to \(\mathbf{C}\) in the full MD case. As a result, the entire cell undergoes a slight rotation about the \(y\)-axis, as depicted in Fig. 7(b) and (c). This phenomenon arises because shear strain is not permitted within the LAMMPS simulation box in this study. Consequently, when the orientation of the straining axis aligns favorably with one of the crystal's slip systems, initiating slip along this primary slip vector would distort the cell's shape. However, the cell is constrained to remain aligned with the simulation box, necessitating a rotational adjustment to accommodate this behavior. By monitoring the direction of rotation, we can establish a connection with the coherence between continuum and atomic-level slip events. Schmid's prediction suggests that a crystal should naturally rotate to align its predominant slip direction with the straining axis, based on purely geometrical considerations [65, 66]. When comparing the direction of rotation to that shown in Fig. 6(b), we observe a consistent change in the direction of the various shear components. 
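For completeness, the rigid rotation reported in Fig. 7 can be extracted from the homogeneous deformation gradient by polar decomposition, as described in Sec. II. The following sketch assumes the averaged deformation gradient is available as a 3x3 array; the function and variable names are ours, not those of the production workflow.

```python
import numpy as np
from scipy.linalg import polar
from scipy.spatial.transform import Rotation

def rigid_rotation(T_mean):
    """Split T = R P (R unitary, P symmetric positive-definite) and report the
    mean strain, the rotation axis and angle, and the direction cosines of the
    rotated cell axes (columns of R), cf. cos(alpha_ij) in Fig. 7."""
    R, P = polar(T_mean)                   # right polar decomposition, T = R P
    eps = P - np.eye(3)                    # mean lattice strain
    rotvec = Rotation.from_matrix(R).as_rotvec()
    angle = np.linalg.norm(rotvec)         # rotation angle in radians
    axis = rotvec / angle if angle > 1e-12 else np.zeros(3)
    return eps, axis, angle, R
```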
In the preceding discussion, we explored the plastic transformation in cases of spontaneous swelling at high doses. Figure 6: (a) The elastic and (b) plastic responses as a function of dose at the highest starting dose of 0.2 cdpa during the 1st CA. The amount of change in lattice sites (right axis) at different doses is indicated by the red dashed line. The two representative changes of atomic strain, i.e. with and without accompanying global plastic deformation, are highlighted by the semi-transparent blue and orange backgrounds, respectively. Closer looks at local atomic strain and associated cumulative slip behavior during (c) global plastic deformation (highlighted in blue) and (d) pure elastic response (highlighted in orange). Now, from Eq. (13), we identify another distinct microstructural event: when \(\mathbf{b}\cdot\hat{\mathbf{n}}=0\), we have a pure slip event. Upon careful examination, we discovered that such events are exceptionally rare. For a pure slip phenomenon to occur, there must be a spontaneous change in the periodic lattice repeats that leaves the number of lattice sites unchanged, which only happens when elements of the lattice repeat matrix change in a compensating way. During our investigation, we encountered just one instance of this, at the very beginning of the 3rd CA case, with the highest starting dose of 0.2 cdpa. As depicted in Fig. 8(a), the light-blue-highlighted region in the plastic strain \(\varepsilon_{p}\) panel illustrates this pure slip event. Before and after this single cascade event, both the normal and shear components exhibit responses, but the number of excess lattice sites remains constant. A closer examination reveals that the matrix of primitive cell repeats changes from \[\underline{\underline{\mathbf{N}}}_{\text{before}}=\left(\begin{array}{ccc}0&121&121\\ 120&0&121\\ 120&121&0\end{array}\right)\qquad\qquad\text{to}\qquad\qquad\underline{\underline{\mathbf{N}}}_{\text{after}}=\left(\begin{array}{ccc}0&121&121\\ 120&0&120\\ 121&121&0\end{array}\right).\] Figure 7: (a) Rigid rotation and amount of change in lattice sites (right-axis legend, red dashed line) at different doses for the highest starting dose (0.2 cdpa) during the 1st CA. \(\cos(\alpha_{ij})\) indicates the angle between the shear component and the normal axes, where \(i,j=x,y,z\). (b) Corresponding trajectories of rotation for the \([1\,0\,0]\) orientation; the doses are represented on a cold-to-warm color scale. (c) Schematic illustration of the crystal rotation (from the light grey to the black frame) of the atomic system at doses above 0.205 cdpa. To facilitate a direct comparison, we have annotated two cascade events, labeled as regions **A** and **B**. The transition in region **A** corresponds to a pure slip event, whereas the transition in region **B** represents a slip event accompanied by lattice swelling. To quantify the relationship between the Burgers vector and the slip normals, the dot product \(\mathbf{b}\cdot\hat{\mathbf{n}}\) is computed to check for perpendicularity. Fig. 8(c) shows the histogram of \(\mathbf{b}\cdot\hat{\mathbf{n}}\) after applying the bootstrap method [67]. Employing normal distribution fitting, we obtained estimates with 95% confidence intervals for the events in regions **A** and **B**, respectively. The central values for these two cases are approximately 0 and 7. 
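The two event types can be told apart directly from the repeat matrices quoted above, and the confidence intervals on \(\mathbf{b}\cdot\hat{\mathbf{n}}\) can be reproduced with a standard bootstrap. The sketch below uses the \(\underline{\underline{\mathbf{N}}}\) matrices given in the text; the \(\mathbf{b}\cdot\hat{\mathbf{n}}\) samples themselves are not listed here, so the bootstrap routine is shown generically and its name is ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify_event(N_before, N_after):
    """Pure slip leaves Det[N] unchanged; slip with swelling changes it (Eq. 13)."""
    dN = np.linalg.det(N_after) - np.linalg.det(N_before)
    return "pure slip" if abs(dN) < 0.5 else f"slip + swelling (dN_latt = {dN:.0f})"

def bootstrap_mean(samples, n_boot=10000, ci=95):
    """Bootstrap confidence interval for the mean of the b.n samples
    collected from the paired dislocation segments of one event."""
    samples = np.asarray(samples, dtype=float)
    means = np.array([rng.choice(samples, size=samples.size, replace=True).mean()
                      for _ in range(n_boot)])
    half = (100.0 - ci) / 2.0
    return samples.mean(), tuple(np.percentile(means, [half, 100.0 - half]))

N_before = np.array([[0, 121, 121], [120, 0, 121], [120, 121, 0]])
N_after  = np.array([[0, 121, 121], [120, 0, 120], [121, 121, 0]])
print(classify_event(N_before, N_after))    # -> "pure slip" for the region-A event
```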
Consequently, it is interesting to note that by discerning alterations in the unit cell repeat number matrix, we can directly "observe" the occurrence and type of slip events. Additionally, the infrequent pure slip events observed in our simulations could be attributed to the boundary conditions employed. It is conceivable that altering these conditions to include an applied shear stress during simulated irradiation might yield different results. In our case, the practical result is that plastic events lead to material swelling. Figure 8: Direct observations of two distinct slip events. (a) Plastic strain response (top panel) and corresponding defect concentration (bottom panel) as a function of dose at the highest starting dose of 0.2 cdpa during the 3rd CA case. The pure slip event and the slip event accompanied by lattice swelling are highlighted by the semi-transparent blue and orange backgrounds, respectively. The amount of change in lattice sites (right-axis legend) at different doses is indicated by the red dashed line. (b) Schematic illustration of dislocation slip. The gray and black spheres symbolize dislocation segments before and after a plastic slip event (with the interval denoting a single cascade) extracted by the DXA method. (c) Histogram of the distribution of \(\mathbf{b}\cdot\hat{\mathbf{n}}\) after the bootstrap method for the events denoted as **A** and **B**. Normal distribution fitting curves are shown in colors matching their respective data sets. ## IV Conclusions With the combination of full defect build-up and accelerated simulated annealing MD simulations, we have investigated the mechanisms responsible for atomic and plastic strain during damage accumulation and evolution. We devised a strain decomposition method which works by resolving the homogeneous atomic strain field from the atom positions, and using this to infer the plastic strain required for a compatible single crystal. With our approach to dissecting elasto-plastic deformation at the atomic level, we bring clarity to the mechanism by which the gradual accumulation of lattice defects, as the dose increases, leads to the development of a uniform lattice strain. The main conclusions of this study are: 1. At low dose, plastic strain changes are absent, but atomic strain changes are brought about both by the accumulation of irradiation defects and by the response of the defects to the strain. This was illustrated by the mechanism of habit plane rotation. In this work the defects were able to reduce the elastic strain energy in the simulation within molecular dynamics timescales, indicating very low thermal activation barriers. In a real material, impurities such as carbon may slow the movement of sections of the dislocation loop [68]. It is unclear how the competition between the build-up of local stress and the retarding effects of impurities is balanced in technologically relevant conditions, and this merits further study. 2. At high doses, both atomic and plastic strain can undergo alterations. The plastic transformation can be categorized into two types: pure slip events and slip accompanied by lattice swelling. The occurrence of pure slip events is infrequent in our simulations. In cases where the number of lattice sites remains unchanged, interstitials and vacancies show identical evolution trends. However, when the plastic transformation related to swelling occurs, a substantial change in excess lattice sites becomes apparent, which explains why the interstitial and vacancy counts diverge. 
In this case, anomalous slip can occur: lattice-swelling-related deformation can activate slip on planes such as \(\{1\,1\,1\}\), despite these not being the primary slip planes in BCC tungsten. The limited probability of pure slip events in the studied dose range accounts for the consistent material swelling observed during high-dose irradiation. This study presents a straightforward and effective method for decoupling the lattice strain field, which influences the evolution of defects resulting from irradiation, from the plastic strain field, which leads to material swelling. The methodology presented here can be used as an automatic analysis for future simulations to identify the elasto-plastic deformation. The results of this method can additionally be used to locate interesting phenomena, enabling analysis by other methods that cannot be applied globally or to the whole simulation series. This work is therefore a step towards describing radiation defect accumulation within the framework of finite element elasticity models. The findings underscore the detrimental impact of high-dose irradiation, leading to significant creep phenomena that pose challenges for the performance of advanced fission and fusion reactor components. ## Acknowledgements The authors would like to thank Max Boleininger and Luca Reali for helpful discussions. This work has been carried out under the DEVHIS project, funded by the Academy of Finland (Grant number 340538). This work has been partly carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 -- EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. Computer time granted by the IT Center for Science - CSC - Finland and the Finnish Grid and Cloud Infrastructure (persistent identifier urn:nbn:fi:research-infras-2016072533) is gratefully acknowledged. ## References * [1] ITER Physics Basis Editors, ITER Physics Basis, Nucl. Fusion **39**, 2137 (1999). * [2] W. Eckstein and J. Laszlo, J. Nucl. Mater. **183**, 19 (1991). * [3] D. R. Lide, ed., _CRC Handbook of Chemistry and Physics_ (CRC Press, Boca Raton, FL, 2005). * [4] L. G. Wang, A. Van De Walle, and D. Alfe, Phys. Rev. B **84**, 092102 (2011). * [5] M. Rieth, J. L. Boutard, S. L. Dudarev, T. Ahlgren, S. Antusch, N. Baluc, M. F. Barthe, C. S. Becquart, L. Ciupinski, J. B. Correia, C. Domain, J. Fikar, E. Fortuna, C. C. Fu, E. Gagandze, T. Galan, C. Garcia-Rosales, B. Gludovatz, H. Greuner, K. Heinola, N. Holstein, N. Juslin, F. Koch, W. Krauss, K. Kurzydlowski, J. Linke, C. Linsmeier, N. Luzginova, H. Maier, M. Martinez, J. Missanen, M. Muhammred, A. Munoz, M. Muzyk, K. Nordlund, D. Nguyen-Manh, P. Norajitra, J. Opschoor, G. Pintsuk, R. Pippan, G. Ritz, L. Romaner, D. Rupp, R. Schaublin, J. Schlosser, I. Uytdenhouwen, J. van der Laan, L. Veleva, L. Ventelon, S. Wahlberg, F. Willaime, S. Wurster, and M. Yar, J. Nucl. Mater. **417**, 463 (2011). * [6] F. A. Garner, M. L. Hamilton, N. F. Panayotou, and G. D. Johnson, J. Nucl. Mater. **104**, 803 (1981). * [7] G. R. Odette and G. E. Lucas, Radiat. Eff. Defects Solids **144**, 189 (1998). * [8] S. J. Zinkle, P. J. Maziasz, and R. E. Stoller, J. Nucl. Mater. **206**, 266 (1993). * [9] G. Monnet, J. 
Nucl. Mater. **508**, 609 (2018). * [10] A. Reza, H. Yu, K. Mizohata, and F. Hofmann, Acta Mater. **193**, 270 (2020). * [11] A. Hollingsworth, M. F. Barthe, M. Y. Lavrentiev, P. M. Derlet, S. L. Dudarev, D. R. Mason, Z. Hu, P. Desgardin, J. Hess, S. Davies, B. Thomas, H. Salter, E. F. J. Shelton, K. Heinola, K. Mizohata, A. De Backer, A. Baron-Wiechec, I. Jepu, Y. Zayachuk, A. Widdowson, E. Meslin, and A. Morellec, J. Nucl. Mater. **558**, 153373 (2022). * [12] K. Frydrych, Crystals **13**, 771 (2023). * [13] S. L. Dudarev, D. R. Mason, E. Tarleton, P. W. Ma, and A. E. Sand, Nucl. Fusion **58**, 126002 (2018). * [14] X. Xiao, D. Terentyev, L. Yu, D. Song, A. Bakaev, and H. Duan, J. Nucl, Mater. **466**, 312 (2015). * [15] P. M. Derlet and S. L. Dudarev, Phys. Rev. Mater. **4**, 023605 (2020). * [16] M. Boleininger, D. R. Mason, A. E. Sand, and S. L. Dudarev, Sci Rep **13**, 1684 (2023). * Mason et al. [2021] D. R. Mason, F. Granberg, M. Boleininger, T. Schwarz-Selinger, K. Nordlund, and S. L. Dudarev, Phys. Rev. Mater. **5**, 095403 (2021). * Granberg et al. [2023] F. Granberg, D. R. Mason, and J. Byggmastar, Comput. Mater. Sci. **217**, 111902 (2023). * Kiener et al. [2011] D. Kiener, P. Hosemann, S. A. Maloy, and A. M. Minor, Nat. Mater. **10**, 608 (2011). * Schi\(\ddot{\rm a}\)tz et al. [1999] J. Schi\(\ddot{\rm a}\)tz, T. Vegge, F. D. Di Tolla, and K. W. Jacobsen, Phys. Rev. B **60**, 11971 (1999). * Van Swygenhoven and Caro [1998] H. Van Swygenhoven and A. Caro, Phys. Rev. B **58**, 11246 (1998). * Van Swygenhoven et al. [1999] H. Van Swygenhoven, M. Spaczer, A. Caro, and D. Farkas, Phys. Rev. B **60**, 22 (1999). * Van Swygenhoven et al. [2001] H. Van Swygenhoven, A. Caro, and D. Farkas, Mater. Sci. Eng. A **309**, 440 (2001). * Van Swygenhoven and Derlet [2001] H. Van Swygenhoven and P. M. Derlet, Phys. Rev. B **64**, 224105 (2001). * Yamakov et al. [2001] V. Yamakov, D. Wolf, M. Salazar, S. R. Phillpot, and H. Gleiter, Acta Mater. **49**, 2713 (2001). * Yamakov et al. [2002] V. Yamakov, D. Wolf, S. R. Phillpot, and H. Gleiter, Acta Mater. **50**, 5005 (2002). * Yamakov et al. [2002] V. Yamakov, D. Wolf, S. R. Phillpot, A. K. Mukherjee, and H. Gleiter, Nat. Mater. **1**, 45 (2002). * Yamakov et al. [2003] V. Yamakov, D. Wolf, S. R. Phillpot, and H. Gleiter, Acta Mater. **51**, 4135 (2003). * Yamakov et al. [2004] V. Yamakov, D. Wolf, S. R. Phillpot, A. K. Mukherjee, and H. Gleiter, Nat. Mater. **3**, 43 (2004). * Vo et al. [2008] N. Q. Vo, R. S. Averback, P. Bellon, S. Odunuga, and A. Caro, Phys. Rev. B **77**, 134108 (2008). * Stukowski and Arsenlis [2012] A. Stukowski and A. Arsenlis, Modell. Simul. Mater. Sci. Eng. **20**, 035012 (2012). * Nordlund et al. [1998] K. Nordlund, M. Ghaly, R. S. Averback, M. Caturla, T. Diaz de La Rubia, and J. Tarus, Phys. Rev. B **57**, 7556 (1998). * Leino et al. [2018] A. A. Leino, G. D. Samolyuk, R. Sachan, F. Granberg, W. J. Weber, H. Bei, J. Liu, P. Zhai, and Y. Zhang, Acta Mater. **151**, 191 (2018). * Bhardwaj et al. [2021] U. Bhardwaj, A. E. Sand, and M. Warrier, Modell. Simul. Mater. Sci. Eng. **29**, 065015 (2021). * Goryaeva et al. [2020] A. M. Goryaeva, C. Lapointe, C. Dai, J. D\(\grave{\rm e}\)res, J. B. Maillet, and M. C. Marinica, Nat. Commun. **11**, 4691 (2020). * Anderson et al. [2017] P. M. Anderson, J. P. Hirth, and J. Lothe, _Theory of dislocations_ (Cambridge University Press, 2017). * Dudarev et al. [2010] S. L. Dudarev, M. R. Gilbert, K. Arakawa, H. Mori, Z. Yao, M. L. Jenkins, and P. M. Derlet, Phys. Rev. 
B **81**, 224107 (2010). * Mason et al. [2014] D. R. Mason, X. Yi, M. A. Kirk, and S. L. Dudarev, J. Phys. Condens. Matter **26**, 375701 (2014). * Plimpton [1995] S. Plimpton, J. Comput. Phys, **117**, 1 (1995). * Ghaly et al. [1999] M. Ghaly, K. Nordlund, and R. S. Averback, Phil. Mag. A **79**, 795 (1999). * Nordlund [1995] K. Nordlund, Comp. Mater. Sci. **3**, 448 (1995). * Ziegler [2004] J. F. Ziegler, Nucl. Instr. Meth. Phys. Res. Sec. B **219**, 1027 (2004). * Sand et al. [2013] A. E. Sand, S. L. Dudarev, and K. Nordlund, EPL (Europhysics Lett.) **103**, 46003 (2013). * Ackland and Theftord [1987] G. J. Ackland and R. Theftord, Phil. Mag. A **56**, 15 (1987). * Zhong et al. [1998] Y. Zhong, K. Nordlund, M. Ghaly, and R. S. Averback, Phys. Rev. B **58**, 2361 (1998). * Granberg et al. [2021] F. Granberg, J. Byggmastar, and K. Nordlund, J. Nucl. Mater. **556**, 153158 (2021). * Kinchin and Pease [1955] G. H. Kinchin and R. S. Pease, Reports on progress in physics **18**, 1 (1955). * Robinson and Torrens [1974] M. T. Robinson and I. M. Torrens, Phys. Rev. B **9**, 5008 (1974). * Norgett et al. [1975] M. J. Norgett, M. T. Robinson, and I. M. Torrens, Nucl. Eng. Des. **33**, 50 (1975). * Mason et al. [2020] D. R. Mason, S. Das, P. M. Derlet, S. L. Dudarev, A. J. London, H. Yu, N. W. Phillips, D. Yang, K. Mizohata, R. Xu, and F. Hofman, Phys. Rev. Lett. **125**, 225503 (2020). * Mura [2013] T. Mura, _Micromechanics of defects in solids_ (Springer Science & Business Media, 2013). * [52] To see that the primitive cell can be rotated, we use proof by _reductio ad absurdum_: The most extreme irradiation disordering could be taking out all the atoms, replacing them in amorphous positions, and annealing back to a crystal lattice. For no rotation to be true it is necessary that the exact same lattice orientation is always recovered from amorphisation + annealing. * Larsen et al. [2016] P. M. Larsen, S. Schmidt, and J. Schi\(\ddot{\rm a}\)tz, Modell. Simul. Mater. Sci. Eng. **24**, 055007 (2016). * Stukowski [2009] A. Stukowski, Modell. Simul. Mater. Sci. Eng. **18**, 015012 (2009). * Stukowski et al. [2012] A. Stukowski, V. V. Bulatov, and A. Arsenlis, Modell. Simul. Mater. Sci. Eng. **20**, 085007 (2012). * Terentyev et al. [2006] D. Terentyev, C. Lagerstedt, P. Olsson, K. Nordlund, J. Wallenius, C. S. Becquart, and L. Malerba, J. Nucl. Mater. **351**, 65 (2006). * Bj\(\ddot{\rm a}\)rkas and Nordlund [2007] C. Bj\(\ddot{\rm a}\)rkas and K. Nordlund, Nucl. Instr. Meth. Phys. Res. Sec. B **259**, 853 (2007). * Wu et al. [2021] J. Wu, Z. Xu, L. Liu, A. Hartmaier, M. Rommel, K. Nordlund, T. Wang, R. Janisch, and J. Zhao, J. Mat. Chem. C **9**, 2258 (2021). * Dudarev and Sutton [2017] S. L. Dudarev and A. P. Sutton, Acta Mater. **125**, 425 (2017). * Bertin et al. [2022] N. Bertin, L. A. Zepeda-Ruiz, and V. V. Bulatov, Mater. Theory **6**, 1 (2022). * Weinberger et al. [2013] C. R. Weinberger, B. L. Boyce, and C. C. Battaile, Int. Mater. Rev. **58**, 296 (2013). * Mason et al. [2019] D. R. Mason, D. Nguyen-Manh, M. C. Marinica, R. Alexander, A. E. Sand, and. L. Dudarev, J. Appl. Phys. **126** (2019). * Simmons and Balluffi [1960] R. O. Simmons and R. W. Balluffi, Phys. Rev. **117**, 52 (1960). * Hertz et al. [1973] W. Hertz, W. Waidelich, and H. Peisl, Phys. Lett. A **43**, 289 (1973). * Meyers and Chawla [2008] M. A. Meyers and K. K. Chawla, _Mechanical behavior of materials_ (Cambridge university press, 2008). * Zepeda-Ruiz et al. [2021] L. A. Zepeda-Ruiz, A. Stukowski, T. Oppelstrup, N. 
Bertin, N. R. Barton, R. Freitas, and V. V. Bulatov, Nat. Mater. **20**, 315 (2021). * Efron [1992] B. Efron, in _Breakthroughs in statistics: Methodology and distribution_ (Springer, 1992) pp. 569-593. * Castin et al. [2019] N. Castin, A. Dubinko, G. Bonny, A. Bakaev, J. Likonen, A. De Backer, A. E. Sand, K. Heinola, and D. Terentyev, J. Nucl. Mater. **527**, 151808 (2019). Supplementary Information for "Atomistic Study of Irradiation-Induced Plastic and Lattice Strain in Tungsten" Jintong Wu Department of Physics, University of Helsinki, Post-office box 43, FIN-00014 University of Helsinki, Finland Daniel R. Mason UK Atomic Energy Authority, Culham Science Centre, Oxfordshire OX14 3DB, UK Frederic Granberg [email protected] Department of Physics, University of Helsinki, Post-office box 43, FIN-00014 University of Helsinki, Finland ## I Convergence Tests To investigate how much of the annealing in cascade annealing (CA) happens due to the thermal part and how much due to the cascades, a few CRA samples were subjected to only thermal annealing (TA). We look both at how much effect the initial heat-up from 0 K to 300 K over 30 ps has on the evolution and at a longer 50 ns relaxation. The 50 ns relaxation is the same duration as the full 1600 CA steps. Three independent simulations at three different doses, 0.01, 0.03 and 0.2 cdpa, were selected for the longer thermal annealing. In Fig. S1(a) and (b) the convergence during CA and TA for one of the cases is shown, respectively. In Fig. S1(b), the solid symbols show the initial cells after the 30 ps "warm up" at room temperature (before the subsequent CA simulations). ## II Energy Deposition After careful inspection we found that about 1400\(\pm\)100 CA steps ensure that all defect levels reach a saturation state at the different initial doses. An interesting comparison arises if we combine this with the published results of Ref. [1]: for the 10 keV case, about 600\(\pm\)100 CA steps are needed to reach the saturation level. In Ref. [2], this value is about 1600\(\pm\)100 steps, for a different cell size and interatomic potential. Based on the total energy deposited in the system (the irradiation energies minus the electronic stopping powers, which are roughly 8000 eV and 2200 eV for 30 keV and 10 keV, respectively), a simple estimate of how much CA is needed, in the form of energy deposited per atom, can be made, as shown in Table 1. We can see that between 7 and 11 eV/atom is needed in order to fully converge the results. This value can be used in further studies as an initial rough estimate of how many CA steps are needed for a certain energy and simulation cell size. ## III Void Evolution by the VID Looking at the vacancy concentration evolution at different doses analyzed by the Void Isosurface Detection (VID) method, Fig. S2, we observe differences between the different simulation methods. To facilitate the distinction, we name the CA cases by the number of repetitions of the fixed FP-insertion process; e.g. "CA-35" means repeating the process of inserting 1000 FPs 35 times to reach an initial dose of 0.01 cdpa. We first compare the previous full MD results (10 keV PKAs in a 0.5 million atom cell [1]), shown as an orange solid line, with our full MD results (30 keV PKAs in a \(\sim\)3.5 million atom cell). 
Naturally, increasing the PKA energy introduces more defects into the cell; however, if we convert the data to defect concentration, as shown here, the trend of the vacancy concentration evolution is quite similar in the two cases. First an increasing trend is seen, followed by a slow climb until the saturation level is reached. At the same doses as the previous results, we see a slightly lower defect concentration for the higher-energy PKAs; however, the previous high-dose results are on a very similar level to our results. We observe good agreement with the saturation levels reported in the experiments of Ref. [3] (obtained by 20 MeV self-ion irradiation). Additionally, calculating the deuterium retention in our cells by assuming 5 D per vacancy, we get a value of approximately 1.5% deuterium retention, which is close to experimental values [2]. The CRA simulations show a dramatic increase in defect concentration during the whole process. The non-thermodynamic CRA simulations leave the randomly inserted FPs isolated and stable in the system. In addition, combining previous reports on cascade overlap effects in W [4] with the full MD results shown here, it can be seen that the linear increase in damage at the beginning (where no or little overlap is happening) progressively becomes a trend of zero damage increase with increasing dose. This process, however, plays a negligible part during the CRA simulations, thus leading to a dramatic increase in the degree of lattice disorder. For the TA cases, depicted as black filled stars in Fig. S2 (also appearing in the other figures), it can be seen that the concentration of vacancies is further reduced by \(\sim\)0.05 at.% in 50 ns compared to the 30 ps case. Thermal annealing will indeed reduce the number of defects, but it does not anneal out everything, as there is a discrepancy between the methods even though both of them are on the same timescales. The modest difference between the short thermalization and the long TA indicates that mainly the low-barrier events and close-by defects are recombined and annealed out. ## IV Defect evolution by the W-S method Fig. S3 shows the at.% defect concentration for all methodologies by the W-S method. ## V Volumetric swelling and potential increase In Fig. S4, the evolution of the volumetric swelling and the potential energy increase as a function of dose are shown. A similar conclusion for a different setup is also reflected in the recent work of Boleininger et al. [5], where their simulations showed that the formation of extra planes only led to a sudden drop in the dislocation volume while the box volume was not affected. Note that the volumetric swelling remains unaffected by the plastic-deformation-induced slip events associated with lattice-site swelling, as shown in the main manuscript. In addition, it can be found that the swelling caused by irradiation at different energies is almost the same at low doses \(\lessapprox\) 0.01 cdpa, with a value of about 0.2%, for both full MD and CRA+CA. A slight separation is observed at higher doses; both the full MD and CRA+CA results for 30 keV PKAs are slightly lower than those for 10 keV PKAs. One reason related to the PKA energy is that the self-annealing effect is relatively weaker at lower irradiation energies, which can explain why the damage level for 10 keV PKAs is higher than that for 30 keV PKAs at the same dose of \(\approx\)0.01 cdpa (Fig. S2). 
Despite minor differences, the values found in the experimental studies for different doses and different temperatures vary in the range of 0.2-0.4% [6; 7], close to both simulation cases. ## VI Cluster statistics Fig. S6 shows the number of vacancies found in different-sized clusters by the VID method. Fig. S7 shows the vacancies obtained by the W-S method. Except for the discontinuous fluctuations of the largest-size clusters (Fig. S7(f)), the evolution of vacancy clusters of different sizes detected by both methods shows good agreement between the full MD and CRA+CA cases. As mentioned in the main text, the defects are often spatially separated from each other when utilizing only CRA. This can be well observed in Supplementary Figs. S6(a) and S7(a): the number of mono-vacancies is observed to decrease with the subsequent CA, and finally stabilize at the saturation level. Di-vacancies also initially decrease during CA for the CA-35 set, and after around 30 CA steps the vacancy concentration then climbs to the saturation level of the full MD. In Fig. S6(b), di-vacancy clusters in the CA-50 set initially drop quickly to the full MD level and remain almost unchanged during the subsequent 1600 CA steps. The other, higher initial dose sets showed a similar trend to the CA-50 set, but required more CA steps before converging to the full MD results. For the tri-vacancy case, the two sets with the lowest initial doses show a climbing trend during CA, while the value of the CA-105 set fluctuates at the saturation level throughout the whole process, according to the VID method as shown in Fig. S6(c). Notably, the W-S method with a cluster size of 4-5 also exhibits the same trend as observed in the VID method, where the two lowest initial dose sets display an upward trend while the CA-105 case remains at a fluctuating level. Similarly, up to the 10+ size set, the emergence of large clusters shows an upward trend for all five sets of different initial doses. It should be emphasized again that the CA is of great importance for bringing the evolution of vacancy clusters, as well as the formation and evolution of interstitial clusters, to the correct saturation level. For the interstitial clustering discussion, please see the main text. ## VII Strain and corresponding defect evolution Fig. S9 to Fig. S11 illustrate the strain responses of the full MD simulations, which also show dislocation habit plane changes (which introduce the sudden fluctuations of the atomic strain). The corresponding defect evolution was analyzed using the VID and W-S methods to identify vacancies and interstitials. It should be noted that the concentration of vacancies and interstitials remains constant throughout the simulation, while the concentration of voids is slightly lower than the defects detected by the W-S method. The results of the CA simulations are shown in Figs. S12 to S26, for all cases investigated. It can be observed that in the simulations with a starting dose of 0.01, 0.0143, and 0.03 cdpa, the probability of excess plane formation is quite low, due to the lower number of defects. Out of the three independent simulations conducted for each of these cases, only two cases showed excess plane formation, specifically the second case with a starting dose of 0.0143 cdpa and the third case with a starting dose of 0.03 cdpa. No plastic deformation occurred in the other simulations in this low-dose regime. 
However, when the starting dose exceeded 0.1 cdpa, elasto-plastic deformation and rigid rotation were observed in all simulations, most of them accompanied by a swelling event (formation of extra lattice sites).
2308.07802
Neuromorphic Seatbelt State Detection for In-Cabin Monitoring with Event Cameras
Neuromorphic vision sensors, or event cameras, differ from conventional cameras in that they do not capture images at a specified rate. Instead, they asynchronously log local brightness changes at each pixel. As a result, event cameras only record changes in a given scene, and do so with very high temporal resolution, high dynamic range, and low power requirements. Recent research has demonstrated how these characteristics make event cameras extremely practical sensors in driver monitoring systems (DMS), enabling the tracking of high-speed eye motion and blinks. This research provides a proof of concept to expand event-based DMS techniques to include seatbelt state detection. Using an event simulator, a dataset of 108,691 synthetic neuromorphic frames of car occupants was generated from a near-infrared (NIR) dataset, and split into training, validation, and test sets for a seatbelt state detection algorithm based on a recurrent convolutional neural network (CNN). In addition, a smaller set of real event data was collected and reserved for testing. In a binary classification task, the fastened/unfastened frames were identified with an F1 score of 0.989 and 0.944 on the simulated and real test sets respectively. When the problem extended to also classify the action of fastening/unfastening the seatbelt, respective F1 scores of 0.964 and 0.846 were achieved.
Paul Kielty, Cian Ryan, Mehdi Sefidgar Dilmaghani, Waseem Shariff, Joe Lemley, Peter Corcoran
2023-08-15T14:27:46Z
http://arxiv.org/abs/2308.07802v1
Neuromorphic Seatbelt State Detection for In-Cabin Monitoring with Event Cameras ###### Abstract Neuromorphic vision sensors, or event cameras, differ from conventional cameras in that they do not capture images at a specified rate. Instead, they asynchronously log local brightness changes at each pixel. As a result, event cameras only record changes in a given scene, and do so with very high temporal resolution, high dynamic range, and low power requirements. Recent research has demonstrated how these characteristics make event cameras extremely practical sensors in driver monitoring systems (DMS), enabling the tracking of high-speed eye motion and blinks. This research provides a proof of concept to expand event-based DMS techniques to include seatbelt state detection. Using an event simulator, a dataset of 108,691 synthetic neuromorphic frames of car occupants was generated from a near-infrared (NIR) dataset, and split into training, validation, and test sets for a seatbelt state detection algorithm based on a recurrent convolutional neural network (CNN). In addition, a smaller set of real event data was collected and reserved for testing. In a binary classification task, the fastened/unfastened frames were identified with an F1 score of 0.989 and 0.944 on the simulated and real test sets respectively. When the problem extended to also classify the action of fastening/unfastening the seatbelt, respective F1 scores of 0.964 and 0.846 were achieved. **Keywords:** CNN, Driver Monitoring, Event Camera, Neuromorphic Sensing, Seatbelt ## 1 Introduction Neuromorphic vision describes a class of sensors designed to mimic biological perceptual functions. One such sensor is an event camera, which differs from a conventional camera in that each pixel records data asynchronously. Whenever one of these pixels detects a relative change in brightness above a set threshold an 'event' is logged. Each event is comprised of a timestamp, the coordinate of the pixel that reported the event, and a polarity to indicate whether an increase or decrease in brightness occurred. The event camera does not output images, but a list of events generated by motion or lighting changes in the scene. The event data has no intrinsic framerate, however, its time resolution exceeds that of video captured at 10,000 frames per second. Event cameras also offer higher dynamic range and lower power consumption than most conventional shutter cameras [14]. A 2018 meta-analysis found that a fastened seatbelt reduces the risk of injury in road collisions by 65% [20], and in the United States, seatbelt use was shown to reduce mortality by 72% [3]. Existing seatbelt alert systems in modern vehicles rely on pressure sensors in the seat to determine occupancy and simply detect if the seatbelt tongue is inserted in the buckle. This can easily be spoofed by buckling and sitting in front of the seatbelt, and has no ability to determine if a seatbelt has been fastened correctly. Also, it is often only implemented in the front seats of the vehicle. Camera-based seatbelt detection systems have the potential to rectify these flaws. With the ever-increasing demand for safer, more intelligent vehicles, there have been remarkable developments in camera-based DMS. At this stage they have been fully implemented in many modern consumer vehicles. With the camera systems already in place, it is possible to add new DMS features with minimal additional cost. 
Recent research has revealed how event cameras hold many advantages over standard shutter cameras for driver monitoring tasks, particularly when it comes to face and eye motion analysis (Ryan et al., 2021; Chen et al., 2020). In this paper, we demonstrate the viability of another feature in an event-based DMS by creating the first event-based seatbelt state detector. ## 2 Event Data Simulation and Collection An obstacle regularly faced in event camera research is the lack of publicly available large-scale datasets. This has driven the development of event simulators such as V2E (Delbruck et al., 2020), which enables the synthesis of realistic events from NIR or RGB videos by analysing the differences between consecutive frames. Most of the event data used for this research was simulated with V2E from a non-public industry dataset of NIR videos. Using a wide field of view camera on the rear-view mirror of a car, various subjects were recorded fastening and unfastening their seatbelts repeatedly. The video frames were labelled according to the following classes: (0) The subject's seatbelt is fastened. (1) The subject's seatbelt is unfastened. (2) The subject is fastening their seatbelt. (3) The subject is unfastening their seatbelt. A set of real event data was also collected for testing the network. A Prophesee EVK4 event camera was mounted beside the rear-view mirror of a driving simulator and focused on the driver's seat. Six subjects were asked to fasten and unfasten their seatbelt at random intervals throughout each recording. These videos were labelled manually with the same 4 classes as the NIR dataset. ## 3 Pre-processing of Event Data The event data, both simulated and real, are saved as lists of events in text format. To use this data in CNNs and other image-based systems, it must first be represented in a 2D array. This is typically achieved by accumulating a group of events and summing the positive and negative events at each pixel location to create a 2D frame (Gallego et al., 2022). When transforming an event recording into frames with this technique, the decision of how many events should be accumulated per frame must be carefully considered. The two most common approaches are to accumulate events over a fixed duration or accumulate a fixed number of events for each frame. The former method of grouping the events by a fixed duration is useful in tasks that could benefit from the temporal information in a sequence of frames as the generated frames will have fixed time spacing, much like conventional video formats. However, this approach is prone to generating frames with few events if there is little motion in the scene over the fixed duration. The alternative approach of forming each frame from a fixed number of events gives some assurance of a minimum amount of spatial information in each frame, at the loss of much of this temporal information. This works better for keeping the seatbelt visible when there is minimal motion in the frame, but motion of the head or background can quickly saturate the event count and generate many frames where the seatbelt is absent. A custom accumulation approach was developed for the final iteration of the dataset. Each frame was defined by a fixed number of events; however, only events within a rectangle bounding the subject's torso were counted. This maintained seatbelt visibility in more frames than the original two methods, as demonstrated in Fig. 1, where the fixed counts/durations were specified so that each method generates 75 frames of the same "Seatbelt Fastened" clip. Over the full 75 frames, the seatbelt was visible in (a) 27%, (b) 71%, and (c) 93% of frames. Figure 1: Frames from a "Seatbelt Fastened" event clip generated using (a) a fixed time period, (b) a fixed event count, and (c) a fixed event count over the torso region. 
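A minimal sketch of this torso-gated accumulation is given below. The event tuple order (t, x, y, polarity), the ROI convention (x0, y0, x1, y1) and the default resolution are illustrative assumptions rather than the dataset's actual format.

```python
import numpy as np

def accumulate_frames(events, n_per_frame, roi, resolution=(720, 1280)):
    """Accumulate events into 2D frames, starting a new frame only after a
    fixed number of events has been counted inside the torso ROI."""
    h, w = resolution
    frames, frame, roi_count = [], np.zeros((h, w), dtype=np.int16), 0
    x0, y0, x1, y1 = roi
    for t, x, y, p in events:
        x, y = int(x), int(y)
        frame[y, x] += 1 if p > 0 else -1      # signed sum of polarities per pixel
        if x0 <= x < x1 and y0 <= y < y1:
            roi_count += 1                      # only torso events advance the counter
        if roi_count >= n_per_frame:
            frames.append(frame)
            frame, roi_count = np.zeros((h, w), dtype=np.int16), 0
    return frames
```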
The fixed counts/duration were specified so that each method generates 75 frames from the same "Seatbelt Fastened" clip. Across the full 75 frames, the seatbelt was visible in (a) 27%, (b) 71%, and (c) 93% of frames (Figure 1: Frames from a "Seatbelt Fastened" event clip generated using (a) a fixed time period, (b) a fixed event count, and (c) a fixed event count over the torso region). The final dataset contained 108,691 synthetic event frames and 8,317 real event frames. The simulated videos were randomly separated into training, validation, and test sets. The real event videos were all reserved for testing. ## 4 Network Architecture It is difficult to distinguish individual fastening/unfastening frames, but the action becomes obvious when the whole sequence of frames is considered. Additionally, for the static classes with unreliable seatbelt visibility, using a sequence of frames can provide a more reliable result. For these reasons we used a recurrent CNN architecture which takes a frame sequence as the input for each prediction. Fig. 2 gives a high-level overview of the structure (Figure 2: Proposed network architecture). The MobileNetV2 network is used as an efficient, lightweight backbone for initial feature extraction [2]. Recent years have seen self-attention introduced to many CNN tasks for its ability to contextualize and apply a weighting to input features, with only a small computational cost. The self-attention module in our proposed network is implemented according to [11]. When attended feature maps have been generated for every frame of the input sequence, they are stacked and passed to the recurrent head of the network. This is comprised of 2 stacked bi-directional LSTM layers [1]. ## 5 Training In this work, two models were trained. The first was for binary classification of frame sequences using the static fastened/unfastened classes only. For the second model, all classes were included to determine if the 4 states could be reliably identified, as they must all be handled in a real-world implementation. To train the network, the videos were split into single-class sequences of 15 frames, before randomized cropping and downsampling to a resolution of 256x256. Using cross-entropy loss and a batch size of 15 sequences, the network was trained for 30 epochs. The initial learning rate of \(1\times 10^{-4}\) was halved every 5 epochs. An added benefit of the self-attention layer is that it allows us to visualize the areas in each frame that are most heavily weighted by the network. This can be helpful to verify that the network is utilizing appropriate features. Fig. 3 shows these weighted regions tracking the seatbelt when visualized on the real event videos in the test set (Figure 3: Visualized attention maps on test frames generated from real events). ## 6 Results and Conclusion The results of the 2-class model and 4-class model on both the simulated and real test sets are compared in Table 1. As expected, the 2-class model was more accurate, but the 4-class model demonstrates that handling all classes is possible without a dramatic reduction in performance. This model treats the 4 classes as independent, but we know they can only transition in a fixed sequence. Future work will leverage this fact for improved accuracy. ## Acknowledgements This research was conducted with the financial support of Science Foundation Ireland at ADAPT, the SFI Research Centre for AI-Driven Digital Content Technology at the University of Galway [13/RC/2106_P2].
For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
2305.06538
Use VQE to calculate the ground energy of hydrogen molecules on IBM Quantum
Quantum computing has emerged as a promising technology for solving problems that are intractable for classical computers. In this study, we introduce quantum computing and implement the Variational Quantum Eigensolver (VQE) algorithm using Qiskit on the IBM Quantum platform to calculate the ground state energy of a hydrogen molecule. We provide a theoretical framework of quantum mechanics, qubits, quantum gates, and the VQE algorithm. Our implementation process is described, and we simulate the results. Additionally, experiments are conducted on the IBM Quantum platform, and the results are analyzed. Our findings demonstrate that VQE can efficiently calculate molecular properties with high accuracy. However, limitations and challenges in scaling the algorithm for larger molecules are also identified. This work contributes to the growing body of research on quantum computing and highlights the potential applications of VQE for real-world problem-solving.
Maomin Qing, Wei Xie
2023-05-11T02:53:26Z
http://arxiv.org/abs/2305.06538v1
# Use VQE to calculate the ground energy of hydrogen molecules on IBM Quantum ###### Abstract Quantum computing has emerged as a promising technology for solving problems that are intractable for classical computers. In this study, we introduce quantum computing and implement the Variational Quantum Eigensolver (VQE) algorithm using Qiskit on the IBM Quantum platform to calculate the ground state energy of a hydrogen molecule. We provide a theoretical framework of quantum mechanics, qubits, quantum gates, and the VQE algorithm. Our implementation process is described, and we simulate the results. Additionally, experiments are conducted on the IBM Quantum platform, and the results are analyzed. Our findings demonstrate that VQE can efficiently calculate molecular properties with high accuracy. However, limitations and challenges in scaling the algorithm for larger molecules are also identified. This work contributes to the growing body of research on quantum computing and highlights the potential applications of VQE for real-world problem-solving. ## I Introduction Quantum computing is a rapidly growing field that explores the potential of quantum mechanics to develop new technologies[1]. Unlike classical computers which use binary digits (bits) to represent information, quantum computers use quantum bits (qubits) that can exist in superpositions of states, allowing for massive parallelism and the ability to solve problems faster than classical computers. In this study, we focus on the implementation of the Variational Quantum Eigensolver (VQE) algorithm[2] using Qiskit[3] on the IBM Quantum platform[4] to calculate the ground state energy of a hydrogen molecule. The calculation of molecular properties is of great significance in chemistry, as it can aid in the design of new drugs, catalysts, and materials. However, the computational complexity of solving the Schrodinger equation for large molecules using classical methods limits the accuracy and efficiency of these calculations. VQE has emerged as a promising algorithm for calculating molecular energies on quantum computers, offering significant speed-ups over classical methods. Quantum computing is an emerging field of computer science that utilizes the principles of quantum mechanics to perform calculations on data. In traditional computing, bits are used to represent information in binary form (0 or 1), whereas in quantum computing, qubits are used to store and manipulate information. Qubits are quantum systems that can exist in a superposition of states, which means they can represent both 0 and 1 at the same time. This property makes quantum computers capable of performing certain calculations exponentially faster than classical computers. One of the most famous algorithms in quantum computing is Shor's algorithm[5], which efficiently factors large numbers into their prime factors. This algorithm has important implications for cryptography, as it would allow for the efficient breaking of many commonly used encryption schemes. Another important algorithm in quantum computing is Grover's algorithm[6], which provides a quadratic speedup for searching unstructured databases. Grover's algorithm has numerous applications in fields such as optimization and machine learning. The basic building block of a quantum computer is a quantum gate, which is a unitary transformation that acts on one or more qubits. 
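As a minimal numerical illustration of a gate acting as a unitary matrix on a qubit state vector, the following NumPy sketch (independent of any quantum SDK) applies the NOT gate to \(\ket{0}\):

```python
import numpy as np

# A single-qubit state |psi> = alpha|0> + beta|1> is a normalized 2-vector.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The NOT (Pauli-X) gate is a 2x2 unitary matrix.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)

# Unitarity: X^dagger X = I, so norms (probabilities) are preserved.
assert np.allclose(X.conj().T @ X, np.eye(2))

# Applying the gate is a matrix-vector product: X|0> = |1>.
assert np.allclose(X @ ket0, ket1)
```

Multi-qubit gates are unitary matrices of size \(2^{n}\times 2^{n}\) obtained from tensor (Kronecker) products of such single-qubit blocks together with entangling gates.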
Some common examples of quantum gates include the Hadamard gate, which creates a superposition state, and the Pauli gates, which perform rotations around the x, y, and z axes. The state of a quantum system is represented by a ket vector, which is an element of a complex Hilbert space. In order to perform quantum computations, it is necessary to perform operations on qubits while maintaining their coherence. This is a significant challenge due to the susceptibility of quantum systems to decoherence from environmental interactions. Despite these challenges, there has been significant progress in the development of quantum hardware and algorithms in recent years. Many companies and research institutions are actively working on developing practical quantum computers that can be used to solve real-world problems. The VQE algorithm is a hybrid quantum-classical approach for finding the ground state energy of a given molecule on a quantum computer. The goal of the VQE algorithm is to find the lowest eigenvalue of the molecular Hamiltonian, which represents the ground energy of the molecule. In essence, the VQE algorithm involves preparing a trial wavefunction called Ansatz on the quantum computer, measuring its energy, and then optimizing the parameters of the wavefunction using a classical optimization algorithm. This process is repeated iteratively until an optimal set of parameters is found that minimizes the energy of the trial wavefunction. Given a molecular Hamiltonian \(\hat{H}\), we can express the ground state energy \(E_{0}\) as the minimum of the expecta tion value of \(\hat{H}\) with respect to some trial wavefunction \(\ket{\psi(\theta)}\) parametrized by a set of parameters \(\theta\): \[E_{0}=\min_{\theta}\bra{\psi(\theta)}\hat{H}\ket{\psi(\theta)} \tag{1}\] We can use a quantum computer to measure \(\bra{\psi}\hat{H}\ket{\psi}\), and a classical optimizer to vary the parameters \(\theta\) in order to minimize this expectation value. This process is repeated until convergence is achieved and an approximation of the ground state energy is obtained. This work will begin with an introduction to quantum computing and explain the principle behind the VQE algorithm. In addition, we utilized IBM's Quantum platform and Qiskit library to calculate the ground-state energy of a hydrogen molecule using the VQE algorithm for our research. Finally, we will summarize the current development direction of quantum algorithms and provide future outlooks. ## II Quantum Computing ### Represent of Qubits In quantum computing, wave functions and qubits are usually represented using Dirac notation, which is written in the form of a Ket vector. Typically, the state of a qubit can be represented by a ket vector as follow: \[\ket{\psi}=\alpha\ket{0}+\beta\ket{1}=\begin{bmatrix}\alpha\\ \beta\end{bmatrix} \tag{2}\] \[\ket{0}=\begin{bmatrix}1\\ 0\end{bmatrix},\ket{1}=\begin{bmatrix}0\\ 1\end{bmatrix}\] The \(\alpha\) and \(\beta\) are called probability amplitude, satisfy the normalization of probability, means \(\ket{\alpha}^{2}+\ket{\beta}^{2}=1\). The \(\ket{0}\) and \(\ket{1}\) are called standard basis vectors, correspond to 0 and 1 respectively. Qubits can also be represented in three-dimensional space by Bloch spheres, as shown in FIG. 1. 
At this time, since the length of the vector is specified as a unit length, the quantum state can be determined by the angle \(\phi\) between the projection of the vector on the \(x\)-\(y\) plane and the \(x\) axis and the angle \(\theta\) between the vector and the \(z\) axis. At this time, the state vector can be expressed as: \[\ket{\psi}=\cos\frac{\theta}{2}\ket{0}+e^{i\phi}\sin\frac{\theta}{2}\ket{1} \tag{3}\] One of the benefits of this representation is the ability to distinguish global phase. The so-called global phase is a factor whose norm is 1. The state of a system composed of \(n\) qubits is represented by the tensor product composite of the states of these \(n\) qubits, that is, it can be represented by a \(2^{n}\)-dimensional complex vector, where each element corresponds to a ground state composed of a qubit. For example, in a system of two qubits, if the first qubit is in state \(\ket{0}\) and the second qubit is in state \(\ket{1}\), the state of the entire system can be expressed as: \[\ket{\psi}=\ket{1}\otimes\ket{0}=\ket{10}=\begin{bmatrix}0\\ 0\\ 1\\ 0\end{bmatrix} \tag{4}\] Similar to the single-qubit case, we can also use the Dirac notation to represent the state of a multi-qubit system, namely: \[\ket{\psi}=\sum_{i=0}^{2^{n}-1}c_{i}\ket{i}_{n} \tag{5}\] where \(c_{i}\) is a complex coefficient, and \(\ket{i}_{n}\) denotes the \(i+1\) th ground state in a system of \(n\) qubits. In quantum computing, we usually only focus on the states associated with a certain measurement, i.e. if we measure the system, it will only be in one of the ground states. ### Operations on Qubits In quantum computing, all operations can be equivalently performed by applying a unitary matrix to a qubit. For example, if you want to flip the state vectors of qubits around their respective axes, that is, rotate \(\pi\) around the corresponding axis, this operation corresponds to the Pauli matrix. In quantum computing, Pauli matrices are also known as Pauli gates. \[X=\begin{bmatrix}0&1\\ 1&0\end{bmatrix},Y=\begin{bmatrix}0&-i\\ i&0\end{bmatrix},Z=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix} \tag{6}\] Generally, the identity matrix \(I\) is regarded as a kind of Pauli gate. Compared with the Pauli gate, there is also a Figure 1: An example of a state vector represented in a Bloch sphere special gate that rotates around a certain axis, called the rotation gates, denoted as \(R_{k}(\theta)\), where \(k\) represents the corresponding axis, and \(\theta\) represents the rotation angle. \[\begin{split} R_{x}(\theta)&=\exp(-iX\theta/2)= \begin{bmatrix}\cos(\theta/2)&-i\sin(\theta/2)\\ -i\sin(\theta/2)&\cos(\theta/2)\end{bmatrix}\\ R_{y}(\theta)&=\exp(-iY\theta/2)=\begin{bmatrix}\cos(\theta/2)&-\sin( \theta/2)\\ \sin(\theta/2)&\cos(\theta/2)\end{bmatrix}\\ R_{z}(\theta)&=\exp(-iZ\theta/2)=\begin{bmatrix}\exp(-i\theta/2)&0\\ 0&\exp(i\theta/2)\end{bmatrix}\end{split} \tag{7}\] According to Euler's formula \(\exp(i\pi)+1=0\), the \(Z\) gate can be squared to obtain the \(S\) gate, and the \(S\) gate can also be squared to obtain the \(T\) gate. Correspondingly, the \(S\) gate rotates \(\pi/2\) around the \(z\)-axis, while the \(T\) gate rotates \(\pi/4\). \[\begin{split} S&=Z^{1/2}\\ &=\begin{bmatrix}1&0\\ 0&i\end{bmatrix}=\exp(i\pi/4)R_{z}(\pi/2)\\ T&=S^{1/2}=Z^{1/4}\\ &=\begin{bmatrix}1&0\\ 0&\exp(i\pi/4)\end{bmatrix}=\exp(i\pi/8)R_{z}(\pi/4)\end{split} \tag{8}\] In quantum computing, there is another common operation, which is to prepare a uniformly distributed superposition state. 
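Before turning to the preparation of superposition states, the single-qubit gate identities stated in Eqs. (6)-(8) can be checked numerically; the following is a small NumPy sketch, not part of the original implementation:

```python
import numpy as np

Z = np.diag([1, -1]).astype(complex)
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])

def Rz(theta):
    # Rotation about the z axis, as in Eq. (7).
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

# Eq. (8): S = exp(i*pi/4) Rz(pi/2) and T = exp(i*pi/8) Rz(pi/4).
assert np.allclose(S, np.exp(1j * np.pi / 4) * Rz(np.pi / 2))
assert np.allclose(T, np.exp(1j * np.pi / 8) * Rz(np.pi / 4))

# T^2 = S and S^2 = Z, i.e. S = Z^(1/2) and T = Z^(1/4) up to the stated phases.
assert np.allclose(T @ T, S)
assert np.allclose(S @ S, Z)
```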
The so-called uniform distribution superposition state is a quantum superposition state, where \(\ket{0}\) and \(\ket{1}\) have the same probability, that is, \((1/\sqrt{2})(\ket{0}+\ket{1})\) and \((1/\sqrt{2})(\ket{0}-\ket{1})\). These two states can usually be represented by \(\ket{+}\) and \(\ket{-}\). A uniform superposition state is usually achieved using a Hadamard gate, denoted as \(H\). It introduces transitions between uniformly distributed superposition states and non-uniformly distributed states. The so-called non-uniformly distributed states are \(\ket{0}\) and \(\ket{1}\) for a qubit. \[\begin{split} H&=\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\ 1&-1\end{bmatrix}\\ H\ket{0}=\ket{+},\;H\ket{1}=\ket{-}\\ H\ket{+}=\ket{0},\;H\ket{-}=\ket{1}\end{split} \tag{9}\] For multi-qubit systems, the more common quantum gate is the controlled NOT gate (CNOT). CNOT is divided into two parts, the control qubit and the target qubit. If the state of the control qubit is \(\ket{1}\), it is equivalent to applying an \(X\) gate to the target qubit, otherwise it will do nothing to the target qubit. \[\begin{split}\text{CNOT}=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{bmatrix}\\ \text{CNOT}\ket{10}=\ket{11},\;\text{CNOT}\ket{11}=\ket{10}\\ \text{CNOT}\ket{--}=\ket{+-},\;\text{CNOT}\ket{+-}=\ket{--}\end{split} \tag{10}\] A set of quantum gates is said to be universal if there is a set of quantum gates such that all quantum operations, or quantum programs, can be approximated by sequences of gates in this set. Any quantum program can be represented by a sequence of quantum circuits and classical near-time computation. A more common universal gate set is \(\{\text{CNOT},SGs\}\)[7], where \(SGs\) represents all signle-qubit gates. Next is \(\{\text{CNOT},H,T\}\)[8]. ### Quantum Circuits In quantum computing, quantum circuits are graphical representations used to describe the evolution and manipulation of quantum states. It consists of a series of quantum gates, each of which represents an operation on one or more qubits. By constructing different quantum circuits, we can realize tasks such as various quantum algorithms and quantum communication protocols[7]. For example, the FIG. 2 shows the process of preparing entangled states using \(H\) gate and CNOT gate. ## III Use VQE to GET hydrogen molecule The VQE algorithm is a quantum algorithm for solving the smallest eigenvalue, and is usually used to solve the molecular ground state energy. The core idea of the VQE algorithm is to construct a parameterized quantum circuit, and then use a classical computer to optimize the quantum circuit so that the output state of the circuit is close to the target state, and calculate the expected value of the target Hamiltonian on this state. By minimizing this expected value, the minimum eigenvalue of the target Hamiltonian can be obtained, and then the solution to the molecular ground state energy can be completed. Before starting to solve the problem, we first organize the Hamiltonian and wave function of the hydrogen atom into the form required in quantum computing, that is, the form represented by the combination of quantum gates. Figure 2: An example shows that use \(H\) gate and CNOT gate to prepare entangled states In quantum mechanics, for a quantum state \(\ket{\psi}\), we can use the Hamiltonian operator to replace its energy. In other words, the system energy is the eigenvalue of its Hamiltonian, and the ground state energy is its minimum eigenvalue. 
\[\begin{split}\hat{H}\ket{\psi}=E\ket{\psi}\implies\bra{\psi}\hat{ H}\ket{\psi}=E\\ \bra{\psi}\hat{H}\ket{\psi}=E\geq E_{\text{min}}\end{split} \tag{11}\] In this way, our goal becomes to find the minimum value of \(\bra{\psi}\hat{H}\ket{\psi}\), that is, to find \(\min_{\theta}\bra{\psi(\theta)}\hat{H}\ket{\psi(\theta)}\). ### The Hamiltonian The Hamiltonian of a system can generally be split into two parts: momentum \(\hat{T}\) and potential energy \(\hat{V}\). where \(\hat{V}\) can usually be expressed as \(V(r)\), which is a function of distance. In an electronic system, the potential energy can be roughly divided into three parts, electrons and electrons, nucleus and nuclei, and electrons and nuclei, which can be expressed as follows: \[\begin{split}\hat{V}=V(r)=&\sum_{i,j}^{\text{ electrons}}\frac{e^{2}}{4\pi\epsilon_{0}\ket{r_{i}-r_{j}}}+\sum_{i,j}^{\text{ nuclei}}\frac{Z_{i}Z_{j}e^{2}}{4\pi\epsilon_{0}\ket{r_{1}-r_{j}}}\\ &-\sum_{i}^{\text{electrons}}\sum_{j}^{\text{nuclei}}\frac{Z_{j}e ^{2}}{4\pi\epsilon_{0}\ket{r_{i}-r_{j}}}\end{split} \tag{12}\] where \(Z_{j}\) is the number of protons in the \(j\)-th nucleus, \(e\) is the electronic charge. Kinetic energy can be divided into two parts: electron kinetic energy and atomic nuclear kinetic energy. \[\hat{T}=-\sum_{i}^{\text{nuclei}}\frac{\hbar^{2}}{2m_{i}}\nabla_{i}^{2}-\sum_{ i}^{\text{electrons}}\frac{\hbar^{2}}{2m_{e}}\nabla_{i}^{2} \tag{13}\] where \(m_{i}\) is the mass of a nucleus, \(m_{e}\) is the electron mass, \(\nabla_{i}^{2}\) is the Laplace operator of the \(i\)-th particle. For these two complex formulas, we can simplify them. First, we can assume that the nucleus is stationary, but this is relative to the electrons. Secondly, we use atomic units[9] and set some constants such as \(\hbar^{2}/m_{e}\) to \(1\), so that our formula can be simplified to the following form. \[\begin{split}\hat{H}&\approx-\sum_{i}^{\text{ electrons}}\frac{1}{2}\nabla_{i}^{2}+\sum_{i,j}^{\text{electrons}}\frac{1}{\ket{r_{i}-r_{j}}}\\ &-\sum_{i}^{\text{electrons}}\sum_{j}^{\text{nuclei}}\frac{Z_{j}}{ \ket{r_{i}-r_{j}}}+C_{n}^{\prime}\end{split} \tag{14}\] Further, we can divide the Hamiltonian into two parts, the one-particle part \(\sum_{i}h_{1}\) and the two-particle part \(\sum_{i}h_{2}\). Assuming that \(\psi_{j}\) represents the spin-orbit that constitutes the system, we will use the generation operator and annihilation operator to represent the single-particle part. \[\begin{split}\sum_{i}h_{1}(x_{i})=\sum_{p,q}\bra{p}\hat{H}\ket{q }\hat{a}_{p}^{\dagger}\hat{a}_{q}\\ \bra{p}\hat{H}\ket{q}\\ =\int_{-\infty}^{\infty}\psi_{p}^{\star}(x_{i})-\frac{1}{2}\nabla _{i}^{2}+\sum_{j}\frac{Z_{j}}{\ket{r_{i}-r_{j}}}\psi_{q}(x_{i})\text{d}x_{i}\\ =h_{pq}\end{split} \tag{15}\] The two-particle part can also be similarly expressed by the production operator and annihilation operator. \[\begin{split}\sum_{i,j}h_{2}(x_{i},x_{j})=\sum_{p,q,r,s}\bra{pq} \hat{H}\ket{rs}\hat{a}_{p}^{\dagger}\hat{a}_{q}^{\dagger}\hat{a}_{r}\hat{a}_{ s}\\ \bra{pq}\hat{H}\ket{rs}\\ =\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\psi_{p}^{\star} (x_{i})\psi_{q}^{\star}(x_{j})\frac{1}{\ket{x_{1}-x_{2}}}\psi_{r}(x_{j})\psi_{ s}(x_{i})\text{d}x_{i}\text{d}x_{j}\\ =h_{pqrs}\end{split} \tag{16}\] As mentioned above, we can get the final representation of our quadratic quantized Hamiltonian. 
\[\hat{H}=\sum_{p,q}h_{pq}\hat{a}_{p}^{\dagger}\hat{a}_{q}+\frac{1}{2}\sum_{p,q, r,s}h_{pqrs}\hat{a}_{p}^{\dagger}\hat{a}^{\dagger}\hat{a}_{r}\hat{a}_{s}+h_{0} \tag{17}\] where \(h_{0}\) is a correction constant used to correct errors due to simplification. Next, we also need to map the twice-quantized Hamiltonian into the quantum computer. Here, we have many options, such as using Jordan-Wigner transformation[10], or Bravyi-Kitaev transformation[11]. For simplicity, we use the Jordan-Wigner transform. \[\begin{split} a_{n}^{\dagger}&\mapsto\frac{1}{2} \left[\prod_{j=0}^{n-1}-Z_{j}\right](X_{n}-iY_{n})\\ a_{n}&\mapsto\frac{1}{2}\left[\prod_{j=0}^{n-1}-Z_{j }\right](X_{n}+iY_{n})\end{split} \tag{18}\] where \(X_{n}\), \(Y_{n}\) and \(Z_{n}\) are Pauli matrices in quantum computing, respectively. After doing this, the Hamiltonian of our hydrogen molecule can be expressed as: \[\begin{split}\hat{H}&=-\frac{1}{2}\left(\hat{I} \otimes\hat{I}+\hat{X}\otimes\hat{X}+\hat{Y}\otimes\hat{Y}+\hat{Z}\otimes\hat{Z }\right)\\ &+d\left(\hat{Z}\otimes\hat{I}+\hat{I}\otimes\hat{Z}\right),\end{split} \tag{19}\] ### The Wave Function In computational physics, usually, the wave function of our system can be expressed in the form of Slater determinant[12]. \[\psi_{(x_{1},x_{2},\cdots,x_{n})}=\frac{1}{\sqrt{N!}}\begin{vmatrix}\chi_{i}(x_{1 })&\chi_{j}(x_{1})&\cdots&\chi_{k}(x_{1})\\ \chi_{i}(x_{2})&\chi_{j}(x_{2})&\cdots&\chi_{k}(x_{2})\\ \vdots&\vdots&\ddots&\vdots\\ \chi_{i}(x_{n})&\chi_{j}(x_{n})&\cdots&\chi_{k}(x_{n})\end{vmatrix} \tag{20}\] where \(1/\sqrt{N!}\) is the normalization parameter and \(\chi_{m}(x_{l})\) represents the molecular orbital wavefunction of the \(m\)th orbital of the \(l\)th atom. Because it basically satisfies the format of quantum computing, sometimes we can express it as \(\left|\chi_{1}\cdots\chi_{k}\right\rangle\). But in order to make our calculation more convenient, we can quantize this wave function twice and express it in the form of occupation number representation. \[\left|\psi\right\rangle=\sum C_{i}\left|n_{1}n_{2}\cdots n_{m}\right\rangle \tag{21}\] Among them, \(C_{i}\) is a constant coefficient, while \(n_{i}\) represents the number of possession, and \(n_{i}\in\{0,1\}\) indicates whether the electron is in a certain state. For the wave function represented by the representation of the occupation number, we use the generation operator \(\hat{a}_{i}^{\dagger}\) and the annihilation operator \(\hat{a}_{i}\) to operate, that is, flip the corresponding occupancy number state, the generation operator can change the number of possession from 0 to 1, and the annihilation operator can change the number of possession from 1 to 0. ## IV Use Qiskit to Solve Theoretically, we need to prepare the objective function and quantum circuit by ourselves, that is, use the formula (17) and the formula (18) to perform quadratic Quantize and express it in the form of a quantum circuit. For example, for hydrogen molecule, its Hamiltonian can be represented by the formula (19). However, in the process of actually calculating the molecular ground state energy, we do not need to manually solve and simplify the molecular Hamiltonian. In chemistry, scientists already store some common molecular properties in some databases. When we calculate, in most cases, we only need to query the information of the corresponding molecule to get what we need. In Python, there is a library named PySCF[13], which stores the information of many elements and molecules. 
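For reference, PySCF can also be called directly to build the H\({}_{2}\) molecule and obtain its Hartree-Fock energy; the snippet below is a sketch of standard PySCF usage (with the geometry and basis chosen to match this work), separate from the Qiskit driver route described next:

```python
from pyscf import gto, scf

# H2 with a 0.725 angstrom bond length in the minimal STO-3G basis,
# matching the geometry and basis used elsewhere in this work.
mol = gto.M(atom="H 0 0 -0.3625; H 0 0 0.3625", basis="sto-3g", charge=0, spin=0)

mf = scf.RHF(mol)      # restricted Hartree-Fock mean-field object
e_hf = mf.kernel()     # total HF energy in Hartree (electronic + nuclear repulsion)
print("RHF energy:", e_hf)
```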
We can use the drivers or APIs provided by PySCF to query molecular information or build molecules relatively easily. For example, if we want to build a hydrogen molecule here, we can use the code shown in TABLE I to complete it. \begin{table} \begin{tabular}{c} molecule = Molecule( \\ geometry=[ \\ ["H", [0.0, 0.0, -dist / 2]], \\ ["H", [0.0, 0.0, dist / 2]]], \\ multiplicity=1, charge=0) \\ driver = ElectronicStructureMoleculeDriver( \\ molecule=molecule, \\ basis="sto3g", \\ driver_type=ElectronicStructureDriverType.PYSCF) \\ problem = ElectronicStructureProblem(driver, \\ [electronic.FreezeCoreTransformer( \\ freeze_core=True)]) \\ \end{tabular} \end{table} Table 1: Building Hydrogen Molecules Using PySCF After building the model, we also need to perform second quantization and mapping. These operations are already defined in Qiskit. We can use the second_q_ops method to obtain the operators after second quantization. Among the returned operators, the first one is the Hamiltonian we need, that is, hamiltonian = problem.second_q_ops()[0]. Second quantization alone is not enough; we also need to map the operator to a form that can be represented in quantum computing, and here we can directly use the Jordan-Wigner mapping to complete this step: mapper = JordanWignerMapper() converter = QubitConverter(mapper, two_qubit_reduction=False) qubit_op = converter.convert(hamiltonian) After processing, the Hamiltonian can be represented using Pauli operators: \[\begin{split} H=&-0.807184\ I\otimes I\otimes I\otimes I+0.175106\ Z\otimes I\otimes Z\otimes I\\ &+0.169404\ I\otimes Z\otimes I\otimes Z\\ &-0.230474\ (I\otimes I\otimes Z\otimes I+Z\otimes I\otimes I\otimes I)\\ &+0.173740\ (I\otimes I\otimes I\otimes Z+I\otimes Z\otimes I\otimes I)\\ &+0.045094\ (Y\otimes Y\otimes Y\otimes Y+X\otimes X\otimes Y\otimes Y)\\ &+0.045094\ (Y\otimes Y\otimes X\otimes X+X\otimes X\otimes X\otimes X)\\ &+0.166582\ (X\otimes I\otimes I\otimes Z+I\otimes Z\otimes Z\otimes I)\\ &+0.121488\ (Z\otimes Z\otimes I\otimes I+I\otimes I\otimes Z\otimes Z)\end{split} \tag{22}\] In order to use the VQE algorithm, we also need to parameterize our quantum circuit and choose a suitable initial state. The VQE algorithm itself has a certain degree of randomness, which does not guarantee that a converged result can be obtained, and a reasonable choice of the initial state can also accelerate the convergence of the algorithm. Here, we use UCCSD[14] as the ansatz, and the initial state of the circuit is initialized to \(\left|0101\right\rangle\) using the Hartree-Fock method. Our parameterized quantum circuit can be represented by FIG. 3. The three circuit gates connecting four qubits in the figure are circuit gates after packaging; the mark on each gate indicates which part the gate represents, and \(t\) is our parameter. In this way, the preparatory work is completed, and the next step is to run the VQE algorithm. Here, we choose to use the IBM Quantum online computing platform provided by IBM Corporation to execute our circuit. We can use the IBMQ interface to connect to the computing platform. After selecting the computing device to be used, we can start computing. Here we choose _ibmq_jakarta_.
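Before submitting anything to hardware, the qubit Hamiltonian of Eq. (22) can be diagonalized exactly on a classical computer to obtain a reference ground-state energy. The sketch below assumes a recent Qiskit with qiskit.quantum_info.SparsePauliOp and simply transcribes the coefficients of Eq. (22); it is a sanity check, not part of the original workflow:

```python
import numpy as np
from qiskit.quantum_info import SparsePauliOp

# Pauli strings and coefficients transcribed from Eq. (22); the tensor factors
# are written left to right exactly as they appear there.
H = SparsePauliOp.from_list([
    ("IIII", -0.807184),
    ("ZIZI",  0.175106), ("IZIZ",  0.169404),
    ("IIZI", -0.230474), ("ZIII", -0.230474),
    ("IIIZ",  0.173740), ("IZII",  0.173740),
    ("YYYY",  0.045094), ("XXYY",  0.045094),
    ("YYXX",  0.045094), ("XXXX",  0.045094),
    ("XIIZ",  0.166582), ("IZZI",  0.166582),
    ("ZZII",  0.121488), ("IIZZ",  0.121488),
])

# Lowest eigenvalue of the 16x16 Hermitian matrix; the spectrum does not
# depend on the qubit-ordering convention, so this is a safe reference.
e_min = np.linalg.eigvalsh(H.to_matrix()).min()
print("exact minimum eigenvalue (Hartree):", e_min)
```

Any VQE energy obtained on a simulator or on the hardware backend selected below can then be compared against this reference.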
```
# Imports for the IBMQ account handle and the least_busy backend helper.
from qiskit import IBMQ
from qiskit.providers.ibmq import least_busy

provider = IBMQ.enable_account(token)
backend = least_busy(
    provider.backends(
        filters=lambda x: x.configuration().n_qubits >= 6
        and not x.configuration().simulator
        and x.status().operational == True))
print("least busy backend:", backend)
```
Similarly, the VQE algorithm has already been implemented in Qiskit, so there is no need to implement it yourself. At the same time, the VQE algorithm needs to be used in conjunction with a classical optimization algorithm, and most of these algorithms are implemented in Qiskit and can be used directly. Here, for the classical optimization algorithm, considering the situation and accuracy of the quantum computer, the L-BFGS algorithm is chosen. The L-BFGS algorithm is an optimization algorithm in the family of quasi-Newton methods[15]. It approximates the Broyden-Fletcher-Goldfarb-Shanno algorithm using limited computer memory, and is used to minimize \(f(\mathbf{x})\) over an unconstrained vector \(\mathbf{x}\), where \(f\) is a differentiable function. As a comparison, we use the result obtained by the built-in ground-state energy method in Qiskit as the exact solution and compare it with the solution obtained using the VQE algorithm. ## V Summary We vary the atomic distance starting from 0.2 Å in steps of 0.05 Å and calculate the ground-state energy of the hydrogen molecular system at each distance, obtaining FIG. 4. It can be seen that the results of the VQE algorithm are in good agreement with the exact solution obtained with the numerical method, where the energy units are Hartree. Compared with the reference data, when the atomic z-coordinates are \(-0.3625\) and \(0.3625\) Å, that is, dist = 0.725 Å, the reference value is \(H_{h}=-1.117506\), while the value obtained by our VQE algorithm is \(H^{\prime}_{h}=-1.134167\); the error is about \(1.47\%\). Although the difference of \(0.016661\) is greater than the chemical precision of \(0.0016\), this is still a relatively accurate solution method. There may be three sources of error: 1. the error caused by assuming that the nucleus is stationary; 2. a certain error in the actual measurement; 3. the error caused by the randomness of the VQE algorithm itself. The second error cannot be eliminated, the first error can be reduced using a more accurate model, and the last error may be the most influential. Especially when running an algorithm on an actual quantum device, due to the limitations of current quantum devices, there is large background noise, which affects quantum coherence and thereby the result. ###### Acknowledgements. This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 11875178 and 12005114. The work of A.W. was supported by the start-up funding from China Three Gorges University. A.W. is also grateful for the support from the Chutian Scholar Program of Hubei Province.
2306.08045
Efficient 3D Semantic Segmentation with Superpoint Transformer
We introduce a novel superpoint-based transformer architecture for efficient semantic segmentation of large-scale 3D scenes. Our method incorporates a fast algorithm to partition point clouds into a hierarchical superpoint structure, which makes our preprocessing 7 times faster than existing superpoint-based approaches. Additionally, we leverage a self-attention mechanism to capture the relationships between superpoints at multiple scales, leading to state-of-the-art performance on three challenging benchmark datasets: S3DIS (76.0% mIoU 6-fold validation), KITTI-360 (63.5% on Val), and DALES (79.6%). With only 212k parameters, our approach is up to 200 times more compact than other state-of-the-art models while maintaining similar performance. Furthermore, our model can be trained on a single GPU in 3 hours for a fold of the S3DIS dataset, which is 7x to 70x fewer GPU-hours than the best-performing methods. Our code and models are accessible at github.com/drprojects/superpoint_transformer.
Damien Robert, Hugo Raguet, Loic Landrieu
2023-06-13T18:03:05Z
http://arxiv.org/abs/2306.08045v2
# Efficient 3D Semantic Segmentation with Superpoint Transformer ###### Abstract We introduce a novel superpoint-based transformer architecture for efficient semantic segmentation of large-scale 3D scenes. Our method incorporates a fast algorithm to partition point clouds into a hierarchical superpoint structure, which makes our preprocessing \(7\) times faster than existing superpoint-based approaches. Additionally, we leverage a self-attention mechanism to capture the relationships between superpoints at multiple scales, leading to state-of-the-art performance on three challenging benchmark datasets: S3DIS (76.0% mIoU 6-fold validation), KITTI-360 (63.5% on Val), and DALES (79.6%). With only \(212\)k parameters, our approach is up to \(200\) times more compact than other state-of-the-art models while maintaining similar performance. Furthermore, our model can be trained on a single GPU in \(3\) hours for a fold of the S3DIS dataset, which is \(7\times\) to \(70\times\) fewer GPU-hours than the best-performing methods. Our code and models are accessible at github.com/drprojects/superpoint_transformer. ## 1 Introduction As the expressivity of deep learning models increases rapidly, so do their complexity and resource requirements [15]. In particular, vision transformers have demonstrated remarkable results for 3D point cloud semantic segmentation [61, 41, 18, 25, 36], but their high computational requirements make them challenging to train effectively. Additionally, these models rely on regular grids or point samplings, which do not adapt to the varying complexity of 3D data: the same computational effort is allocated everywhere, regardless of the local geometry or radiometry of the point cloud. This issue leads to needlessly high memory consumption, limits the number of points that can be processed simultaneously, and hinders the modeling of long-range interactions. Superpoint-based methods [29, 26, 23, 45] address the limitation of regular grids by partitioning large point clouds into sets of points-- superpoints--which adapt to the local complexity. By directly learning the interaction between superpoints instead of individual points, these methods enable the analysis of large scenes with compact and parsimonious models that can be trained faster than standard approaches. However, superpoint-based methods often require a costly preprocessing step, and their range and expressivity are lim Figure 1: **Model Size vs. Performance. We visualize the performance of different methods on the S3DIS dataset (6-fold validation) in relation to their model size in log-scale. The area of the markers indicates the GPU-time to train on a single fold. Our proposed method Superpoint Transformer (SPT) achieves state-of-the-art with a reduction of up to \(200\)-fold in model size and \(70\)-fold in training time (in GPU-h) compared to recent methods. The even smaller SPT-nano model achieves a fair performance with \(26\)k parameters only.** ited by their use of local graph-convolution schemes [51]. In this paper, we propose a novel superpoint-based transformer architecture that overcomes the limitations of both approaches, see Figure 1. Our method starts by partitioning a 3D point cloud into a hierarchical superpoint structure that adapts to the local properties of the acquisition at multiple scales simultaneously. To compute this partition efficiently, we propose a new algorithm that is an order of magnitude faster than existing superpoint preprocessing algorithms. 
Next, we introduce the Superpoint Transformer (SPT) architecture, which uses a sparse self-attention scheme to learn relationships between superpoints at multiple scales. By viewing the semantic segmentation of large point clouds as the classification of a small number of superpoints, our model can accurately classify millions of 3D points simultaneously without relying on sliding windows. SPT achieves near state-of-the-art accuracy on various open benchmarks while being significantly more compact and able to train much quicker than common approaches. The main contributions of this paper are as follows: \({}^{\bullet}\) **Efficient Superpoint Computation:** We propose a new method to compute a hierarchical superpoint structure for large point clouds, which is more than 7 times faster than existing superpoint-based methods. Our preprocessing time is also comparable or faster than standard approaches, addressing a significant drawback of superpoint methods. \({}^{\bullet}\) **State-of-the-Art Performance:** Our model reaches performance at or close to the state-of-the-art for three open benchmarks with distinct settings: S3DIS for indoor scanning [3], KITTI-360 for outdoor mobile acquisitions [32], and DALES for city-scale aerial LiDAR [55]. \({}^{\bullet}\) **Resource-Efficient Models:** SPT is particularly resource-efficient as it only has \(212\)k parameters for S3DIS and DALES, a \(200\)-fold reduction compared to other state-of-the-art models such as PointNeXt [44] and takes \(70\) times fewer GPU-h to train than Stratified Transformer [25]. The even more compact SPT-nano reaches \(70.8\%\) 6-Fold mIoU on S3DIS with only \(26\)k parameters, making it the smallest model to reach above \(70\%\) by a factor of almost \(300\). ## 2 Related Work This section provides an overview of the main inspirations for this paper, which include 3D vision transformers, partition-based methods, and efficient learning for 3D data. 3D Vision Transformers.Following their adoption for image processing [10, 34], Transformer architectures [56] designed explicitly for 3D analysis have shown promising results in terms of performance [61, 18] and speed [41, 36]. In particular, the Stratified Transformer of Lai _et al._ uses a specific sampling scheme [25] to model long-range interactions. However, the reliance of 3D vision transformers on arbitrary K-nearest or voxel neighborhoods leads to high memory consumption, which hinders the processing of large scenes and the ability to leverage global context cues. Partition-Based Methods.Partitioning images into superpixels has been studied extensively to simplify image analysis, both before and after the widespread use of deep learning [1, 54]. Similarly, superpoints are used for 3D point cloud segmentation [40, 33] and object detection [19, 11]. SuperPointGraph [29] proposed to learn the relationship between superpoints using graph convolutions [51] for semantic segmentation. While this method trains fast, its preprocessing is slow and its expressivity and range are limited, as it operates on a single partition. Recent works have proposed ways of learning the superpoints themselves [26, 23, 53], which yields improved results but at the cost of an extra training step or a large point-based backbone [24]. Hierarchical partitions are used for image processing [2, 59, 60] and 3D analysis tasks such as point cloud compression [12] and object detection [7, 31]. Hierarchical approaches for semantic segmentation use Octrees with fixed grids [39, 48]. 
On the contrary, SPT uses a multi-scale hierarchical structure that adapts to the local geometry of the data. This leads to partitions that conform more closely to semantic boundaries, enabling the network to model the interactions between objects or object parts. Efficient 3D Learning.As 3D scans of real-world scenes can contain hundreds of millions of points, optimizing the efficiency of 3D analysis is an essential area of research. PointNeXt [44] proposes several effective techniques that allow simple and efficient methods [43] to achieve state-of-the-art performance. RandLANet [22] demonstrates that efficient sampling strategies can yield excellent results. Sparse [16] or hybrid [35] point cloud representations have also helped reduce memory usage. However, by leveraging the local similarity of dense point clouds, superpoint-based methods can achieve an input reduction of several orders of magnitude, resulting in unparalleled efficiency. ## 3 Method Our method has two key components. First, we use an efficient algorithm to segment an input point cloud into a compact multi-scale hierarchical structure. Second, a transformer-based network leverages this structure to classify the elements of the finest scale. ### Efficient Hierarchical Superpoint Partition We consider a point cloud \(\mathcal{C}\) with positional and radiometric information. To learn multiscale interactions, we compute a hierarchical partition of \(\mathcal{C}\) into geometrically-homogeneous superpoints of increasing coarseness; see Figure 2. We first define the concept of hierarchical partitions. **Definition 1**: **Hierarchical Partitions.** A partition of a set \(\mathcal{X}\) is a collection of subsets of \(\mathcal{X}\) such that each element of \(\mathcal{X}\) is in one and only one of such subsets. \(\mathcal{P}:=[\mathcal{P}_{0},\cdots,\mathcal{P}_{I}]\) is a hierarchical partition of \(\mathcal{X}\) if \(\mathcal{P}_{0}=\mathcal{X}\), and \(\mathcal{P}_{i+1}\) is a partition of \(\mathcal{P}_{i}\) for \(i\in[0,I-1]\). Throughout this paper, all functions or tensors related to a specific partition level \(i\) are denoted with an exponent \(i\). Hierarchical Superpoint Partitions.We propose an efficient approach for constructing hierarchical partitions of large point clouds. First, we associate each point \(c\) of \(\mathcal{C}\) with features \(f_{c}\) representing its local geometric and radiometric information. These features can be handcrafted [17] or learned [26, 23]. See the Appendix for more details on point features. We also define a graph \(\mathcal{G}\) encoding the adjacency between points usually based on spatial proximity, _e.g_. \(k\)-nearest neighbors. We view the features \(f_{c}\) for all \(c\) of \(\mathcal{C}\) as a signal \(f\) defined on the nodes of the graph \(\mathcal{G}\). Following the ideas of SuperPoint Graph [29], we compute an approximation of \(f\) into constant components by solving an energy minimization problem penalized with a graph-based notion of _simplicity_. The resulting constant components form a partition whose granularity is determined by a regularization strength \(\lambda>0\): higher values yield fewer and coarser components. For each component of the partition, we can compute the mean position (centroid) and feature of its elements, defining a coarser point cloud on which we can repeat the partitioning process. 
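To make Definition 1 concrete, a hierarchical partition can be stored as one parent-index array per level; the sketch below uses illustrative names and is not the authors' implementation:

```python
import numpy as np

class HierarchicalPartition:
    """P_0 = the points; level i > 0 stores, for each element of P_{i-1},
    the index of its parent superpoint in P_i (Definition 1)."""

    def __init__(self, num_points, parent_indices_per_level):
        self.num_points = num_points
        self.parents = parent_indices_per_level  # list of integer arrays

    def super_index(self, i):
        """Map every point of P_0 to its superpoint index at level i."""
        idx = np.arange(self.num_points)
        for level in range(i):
            idx = self.parents[level][idx]
        return idx

# Toy example: 6 points -> 3 superpoints at level 1 -> 2 superpoints at level 2.
P = HierarchicalPartition(
    num_points=6,
    parent_indices_per_level=[np.array([0, 0, 1, 1, 2, 2]),  # P_0 -> P_1
                              np.array([0, 0, 1])])          # P_1 -> P_2
print(P.super_index(2))  # [0 0 0 0 1 1]
```

The chain of parent arrays directly provides the parent(.) and children(.) relations used throughout this section.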
We can now compute a hierarchical partition \(\mathcal{P}:=[\mathcal{P}_{0},\cdots,\mathcal{P}_{I}]\) of \(\mathcal{C}\) from a list of regularization strengths \(\lambda_{1},\cdots,\lambda_{I}\). First, we set \(\mathcal{P}_{0}\) as the point cloud \(\mathcal{C}\) and \(f^{0}\) as the point features \(f\). Then, for \(i=1\) to \(I\), we compute (i) a partition \(\mathcal{P}_{i}\) of \(f^{i-1}\) penalized with \(\lambda_{i}\); (ii) the mean signal \(f^{i}\) for all components of \(\mathcal{P}_{i}\). The coarseness of the resulting partitions \([\mathcal{P}_{0},\cdots,\mathcal{P}_{I}]\) is thus strictly increasing. See the Appendix for a more detailed description of this process. Hierarchical Graph Structure.A hierarchical partition defines a polytree structure across the different levels. Let \(p\) be an element of \(\mathcal{P}_{i}\). If \(i\in[0,I-1]\), \(\operatorname{parent}(p)\) is the component of \(\mathcal{P}_{i+1}\) which contains \(p\). If \(i\in[1,I]\), \(\operatorname{children}(p)\) is the set of components of \(\mathcal{P}_{i-1}\) whose parent is \(p\). Superpoints also share adjacency relationships with superpoints _of the same partition level_. For each level \(i\geq 1\), we build a _superpoint-graph_\(\mathcal{G}_{i}\) by connecting adjacent components of \(\mathcal{P}_{i}\), _i.e_. superpoints whose closest points are within a distance gap \(\epsilon_{i}>0\). For \(p\in\mathcal{P}_{i}\), we denote \(\mathcal{N}(p)\subset\mathcal{P}_{i}\) the set of neighbours of \(p\) in the graph \(\mathcal{G}_{i}\). More details on the superpoint-graph construction can be found in the Appendix. Hierarchical Parallel \(\ell_{0}\)-Cut Pursuit.Computing the hierarchical components involves solving a recursive sequence of non-convex, non-differentiable optimization problems on large graphs. We propose an adaptation of the \(\ell_{0}\)-cut pursuit algorithm [28] to solve this problem. To improve efficiency, we adapt the graph-cut parallelization strategy initially introduced by Raguet _et al_. [46] in the convex setting. ### Superpoint Transformer Our proposed SPT architecture draws inspiration from the popular U-Net [50, 14]. However, instead of using grid, point, or graph subsampling, our approach derives its different resolution levels from the hierarchical partition \(\mathcal{P}\). General Architecture.As represented in Figure 3, SPT comprises an encoder with \(I\) stages and a decoder with \(I-1\) stages: the prediction takes place at the level \(\mathcal{P}_{1}\) and not on individual points. We start by computing the relative positions \(x\) of all points and superpoints with respect to their parent. For a superpoint \(p\in\mathcal{P}_{i}\), we define \(x^{i}_{p}\) as the position of the centroid of \(p\) relative to its parent's. The coarsest superpoints of \(\mathcal{P}_{I}\) have no parent and use the center of the scene as a reference centroid. We then normalize these values so that the sets \(\{x^{i}_{p}|p\in\operatorname{children}(q)\}\) have a radius of \(1\) for all \(q\in\mathcal{P}_{i+1}\). We compute features for each 3D point by Figure 2: **Superpoint Transformer. Our method takes as input a point cloud a) and computes its hierarchical partition into geometrically homogeneous superpoints at multiple scales: c) and e). 
For all partition levels, we construct superpoint adjacency graphs d) and f), which are used by an attention-based network to classify the finest superpoints.** using a multi-layer perceptron (MLP) to mix their relative positions and handcrafted features: \(g^{0}:=\phi^{0}_{\text{enc}}([x^{0},f^{0}])\), with \([\cdot,\cdot]\) the channelwise concatenation operator. Each level \(i\geq 1\) of the encoder maxpools the features of the finer partition level \(i-1\), adds relative positions \(x^{i}\) and propagates information between neighboring superpoints in \(\mathcal{G}_{i}\). For a superpoint \(p\) in \(\mathcal{P}_{i}\), this translates as: \[g^{i}_{p}=\mathcal{T}^{i}_{\text{enc}}\circ\phi^{i}_{\text{enc}}\left(\left[x ^{i}_{p},\max_{q\in\operatorname{children}(p)}\left(g^{i-1}_{q}\right)\right]\right) \tag{1}\] with \(\phi^{i}_{\text{enc}}\) an MLP and \(\mathcal{T}^{i}_{\text{enc}}\) a transformer module explained below. By avoiding communication between the 3D points of \(\mathcal{P}_{0}\), we bypass a potential computational bottleneck. The decoder passes information from the coarser partition level \(i+1\) to the finer level \(i\). It uses the relative positions \(x^{i}\) and the encoder features \(g^{i}\) to improve the spatial resolution of its feature maps \(h^{i}\)[50]. For a superpoint \(p\) in partition \(\mathcal{P}_{i}\) with \(1\leq i<I-1\), this can be expressed as: \[h^{i}_{p}=\mathcal{T}^{i}_{\text{dec}}\circ\phi^{i}_{\text{dec}}\left(\left[ x^{i}_{p},g^{i}_{p},h^{i+1}_{\text{parent}(p)}\right]\right) \tag{2}\] with \(h^{I}=g^{I}\), \(\phi^{i}_{\text{dec}}\) an MLP, and \(\mathcal{T}^{i}_{\text{dec}}\) an attention-based module similar to \(\mathcal{T}^{i}_{\text{enc}}\). Self-Attention Between Superpoints.We propose a variation of graph-attention networks [57] to propagate information between neighboring superpoints of the same partition level. For each level of the encoder and decoder, we associate to superpoint \(p\in\mathcal{P}_{i}\) a triplet of key, query, value vectors \(K_{p},Q_{p},V_{p}\) of size \(D_{\text{key}},D_{\text{key}}\) and \(D_{\text{val}}\). These values are obtained by applying a linear layer to the corresponding feature map \(m\) after GraphNorm normalization [5]. We then characterize the relationship between two superpoints \(p\), \(q\) of \(\mathcal{P}_{i}\) adjacent in \(\mathcal{G}_{i}\) by a triplet of features \(a^{\text{key}}_{p,q},a^{\text{qae}}_{p,q},a^{\text{val}}_{p,q}\) of dimensions \(D_{\text{key}},D_{\text{key}}\) and \(D_{\text{val}}\), and whose computation is detailed in the next section. Given a superpoint \(p\), we stack the vectors \(a^{\text{key}}_{p,q},a^{\text{que}}_{p,q}\), \(a^{\text{val}}_{p,q}\) for \(q\in\mathcal{N}(p)\) in matrices \(A^{\text{key}}_{p},A^{\text{que}}_{p},A^{\text{val}}_{p}\) of dimensions \(|\mathcal{N}(p)|\times D_{\text{key}}\) or \(|\mathcal{N}(p)|\times D_{\text{val}}\). The modules \(\mathcal{T}^{i}_{\text{enc}}\) and \(\mathcal{T}^{i}_{\text{dec}}\) gather contextual information as follows: \[\left[\mathcal{T}(m)\right]_{p}^{\;\pm}\operatorname{att}(Q^{\intercal}_{p} \oplus A^{\text{que}}_{p},K_{\mathcal{N}(p)}+A^{\text{key}}_{p},V_{\mathcal{N }(p)}+A^{\text{val}}_{p})\;, \tag{3}\] with \(\overset{\pm}{=}\) a residual connection [20], \(\oplus\) the addition operator with broadcasting on the first dimension, and \(K_{\mathcal{N}(p)}\) the matrix of stacked vectors \(K_{q}\) for \(q\in\mathcal{N}(p)\). 
The attention mechanism writes as follows: \[\operatorname{att}(Q,K,V):=V^{\intercal}\operatorname{softmax}\left(\frac{Q \odot K\mathbf{1}}{\sqrt{|\mathcal{N}(p)|}}\right)\;, \tag{4}\] with \(\odot\) the Hadamard termwise product and \(\mathbf{1}\) a column-vector with \(D_{\text{key}}\) ones. Our proposed scheme is similar to classic attention schemes with two differences: (i) the queries adapt to each neighbor, and (ii) we normalize the softmax with the neighborhood size instead of the key dimension. In practice, we use multiple independent attention modules in parallel (multi-head attention) and several consecutive attention blocks. ### Leveraging the Hierarchical Graph Structure The hierarchical superpoint partition \(\mathcal{P}\) can be used for more than guidance for graph pooling operations. Indeed, we can learn expressive adjacency encodings capturing the complex adjacency relationships between superpoints and employ powerful supervision and augmentation strategies based on the hierarchical partitions. Adjacency Encoding.While the adjacency between two 3D points is entirely defined by their distance vector, the relationships between superpoints are governed by additional factors such as their alignment, proximity, and difference in sizes or shapes. We characterize the adjacency of pairs of adjacent superpoints of the same partition level using a set of handcrafted features based on: (i) the relative positions of centroids, (ii) position of paired points in each superpoints, (iii) the superpoint principal directions, and (iv) the ratio between the superpoints' length, volume, surface, and point count. These features are efficiently computed only once during preprocessing. For each pair of superpoints \((p,q)\) adjacent in \(\mathcal{G}_{i}\), we jointly compute the concatenated \(a^{\text{key}}_{p,q},a^{\text{que}}_{p,q},a^{\text{val}}_{p,q}\) by applying an MLP \(\phi^{i}_{\text{adj}}\) to the handcrafted adjacency features defined above. Further details on the superpoint-graph construction and specific adjacency features are provided in the Appendix. Figure 3: **Superpoint Transformer.** We represent our proposed architecture with two partitions levels \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\). We use a transformer-based module to leverage the context at different scales, leading to large receptive fields. We only classify the superpoints of the partition \(\mathcal{P}_{1}\) and not individual 3D points, allowing fast training and inference. Hierarchical Supervision.We propose to take advantage of the nested structure of the hierarchical partition \(\mathcal{P}\) into the supervision of our model. We can naturally associate the superpoints of any level \(i\geq 1\) with a set of 3D points in \(\mathcal{P}_{0}\). The superpoints at the finest level \(i=1\) are almost semantically pure (see Figure 6), while the superpoints at coarser levels \(i>1\) typically encompass multiple objects. Therefore, we use a dual learning objective: (i) we predict the most frequent label within the superpoints of \(\mathcal{P}_{1}\), and (ii) we predict the label distribution for the superpoints of \(\mathcal{P}_{i}\) with \(i>1\). We supervise both predictions with the cross-entropy loss. Let \(y^{i}_{p}\) denote the true label distribution of the 3D points within a superpoint \(p\in\mathcal{P}_{i}\), and \(\hat{y}^{i}_{p}\) a one-hot-encoding of its most frequent label. 
We use a dedicated linear layer at each partition level to map the decoder feature \(g^{i}_{p}\) to a predicted label distribution \(z^{i}_{p}\). Our objective function can be formulated as follows: \[\mathcal{L}=\sum_{p\in\mathcal{P}_{1}}\frac{-N^{1}_{p}}{|\mathcal{C}|}H(\hat{ y}^{1}_{p},z^{1}_{p})+\sum_{i=2}^{I}\sum_{p\in\mathcal{P}_{i}}\frac{\mu^{i}N^{i}_{ p}}{|\mathcal{C}|}H(y^{i}_{p},z^{i}_{p})\, \tag{5}\] where \(\mu^{2},\cdots,\mu^{I}\) are positive weights, \(N^{i}_{p}\) represents the number of points within a superpoint \(p\in\mathcal{P}_{i}\), and \(|\mathcal{C}|\) is the total number of points in the point cloud, and \(H(y,z)=-\sum_{k\in\mathcal{K}}y_{k}\log(z_{k})\) and \(\mathcal{K}\) the class set. Superpoint-Based Augmentations.Although our approach classifies superpoints rather than individual 3D points, we still need to load the points of \(\mathcal{P}_{0}\) in memory to embed the superpoints from \(\mathcal{P}_{1}\). However, since superpoints are designed to be geometrically simple, only a subset of their points is needed to characterize their shape. Therefore, when computing the feature \(g^{1}_{p}\) of a superpoint \(p\) of \(\mathcal{P}_{1}\) containing \(n\) points with Eq. (1), we sample only a portion \(\tanh(n/n_{\text{max}})\) of its points, with a minimum of \(n_{\text{min}}\). This sampling strategy reduces the memory load and acts as a powerful data augmentation. The lightweight version of our model SPT-nano goes even further. It ignores the points entirely and only use handcrafted features to embed the superpoints of \(\mathcal{P}_{1}\), thus avoiding entirely the complexity associated with the size of the input point cloud \(\mathcal{P}_{0}\). To further augment the data, we exploit the geometric consistency of superpoints and their hierarchical arrangement. During the batch construction, we randomly drop each superpoint with a given probability at all levels. Dropping superpoints at the fine levels removes random objects or object parts, while dropping superpoints at the coarser levels removes entire structures such as walls, buildings, or portions of roads, for example. ## 4 Experiments We evaluate our model on three diverse datasets described in Section 4.1. In Section 4.2, we evaluate our approach in terms of precision, but also quantify the gains in terms of pre-processing, training, and inference times. Finally, we propose an extensive ablation study in Section 4.3. ### Datasets and Models Datasets.To demonstrate its versatility, we evaluate SPT on three large-scale datasets of different natures. **S3DIS**[3]. This indoor dataset of office buildings contains over \(274\) million points across \(6\) building floors--or areas. The dataset is organized by individual rooms, but can also be processed by considering entire areas at once. **KITTI-360**[32]. This outdoor dataset contains more than \(100\) k laser scans acquired in various urban settings on a mobile platform. We use the _accumulated point clouds_ format, which consists of large scenes with around \(3\) million points. There are \(239\) training scenes and \(61\) for validation. **DALES**[55]. This \(10\) km\({}^{2}\) aerial LiDAR dataset contains \(500\) millions of points across \(40\) urban and rural scenes, including \(12\) for evaluation. We subsample the datasets using a \(3\)cm grid for S3DIS, and \(10\)cm for KITTI-360 and DALES. All accuracy metrics are reported for the full, unsampled point clouds. 
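The grid subsampling mentioned above can be sketched as a simple voxel-grid decimation that keeps one point per occupied voxel; this is a generic illustration under that assumption, not the authors' exact preprocessing code:

```python
import numpy as np

def grid_subsample(points, voxel_size):
    """Keep one point per occupied voxel of side `voxel_size` (in meters).

    `points` is an (N, 3) float array; returns the indices of the kept points,
    here the first point encountered in each voxel.
    """
    voxels = np.floor(points / voxel_size).astype(np.int64)
    # One representative index per unique voxel.
    _, keep = np.unique(voxels, axis=0, return_index=True)
    return np.sort(keep)

# Example: a 3 cm grid as used for S3DIS.
pts = np.random.rand(100_000, 3) * np.array([10.0, 10.0, 3.0])
kept = grid_subsample(pts, voxel_size=0.03)
print(f"{len(kept)} of {len(pts)} points kept")
```

A common variant keeps the per-voxel centroid (and averaged features) instead of a single representative point.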
We use a two-level partition (\(I=2\)) with \(\mu^{2}=50\) for all datasets and select the partition parameters to obtain a \(30\)-fold reduction between \(\mathcal{P}_{1}\) and \(\mathcal{P}_{0}\) and a further \(5\)-fold reduction for \(\mathcal{P}_{2}\). See Table 1 for more details. \begin{table} \begin{tabular}{l l l l l} \hline \hline Dataset & Points & Subsampled & \(|\mathcal{P}_{1}|\) & \(|\mathcal{P}_{2}|\) \\ \hline S3DIS [3] & 273m & 32m & 979k & 292k \\ DALES [55] & 492m & 449m & 14.8m & 2.56m \\ KITTI-360 [32] & 919m & 432m & 16.2m & 2.98m \\ \hline \hline \end{tabular} \end{table} Table 1: **Partition Configuration. We report the point count of different datasets before and after subsampling, as well as the size of the partitions.** Models. We use the same model configuration for all three datasets with minimal adaptations. All transformer modules have a shared width \(D_{\text{val}}\), a small key space of dimension \(D_{\text{key}}=4\), \(16\) heads, with \(3\) blocks in the encoder and \(1\) in the decoder. We set \(D_{\text{val}}=64\) for S3DIS and DALES (\(210\)k parameters), and \(D_{\text{val}}=128\) (\(777\)k parameters) for KITTI-360. See the Appendix and our open repository for the detailed configuration of all modules. We also propose SPT-nano, a lightweight version of our model that does not compute point-level features but operates directly on the first partition level \(\mathcal{P}_{1}\). The value of the maxpool over points in Eq. (1) for \(i=1\) is replaced by \(f^{1}\), the aggregated handcrafted point features at level \(1\) of the partition. This model never considers the full point cloud \(\mathcal{P}_{0}\) but only operates on the partitions. For this model, we set \(D_{\text{val}}=16\) for S3DIS and DALES (\(26\)k parameters), and \(D_{\text{val}}=32\) for KITTI-360 (\(70\)k parameters). Batch Construction. Batches are sampled from large _tiles_: entire building floors for S3DIS, and full scenes for KITTI-360 or DALES. Each batch is composed of \(4\) randomly sampled portions of the partition with a radius of \(7\) m for S3DIS and \(50\) m for KITTI-360 and DALES, allowing us to model long-range interactions. During training, we apply a superpoint dropout rate of \(0.2\) for each superpoint at all hierarchy levels, as well as random rotation, tilting, point jitter, and hand-crafted feature dropout. When sampling points within each superpoint, we set \(n_{\text{min}}=32\) and \(n_{\text{max}}=128\). Optimization. We use the AdamW optimizer [38] with default parameters, a weight decay of \(10^{-4}\), and a learning rate of \(10^{-2}\) for DALES and KITTI-360 and \(10^{-1}\) for S3DIS. The learning rate for the attention modules is \(10\) times smaller than for the other weights. Learning rates are warmed up from \(10^{-6}\) for \(20\) epochs and progressively reduced to \(10^{-6}\) with cosine annealing [37].
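For concreteness, this optimization recipe (AdamW, a \(10\times\) smaller learning rate for the attention modules, linear warm-up followed by cosine annealing) can be sketched as follows in PyTorch; the module names, epoch count, and exact warm-up/annealing boundaries are illustrative assumptions, not the authors' training code.

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import LinearLR, CosineAnnealingLR, SequentialLR

model = nn.ModuleDict({          # stand-in for the SPT encoder/decoder modules
    "attention": nn.Linear(64, 64),
    "other": nn.Linear(64, 64),
})
base_lr, warmup_epochs, total_epochs = 1e-2, 20, 400   # illustrative values (1e-1 for S3DIS)

optimizer = torch.optim.AdamW(
    [
        {"params": model["attention"].parameters(), "lr": base_lr / 10},
        {"params": model["other"].parameters(), "lr": base_lr},
    ],
    weight_decay=1e-4,
)

# Warm up from ~1e-6 to the base LR, then decay back to 1e-6 with cosine annealing.
scheduler = SequentialLR(
    optimizer,
    schedulers=[
        LinearLR(optimizer, start_factor=1e-6 / base_lr, total_iters=warmup_epochs),
        CosineAnnealingLR(optimizer, T_max=total_epochs - warmup_epochs, eta_min=1e-6),
    ],
    milestones=[warmup_epochs],
)

for epoch in range(total_epochs):
    optimizer.step()       # placeholder for one real training epoch
    scheduler.step()
```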
### Quantitative Evaluation Performance Evaluation. As seen in Table 2, SPT performs at the state of the art on two of the three datasets despite being a significantly smaller model. On S3DIS, SPT beats PointNeXt-XL with \(196\times\) fewer parameters. On KITTI-360, SPT outperforms MinkowskiNet despite a size ratio of \(49\), and surpasses the performance of the even larger multimodal point-image model DeepViewAgg. On DALES, SPT outperforms ConvPoint by more than \(12\) points with over \(21\) times fewer parameters. Although SPT is \(1.5\) points behind KPConv on this dataset, it achieves these results with \(67\) times fewer parameters. SPT achieves significant performance improvements over all superpoint-based methods on all datasets, ranging from \(7\) to \(14\) points. SPT overtakes the SSP and SPNet superpoint methods that _learn_ the partition in a two-stage training setup, leading to pre-processing times of several hours. Interestingly, the lightweight SPT-nano model matches KPConv and MinkowskiNet with only \(26\)k parameters. See Figure 4 for qualitative illustrations. Figure 4: **Qualitative Results. We show input samples (with color or intensity) and the predictions of our approach for all three datasets. Additionally, we show the coarsest partition level and demonstrate how superpoints can accurately capture the contours of complex objects and classify them accordingly. Black points are unlabeled in the ground truth.** Preprocessing Speed. As reported in Table 3, our implementation of the preprocessing step is highly efficient. We can compute partitions, superpoint-graphs, and handcrafted features, and perform I/O operations quickly: \(12.4\) min for S3DIS, \(117\) min for KITTI-360, and \(148\) min for DALES using a server with a 48-core CPU. An 8-core workstation can pre-process S3DIS in \(26.6\) min. Our preprocessing time is as fast or faster than that of point-level methods and \(7\times\) faster than SuperPoint Graph's, thus alleviating one of the main drawbacks of superpoint-based methods. Training Speed. We trained several state-of-the-art methods from scratch and report in Figure 5 the evolution of test performance as a function of training time. We used the official training logs for the multi-GPU Point Transformer and Stratified Transformer. SPT trains much faster than all methods not based on superpoints while attaining similar performance. Although Superpoint Graph trains even faster, its performance saturates earlier, \(6.0\) mIoU points below SPT. We also report the inference time of our method in Table 3, which is significantly lower than that of competing approaches, with a speed-up factor ranging from \(8\) to \(80\). All speed measurements were conducted on a single-GPU server (48 cores, 512 GB of RAM, A40 GPU). Nevertheless, our model can be trained on a standard workstation (8 cores, 64 GB of RAM, 2080Ti) with smaller batches, taking only \(1.5\) times longer and reaching comparable results. SPT performs on par with or better than complex models with up to two orders of magnitude more parameters and significantly longer training times. Such efficiency and compactness have many benefits for real-world scenarios where hardware, time, or energy may be limited. ### Ablation Study We evaluate the impact of several design choices in Table 4 and report our observations here. a) Handcrafted features. Without handcrafted point features, our model performs worse on all datasets.
This observation is in line with other works which also remarked the \begin{table} \begin{tabular}{l l l l l l} \hline \hline Model & Size & S3DIS & KITTI & \multirow{2}{*}{DALES} \\ & \(\times 10^{6}\) & 6-Fold & Area 5 & 360 val & \\ \hline PointNet++ [43] & 3.0 & 56.7 & - & - & 68.3 \\ \(\dagger\) SPG [29] & 0.28 & 62.1 & 58.0 & - & 60.6 \\ ConvPoint [4] & 4.7 & 68.2 & - & - & 67.4 \\ \(\dagger\) SPG + SSP [26] & 0.29 & 68.4 & 61.7 & - & - \\ \(\dagger\) SPNet [23] & 0.32 & 68.7 & - & - & - \\ MinkowskiNet [8, 6] & 37.9 & 69.1 & 65.4 & 58.3 & - \\ RandLANet [22] & 1.2 & 70.0 & - & - & - \\ KPConv [52] & 14.1 & 70.6 & 67.1 & - & **81.1** \\ Point Trans.[61] & 7.8 & 73.5 & 70.4 & - & - \\ RepSurf-U [47] & 0.97 & 74.3 & 68.9 & - & - \\ DeepViewAgg [49] & 41.2 & 74.7 & 67.2 & 62.1 & - \\ Strat. Trans. [25, 58] & 8.0 & 74.9 & **72.0** & - & - \\ PointNeXt-XL [44] & 41.6 & 74.9 & 71.1 & - & - \\ \hline \hline \(\dagger\)**SPT** (ours) & 0.21 & **76.0** & 68.9 & **63.5\({}^{*}\)** & 79.6 \\ \(\dagger\)**SPT-nano** (ours) & **0.026** & 70.8 & 64.9 & 57.2\({}^{*}\) & 75.2 \\ \hline \hline \end{tabular} \end{table} Table 2: **Performance Evaluation. We report the Mean Intersection-over-Union of different methods on three different datasets. SPT performs on par or better than recent methods with significantly fewer parameters. \(\dagger\) superpoint-based. \(\star\)/\(*\) model with \(777\)k/\(70\)k parameters.** Figure 5: **Training Speed. We report the evolution of the test mIoU for S3DIS Area 5 for different methods _until the best epoch is reached_. The curves are shifted right according to the preprocessing time. We report in parenthesis the time ratio compared to SPT.** positive impact of well-designed handcrafted features on the performance of smaller models [21, 47]. b) Influence of Edges.Removing the adjacency encoding between superpoints leads to a significant drop of \(6.3\) points on S3DIS; characterizing the relative position and relationship between superpoints appears crucial to exploiting their context. We also find that pruning the \(50\)% longest edges of each superpoint results in a systematic performance drop, highlighting the importance of modeling long relationships. c) Partition-Based Improvements.We assess the impact of several improvements made possible by using hierarchical superpoints. First, we find that using all available points when embedding the superpoints of \(\mathcal{P}_{1}\) instead of random sampling resulted in a small performance drop. Second, setting the superpoint dropout rate to \(0\) worsens the performance by over \(2.5\) points on S3DIS and KITTI-360. While we did not observe better results with three or more partition levels, only using one level leads to a significant loss of performance for all datasets. d) Influence of Partition Purity.In Figure 6, we plot the performance of the "oracle" model which associates to each superpoint of \(\mathcal{P}_{1}\) with its most frequent true label. This model acts as an upper bound on the achievable performance with a given partition. Our proposed partition has significantly higher semantic purity than a regular voxel grid with as many nonempty voxels as superpoints. This implies that our superpoints adhere better to semantic boundaries between objects. We also report the performance of our model for different partitions of varying coarseness, measured as the number of superpoints in \(\mathcal{P}_{1}\). 
Using, respectively, \(\times 1.5\) (\(\times 3\)) fewer superpoints leads to a performance drop of \(2.2\) (\(4.7\)) mIoU points, but reduce the training time to \(2.4\) (\(1.6\)) hours. The performance of SPT is more than \(20\) points below the oracle, suggesting that the partition does not strongly limit its performance. Limitations.See the Appendix. ## 5 Conclusion We have introduced the Superpoint Transformer approach for semantic segmentation of large point clouds, combining superpoints and transformers to achieve state-of-the-art results with significantly reduced training time, inference time, and model size. This approach particularly benefits large-scale applications and computing with limited resources. More broadly, we argue that small, tailored models can offer a more flexible and sustainable alternative to large, generic models for 3D learning. With training times of a few hours on a single GPU, our approach allows practitioners to easily customize the models to their specific needs, enhancing the overall usability and accessibility of 3D learning. \begin{table} \begin{tabular}{l c c c} \hline \hline Experiment & S3DIS & KITTI & DALES \\ & 6-Fold & 360 Val & \\ \hline Best Model & 76.0 & 63.5 & 79.6 \\ \hline a) No handcrafted features & -0.7 & -4.1 & -1.4 \\ b) No adjacency encoding & -6.3 & -5.4 & -3.0 \\ b) Fewer edges & -3.5 & -1.1 & -0.3 \\ c) No point sampling & -1.3 & -0.9 & -0.5 \\ c) No superpoint sampling & -2.7 & -2.5 & -0.7 \\ c) Only 1 partition level & -8.4 & -5.1 & -0.9 \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablation Study. Impact of some of our design choices on the mIoU for all tested datasets.** \begin{table} \begin{tabular}{l c c c} \hline \hline & Preprocessing & Training & Inference \\ & in min & in GPU-h & in s \\ \hline PointNet++ [43] & 8.0 & 6.3 & 42 \\ KPConv [52] & 23.1 & 14.1 & 162 \\ MinkowskiNet [8] & 20.7 & 28.8 & 83 \\ Stratified Trans. [25] & 8.0 & 216.4 & 30 \\ Superpoint Graph [29] & 89.9 & 1.3 & 16 \\ \hline **SPT (ours)** & 12.4 & 3.0 & 2 \\ **SPT-nano (ours)** & 12.4 & 1.9 & 1 \\ \hline \hline \end{tabular} \end{table} Table 3: **Efficiency Analysis. We report the preprocessing time for the entire S3DIS dataset and the training and inference time for Area 5. SPT and SPT-nano shows significant speedups in pre-processing, training, and inference times.** Figure 6: **Partition Purity. We plot the highest achievable β€œoracle” prediction for our partitions and a regular voxel grid. We also show the performance of SPT for \(4\) partitions with a coarseness ratio from \(\times 1\) to \(\times 10\).** Acknowledgements.This work was funded by ENGIE Lab CRIGEN. This work was supported by ANR project READY3D ANR-19-CE23-0007, and was granted access to the HPC resources of IDRIS under the allocation AD011013388R1 made by GENCI. We thank Bruno Vallet, Romain Loiseau and Ewelina Rupnik for inspiring discussions and valuable feedback.
2306.17165
An Efficient General-Purpose Modular Vision Model via Multi-Task Heterogeneous Training
We present a model that can perform multiple vision tasks and can be adapted to other downstream tasks efficiently. Despite considerable progress in multi-task learning, most efforts focus on learning from multi-label data: a single image set with multiple task labels. Such multi-label data sets are rare, small, and expensive. We say heterogeneous to refer to image sets with different task labels, or to combinations of single-task datasets. Few have explored training on such heterogeneous datasets. General-purpose vision models are still dominated by single-task pretraining, and it remains unclear how to scale up multi-task models by leveraging mainstream vision datasets designed for different purposes. The challenges lie in managing large intrinsic differences among vision tasks, including data distribution, architectures, task-specific modules, dataset scales, and sampling strategies. To address these challenges, we propose to modify and scale up mixture-of-experts (MoE) vision transformers, so that they can simultaneously learn classification, detection, and segmentation on diverse mainstream vision datasets including ImageNet, COCO, and ADE20K. Our approach achieves comparable results to single-task state-of-the-art models and demonstrates strong generalization on downstream tasks. Due to its emergent modularity, this general-purpose model decomposes into high-performing components, efficiently adapting to downstream tasks. We can fine-tune it with fewer training parameters, fewer model parameters, and less computation. Additionally, its modularity allows for easy expansion in continual-learning-without-forgetting scenarios. Finally, these functions can be controlled and combined to meet various demands of downstream tasks.
Zitian Chen, Mingyu Ding, Yikang Shen, Wei Zhan, Masayoshi Tomizuka, Erik Learned-Miller, Chuang Gan
2023-06-29T17:59:57Z
http://arxiv.org/abs/2306.17165v1
# An Efficient General-Purpose Modular Vision Model via Multi-Task Heterogeneous Training ###### Abstract We present a model that can perform multiple vision tasks and can be adapted to other downstream tasks efficiently. Despite considerable progress in multi-task learning, most efforts focus on learning from _multi-label data_: a single image set with multiple task labels. Such multi-label data sets are rare, small, and expensive. We say _heterogeneous_ to refer to image sets with different task labels, or to combinations of single-task datasets. Few have explored training on such heterogeneous datasets. General-purpose vision models are still dominated by single-task pretraining, and it remains unclear how to scale up multi-task models by leveraging mainstream vision datasets designed for different purposes. The challenges lie in managing large intrinsic differences among vision tasks, including data distribution, architectures, task-specific modules, dataset scales, and sampling strategies. To address these challenges, we propose to modify and scale up mixture-of-experts (MoE) vision transformers, so that they can simultaneously learn classification, detection, and segmentation on diverse mainstream vision datasets including ImageNet, COCO, and ADE20K. Our approach achieves comparable results to single-task state-of-the-art models and demonstrates strong generalization on downstream tasks. Due to its emergent modularity, this general-purpose model decomposes into high-performing components, efficiently adapting to downstream tasks. We can fine-tune it with fewer training parameters, fewer model parameters, and less computation. Additionally, its modularity allows for easy expansion in continual-learning-without-forgetting scenarios. Finally, these functions can be controlled and combined to meet various demands of downstream tasks. ## 1 Introduction Comprehensive visual understanding demands a general-purpose model capable of performing diverse vision tasks. With a similar goal, multitask learning (MTL), which enables the simultaneous training of models on multiple tasks and allows them to leverage shared information, has been explored extensively. Most MTL efforts [3; 37; 26] have been made by learning from multi-label datasets, where each input has multiple different types of annotations. However, such data sets with multiple annotations are often impractical to obtain. And the mainstream classification, detection, and segmentation datasets (ImageNet [5], COCO [22], and ADE20K [47]) have no overlapping images. Hence the current paradigm for general-purpose vision models is still dominated by single-task pretraining (_e.g._, image classification [23], self-distillation [2], or multi-modal contrastive learning [41]) and then fine-tuning on downstream tasks. A detailed demonstration of different schemes of pre-training is shown in Fig. 1. The previous work Mod-Squad [3] proposes to use a mixture-of-experts (MoE) and a mutual information loss to address task conflict in MTL. However, it oversimplifies some task-specific network designs and the success of this model heavily relies on multi-label datasets, which are difficult to obtain and scale up. 
Therefore, it remains unclear: 1) How to scale up this MTL model for multi-task heterogeneous training on conventional computer vision datasets; 2) Whether this model can be utilized as a general-purpose vision backbone that can be easily adapted to many downstream tasks; 3) Whether we can leverage the success of single-task methods instead of removing complicated modules and simplifying the task-specific sub-network. Another issue is that previous large-scale vision models [3, 23, 41, 2] do not consider fast adaptation on downstream tasks. One common limitation of large-scale models is that adapting to downstream tasks requires updating all parameters, which can be prohibitively expensive in terms of time and computational resources. For example, large models like GPT-3 [1] with 175B parameters, can take months to train, making adapting the whole model for a tiny new task impractical. Therefore, efficient adaptation is an important practical feature for successful model deployment. To address these problems, we build a large-scale multi-task heterogeneous training framework based on a modular vision transformer that can simultaneously do three fundamental vision tasks: classification, detection, and segmentation. We refer to this framework as **Multi-task Heterogeneous Learner (MTHL)**. Benefiting from a more diverse training set designed for multiple purposes, the framework generalizes better and is semantically rich enough for rapid downstream adaptation, which is often hard to obtain from a single (homogeneous) pre-training task/dataset. We also address efficient adaptation by leveraging the strong modularity in our model. As shown in Fig. 2, MTHL can adapt efficiently in several aspects including reducing training parameters, model parameters, and computational cost. The mixture-of-experts module enables the model to select the most semantically meaningful part for faster transferring to downstream tasks by simply learning new routers. Further, the model can easily expand by adding experts to address continual learning. Our main contributions can be summarized as follows: * **Large-scale multi-task heterogenous training.** We explore heterogeneous training on three fundamental computer vision tasks: classification, detection, and segmentation with mainstream vision datasets. We demonstrate that one model to perform three tasks can be on par with single-task state-of-the-art. * **Strong generalization on downstream datasets.** Heterogeneous training provides the advantage of diverse perception and the ability to handle a wider range of scenarios, leading to better generalization to downstream datasets. * **Modular adaptation with efficiency.** The emergence of modularity allows for flexible control of architecture and provides several straightforward and efficient ways to adapt the architecture. * **Continual learning without forgetting.** The model can effortlessly leverage existing experts to adapt to new tasks by learning new routers. Additionally, it can incorporate new experts without disrupting the current architecture, thus avoiding catastrophic forgetting. Figure 1: **Different ways of training. (1) Train from Scratch: Train a model for a single task from scratch. (2) Pre-train then Finetune: Pre-train a model on one dataset and later fine-tune the model on other datasets. (3) Multi-task Multi-label Training: Train a model that can produce multiple types of outputs simultaneously. The dataset is expected to have multiple annotations for different tasks on each training image. 
(4) Multi-task Heterogeneous Training (MTHT): Train a model that can produce different types of outputs corresponding to each task. The model can make use of training data designed for any single task. It can use these in combination to achieve multi-task training.** ## 2 Related Work **Multi-task Learning.** Multi-task learning [15] jointly learns multiple related tasks with a single model. Recently, transformer-based MTL architectures [37] have gained popularity. Some works [14; 25] attempt to unify the input and output space for different tasks. Some works [3; 37; 26] remove complicated task-specific modules for simplicity and conduct multi-task learning on a multi-label dataset. However, these works either rely on a single multi-label dataset or lose some perceptual features when removing the task-specific modules and unifying the input/output [14; 25]. While Ubernet [17] adapts a CNN-based framework that can learn from multiple datasets, it struggles with task conflicts and tends to have lower performance when learning from more than one dataset, which makes it hard to generalize to downstream applications. In contrast, MTHL can learn from diverse datasets and still achieve comparable performance to state-of-the-art single-task methods. Additionally, it can leverage the success of single-task methods by adopting similar designs, including unique task-specific modules (e.g., anchor generator), data pre-processing, and complicated engineering techniques (e.g., non-maximum suppression). These designs are non-trivial but necessary to achieve the best model. **Mixture of Experts (MoE).** Jacobs et al. [13] introduced the MoE as a method to merge sub-models and perform conditional computation. Recently, this technique has been commonly used to decrease computation costs while maintaining a large model capacity [32]. Some studies [19; 8; 30; 27] have leveraged MoE to train massive models with trillions of parameters at a relatively low computation cost. In contrast, we utilize this technique primarily to manage the sub-models and conduct the modular adaptation on downstream tasks. **Parameter-efficient transfer learning.** The Adapter technique was proposed as a standalone layer that can be integrated into an existing neural network for efficient transfer. LoRA [11] utilizes a bottleneck structure to enforce a low-rank constraint on the weight updates. Other approaches integrate CLIP-based adapters [9; 38; 45], upsampling and downsampling modules [20], and additional bias parameters [42] to reduce training parameters during fine-tuning. Our work focuses on selecting the most semantically related part of the model and adapting to downstream tasks efficiently. No additional newly designed module is required. **Continual learning.** Continual learning involves handling a diverse set of tasks and accumulating knowledge through a series of training stages. Recent efforts have been made to address catastrophic forgetting, including imposing regularization [16; 44; 31] and retaining a small buffer of data for replay [24; 28]. Some approaches [39; 12] dynamically expand the network by adding neurons to each MLP or convolution layer. In contrast, our modular design enables straightforward, well-organized expansion by adding experts and learning new routers. Moreover, since each dataset has its own router,
the experts added for a new task will not be chosen by the routers of previous datasets. Unlike other expansion techniques, our approach does not suffer from catastrophic forgetting. Figure 2: **Efficient modular adaptation. Strong modularity facilitates efficient adaptation to new datasets: 1) Reduce training parameters by only learning new routers and a few optional experts while freezing other parameters. 2) Reduce model parameters via learning new routers and removing rarely selected experts. 3) Reduce computation cost via learning new routers with a smaller Top-K, so that fewer experts are chosen in one forward pass. 4) Simple model expansion via adding and learning a few new experts per MoE module while freezing old experts. The above ways of adaptation can be combined to suit specific needs.** ## 3 Method We start with the definition of multi-task heterogeneous training. Suppose we have \(M\) datasets \(D_{1}\), \(D_{2}\),..., \(D_{M}\). Each dataset contains a set of training pairs \(\{I;T_{i}(I)\}\), where \(T_{i}\) is the task on dataset \(D_{i}\) that maps images \(I\) to \(T_{i}(I)\). Here, we assume for simplicity that each dataset has only one task. Multi-task heterogeneous training is to learn a joint model on the \(M\) datasets at once. ### Preliminary **Mixture-of-Experts (MoE).** An MoE layer contains a group of expert networks \(E_{1},E_{2},...,E_{N}\) and a routing network \(G\). The routing network \(G\) calculates the weight \(G^{k}(x)\) for each expert \(E_{k}\) given input \(x\), and the output of an MoE layer is the weighted sum of the output of every expert \(E_{k}(x)\). Formally, the output of an MoE layer is \[y=\sum_{k=1}^{N}G^{k}(x)E_{k}(x). \tag{1}\] The routing network \(G\) is a Top-\(K\) routing network [32], in which only the \(K\) experts with the highest weights contribute to the final output: \[G(x)=\mathrm{TopK}(\mathrm{Softmax}(xW_{g}),k) \tag{2}\] where \(\mathrm{TopK}(\cdot,k)\) sets all elements in the vector to zero except the elements with the largest \(K\) values. **Mutual information loss.** Mod-Squad [3] proposes a mutual information loss as an auxiliary loss to better assign experts to _tasks_, so that each expert is more likely to be used for a fixed set of tasks. In contrast, the key motivation in MTHL is to encourage experts to specialize on _datasets_, so that when adapting to downstream tasks, the downstream datasets are more likely to activate a small subset of experts. So we have \(M\) _dataset-specific routing networks_ and modify the loss so that the experts are assigned to datasets instead of tasks: \[L_{MI}=-\sum_{i=1}^{M}\sum_{j=1}^{K}P(D_{i},E_{j})\log P(D_{i},E_{j})+\sum_{i=1}^{M}P(D_{i})\log P(D_{i})+\sum_{j=1}^{K}P(E_{j})\log P(E_{j}). \tag{3}\] As in [3], we assume that \(P(D_{i})=\frac{1}{M}\), as we want all datasets to be considered equally important. We have \(P(E_{j}|D_{i})=\sum_{x\in D_{i}}G_{i}^{j}(x)\), where \(G_{i}^{j}\) is the weight of expert \(E_{j}\) for dataset \(D_{i}\). With \(P(E_{j}|D_{i})\), we can get \(P(D_{i},E_{j})=P(E_{j}|D_{i})P(D_{i})\) and \(P(E_{j})=\sum_{i=1}^{M}P(D_{i},E_{j})\). ### Multi-Task Heterogeneous Training **Backbone architecture.** Our multi-task heterogeneous training is a general framework that is orthogonal to model architecture design. All Transformer or MLP-based structures are applicable. In this work, we choose two recent state-of-the-art transformer architectures for both image classification and dense prediction tasks as our backbones: Swin Transformer [23] and DaViT [6]. We replace the MLP layers in these two models with MoE MLP layers.
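The MoE MLP layer of Eqs. (1)-(2) can be sketched as follows. This is a dense, readability-first implementation (real MoE layers dispatch tokens to the selected experts sparsely for efficiency), and the widths are illustrative; only the 12 experts and Top-K of 4 follow the configuration reported in Section 4.

```python
import torch
from torch import nn

class TopKMoEMLP(nn.Module):
    """Minimal MoE MLP with Top-K routing, following Eqs. (1)-(2)."""

    def __init__(self, dim, hidden, n_experts=12, top_k=4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )
        self.w_g = nn.Linear(dim, n_experts, bias=False)  # routing network G
        self.top_k = top_k

    def forward(self, x):                    # x: (tokens, dim)
        probs = self.w_g(x).softmax(dim=-1)  # Softmax(x W_g)
        topk_val, topk_idx = probs.topk(self.top_k, dim=-1)
        # TopK(., k): keep the K largest routing weights, zero out the rest.
        gates = torch.zeros_like(probs).scatter_(-1, topk_idx, topk_val)
        # y = sum_k G^k(x) E_k(x); dense (inefficient) version for readability.
        y = torch.zeros_like(x)
        for k, expert in enumerate(self.experts):
            y = y + gates[:, k:k + 1] * expert(x)
        return y

# Toy usage: 12 experts, Top-K = 4.
layer = TopKMoEMLP(dim=64, hidden=256)
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```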
**Task-specific module.** Vision tasks require specific designs of modules to process the data and different ways of perception have a huge impact on performance. While recent studies [25; 14] tend to use a shared task module for all tasks, we believe that the inherent differences in vision tasks make it difficult for a shared module to capture the essential information for all tasks. Thus, MTHL incorporates all task-specific designed modules (e.g., feature pyramid network), with only a backbone transformer shared among all tasks. **Sampling strategy.** Data sampling plays a crucial role in heterogeneous training. Datasets could have varying scale levels with huge gaps in batch size. For example, while a single GPU may work with 128 samples on image classification, it could only afford 2 samples on detection and segmentation. Most multi-task frameworks [3; 37; 17] tend to update the network after forwarding for all tasks. However, such approaches are impractical as the GPU memory is heavily consumed when activating all dense vision modules, _e.g._, detection head and segmentation head. Also, forwarding samples from all tasks in one batch is not scalable when having more tasks. To address the above issue, MTHL adopts a two-step sampling. We first apply weighted sampling to select one out of the \(M\) datasets, then randomly sample a batch of data from the chosen dataset. The weight assigned to each dataset \(D_{i}\) for sampling is denoted as \(w_{sample_{i}}\), which can be pre-defined by the total number of iterations required for convergence in single dataset training, with some empirical tuning. Note that for relatively small datasets, a sufficiently large weight should be assigned to prevent degeneration caused by overtraining on other datasets. **Optimization and convergence.** Each task in our framework is associated with its unique module designed to process the data and its own loss. The losses on datasets \(D_{i}\) are weighted and alternately optimized with a predetermined weight \(w_{l_{i}}\) for each dataset. One challenge in optimization is the presence of gradient conflicts between different tasks. These conflicts interfere with the joint optimization and slow down the convergence. It is also not uncommon to observe that one task dominates the training process while others lag behind. We find that well-defined loss weights and sampling weights contribute to the stabilization of training, and the large batch optimizer Lamb [40] works well in heterogeneous training. Effective convergence in heterogeneous training requires approximately 50 percent more iterations than the combined number of iterations for each individual single-task training. These additional training iterations account for the complexity introduced by joint optimization over diverse vision tasks. **New mutual information loss for multi-task heterogeneous training.** In Mod-Squad [3], the mutual information loss in Equ. 3 can be calculated in each batch as all tasks are contained in one batch. However, calculating \(P(D,E)\) and \(P(E)\) within a sampled batch from one random dataset in heterogeneous training leads to heavy bias. To deal with this, we use an approximation inspired by the following idea: \[\frac{\partial}{\partial x}[x\log x]=1+\log x=\frac{\partial}{\partial x}[(1+ \log c)x]|_{c=x}. \tag{4}\] This suggests that if we replace \(x\log x\) with \((1+\log c)x\), and \(c\) is a good approximation of \(x\), then we will still have a similar gradient. 
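As a quick numerical sanity check of this identity (our own illustration, not part of the paper), the surrogate \((1+\log c)x\) indeed has the same gradient as \(x\log x\) at \(x=c\):

```python
import torch

x = torch.tensor(0.3, requires_grad=True)
c = x.detach()                  # the "running estimate", treated as a constant

f = x * torch.log(x)            # original term  x log x
g = (1.0 + torch.log(c)) * x    # surrogate     (1 + log c) x, with c = x

gf, = torch.autograd.grad(f, x)
gg, = torch.autograd.grad(g, x)
print(gf.item(), gg.item())     # both equal 1 + log(0.3) ~= -0.204
```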
In our case, we will approximate a _running estimate_ of the joint distribution of \(P(D,E)\) with a buffer \(B(D,E)\). The running estimate \(B(D,E)\) avoids the heavy bias caused by estimating \(P(D,E)\) from a single task data set. In each forward pass when we sample dataset \(D_{i}\), we momentum update \(B(D_{i},E)\) with a momentum of \(0.98\). This keeps the estimate of \(B\) close to that of the desired joint distribution. Using this idea, we rewrite Eq. 3 and use the resulting equation as the loss function to calculate the gradient. The equation is given by: \[L_{MI}=-\sum_{i=1}^{M}\sum_{j=1}^{K}[1+\log B(D_{i},E_{j})]P(D_{i},E_{j})+\sum _{j=1}^{K}[1+\log(\sum_{i=1}^{M}B(D_{i},E_{j}))]P(E_{j}). \tag{5}\] Here, \(P(D_{i},E_{j}),P(E_{j})\) is calculated in each forward pass backpropping gradients. If \(D_{i}\) is not sampled in the current forward pass, \(P(D_{i},E_{j})\) is set to \(0\). Note that \(P(D_{i})\log P(D_{i})\) is ignored as a constant. When adapting to new downstream datasets, the buffer still memorizes \(P(D,E)\) for old datasets. Therefore, the MI loss can still be computed to balance experts on new datasets, which is not applicable in [3]. ### Efficient Adaptation on Downstream Tasks Mod-squad [3] explores modular design in multi-task learning, which helps mitigate task conflicts and enables the extraction of sub-models for specific tasks. However, learning from a single multi-label dataset, its applicability is limited to scenarios similar to the pre-trained dataset, thereby restricting its generalizability across diverse vision datasets. Consequently, it's hard for the method to gain enough benefit from multi-task training, and its downstream performance is somehow restrained. In comparison, we scale up multi-task learning to mainstream vision datasets, leading to better generalizations on downstream tasks. Benefiting from the strong modularity, MTHL can be easily decomposed into high-performing components and also allows for a more flexible selection of semantically meaningful components when transferring to downstream tasks, ensuring efficient adaptation capabilities. MTHL has two appealing advantages: **1) Downstream applications can select experts that best match the downstream scenario.** This can be done by learning a new router in each MoE module to find good experts for the downstream task. We consider an expert as a good expert if it is chosen with a high frequency by the router on the downstream dataset. The routers are very lightweight (0.4M in parameters) and can quickly converge to the optimum while freezing all other parameters. **2) We can easily control the architecture within each small component (a small MoE module).** It is easy to expand or prune the model by simply adding or removing experts. This flexibility enables efficient customization of the model based on the specific requirements of the task at hand. With these two advantages, we can do efficient fine-tuning in the following aspects as shown in Fig. 2: **1) fewer training parameters.** The model only needs to learn a new router for the downstream dataset and optionally fine-tune a few experts in each MoE module. **2) fewer model parameters**. After learning a new router for the new downstream dataset, we can rank experts according to the frequency chosen by the routers. Then, we can remove some of the experts rarely being used. **3) lower computation cost**. The new router for the downstream dataset can be learned with a smaller Top-K. 
This way, fewer experts are chosen during one forward pass, which greatly reduces the computation cost and inference latency. Note that all these ways of efficient adaptation can be combined together to meet the demands of downstream datasets. ### Continual Learning The strong modularity also enables simple model expansion and helps conduct continual learning. Specifically, we directly add \(C\) experts in each MoE module along with new task-specific routers every time we learn a new task. We train on the new task but freeze all parameters except for the newly added part. There are three main advantages of this approach: 1) No catastrophic forgetting. As all the experts are unchanged after learning and the newly added experts will not be chosen by the routers of previous tasks, there is no catastrophic forgetting. 2) Well-organized architecture and knowledge reuse. The model still keeps an elegant modularized design. The routers select experts to reuse knowledge related to the new task and ignore experts with unrelated expertise. 3) The computation cost is constant. Other expanding methods [39; 12] add both computation cost and capacity to the existing model, while our approach only adds capacity. This makes our approach expandable with a large number of tasks. ## 4 Experiments ### Multi-task heterogeneous training. We conduct three fundamental vision tasks (classification, detection, and segmentation) on three datasets: ImageNet-1K [5], COCO [22], and ADE20K [47]. For the downstream datasets, we evaluate classification on the scene dataset Places-365 [46] (P365), the popular fine-grained dataset iNaturalist-2018 [34] (iNat18), the pet dataset Pets [29], the fine-grained bird dataset CUB [35], and the car dataset Cars [18]. We evaluate downstream detection on PASCAL VOC [7] and downstream segmentation on Cityscapes [4] and NYU [33]. **Models and baselines.** We utilize Swin Transformer [23] and DaViT [6] as our backbone transformers, with results reported on three different sizes: tiny (T), small (S), and base (B). Each task has its own task-specific head. For classification, we use a single linear layer. For detection, we use the retina head [21]. For segmentation, we use UperNet [36]. Each task follows its own input and output format based on single-task methods. We implement our methods and baselines as follows: 1) Train from scratch (Scratch): a vanilla single-task learning baseline that trains models from scratch. 2) Pre-train then fine-tune (Pre. & FT.): pre-training on ImageNet followed by fine-tuning on the target dataset. 3) MTHL.D: our multi-task heterogeneous learner using a dense model (no MoE). 4) MTHL: our multi-task heterogeneous learner using a sparse model (with MoE). **Configs.** We employ 12 experts with Top-K set to 4 for all MoE modules, following [3]. For base-size transformers, we replace the MLP with an MoE MLP every 2 transformer layers. For small and tiny transformers, we use an MoE MLP in every transformer layer. All models are trained for 240,000 iterations on 96 Tesla V100 GPUs with Lamb [40] as the optimizer for large batch training. Weight decay is set to 0.05 and the maximal gradient norm is clipped to 0.1. We use a simple triangular learning rate schedule with a maximum learning rate of 0.004, as in [6]. Data augmentations for each task follow the common practice in [23, 6].
During multi-task heterogeneous training, data sampling weight is set to {3, 2, 1}, loss weight is set to {1.0, 0.6, 0.2}, and batch size is set to {64, 2, 2} for classification, detection, and segmentation, respectively. For a fair comparison, all results of our method and baselines are obtained from our implementations with the same settings. More details of the training settings, models, and datasets can be found in the Appendix. **Multi-task heterogeneous training.** We compare different training schemes as shown in Tab. 1. Across all three datasets with varying backbones, we observe that: 1) Heterogeneous training performs on par with the state-of-the-art pre-train then fine-tune learning scheme, indicating the gradient conflicts between different tasks are alleviated by our modular design. 2) Notably, for the segmentation task, MTHL consistently outperforms the previous state-of-the-art across all backbone choices, suggesting that joint training with classification and detection tasks improves segmentation. \begin{table} \begin{tabular}{c|c|c c|c c c|c c c|c c} \hline \hline \multirow{2}{*}{Backbone} & \multirow{2}{*}{Model} & \multirow{2}{*}{P365} & \multirow{2}{*}{iNat18} & \multirow{2}{*}{Pets} & \multicolumn{3}{c|}{CUB} & \multirow{2}{*}{Cars} & \multicolumn{2}{c|}{PASC.} & \multicolumn{2}{c|}{City.} & \multicolumn{2}{c|}{NYU} \\ & & & top-1 & top-1 & top-1 & top-1 & top-1 & \(mAP\) & \(mIoU\) & \(mIoU\) & \(mIoU\) & **Mean** \\ \hline \multirow{4}{*}{Swin-B} & IN-1K Pre. & 58.7 & 72.9 & 94.0 & 83.9 & 94.0 & 76.9 & 80.6 & 76.2 & 78.7 \\ & Mod-Squad [3] & 56.4 & 69.4 & 92.3 & 79.8 & 93.7 & 77.2 & 81.1 & 77.5 & 78.1 \\ & MTHL.D & 59.1 & 73.3 & 94.2 & 84.3 & 94.2 & 78.7 & 82.1 & 78.0 & 79.9 \\ & MTHL & 59.4 & 73.6 & 94.6 & 84.7 & 94.9 & 79.1 & 82.5 & 78.7 & **80.4** \\ \hline \multirow{4}{*}{Davit-B} & IN-1K Pre. & 59.2 & 73.4 & 94.4 & 88.4 & 94.9 & 77.4 & 81.5 & 76.7 & 79.5 \\ & MTHL.D & 59.6 & 73.5 & 94.8 & 89.0 & 95.0 & 78.8 & 82.7 & 78.6 & 80.6 \\ \cline{1-1} & MTHL & 60.1 & 73.9 & 94.9 & 89.4 & 95.0 & 79.5 & 83.4 & 79.3 & **81.2** \\ \hline \hline \end{tabular} \end{table} Table 2: **Comparisons of different pre-training schemes on downstream performance. We compare with IN-1K pre-trained model (IN-1K Pre.), and multi-task multi-label pre-training (Mod-Squad [3]) on Taskonomy [43]. To calculate the mean, we first average the performance on classification, detection, and segmentation separately. Afterward, we average the results across all tasks.** \begin{table} \begin{tabular}{c|c|c c|c c|c c c|c c c} \hline \hline \multirow{2}{*}{Backbone} & \multirow{2}{*}{Model} & \multirow{2}{*}{Params} & \multirow{2}{*}{FLOPs} & \multirow{2}{*}{IN-1K} & \multicolumn{4}{c|}{COCO} & \multicolumn{2}{c}{ADE20K} \\ & & (M) & (G) & & top-1 & top-5 & \multicolumn{1}{c}{mAP} & \(mAP_{50}\) & \(mAP_{75}\) & mIoU & \(mAcc\) & \(aAcc\) \\ \hline \multirow{4}{*}{Swin-T} & Scratch & 27.5\(\times\)3 & 4.4 & **80.6** & 95.2 & 34.9 & 54.3 & 36.6 & 32.0 & 41.4 & 75.8 \\ & Pre. \& FT. & 27.5\(\times\)3 & 4.4 & β€” & β€” & 42.0 & 64.7 & 45.9 & 44.3 & 55.8 & 81.0 \\ & MTHL.D & 27.5 & 4.4 & 79.7 & 95.1 & 43.8 & 65.7 & 46.8 & 44.4 & 54.8 & 80.5 \\ & MTHL & 50.9 & 5.1 & 80.3 & 94.7 & **45.0** & 66.5 & 48.2 & **44.6** & 55.0 & 81.0 \\ \hline \multirow{4}{*}{Swin-S} & Scratch & 48.9\(\times\)3 & 8.5 & **82.6** & 96.1 & 36.3 & 55.6 & 38.4 & 34.5 & 43.9 & 77.1 \\ & Pre. \& FT. 
& 48.9\(\times\)3 & 8.5 & – & – & **46.0** & 68.0 & 49.9 & 47.0 & 56.9 & 81.7 \\ & MTHL.D & 48.9 & 8.5 & 80.7 & 95.5 & 45.8 & 67.8 & 48.7 & **47.7** & 58.4 & 81.8 \\ & MTHL & 89.1 & 9.2 & 82.0 & 95.9 & 45.7 & 66.8 & 49.1 & 46.7 & 57.1 & 81.8 \\ \hline \multirow{4}{*}{Swin-B} & Scratch & 86.7\(\times\)3 & 15.1 & **83.1** & 96.4 & 35.5 & 54.7 & 37.4 & 35.4 & 44.8 & 77.6 \\ & Pre. \& FT. & 86.7\(\times\)3 & 15.1 & – & – & 47.3 & 69.0 & 51.2 & 47.7 & 58.7 & 82.3 \\ & MTHL.D & 86.7 & 15.1 & 82.2 & 96.2 & 47.5 & 69.2 & 51.0 & **48.8** & 59.7 & 82.5 \\ & MTHL & 158.3 & 16.2 & 82.3 & 96.2 & **47.6** & 69.1 & 50.9 & 48.2 & 59.0 & 82.5 \\ \hline \hline \multirow{4}{*}{DaViT-T} & Scratch & 27.6\(\times\)3 & 4.4 & **82.5** & 96.2 & 37.7 & 57.1 & 40.0 & 36.4 & 46.4 & 77.8 \\ & Pre. \& FT. & 27.6\(\times\)3 & 4.4 & – & β€” & **45.4** & 66.9 & 48.4 & 45.8 & 56.0 & 81.8 \\ & MTHL.D & 27.6 & 4.4 & 81.3 & 95.8 & 44.6 & 66.6 & 47.5 & 46.2 & 56.4 & 81.6 \\ & MTHL & 51.2 & 5.1 & 82.0 & 95.8 & 45.1 & 67.5 & 48.1 & **47.4** & 57.1 & 82.1 \\ \hline \multirow{4}{*}{DaViT-S} & Scratch & 49.0\(\times\)3 & 8.6 & **83.8** & 96.8 & 37.8 & 56.7 & 40.5 & 38.2 & 48.4 & 78.8 \\ & Pre. \& FT. & 49.0\(\times\)3 & 8.6 & – & β€” & 47.2 & 68.9 & 50.7 & 48.3 & 60.2 & 82.3 \\ & MTHL.D & 49.0 & 8.6 & 82.6 & 96.5 & **47.3** & 69.2 & 50.6 & **48.7** & 59.1 & 82.7 \\ & MTHL & 88.9 & 9.2 & 83.3 & 96.5 & 46.4 & 67.7 & 49.5 & 47.6 & 57.9 & 82.6 \\ \hline \multirow{4}{*}{DaViT-B} & Scratch & 86.9\(\times\)3 & 15.2 & **84.2** & 96.9 & 38.0 & 57.2 & 40.5 & 38.5 & 48.7 & 78.9 \\ & Pre. \& FT. & 86.9\(\times\)3 & 15.2 & – & β€” & 48.1 & 69.7 & 51.3 & 49.3 & 60.2 & 83.0 \\ \cline{1-1} & MTHL & 60.1 & 73.9 & 94.9 & 89.4 & 95.0 & 79.5 & 83.4 & 79.3 & **81.2** \\ \hline \hline \hline \end{tabular} \end{table} Table 1: **Multi-task heterogeneous training. We compare it with training from scratch (scratch) and pre-training then fine-tuning (pre. & & ft.). Note that on COCO and ADE20K, pre. & & & & & \\ \multirow{4 3) MTHL also works pretty well on image detection and is superior to previous arts in most cases. 4) The MTHL and MTHL_D generally exhibit similar performance on small and base models and MTHL consistently outperforms MTHL.D on tiny models, likely influenced by the relationship between model capacity and dataset scale. **Downstream performance.** As shown in Tab. 2, we compare different training schemes on the downstream datasets. MTHL outperforms the single-task pre-trained model IN-1K Pre. and multi-task multi-label pre-trained model Mod-Squad, particularly on detection and segmentation tasks. We also note that the sparse model MTHL consistently outperforms the dense model MTHL.D, indicating that additional experts for selection could be beneficial for downstream tasks. ### Efficient Adapters In this section, we highlight the potential of MTHL as an efficient adapter. **Efficient in training parameters.** MTHL can adapt quickly to a new task or dataset by tuning the router with a few optional experts and learning a new task head. During this process, all other parameters are frozen. The optional few experts to be fine-tuned are randomly selected. We find that randomly selected experts perform similarly to selecting the expert with the highest or lowest use frequency on the downstream dataset. Please refer to the supplementary for more details. In Tab. 3, our method is referred to as 'Ro. Only', 'Ro. w/ 1 Ex.', and 'Ro. w/ 2 Ex.' 
that denotes only tuning routers, routers with one expert per MoE module, and routers with two experts per MoE module, respectively. We compare our efficiency in training parameters with the commonly used adapter [10], which adds an adapter module after each MoE MLP block. In contrast, we only need new lightweight routers (0.4M) and one or two additional experts per MoE module. Even updating only new routers outperforms the adapter baseline, and Ro. w/2 Ex. has a very close performance (0.5 points lower in mean) to the fully fine-tuned baseline. For a clearer comparison, please see Fig. 3. **Efficient in model capacity.** In terms of model capacity, MTHL can remove experts after learning a new router on the new task. This can be achieved by removing experts with the least use frequency, followed by fine-tuning the entire model. We explore two methods of pruning: 1) Removing a few experts from each MoE layer. In Tab. 3, we attempt to remove 1/2 experts and 2/3 experts. 2) Removing all experts whose use frequency is lower than a threshold \(\theta\) on the downstream dataset. This approach may result in a different number of experts in each MoE layer, but it has comparable efficiency to the first pruning method. Results and a clear comparison can be referred to Tab. 3 and Fig. 3. \begin{table} \begin{tabular}{l|c c c c|c c c c|c c c|c} \hline \hline \multirow{2}{*}{Method} & Train. & Model & FLOPs & Ratio & P365 & Nat14 & Pets & CUB & Cars & PASC. & City. NYU & \multirow{2}{*}{**Mean**} \\ & Par.(M) & Par.(M) & (G) & & top-1 & top-1 & top-1 & top-1 & top-1 & \(mAP\) & \(mIoU\) & \(mIoU\) & \\ \hline FT-Full & 88.9 & 88.9 & 9.2 & - & 59.0 & 72.9 & 94.0 & 88.2 & 95.0 & 78.6 & 81.4 & 77.4 & 79.9 \\ \hline Adapter [10] & 14.8 & - & - & 16.6\% & 50.7 & 62.4 & 81.1 & 75.8 & 80.8 & 67.7 & 69.9 & 66.8 & 68.7 \\ Ro. Only & **0.4** & - & - & 0.4\% & 52.1 & 64.2 & 83.3 & 77.9 & 78.2 & 69.6 & 71.8 & 68.7 & 70.3 \\ Ro. w/ 1 Ex. & 5.4 & - & - & 6.1\% & 57.4 & 70.7 & 91.3 & 85.8 & 94.7 & 76.5 & 78.8 & 75.2 & 77.8 \\ Ro. w/ 2 Ex. & 10.4 & - & - & 11.7\% & 58.8 & 72.7 & 94.0 & 87.8 & 95.0 & 77.9 & 80.7 & 76.7 & **79.4** \\ \hline Prune \(\theta=1\%\) & - & 60.2 & - & 67.7\% & 58.9 & 72.8 & 93.9 & 88.1 & 95.0 & 78.6 & 81.4 & 77.3 & **79.9** \\ Prune \(\theta=5\%\) & - & 54.4 & - & 61.2\% & 58.8 & 72.7 & 93.8 & 88.0 & 93.9 & 78.6 & 81.4 & 77.2 & 79.7 \\ Prune 1/2 Ex. & - & 59.9 & - & 67.3\% & 58.8 & 72.8 & 93.9 & 88.0 & 93.9 & 78.6 & 81.4 & 77.3 & 79.8 \\ Prune 2/3 Ex. & - & **49.9** & - & 56.1\% & 58.8 & 72.6 & 93.6 & 87.8 & 93.8 & 78.6 & 81.3 & 77.2 & 79.7 \\ \hline Top-K=3 & - & - & 7.7 & 83.7\% & 58.8 & 72.5 & 93.3 & 87.3 & 94.9 & 77.3 & 80.1 & 76.3 & **79.0** \\ Top-K=2 & - & - & 6.2 & 67.4\% & 58.1 & 70.7 & 91.9 & 86.2 & 92.0 & 74.9 & 77.6 & 73.7 & 76.8 \\ Top-K=1 & - & - & **4.7** & 51.0\% & 48.5 & 59.9 & 77.3 & 72.4 & 77.4 & 64.3 & 66.6 & 63.3 & 65.4 \\ \hline Hybrid-A & 5.4 & 49.9 & 6.2 & - & 58.0 & 70.6 & 91.1 & 85.8 & 94.7 & 76.3 & 78.5 & 73.2 & 77.4 \\ Hybrid-B & 10.4 & 49.9 & 7.7 & - & 58.8 & 72.4 & 93.3 & 87.2 & 94.9 & 77.1 & 79.9 & 76.2 & **78.8** \\ \hline \hline \end{tabular} \end{table} Table 3: **Efficient adaptation.** All experiments use MTHL as the pre-trained model with Davit-S as the backbone. The ratio calculates the percentage of efficiency metric compared to the fully fine-tuned baseline. Notations: β€˜Ro.’ for Router, β€˜Ex.’ for expert(s), \(\theta\) is a threshold on the frequency used for an expert. We have two hybrid models: 1) β€˜Hybrid-A’ directly combines β€˜Ro. 
w/ 1 Ex.’, β€˜Prune 2/3 Ex.’, and β€˜Top-K=2’. 2) β€˜Hybrid-B’ combines β€˜Ro. w/ 2 Ex.’, β€˜Prune 2/3 Ex.’, and β€˜Top-K=3’. **Efficient in computation cost.** Most pre-training may use a relatively large backbone, but the downstream tasks/datasets may not require such a large model capacity. MTHT.S can regulate the computation cost by learning new routers with a reduced Top-K. This would result in a trade-off between performance and computation cost, as illustrated in Fig. 3. For some datasets (_e.g._, P365), it can achieve a relatively low computation cost (_e.g._, 67.4%) while maintaining the same level of performance (_e.g._, <1% drop). **Combine all efficient adapting.** To further improve efficiency, the efficient adapting techniques mentioned above can be combined. In Tab. 3, for Hybrid-B, we first learn a new router and remove 2/3 experts. Then, we fine-tune the router with Top-K as 3 along with two experts per module. This approach achieves a mean performance of 78.8, which is only 1 point lower than fine-tuning the entire model. Moreover, this method reduces training parameters, model parameters, and computation cost simultaneously. ### Continual learning. Continual learning without any forgetting is achievable with MTHL by learning new routers (0.4M) and a few optional experts on the new dataset. We compared it with the common regularization-based continual learning baseline LWFI[16]. As demonstrated in Tab. 4, our method has three significant advantages: 1) No forgetting on the learned datasets. 2) Only a smart part of the model needs to be trained on new datasets, requiring only 10.4M training parameters, while LWF needs to tune the whole model (88.9M). 3) Comparable performance to fully fine-tuning the whole model on every dataset. ## 5 Conclusion Our study focuses on multi-task heterogeneous training and its adaptation ability on downstream datasets. MTHL can achieve outcomes comparable to the previous single-task state-of-the-art on all tasks. Furthermore, we investigate various methods of utilizing modularity to efficiently adapt to downstream tasks. Modularity also allows model expansion easily for continual learning. The broader impact of our work could be significant in terms of advancing general-purpose vision model pre-training and effective adaptation of large-scale models. One limitation of MTHL is model may be biased toward certain datasets and require more training iterations for convergence. \begin{table} \begin{tabular}{c|c|c|c|c c c c c c c|c} \hline \hline \multirow{2}{*}{Method} & New params & Train. params & P365 & iNat18 & Pets & CUB & Cars & PASC. & City. & NYU & \multirow{2}{*}{**Average**} \\ & per task (M) & per task (M) & top-1 & top-1 & top-1 & top-1 & top-1 & \(mAP\) & \(mIoU\) & \(mIoU\) & \\ \hline LWF [16] & 0 & 88.9 & 46.2 & 57.0 & 73.5 & 70.6 & 75.5 & 62.7 & 71.1 & 68.9 & 65.7 \\ Rou. only & 0.4 & **0.4** & 52.1 & 64.2 & 83.3 & 77.9 & 78.2 & 69.6 & 71.8 & 68.7 & 70.7 \\ Rou. w/ 1Ex. & 5.4 & 5.4 & 57.6 & 70.8 & 91.3 & 85.9 & 94.7 & 76.8 & 79.0 & 75.6 & 79.0 \\ Rou. w/ 2Ex. & 10.4 & 10.4 & 58.8 & 72.8 & 94.5 & 88.0 & 95.0 & 78.1 & 80.7 & 76.9 & **80.6** \\ \hline FT-Full & – & – & 59.0 & 72.9 & 94.0 & 88.2 & 95.0 & 78.6 & 81.4 & 77.4 & 80.8 \\ \hline \hline \end{tabular} \end{table} Table 4: **Continual learning.** We conduct continual learning on these datasets one by one after heterogenous pre-training and report the final performance. All experiments use MTHL as the pre-trained model with DaviT-S as the backbone. 
We measure the number of training parameters and of newly added backbone parameters per task. Here, the average is the mean performance over all datasets. Figure 3: **Trade-off between efficiency and performance.** We visualize the trade-off between performance and training parameters, model parameters, and computation cost, respectively.
2305.01257
DreamPaint: Few-Shot Inpainting of E-Commerce Items for Virtual Try-On without 3D Modeling
We introduce DreamPaint, a framework to intelligently inpaint any e-commerce product on any user-provided context image. The context image can be, for example, the user's own image for virtual try-on of clothes from the e-commerce catalog on themselves, the user's room image for virtual try-on of a piece of furniture from the e-commerce catalog in their room, etc. As opposed to previous augmented-reality (AR)-based virtual try-on methods, DreamPaint does not use, nor does it require, 3D modeling of neither the e-commerce product nor the user context. Instead, it directly uses 2D images of the product as available in product catalog database, and a 2D picture of the context, for example taken from the user's phone camera. The method relies on few-shot fine tuning a pre-trained diffusion model with the masked latents (e.g., Masked DreamBooth) of the catalog images per item, whose weights are then loaded on a pre-trained inpainting module that is capable of preserving the characteristics of the context image. DreamPaint allows to preserve both the product image and the context (environment/user) image without requiring text guidance to describe the missing part (product/context). DreamPaint also allows to intelligently infer the best 3D angle of the product to place at the desired location on the user context, even if that angle was previously unseen in the product's reference 2D images. We compare our results against both text-guided and image-guided inpainting modules and show that DreamPaint yields superior performance in both subjective human study and quantitative metrics.
Mehmet Saygin Seyfioglu, Karim Bouyarmane, Suren Kumar, Amir Tavanaei, Ismail B. Tutar
2023-05-02T08:41:21Z
http://arxiv.org/abs/2305.01257v1
# DreamPaint: Few-Shot Inpainting of E-Commerce Items for Virtual Try-On without 3D Modeling ###### Abstract We introduce DreamPaint, a framework to intelligently inpaint any e-commerce product on any user-provided context image. The context image can be, for example, the user's own image for virtual try-on of clothes from the e-commerce catalog on themselves, the user's room image for virtual try-on of a piece of furniture from the e-commerce catalog in their room, etc. As opposed to previous augmented-reality (AR)-based virtual try-on methods, DreamPaint does not use, nor does it require, 3D modeling of neither the e-commerce product nor the user context. Instead, it directly uses 2D images of the product as available in product catalog database, and a 2D picture of the context, for example taken from the user's phone camera. The method relies on few-shot fine tuning a pre-trained diffusion model with the masked latents (e.g., Masked DreamBooth) of the catalog images per item, whose weights are then loaded on a pre-trained inpainting module that is capable of preserving the characteristics of the context image. DreamPaint allows to preserve both the product image and the context (environment/user) image without requiring text guidance to describe the missing part (product/context). DreamPaint also allows to intelligently infer the best 3D angle of the product to place at the desired location on the user context, even if that angle was previously unseen in the product's reference 2D images. We compare our results against both text-guided and image-guided inpainting modules and show that DreamPaint yields superior performance in both subjective human study and quantitative metrics. ## 1 Introduction A long-standing problem in E-Commerce is the ability of customers to try-on the products before the purchase. The lack of try-on possibility increases the risk and cost associated with product returns due to misfit of the product after the product is delivered and physically tried on. This problem arises across different product categories. For example, for clothes, shoes, jewelry, watches, glasses, etc, the customer might want to try the product on themselves before the purchase. For pieces of furniture like couches, tables, chairs, decorative object, etc, the customers might want to try them on their own room, and own setting, to visualize the product in-context before the purchase. The same applies for multiple other categories of products. One solution to tackle this problem is to use augmented-reality through 3D modeling. Such solutions, however, typically require a 3D model of the product or an expensive method to reconstruct a 3D model from high quality 2D images. The vast majority of shopping websites catalogs of products do not have 3D models associated with them (either native or reconstructed). As a result, the AR-based virtual try-on capability is only offered 1) on a portion of shopping websites and 2) on a portion of products within general online stores that are not specialized on a single category of products. We propose a solution for virtual try-on that does not require 3D modeling or augmented reality. It only uses whatever set of 2D images is available for the product in the product catalog database of the shopping website. The method relies on a combination of Masked DreamBooth and Stable Diffusion Inpainting module. We name the method DreamPaint. 
The user provides a 2D image of themselves (for clothes), their room (for furniture), their desk setting (for decorative objects), etc. The user can then specify on that image the rough location in which they want the product to be placed. DreamPaint positions the product in that specified location, seamlessly integrating it with the environment. The model is not constrained by the images of the product that are available. From these images, it can intelligently extrapolate new angles if such angles are required by the environment configuration to place the object. Some examples generated by DreamPaint can be seen in Fig. 1. Image generation using diffusion-based models has attracted a lot of attention recently. Other diffusion-based approaches to the problem typically preserve only one input: either the object is preserved and the environment surrounding the object is generated with text guidance (Dreambooth-like), or the environment is preserved and the object within the environment is generated with text guidance (Inpainting-like). Our approach combines both in a novel way to preserve both the environment and the object inputs. The rest of the paper is organized as follows. In Section 2, we review closely related work. In Section 3, we present the DreamPaint method, combining Masked Stable Diffusion Dreambooth with an inpainting module. In Section 4 we describe the experimental settings. In Section 5 we compare the results against text-guided and image-guided inpainting models and show that DreamPaint outperforms them in both qualitative and quantitative comparisons as well as a human study. We present some ablations in Section 6 and end our paper with the Conclusions and Limitations of our model. Figure 1: Some example outputs of our DreamPaint model. We fine-tune the U-Net and the text encoder of the Stable Diffusion model in a few-shot setting with the given masked reference images, which allows users to inpaint their personal images to virtually "try" the e-commerce items before buying. ## 2 Related Work Text-to-image diffusion models have shown unprecedented success in recent research Ramesh et al. (2022), Saharia et al. (2022), Yu et al. (2022), Rombach et al. (2022). When trained with large-scale datasets of image and text pairs, these models can generate highly accurate and semantically meaningful images utilizing text prompts, especially for common objects. Diffusion models can also be further trained for personalization tasks such as inpainting Suvorov et al. (2022). As shown in Avrahami et al. (2022), natural language provides good guidance for inpainting of common objects. However, for uncommon objects, such as the items found in an e-commerce catalog, these models generally fail to generate a satisfactory representation of the item that preserves its characteristic details. Moreover, even in the cases where the model has the generative capacity for the given object, text descriptions are highly ambiguous by their nature and are inefficient in conveying the characteristic details of an object. Thus, image-based diffusion guidance is better suited for inpainting of e-commerce items. Paint by Example (PBE) Yang et al. (2022) is a recent image-guided diffusion model, which utilizes an exemplar image to guide the diffusion inpainting process. The method achieves superior performance against text-guided inpainting models like Stable Diffusion Rombach et al. (2022) or harmonization models such as DCCF Xue et al. (2022) for in-the-wild object inpainting.
However, the PBE method also has some drawbacks in preserving the high-fidelity details of objects, especially for underrepresented objects, as it embeds the exemplar image using only CLIP's CLS embedding for guidance. Relying on such high-level embeddings results in omitting fine-grained details that define the characteristics of many e-commerce items. Therefore, an alternative approach is needed for the virtual try-on setting. Another recent technique called DreamBooth Ruiz et al. (2022) offers high-fidelity concept learning on novel images. Given a few reference images (best if provided from different angles), a new token representing these reference images can be injected into the model by few-shot fine-tuning the model's denoiser. E-commerce data is a good fit for this approach because every product in e-commerce catalogs typically has multiple images taken from different angles. Figure 2: Model overview. Similar to Dreambooth, our model takes a small number of reference images of a specific item taken from different angles and an associated class name (a couch in this example). Then we train the text encoder and U-Net of Stable Diffusion using randomly masked versions of the reference images for the inpainting setting. After the Dreambooth training is done, we load the U-Net and text encoder weights onto the Stable Diffusion Inpainting Model, which is capable of inpainting the masked region of the given image. Note that, in the inference setting, the inpainting model generates the item in a novel view. Inspired by DreamBooth, we propose **Dreampaint** (**Dreambooth-inpaint**), where we combine DreamBooth and inpainting modules to learn e-commerce objects while preserving their fidelity. Starting from a pre-trained Stable Diffusion model, and to make the new concept learned by Dreambooth suitable for inpainting, we modify the Dreambooth approach into masked few-shot fine-tuning to learn a new U-Net Ronneberger et al. (2015) and text encoder that are injected with the new item. We then load our fine-tuned U-Net and text encoder into a pre-trained Stable Diffusion inpainting model, which allows the user to mask a portion of their personal images and generate the injected product in the masked region, preserving both the product-specific high-fidelity features and the context of the user-provided scene. Furthermore, DreamPaint can fill in the masked region with a standardized prompt as in Ruiz et al. (2022). However, if the results turn out unsatisfactory for the user, the model has the flexibility to be further tuned by additional text prompts. Therefore, in a way, DreamPaint leverages both image and text guidance. We compare our results against 1) the image-guided model PBE (since it was demonstrated to outperform other methods in preserving fidelity, which is our main objective), and 2) a standard text-guided Stable Diffusion inpainting model that inpaints the given image using the catalog title of the reference item. ## 3 Methodology This section is organized in three parts. In the first part we discuss latent diffusion models (3.1) and our first baseline, text-guided inpainting (3.1.1). In the second part (3.2), we discuss the example-based inpainting method. Finally, in the third part, we introduce the Dreambooth method (3.3) and we explain our proposed DreamPaint approach for the high-fidelity e-commerce inpainting task (3.4).
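To make the Figure 2 workflow concrete, here is a minimal sketch of loading fine-tuned U-Net and text-encoder weights into a pre-trained inpainting pipeline. The use of the Hugging Face diffusers API, the checkpoint identifiers, the local output directory, and the identifier token are all our assumptions (the paper does not name its implementation), and the sketch assumes the fine-tuned U-Net was trained with the same 9-channel input layout as the inpainting model so that the weights are compatible.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline, UNet2DConditionModel
from transformers import CLIPTextModel

# "./dreampaint-couch" is a placeholder for the masked DreamBooth fine-tuning output.
unet = UNet2DConditionModel.from_pretrained("./dreampaint-couch", subfolder="unet")
text_encoder = CLIPTextModel.from_pretrained("./dreampaint-couch", subfolder="text_encoder")

# Load the fine-tuned components into a pre-trained Stable Diffusion inpainting pipeline.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",   # assumed base inpainting checkpoint
    unet=unet,
    text_encoder=text_encoder,
)
pipe = pipe.to("cuda")

context_image = Image.open("room.jpg").convert("RGB")   # the user's own scene
mask_image = Image.open("mask.png").convert("L")        # white = region to inpaint

result = pipe(
    prompt="a nbsn couch",        # "nbsn" stands for the injected identifier token
    image=context_image,
    mask_image=mask_image,
    guidance_scale=10,            # classifier-free guidance value used at inference
).images[0]
result.save("try_on.png")
```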
### Latent Diffusion Models Diffusion models are generative models that learn the data distribution by reversing a fixed-length Markovian forward process, thereby iteratively denoising a normally distributed variable Sohl-Dickstein et al. (2015). Lately, it is shown that, instead of using the pixel space, denoising can be conducted in a latent space, which is computationally efficient as it reduces the dimension of images, as well as it omits the high frequency noise within the given image. One example of a latent diffusion models is Stable Diffusion Rombach et al. (2022), which consists of three main components: A Variational Autoencoder (VAE) to transform the given input in a latent space, a text encoder to process the given text, and a time-conditioned U-Net Ronneberger et al. (2015) to predict the noise that is added on the image latents which are conditioned by the text embeddings. Mathematically, the conditioned latent diffusion model can be learned by optimizing the following loss: \[L_{LDM}=\mathbb{E}_{\mathcal{E}(x),c,\epsilon,t}\left[\left\|\epsilon_{ \theta}\left(z_{t},t,c\right)-\epsilon\right\|_{2}^{2}\right] \tag{1}\] where, \(z_{t}\) is the latent version of the input \(x_{t}\) provided by the VAE as \(z=\mathcal{E}(x)\). \(x_{t}\) is the noise added version of the input \(x\), at a timestep of \(t\), where \(x_{t}=\alpha_{t}x_{0}+(1-\alpha_{t})\epsilon\) and \(\alpha_{t}\) decreases with the timestamp \(t\). Noise is denoted by \(\epsilon\sim\mathcal{N}(0,1)\). \(\epsilon_{\theta}\) is the U-Net. Lastly, \(c\) denotes the conditioning variable, and for the text guided models, it is given by processing the given text with CLIP text encoder Radford et al. (2021). #### 3.1.1 Image Inpainting For the inpainting task, the objective is defined as follows: Given an image \(x\), a binary map of edit region \(m\) (where edit region pixels are 1), and a reference image (or images), \(r\), the objective is to generate an output image, where the edited region given by \(m\) is as similar as possible to \(r\), and regions defined by \(\mathds{1}-m\) remains as unchanged as possible, where \(\mathds{1}\) denotes all ones matrix. However, the objective is not to just copy paste the given reference image in the mapped region, but to do it as plausible and realistic as possible as preserving the reference image's features is especially important in the e-commerce setting. For the inpainting, the objective can be defined mathematically by: \[L_{LDM}=\mathbb{E}_{\mathcal{E}(x),c,\epsilon,t}\left[\left\|\epsilon_{\theta} \left(z_{t},\mathcal{E}((\mathds{1}-m)\odot x),m,t,c\right)-\epsilon\right\|_{2 }^{2}\right] \tag{2}\] Here, U-Net takes two more inputs in addition to input latents, VAE processed masked image (masked latents), and the mask itself. Stable Diffusion has an inpainting model which was trained on laion-aesthetics v2.5 using classifier-free guidance Ho & Salimans (2022), where during training, synthetic masks are generated to mask 25% of the pixels, which in turn conditions the model for inpainting. ### Paint by Example (PBE) The text conditioned inpainting is generally not enough to embed fine-grained details that define the reference objects especially when preserving the item's fidelity is the main priority. Thankfully, the conditioning of the latent diffusion models are not limited to textual prompts but they can also be guided by images. 
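Before turning to image conditioning, the masked-latent objective of Eq. (2) above can be summarised in a few lines of PyTorch. This is a simplified stand-in: `unet`, `vae_encode`, and `text_emb` are placeholders for the Stable Diffusion components, and the forward process is written with cumulative alphas, which differs slightly in notation from Eq. (1).

```python
import torch
import torch.nn.functional as F

def masked_inpainting_loss(unet, vae_encode, text_emb, x, mask, alphas_bar):
    """Denoising objective in the spirit of Eq. (2): the network sees the noisy latents,
    the latents of the masked image, and the downsampled mask, and must recover the noise.
    `unet`, `vae_encode`, and `text_emb` are placeholders for the Stable Diffusion parts."""
    z0 = vae_encode(x)                                  # clean latents z = E(x)
    z_masked = vae_encode((1.0 - mask) * x)             # latents of the masked image
    m = F.interpolate(mask, size=z0.shape[-2:])         # mask at latent resolution
    b = z0.shape[0]
    t = torch.randint(0, alphas_bar.numel(), (b,), device=z0.device)
    eps = torch.randn_like(z0)
    a = alphas_bar[t].view(b, 1, 1, 1)
    z_t = a.sqrt() * z0 + (1 - a).sqrt() * eps          # noised latents at step t
    # 4 + 4 + 1 = 9 input channels, matching the layout of the SD inpainting U-Net.
    eps_hat = unet(torch.cat([z_t, z_masked, m], dim=1), t, text_emb)
    return F.mse_loss(eps_hat, eps)
```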
However, it is not straightforward to condition the diffusion models on images as the model generally tend to copy the object given in \(r\) as is, instead of blending it with the \(x\). More precisely, if \(c\) in Eq. 2 is selected as image, whose embeddings are given by the CLIP image encoder, the model just learns the trivial mapping function, where, \((\mathds{1}-m)\odot x+r=x\). PBE Yang et al. (2022) introduced a number of design choices to tackle the trivial mapping problem. Instead of utilizing all image tokens that CLIP image encoder outputs, they leveraged only the CLS token, which helps preserving semantics while preventing the trivial solution. Furthermore, they added fully connected layers to decode the CLS token, then inject it into the U-Net. ### Dreambooth Instead of providing a reference image during inference time, Dreambooth Ruiz et al. (2022) aims to inject a novel concept into the diffusion model in a few shot fine-tuning setting. This is achieved by fine-tuning the U-Net with a few reference images of the object, and a prompt in a format of "\(a[unique\ token][class\ noun]\)", where \([unique\ token]\) is a word that does not have a strong prior in both the text encoder (e.g. a random word like "nbsn") and the diffusion model. \([class\ noun]\) is the class of the reference images, which is used to limit the model's prior of the reference image's class. This way, the diffusion model learns this unique object and its identifier, and thus could leverage its visual prior to generate the object in novel poses on different backgrounds. This is achieved by fine-tuning Eq. 1 with a few reference images using the same conditioning vector of "\(a[unique\ token][class\ noun]\)". If the reference images are provided from different poses, it greatly affects the models ability to generate the concept in novel views. There are challenges with fine-tuning the entire U-Net of Stable Diffusion with a few images. In Ruiz et al. (2022), authors identified that they had two main issues: Language-Drift and overfitting. Language-Drift is the phenomenon of associating the reference images with the given class noun. For example, if a picture of a tshirt is used as a reference with a prompt "a nbsn tshirt", then the model forgets its generalized understanding of a tshirt and associates the reference image tshirt. However, this is not really an issue in the e-commerce setting since our aim is not to preserve the models generalization capacity over the reference class, but to teach it the reference by keeping its fidelity as high as possible. The authors proposed a loss function called "class specific prior preservation loss" to help prevent overfitting. This loss function uses both the provided reference images and the model's own generated samples for a specific class noun. The purpose of this loss function is to prevent the model from forgetting how to generalize for the specific class noun, which is a problem known as "catastrophic forgetting." However, since our objective is not to keep our class token generalizable, it does not help in our case. Moreover, for the e-commerce setting it also often leads to sub-optimal results as most of the e-commerce items are of novel concepts. For example, when prompted with "roman armor" the model retrieves a lot of irrelevant images, thus using them on fine-tuning misleads to representation of the reference image. 
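To ground the earlier point about PBE's global conditioning (Section 3.2), the snippet below contrasts the single pooled CLS vector that such guidance relies on with the full token sequence that carries the discarded spatial detail. The public ViT-B/32 checkpoint and the file name are used only for illustration; PBE uses its own CLIP weights.

```python
import torch
from PIL import Image
from transformers import CLIPVisionModel, CLIPImageProcessor

vision = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

exemplar = Image.open("reference_tshirt.jpg").convert("RGB")   # placeholder file name
inputs = processor(images=exemplar, return_tensors="pt")
with torch.no_grad():
    out = vision(**inputs)

all_tokens = out.last_hidden_state   # (1, 50, 768): CLS token + 49 patch tokens
cls_only = out.pooler_output         # (1, 768): the single pooled CLS vector
# PBE-style guidance conditions the U-Net only on a decoded version of `cls_only`;
# the patch tokens carrying localized detail (logos, prints, textures) are discarded.
```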
### DreamPaint It is highly likely that the textual conditioning alone is not enough to embed the high-fidelity content of the product images as product titles are not meant to fully describe the item in detail. Especially high-fidelity items can hardly be described by textual prompts only, thus it is clear that a visual reference is needed. Furthermore, pre-trained models do not have strong priors over many of the e-commerce items, as they are not represented in the bulk datasets compared to other natural images like animals, faces etc. Using only global embeddings in PBE results in model that omit high-fidelity details of the reference image, which makes it unsuitable for the e-commerce inpainting setting, especially for the items that the model has a low prior. As buyers would like to see the item as similar as possible as given in the catalog, Dreambooth approach seems more plausible. However, the original Dreambooth does not support inpainting. We propose to merge two pipelines for the e-commerce inpainting case. First we implemented a Masked Dreambooth model to introduce the new items to the diffusion model by providing a number of various poses of the object alongside with a new identifying token. During training, with equal probability, we mask our image latents either with rectangular or elliptic masks (since these are the most common mask shapes used by users). In addition, we generate object-shaped masks by utilizing the ClipSeg Luddecke & Ecker (2022) model along with the class noun of the object, the imperfections from ClipSeg segmentation mask makes our model more robust to arbitrary shaped masks. We then optimize Eq. 2 in the dreambooth fine-tuning, and save the U-Net and Text Encoder weights. On inference time, we load the saved modules into the Stable Diffusion Inpaint Model. The high-level overview of our method is shown in Fig. 2. ## 4 Experiments ### Implementation Details We utilize Stable Diffusion v1-4 as our main generative backbone. We fine-tune it for 500 steps for each item to generate 512x512 images with a learning rate of 5e-6 which is kept constant for DreamPaint. During inference, we set classifier-free guidance to 10. ## 5 Results ### Quantitative Results Quantitative evaluation of the models is very challenging because our objective is to keep the object fidelity, and quantitative metrics mostly are inadequate in that regard. For example, consider two t-shirts with the same color and style, however with different logos, which only spans a couple of pixels. Frechet Inception Distance (FID) between these two would be extremely low, since FID uses the penultimate outputs of a large neural network, which tends to generalize the given image by omitting most fine-grained details. That is why we only compute the CLIP score between the masked region and the reference images as given in Yang et al. (2022). For Dreampaint and Text Guided Stable Diffusion, we compared the inpainted region against all the reference images and calculated the cosine similarity between the CLIP embeddings of all (generated, reference) pairs then reported the maximum score. For PBE, we run the model for all reference images individually. We then calculated the CLIP score between (generated, reference) pairs and reported the maximum score. The results are given in Table 1. Even though PBE uses CLIP CLS token as its reference image embedder, CLIP based evaluation still favors DreamPaint. ### User Study We evaluated each model's performance by conducting a subjective user study. 
Participants are asked to evaluate each generated image in two different criteria, namely, generated object's similarity to the reference images, and how harmonious the generated object is without knowing which model generated the evaluated \begin{table} \begin{tabular}{c c} \hline \hline Method & CLIP Score(\(\uparrow\)) \\ \hline SD Inpaint with Text Guidance & 0.62 \\ Paint by Example & 0.68 \\ Ours & 0.70 \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparison of CLIP score between our approach and the baselines. image. 10 participants are provided with 60 images, and asked to provide a score from 1 to 5 for each criteria where 1 is the best and 5 is the worst. Average scores can be seen on Table 2. Figure 3: Comparisons against text guided and image guided diffusion models. Note that PBE and Text guided diffusion models often fail to encode the item-specific details while DreamPaint preserves most of the item specific features. ### Qualitative Results Fig. 3 shows some examples for the qualitative comparison of methods. As can be seen, text guided model yields the worst outputs as the representative capacity of item titles in e-commerce catalog is limited. PBE gets some guidance from the reference image and mostly get the general theme right, but it misses the fine-grained details. On the cat sweater example, it understood that there needs to be a cat in the middle but loses the characteristics of the design. DreamPaint's outputs, on the other hand, are the most resembling of the reference images. In some cases, it fails to get some basic features like color right (as seen on the second and fifth examples). This happens because of the context-appearance entanglement issue as mentioned in Ruiz et al. (2022). In these examples, the reference images are all given in white background, which results in the model to build a strong prior for it and when that context is not found in the user provided masked image, the model alters some characteristics of the item mostly in the color space. ## 6 Ablations There are a number of design choices in the implementation of DreamPaint. How much does the prior preservation loss or fine-tuning the text encoder help? Can further textual guidance benefit the model in failed cases? Also how well the the model does given an unfitting mask? To answer these, we run some ablations. **Ablation Study on using Prior Preservation Loss** We found that using prior preservation loss adversely affects the model performance in the e-commerce inpainting setting, especially for rare classes that the model has little to no prior knowledge about. Thus, the model cannot retrieve meaningful class images with the class noun, and this generally results in sub-optimal injection of the new concept. Also, it often helps model to be more generalizing, but we want to preserve the high fidelity details as much as possible. **Ablation Study on Fine-tuning the Text Encoder** Stable Diffusion has a strong semantic prior for common objects, but when it comes to underrepresented tokens, such as "roman armor", it often fails in generating a specific output but uses a more generic image of an armor. 
We found that, if the text encoder is fine-tuned along with the U-Net, it leads to optimal performance, especially when the model does not have a strong prior for that specific class noun. See Fig. 5 for reference. \begin{table} \begin{tabular}{c c c} \hline \hline Method & Similarity(\(\downarrow\)) & Harmony(\(\downarrow\)) \\ \hline SD Inpaint with Text Guidance & 4.41 & 2.75 \\ Paint by Example & 3.82 & 2.57 \\ Ours & 2.68 & 2.33 \\ \hline \hline \end{tabular} \end{table} Table 2: Average user ratings for each method, measuring similarity to the reference image and how harmoniously the generated object blends with the context image (lower is better). Figure 4: DreamPaint outputs with different text prompts. When trained with the prompt "a photo of [token] cardigan", the model misses the details of the original item. However, with additional keywords, a user can obtain results that better resemble the reference. **Ablation Study on Further Textual Guidance** One benefit of DreamPaint compared to PBE is that we are not limited to image guidance alone. After fine-tuning the model, if the results are not appealing with the generic fine-tuned prompt of "a [token] [class noun]", our model allows us to leverage text guidance to further modify the output, where providing some more context with text prompts may "reveal" the fine-grained details of the newly injected concept. An example can be seen in Fig. 4. **Ablation Study on Mask Size** Our evaluation of the model's performance included subjective experiments to assess its ability to handle ill-fitting masks, which occur when the size of the given mask significantly differs from that of the object in the reference image. Results show that, in some cases, the model forcibly places the object in the masked region while somewhat omitting its characteristic details, as seen in Fig. 6, where the reference wall art has three pieces but the model creates a single bulky mountain piece. Figure 5: DreamPaint outputs with and without fine-tuning the text encoder along with the U-Net. Here we used the prompt "a [token] roman armor". Figure 6: When given an unusually large mask size for the given object, our model focuses on fitting it in the masked region while omitting the characteristic details of the object. ## 7 Conclusions Text guidance alone is not enough to represent e-commerce items, as titles are not designed to be explicitly expressive. Therefore, it is not surprising that text-guided SD inpainting fails in all metrics. PBE works well on common objects where the model already has a strong prior, which can be exploited with single-reference-image guidance. However, to prevent trivial mapping, it omits most of the high-frequency signal by encoding the reference image just with the CLS token of CLIP, which results in losing fidelity in representing the reference image. Thus, on high-fidelity e-commerce items on which the model has a low prior, it yields sub-optimal results. DreamPaint preserves the highest amount of fidelity compared to text-guided and image-guided models, as can be seen in both the quantitative and human-study metrics. It also learns a strong prior of the new item by seeing it from different angles, as multiple views of an item are readily available in an e-commerce catalog. Thus, the model can learn the concept better and, when a challenging pose is required by the context, it can generate a novel pose. Also, DreamPaint is not just image guided: the injected token can be further detailed with extra text prompts when required. ### Limitations Most of the limitations reported in Ruiz et al. (2022) also apply to DreamPaint.
This is especially true for the context-appearance entanglement issue, where the appearance of the item changes (mostly in its color) because of the given context. Moreover, depending on the mask size, the model in some cases generates its output while disregarding the physical size of the reference item. Providing a sensible mask therefore remains a task for the user, who has to measure the area where they want to inpaint the reference item and compare it against the size of the e-commerce item given in the catalog to obtain realistic results. Lastly, since DreamPaint requires few-shot fine-tuning for every item, it is hard to scale to the entire e-commerce catalog.
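As a concrete reference for the CLIP-score protocol of Section 5.1, the sketch below computes the cosine similarity between the inpainted region and every catalog reference image and keeps the maximum over all pairs. The checkpoint choice and the way the masked crop is extracted are our assumptions, not details specified by the paper.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(generated_crop, reference_images):
    """Max cosine similarity between the inpainted (masked-region) crop and the
    catalog reference images, mirroring the protocol of Section 5.1."""
    images = [generated_crop] + list(reference_images)
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return (feats[0:1] @ feats[1:].T).max().item()

# Example usage with placeholder file names.
crop = Image.open("generated_masked_region.png").convert("RGB")
refs = [Image.open(f"reference_{i}.jpg").convert("RGB") for i in range(4)]
print(clip_score(crop, refs))
```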
2307.14745
Using Multi-Agent MicroServices (MAMS) for Agent Based Modelling
This paper demonstrates the use of the Multi-Agent MicroServices (MAMS) architectural style through a case study based around the development of a prototype traffic simulation in which agents model a population of individuals who travel from home to work and vice versa by car.
Martynas Jagutis, Sean Russell, Rem Collier
2023-07-27T10:03:21Z
http://arxiv.org/abs/2307.14745v1
# Using Multi-Agent MicroServices (MAMS) for Agent Based Modelling ###### Abstract This paper demonstrates the use of the Multi-Agent MicroServices (MAMS) architectural style through a case study based around the development of a prototype traffic simulation in which agents model a population of individuals who travel from home to work and vice versa by car. Keywords:Multi-Agent Systems Microservices Traffic Simulation ## 1 Introduction Multi-Agent MicroServices (MAMS) [3] is an architectural style for deploying Multi-Agent Systems (MAS) within Microservices architecture. This has been achieved by introducing of a specific kind of agent, known as a _MAMS Agent_, that has an associated body that consists of a set of web resources that are accessible through REpresentational State Transfer (REST). MAMS agents act like an interface agent for a microservice. Collectively, their bodies form a REST interface that external microservices can use to interact with the MAS via the MAMS agents. Within a microservice, MAMS agents are able to interact with non-MAMS agents through traditional agent communication mechanisms. MAMS has been applied to a number of problem domains, including: decision support tools [2], building management [9] and digital twins for smart agriculture [7]. Additionally, a prototype framework for implementing MAMS applications [10] has been developed, built on a combination of CArtAgO [11] and the ASTRA programming language [5]. The source code for the framework and a number of example applications can be found on Gitlab1. Footnote 1: [https://gitlab.com/mams-ucd/](https://gitlab.com/mams-ucd/) This paper illustrates a potential use of MAMS and microservices in Agent Based Modelling (ABM) [1]. The basic idea is to decompose the environment part of an ABM into a set of web resources. For example, a road network can be decomposed into street and junction resources. Each resource is a kind of _"micro-environment"_ that agents can inhabit and interact with. They are created and accessed through specially designed environment microservices. Inter-resource relationships are modelled based on the URL associated with each resource. A second set of microservices are used to implement the agent part of the ABM by leveraging the MAMS architectural style. ## 2 Overview of Prototype The scenario demonstrated in this paper is a simple traffic simulation scenario in which agents model a population of individuals who travel from home to work and vice versa by car. The environment for this scenario is decomposed into four types of resource: home resources, work resources and the street and junction resources that model the road network. These resources are implemented through three _sub-environment_ microservices. The design of the street and junction resources is based on best practices drawn from established traffic simulators such as MATSim [12] and SUMO [8]. Figure 1 illustrates the set of microservices, implemented using _Java_ and _Spring Boot2_, that underpin the prototype. This includes three sub-environment microservices described above. The **Road Network** service is the most complex of the three and is underpinned by the _Neo4J database3_ which maintains a graph of the constituent streets and junctions. The **Home** and **Work** services provide a minimal model that includes access to the current time and a single activity (e.g. _Watch TV_ or _Work_). 
A **Clock Service** provides a discrete time model for the simulation; a **Traffic Lights Service** implements an algorithm to control traffic lights in the **Road Network**; and a **Management Service** supports the configuration and execution of a simulation run. Finally, the **Driver Service** implements the agent part of the system, which is described next. Footnote 2: [https://spring.io](https://spring.io) Footnote 3: [https://neo4j.com/](https://neo4j.com/) Figure 1: Overview of Simulation Architecture. To connect to the simulation, MAMS Agents must register with an environment microservice based on the resource they wish to interact with. The microservice can reject the request, but if accepted, it creates an agent body resource4. The environment state is passed to the agent using an HTTP PUT request to a webhook associated with the MAMS agent's body. Agents submit actions using an HTTP PUT request to the agent's body on the environment microservice. The environment microservice tracks which resource an agent is associated with. When an agent moves to another resource (e.g. moving from a junction to a street), the microservice registers the change. If the agent moves to a resource that is located on a different microservice, its body is transferred to the new microservice via an HTTP POST request. Further details can be found in [6] and the source code is available on Gitlab5. Footnote 4: This is not the same as the MAMS body described above Footnote 5: [https://gitlab.com/mams-ucd/examples/microservice_traffic_simulator](https://gitlab.com/mams-ucd/examples/microservice_traffic_simulator) ## 3 Using MAMS to implement agent behaviours This section focuses on the implementation of the **Driver Service** using the MAMS prototype that has been developed for the ASTRA agent programming language. A simplified version of the driver agent implementation used in the demo is shown in Figure 2. The overall behaviour begins with the handling of the !updatedObject(...) goal in the second rule. The argument of this goal is a Java object that represents the environment state. The plan part of the plan rule defines two sub-goals, !decide(...) and !act(...), which must be achieved in sequence. The last two rules in the program highlight two possible sub-plans for achieving the !decide(...) goal. The agent will choose only one of these options based on the current state of the environment. For example, the last rule requires that the vehicle controlled by the agent be stopped. This is expressed by the isStopped(...) belief. This belief is evaluated based on the second of the inference rules at the top of the code snippet, which are denoted by the inference keyword. The ObjectAccess module provides a generic mechanism for the agent to query the internal state of Java objects. In this case it retrieves the value of the vehicleSpeed field of the EnvironmentState object. The selection conditions are expressed by the context part, which appears after the colon (:) and before the opening brace ({) of the plan. The !act(...) goal sends the chosen action to the server using the low-level !put(...) goal provided by the MAMS implementation. MAMS is visible in two parts of the example code. The latter place is in the rule associated with the !act(...) goal, where the !put(...) goal is adopted to submit the action to the server. The representation actually sent to the simulation has been simplified for readability. The former place where MAMS is visible is in the rule that handles the !main(...) goal.
The goals specified in this rule connect the agent to the MAMS infrastructure and create a resource that is exposed on the web under the /{agent-name}/notification URL. The simulation service sends the environment state to the agent in the same way: by updating this resource using a PUT request. Upon the processing of a new PUT request, the underlying MAMS infrastructure generates the !updatedObject(...) goal to trigger a response from the agent. Figure 2: ASTRA-MAMS Implementation ## 4 Conclusions This paper presents an early prototype of a novel approach to Agent Based Modelling (ABM) using a combination of microservices and the Multi-Agent MicroServices (MAMS) architectural style. The prototype presented is a traffic simulation scenario that decomposes the environment into four types of web resource that are hosted across three microservices. Each resource acts as a "micro-environment". Agents interact with a resource by registering a "body" with the corresponding microservice, indicating which resource they wish to be associated with. Hypermedia links are used to relate resources to one another; for example, a junction resource in the road network can be linked to a home or work resource. A key part of the approach is the design of mechanisms to allow agents to transition between web resources, which can be achieved either internally or via an HTTP POST request. A number of shortcomings and opportunities were identified during its evaluation [6]. The most interesting opportunity is the potential use of the linked data structure to create decentralised knowledge graphs that capture global knowledge of the simulation environment. Such knowledge could be consumed by individual agents and used in concert with local contextual knowledge of their environment to offer improved decision-making capabilities. Details of this proposed approach can be found in [4].
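To make the REST interactions described in Section 2 concrete, here is a small Python sketch of the agent-side traffic. Every URL path, port, and payload field is hypothetical, since the paper does not publish the exact endpoint layout (the real routes are in the Gitlab repository referenced above).

```python
import requests

ROAD_NETWORK = "http://localhost:8080"   # hypothetical base URL of the Road Network service
AGENT = "driver42"                        # hypothetical agent name

# 1. Register with the resource the agent wants to inhabit; if accepted, the
#    environment microservice creates an agent body resource for it.
requests.post(f"{ROAD_NETWORK}/junctions/j17/agents",
              json={"name": AGENT}).raise_for_status()

# 2. The environment pushes state to the agent's webhook with HTTP PUT (server side);
#    the agent answers by PUTting its chosen action to its body on the microservice.
action = {"type": "accelerate", "value": 2.0}
requests.put(f"{ROAD_NETWORK}/agents/{AGENT}/body",
             json=action).raise_for_status()

# 3. If the agent moves to a resource hosted on a different microservice,
#    its body is transferred to that service with an HTTP POST request.
HOME_SERVICE = "http://localhost:8081"    # hypothetical base URL of the Home service
requests.post(f"{HOME_SERVICE}/agents",
              json={"name": AGENT, "from": ROAD_NETWORK}).raise_for_status()
```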
2306.03110
SwinRDM: Integrate SwinRNN with Diffusion Model towards High-Resolution and High-Quality Weather Forecasting
Data-driven medium-range weather forecasting has attracted much attention in recent years. However, the forecasting accuracy at high resolution is unsatisfactory currently. Pursuing high-resolution and high-quality weather forecasting, we develop a data-driven model SwinRDM which integrates an improved version of SwinRNN with a diffusion model. SwinRDM performs predictions at 0.25-degree resolution and achieves superior forecasting accuracy to IFS (Integrated Forecast System), the state-of-the-art operational NWP model, on representative atmospheric variables including 500 hPa geopotential (Z500), 850 hPa temperature (T850), 2-m temperature (T2M), and total precipitation (TP), at lead times of up to 5 days. We propose to leverage a two-step strategy to achieve high-resolution predictions at 0.25-degree considering the trade-off between computation memory and forecasting accuracy. Recurrent predictions for future atmospheric fields are firstly performed at 1.40625-degree resolution, and then a diffusion-based super-resolution model is leveraged to recover the high spatial resolution and finer-scale atmospheric details. SwinRDM pushes forward the performance and potential of data-driven models for a large margin towards operational applications.
Lei Chen, Fei Du, Yuan Hu, Fan Wang, Zhibin Wang
2023-06-05T05:11:03Z
http://arxiv.org/abs/2306.03110v1
SwinRDM: Integrate SwinRNN with Diffusion Model towards High-Resolution and High-Quality Weather Forecasting ###### Abstract Data-driven medium-range weather forecasting has attracted much attention in recent years. However, the forecasting accuracy at high resolution is unsatisfactory currently. Pursuing high-resolution and high-quality weather forecasting, we develop a data-driven model SwinRDM which integrates an improved version of SwinRNN with a diffusion model. SwinRDM performs predictions at 0.25-degree resolution and achieves superior forecasting accuracy to IFS (Integrated Forecast System), the state-of-the-art operational NWP model, on representative atmospheric variables including 500 hPa geopotential (Z500), 850 hPa temperature (T850), 2-m temperature (T2M), and total precipitation (TP), at lead times of up to 5 days. We propose to leverage a two-step strategy to achieve high-resolution predictions at 0.25-degree considering the trade-off between computation memory and forecasting accuracy. Recurrent predictions for future atmospheric fields are firstly performed at 1.40625-degree resolution, and then a diffusion-based super-resolution model is leveraged to recover the high spatial resolution and finer-scale atmospheric details. SwinRDM pushes forward the performance and potential of data-driven models for a large margin towards operational applications. ## Introduction Accurate weather forecasting is beneficial to human beings in several areas such as agriculture, energy, and public transportation. Numerical Weather Prediction (NWP) has long been adopted for weather forecasting. It has been improved by better physics parameterization techniques and high-quality atmospheric observations in the past few decades. However, this approach requires huge amounts of computing power, which may limit its application in industry. With the development of machine learning (ML), especially deep learning (DL) techniques, many studies are employing data-driven DL methods to forecast atmospheric variables. The purely data-driven DL models are often orders of magnitude faster than the NWP model. However, the performance of current DL models is unsatisfactory currently. To facilitate the development of data-driven weather forecasting, some benchmarks [14, 15, 16] are constructed to enable a thorough comparison of different methods. Among them, WeatherBench [14] is one of the widely used benchmarks, which is constructed by regridding the ERA5 reanalysis dataset [1] from \(0.25^{\circ}\) resolution to three different resolutions (i.e., \(5.625^{\circ}\), \(2.8125^{\circ}\) and \(1.40625^{\circ}\)). It focuses on the medium-range global prediction of a few key variables at lead times of up to 5 days. Several works have tried to improve the prediction performance on WeatherBench [14, 15, 16]. Among them, SwinVRNN [12] achieves the best performance by integrating a variational recurrent neural network (SwinRNN) with a feature perturbation module. Although these works have achieved great success in global weather forecasting, their methods are built on low-resolution (usually lower than \(1^{\circ}\)) data. The largest resolution of WeatherBench is \(1.40625^{\circ}\), corresponding to a \(128\times 256\) pixels grid. And the distance between every two pixels is larger than 100km, which is too coarse for a forecasting model to capture the fine-scale dynamics [12]. [11] builds a graph neural network (GNN) to forecast global weather on the 1-degree scale. 
It shows comparable performance on wind and relative humidity to the IFS. However, its resolution is still relatively low. FourCastNet [12] trains an Adaptive Fourier Neural Operator (AFNO) model directly at \(0.25^{\circ}\) resolution ERA5 dataset, which achieves comparable performance to the IFS at short-range lead times and can resolve many important small-scale phenomena. However, there still exists a performance gap between data-driven models and the IFS at lead times of up to 5 days, especially on representative variables such as Z500 and T850. In this paper, we focus on building a global weather forecasting model at \(0.25^{\circ}\) resolution and propose a SwinRDM model by integrating an improved SwinRNN [12] with a diffusion-based super-resolution model. Since SwinRNN achieves superior performance at low resolution, we employ it as our base model. We experimentally analyze the SwinRNN model and build an improved version named SwinRNN+ by replacing the multi-scale network with a single-scale design and adding a feature aggregation layer. Our SwinRNN+ achieves higher performance than IFS on all key variables at lead times of up to 5 days at \(1.40625^{\circ}\) resolution. Note that this is a considerable improvement compared to SwinRNN that can only compete with the IFS model on surface-level variables at \(5.625^{\circ}\) resolution. Different from FourCastNet, to generate high-resolution global weather prediction, we resort to the super-resolution (SR) technique rather than directly train the SwinRNN+ model on \(0.25^{\circ}\) resolution data due to the prohibitive computational cost. The super-resolution task is implemented using a conditional diffusion model [14], which trains a U-Net model [15] to iteratively refine the outputs starting from pure Gaussian noises. This model is shown to be able to generate photo-realistic outputs compared to traditional super-resolution models [14]. We show in this work that the diffusion model-based super-resolution conditioned on low-resolution predictions can capture small-scale variations and generate high-quality weather forecasting at high resolution. Our contribution can be summarized as follows: * We conduct experimental studies on the SwinRNN model and propose an improved version -- SwinRNN+. It achieves superior performance than the state-of-the-art IFS model on all representative variables at the resolution of \(1.40625^{\circ}\) and lead times of up to 5 days. * We employ a conditional diffusion model for super-resolution conditioned on SwinRNN+ outputs, which achieves high-quality weather forecasting at the resolution of \(0.25^{\circ}\) with an optimal trade-off between computation cost and forecast accuracy. * Experimental results on the ERA5 dataset show that our SwinRDM model not only outperforms IFS but also achieves high-quality forecasts with finer-scale details, which sets a solid baseline for data-driven DL weather forecasting models. ## Related Works In this section, we briefly review some deep learning-based weather forecasting methods and super-resolution methods. ### Deep Learning-based Weather Forecasting Deep learning has been investigated to perform data-driven weather forecasting in recent years, and the goal is to fully replace the NWP model. Some works focus on a particular local area [13, 14, 15]. However, using data in a local spatial domain may result in uncertainty around the boundary regions [13]. 
For global weather forecasting, some widely used networks in computer vision has been applied, including ResNet [12], U-Net [16], VAE [15], and GNNs [17]. Among global weather forecasting methods, SwinVRNN [18] is the first work that can compete with the IFS on representative surface-level variables (T2M and TP) at lead times of up to 5 days. It constructs a deterministic SwinRNN model based on the Swin Transformer block [13] and builds a perturbation module to perform ensemble forecasting. However, the resolution of SwinVRNN is relatively low, and it cannot achieve comparable performance on pressure-level variables such as Z500 and T850. FourCastNet [15] is the first work that directly builds the network on the \(0.25^{\circ}\) resolution data. Although it achieves high performance on short timescales, it cannot compete with the IFS at a 5-day lead time. In this paper, we also intend to predict the atmospheric variables at \(0.25^{\circ}\) resolution. We improve the SwinRNN network at low resolution and employ the diffusion-based super-resolution model to generate high-resolution and high-quality results. ### Super Resolution The goal of super-resolution is to reconstruct a high-resolution image from a low-resolution counterpart. The simplest way to realize super-resolution is interpolation. It is computationally efficient but often suffers from detail loss in regions with complex textures [10]. SRCNN [12] is a pioneering work that exploits CNNs to perform super-resolution. Later, many deep learning methods [13, 14, 15, 16] have been proposed to improve super-resolution performance. These methods usually employ a reconstruction loss such as MSE loss to train the network. However, the network simply trained with a reconstruction loss can hardly capture high texture details and generate perceptually satisfactory results [13]. Some works [1, 14, 15] employ generative adversarial networks (GANs) [11] to encourage the generator to output high-resolution images that are hard to distinguish from the real high-resolution images. Although these methods can generate high-quality images, GAN-based methods are difficult to optimize [10]. Recently, diffusion models [13] have attracted much attention in image generation since it is able to generate high-quality images [12, 13, 14] that are comparable to GANs. A U-Net architecture is trained with a denoising objective to iteratively refine the outputs starting from pure Gaussian noise. Diffusion models have also been successfully applied to image super-resolution, such as SR3 [14] which employs a diffusion model to generate realistic high-resolution images conditioned on low-resolution images. Since the diffusion model is proven to generate high-quality images, we exploit this model in weather forecasting to generate high-quality and high-resolution predictions. ## Methodology We formulate the high-resolution weather forecasting problem as a combination of weather forecasting at low resolution and super-resolution to high resolution. The proposed model first recurrently forecasts the future atmospheric variables via a recurrent neural network and then reconstructs the high-resolution results from the predicted low-resolution counterparts via a super-resolution network. The recurrent neural network is built based on SwinRNN (Hu et al., 2022). We experimentally analyze the architecture of SwinRNN and design an improved version named SwinRNN+ which considerably improves the performance of SwinRNN at \(1.40625^{\circ}\) resolution. 
However, SwinRNN+ can hardly be directly trained at \(0.25^{\circ}\) because the training recurrent steps have to be reduced due to limited computational resources, which will lead to inferior performance. Thus, we adopt the super-resolution technique to achieve high-resolution prediction. To generate high-resolution and high-quality results at \(0.25^{\circ}\) resolution, we build our super-resolution model based on the diffusion model, which can help capture fine-grained scales compared to traditional super-resolution methods. The architecture of our method is demonstrated in Figure 1. ### Background on SwinRNN The SwinRNN mainly consists of a multi-scale encoder for historical context information extraction and a multi-scale decoder for hidden state propagation and variable prediction at each recurrent step. The atmospheric variables at each time step are stacked together with a shape \(C_{in}\times H\times W\), where \(C_{in}\) denotes the number of atmospheric variables, and \(H\times W\) denotes the global grid size. The stacked result can be regarded as a multi-channel frame similar to many computer vision tasks. The encoder takes \(k\) consecutive historical frames as input. It first employs a 3D convolutional network-based cube embedding block to project all frames to features with a size \(C\times H\times W\) and then extracts four-scale features \((h_{k}^{1},h_{k}^{2},h_{k}^{3},h_{k}^{4})\) via a hierarchical Swin Transformer. The historical context information is embedded in the features, and they are used to initialize the hidden states of the decoder at time step \(k\). Then in each future time step, the decoder takes as input the combination of hidden states \((h_{k}^{1},h_{k}^{2},h_{k}^{3},h_{k}^{4})\) and current frame \(x_{k}\), and outputs the updated hidden states \((h_{k+1}^{1},h_{k+1}^{2},h_{k+1}^{3},h_{k+1}^{4})\) for next time step and the predicted frame \(x_{k+1}\). The results of SwinRNN show that it achieves higher performance than the IFS on T2M and TP variables. Although SwinRNN achieves high performance on surface-level variables, it is only trained at \(5.625^{\circ}\) (32 \(\times\) 64) resolution and cannot compete with the IFS model on pressure-level variables such as Z500 and T850. Based on its Swin Transformer-based recurrent structure, we propose SwinRNN+ that is able to achieve superior performance on all key surface-level and pressure-level variables than the IFS at \(1.40625^{\circ}\) resolution. ### SwinRNN+ It is non-trivial to transfer the SwinRNN to high-resolution data even at \(1.40625^{\circ}\) (128 \(\times\) 256) since the memory consumption in the training stage is quadratic to resolution (e.g., 1 vs. 16 for \(5.625^{\circ}\) vs. \(1.40625^{\circ}\)) for the Swin Transformer architecture. Thus, it is important to balance the capacity of the network and the computational cost. The improvements of our SwinRNN+ over SwinRNN are two folds. First, we replace the multi-scale network with a single-scale network with higher feature dimensions. Second, we fuse the output features of all Swin Transformer layers in the decoder to generate the hidden states and the output predictions. #### Trade Multi-Scale Design for Higher Feature Dimensions. To increase the capacity of the network, a straightforward way is to increase the dimension of the feature. However, the memory cost increases dramatically with the increase of the feature dimension since there are several recurrent steps during training. 
We observed from (Hu et al., 2022) that SwinRNN benefits little from the multi-scale architecture, whereas the structure significantly increases the number of parameters and memory consumption. Thus, we conduct an ablation experiment to compare the performance of multi-scale and single-scale structures on different feature dimensions. The experimental results show that the multi-scale network generally achieves better performance compared to the single-scale network with the same feature dimensions. However, with the increase of the feature dimension, the performance gap between the two structures narrows rapidly, whereas the memory and the parameters of the multi-scale structure increase dramatically. Notably, the single-scale network with a high feature dimension shows better performance and higher efficiency compared to the multi-scale network with a low feature dimension. Thus, we conclude that the multi-scale architecture limits the potential of SwinRNN, and it is more effective to increase the feature dimension of the single-scale network rather than use a multi-scale structure. Figure 1: The SwinRDM consists of two parts: (a) the low-resolution forecasting model SwinRNN+ is an improved version of SwinRNN, which adopts a single-scale architecture and adds a multi-layer feature aggregation component, and (b) the diffusion-based super-resolution model conditions on the prediction \(x_{k+1}\) from SwinRNN+. #### Multi-Layer Feature Aggregation. Our second improvement is to aggregate features from multiple layers to learn the hidden states. As shown in Figure 1, the decoder fuses features from the 6 layers to update the hidden states via a convolutional layer, while the original SwinRNN only treats the features from the final layer as the hidden states. Our multi-layer feature aggregation network has two advantages compared to SwinRNN. First, the representation power of the hidden states is improved, which is beneficial to the prediction in the current time step and feature propagation to the next time step. Second, the gradient backward propagation path is reduced, and the information can be more easily propagated back to former time steps, which eases the optimization of the recurrent network. As shown in Figure 1, the proposed network consists of 6 Swin Transformer blocks with the same scale in both the encoder and decoder. To enable training on \(1.40625^{\circ}\) resolution data, we first use a patch size of \(2\times 2\) to split the image into non-overlapping patches. This is achieved by a convolutional layer with a kernel size of 2 and a stride of 2 in the cube embedding block. Thus, we have a hidden state \(h_{k}\) with a size of \(C\times H/2\times W/2\). In the decoder, the frame \(x_{k}\) is also embedded with a convolutional layer with the same settings, and \(x_{k+1}\) is predicted by a transposed convolutional layer. Features from all layers in the decoder are aggregated to increase the representation power.
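The multi-layer feature aggregation just described can be summarised in a short PyTorch sketch. Plain convolutional blocks stand in for the six Swin Transformer blocks, the way the hidden state and the embedded frame are combined (simple addition) is our assumption, and the 71 output channels match the variable count given in the Experiments section; this is a toy illustration, not the actual SwinRNN+ implementation.

```python
import torch
import torch.nn as nn

class AggregatingDecoder(nn.Module):
    """Toy stand-in for the SwinRNN+ decoder: six same-scale blocks whose outputs are all
    fused by a 1x1 convolution to form the next hidden state (multi-layer aggregation)."""

    def __init__(self, dim=512, num_layers=6, out_channels=71):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.GELU())
            for _ in range(num_layers)
        ])
        self.fuse = nn.Conv2d(dim * num_layers, dim, 1)            # aggregate all layer outputs
        self.head = nn.ConvTranspose2d(dim, out_channels, 2, 2)    # predict x_{k+1} at full grid

    def forward(self, h, x_embed):
        # Combining the hidden state with the embedded current frame by addition is an
        # assumption; the paper only states that the two are combined.
        feats, z = [], h + x_embed
        for layer in self.layers:
            z = layer(z)
            feats.append(z)
        h_next = self.fuse(torch.cat(feats, dim=1))   # hidden state built from all six layers
        return h_next, self.head(h_next)              # (next hidden state, predicted frame)

# One recurrent step on dummy tensors: the 1.40625-degree grid is 128x256 with patch size 2.
decoder = AggregatingDecoder()
h = torch.zeros(1, 512, 64, 128)
x_embed = torch.zeros(1, 512, 64, 128)
h_next, x_next = decoder(h, x_embed)   # x_next has shape (1, 71, 128, 256)
```

Because the new hidden state is a direct function of every layer's output, gradients from later recurrent steps reach earlier layers along a shorter path, which is the optimization benefit claimed above.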
### Diffusion Model for Super Resolution In order to achieve high-resolution weather forecasting, we integrate the SwinRNN+ with a super-resolution component based on the diffusion model since it is able to generate realistic images with rich details [14], which can help resolve fine-grained features and generate high-quality and high-resolution forecasting results. Diffusion models [13] are a class of generative models consisting of a forward process (or diffusion process) that destroys the training data by successive addition of Gaussian noise and a reverse process that learns to recover the data by reversing this noising process. More specifically, given a sample from the data distribution \(y_{0}\sim q(y_{0})\), the diffusion process is a Markov chain that gradually adds Gaussian noise to the data according to a fixed variance schedule \(\beta_{1},\cdots,\beta_{T}\): \[q(y_{t}|y_{t-1})=\mathcal{N}(y_{t};\sqrt{1-\beta_{t}}y_{t-1},\beta_{t}\mathcal{I}). \tag{1}\] If the magnitude \(\beta_{t}\) of the noise added at each step is small enough, and the total step \(T\) is large enough, then \(y_{T}\) is equivalent to an isotropic Gaussian distribution. It is convenient to produce samples from a Gaussian noise input \(y_{T}\sim\mathcal{N}(0,\mathcal{I})\) by reversing the above forward process. However, the posterior \(q(y_{t-1}|y_{t})\) needed for sampling is hard to compute, and we need to learn a model parameterized by \(\theta\) to approximate these conditional probabilities: \[p_{\theta}(y_{t-1}|y_{t})=\mathcal{N}(\mu_{\theta}(y_{t}),\Sigma_{\theta}(y_{t})). \tag{2}\] While there exists a tractable variational lower-bound on \(\log p_{\theta}(y_{0})\), better results arise from optimizing a surrogate denoising objective: \[\mathbb{E}_{\epsilon\sim\mathcal{N}(0,\mathcal{I}),t\sim[0,T]}\left[\|\epsilon-\epsilon_{\theta}(y_{t},t)\|^{2}\right], \tag{3}\] where \(y_{t}\sim q(y_{t}|y_{0})\) is obtained by applying Gaussian noise \(\epsilon\) to \(y_{0}\), and \(\epsilon_{\theta}\) is the model that predicts the added noise. Diffusion models can be conditioned on class labels, text, or low-resolution images [1, 13, 14, 15, 16, 17]. In our case, we condition the model on the low-resolution output of SwinRNN+ for the super-resolution task. During training, the low-resolution data \(x_{k}\) is generated on the fly, and the forecasting quality decreases with the lead time. To account for such variation, the model additionally conditions on the time step \(k\) of SwinRNN+. Thus, we have a posterior \[p_{\theta}(y_{k}^{(t-1)}|y_{k}^{(t)},x_{k},t,k), \tag{4}\] where \(y_{k}\) is the corresponding high-resolution target of \(x_{k}\). Instead of using the \(\epsilon\)-prediction formulation, we predict the original targets \(y_{k}\) directly, following [17]. The model acts like a denoising function, trained using a mean squared error loss: \[\mathbb{E}_{t\sim[0,T],y_{k}^{(t)}\sim q_{t}}\left[\|y_{k}-f_{\theta}(y_{k}^{(t)},x_{k},t,k)\|^{2}\right]. \tag{5}\] For the sampling process, given the same low-resolution input \(x_{k}\), the diffusion model can produce diverse outputs \(y_{k}\) starting from different Gaussian noise samples. Such a property makes it possible to perform ensemble forecasting, which is an effective way to improve forecast skill. Thus, we achieve super-resolution and ensemble forecasting at the same time with the diffusion model. ## Experiments ### Experimental Setup Dataset. We evaluate our proposed SwinRDM method on the ERA5 dataset [1] provided by the ECMWF. The ERA5 dataset is an atmospheric reanalysis dataset, which consists of atmospheric variables at a \(0.25^{\circ}\) spatial resolution from 1979 to the present day. Data from 1979 to 2016 are chosen as the training set, and data from 2017 and 2018 are used for evaluation, following [14]. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Dim. & Fusion & Z500(\(m^{2}s^{-2}\)) & T850(\(K\)) & T2M(\(K\)) & TP(\(mm\)) \\ \hline 128 & & 456 & 2.354 & 2.196 & 2.265 \\ 128 & βœ“ & 386 & 2.050 & 1.926 & 2.182 \\ 256 & & 394 & 2.092 & 1.957 & 2.207 \\ 256 & βœ“ & 371 & 1.971 & 1.843 & 2.148 \\ \hline \hline \end{tabular} \end{table} Table 1: The RMSE results of SwinRNN+ with and without multi-layer feature aggregation on different feature dimensions. We sub-sample the dataset at 6-hour intervals to train our model as in [10]. The input signal of the SwinRDM contains 71 variables, including geopotential, temperature, relative humidity, longitude-direction wind, and latitude-direction wind at 13 vertical layers, four single-layer fields (2m temperature, 10m wind, and total precipitation), and two constant fields (land-sea mask and orography). Evaluation Metrics. We follow [14] to evaluate forecast quality using latitude-weighted RMSE (root-mean-square error). In addition, we adopt the FrΓ©chet inception distance (FID) [1] to quantitatively assess the sample fidelity of the super-resolution outputs. Implementation Details. We train two models (i.e., SwinRNN+ and SwinRDM) end-to-end for 50 epochs with a batch size of 16. SwinRNN+ is trained on \(1.40625^{\circ}\) data, while SwinRDM takes \(1.40625^{\circ}\) data as input and outputs \(0.25^{\circ}\) predictions. During training, our models take 6 historical frames as input and recurrently predict 20 frames at 6-hour intervals. For SwinRDM, we randomly select one of these predicted frames to train the diffusion-based super-resolution model. The dimensions of the feature for the encoder and the decoder are set to 768 and 512, respectively. The cosine learning rate policy is used with initial learning rates of 0.0003 for SwinRNN+ and 0.0002 for the diffusion model. The models are optimized by AdamW using PyTorch on 8 NVIDIA A100 GPUs. For the diffusion model, we adopt the implementation in [12] and use 10 sampling steps to get decent results during inference. We compare three super-resolution methods: bilinear interpolation, SwinIR (Liang et al. 2021), and our diffusion model. Bilinear interpolation simply upsamples the low-resolution output. SwinIR is a state-of-the-art SR method constructed by several residual Swin Transformer blocks. All methods are jointly trained with the same low-resolution forecasting model SwinRNN+. In addition to the RMSE metric, FID is used here to evaluate the ability to generate realistic results. FID is computed over six variables by modifying the input channels and weights of the Inception V3 model (Szegedy et al. 2016). Diffusion-based SR achieves high visual quality. As can be seen from Table 3, there is little difference between the super-resolution methods in terms of the RMSE metrics of all variables. However, the FID metrics are significantly different. The bilinear interpolation obtains the worst FID score, and SwinIR slightly improves it by 8. Equipped with the diffusion-based super-resolution model, our SwinRDM considerably improves the FID score by 175, indicating that the diffusion-based super-resolution model can help generate high-quality and realistic results. SwinRDM* is a 10-member ensemble version of our SwinRDM, and it makes a good trade-off between the RMSE and FID. Figure 2 shows the FID scores for different lead times. For a data-driven recurrent forecasting model, the predictions may become smoother with the increase in the lead time.
The FID scores for bilinear interpolation and SwinIR increase with the forecast time, while our SwinRDM keeps low FID scores for lead times of up to 5 days. This again shows the good property of the diffusion-based super-resolution model. Although our SwinRDM* increases the FID scores of SwinRDM, it still maintains relatively stable FID scores compared with bilinear interpolation and SwinIR. High visual quality means high forecasting qualityThe qualitative results of different SR methods are shown in Figure 3. SwinRDM successfully captures small-scale structures and generates high-quality results at a 5-day lead time. By contrast, both Bilinear and SwinIR produce blurry results and fail to generate rich details. These details are essential for weather forecasting, especially for variables that have complex structures and variations, such as wind speed (WS) and TP. To further verify the superiority of our method, we calculate the critical success index (CSI) (Jolliffe and Stephenson 2012) for TP. The CSI can indicate the prediction performance under different thresholds. As shown in Table 4, SwinRDM surpasses the strong baseline SwinIR by a large margin, which increases the CSI2, CSI5, and CSI10 by 6%, 11%, 10%, respectively. For higher CSI thresholds (20mm and 50mm), only our SwinRDM and SwinRDM* can get decent results, indicating that our methods can forecast extreme precipitation more effectively. Thus, our diffusion-based super-resolution model can generate high-resolution and high-quality forecasting results. Comparison with State-of-the-art MethodsWe compare our methods with state-of-the-art operational IFS and data-driven methods. Since the high-resolution IFS results are not available online due to the data center mi \begin{table} \begin{tabular}{l c c c c c} \hline Methods & CSI2 & CSI5 & CSI10 & CSI20 & CSI50 \\ \hline Bilinear & 0.171 & 0.051 & 0.006 & 0.000 & 0.000 \\ SwinIR & 0.186 & 0.069 & 0.013 & 0.001 & 0.000 \\ SwinRDM & 0.246 & 0.179 & 0.111 & 0.046 & 0.010 \\ SwinRDM* & 0.262 & 0.190 & 0.123 & 0.049 & 0.006 \\ \hline \end{tabular} \end{table} Table 4: Critical success index (CSI) of six-hour accumulate total precipitation (TP) with different thresholds, i.e., 2 mm, 5 mm, 10 mm, 20 mm, and 50 mm. Figure 4: Comparison with IFS and FourCastNet on the test data of 2018. The RMSE of Z500 and T850 is shown. Figure 3: Visual comparison of example fields at the initialization time of June 4, 2018, 00:00 UTC. The first column shows the ERA5 fields for Z500, WS10, and TP at a lead time of 120 hours. The second to fourth columns represent the corresponding forecasting of different SR methods. \begin{table} \begin{tabular}{l c c c c} \hline Methods & Z500 & T850 & T2M & TP \\ \hline IFS & 154/334 & 1.36/2.03 & 1.35/1.77 & 2.36/2.59 \\ SwinRNN & 207/392 & 1.39/2.05 & 1.18/1.63 & 2.01/2.14 \\ SwinRNN+ & 152/316 & 1.12/1.75 & 0.99/1.42 & 1.88/2.07 \\ SwinRDM & 156/316 & 1.23/1.83 & 1.07/1.49 & 2.02/2.24 \\ SwinRDM* & **153/313** & **1.15/1.76** & **1.01/1.43** & **1.87/2.06** \\ \hline \end{tabular} \end{table} Table 5: Comparison with state-of-the-art IFS and SwinRNN. The RMSE scores for 3 and 5 days forecast times in the years 2017 and 2018 are shown. SwinRDM* is the 10-member ensemble version of SwinRDM. The units are the same as Table 1. gration currently, we regrid our results to \(5.625^{\circ}\) so that we can compare them to the IFS results provided by the WeatherBench [14]. 
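Returning briefly to the precipitation skill reported in Table 4: the critical success index has the simple closed form CSI = hits / (hits + misses + false alarms), where an "event" is six-hour accumulated precipitation exceeding a given threshold. The snippet below is an illustrative sketch of that computation only; the array shapes, the placeholder random fields, and the absence of any latitude weighting are assumptions made here rather than details taken from the paper.

```python
import numpy as np

def csi(pred_tp, true_tp, threshold_mm):
    """Critical success index for six-hour accumulated precipitation.

    pred_tp, true_tp : arrays of precipitation totals in mm (matching shapes)
    threshold_mm     : event threshold, e.g. 2, 5, 10, 20 or 50 mm as in Table 4
    """
    pred_event = pred_tp >= threshold_mm
    true_event = true_tp >= threshold_mm
    hits = np.sum(pred_event & true_event)
    misses = np.sum(~pred_event & true_event)
    false_alarms = np.sum(pred_event & ~true_event)
    denom = hits + misses + false_alarms
    return hits / denom if denom > 0 else np.nan

# Placeholder fields on a 0.25-degree-like grid, for illustration only.
rng = np.random.default_rng(0)
pred = rng.gamma(shape=0.5, scale=4.0, size=(721, 1440))
truth = rng.gamma(shape=0.5, scale=4.0, size=(721, 1440))
print({thr: round(float(csi(pred, truth, thr)), 3) for thr in (2, 5, 10, 20, 50)})
```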
The regridded predictions can be compared with the IFS results directly: as stated in [14], there is nearly no difference between evaluation at different resolutions, and we have also verified this statement. For data-driven methods, we choose SwinRNN [13] and FourCastNet [12] for comparison. To the best of our knowledge, SwinRNN is the best method at \(5.625^{\circ}\) resolution, and FourCastNet is the best method at \(0.25^{\circ}\) resolution. Table 5 shows the comparison results for the years 2017 and 2018. Our SwinRNN+ method is trained on \(1.40625^{\circ}\) resolution data, and it achieves significantly better performance than SwinRNN, showing the effectiveness of our improvements. SwinRNN+ is also the first method that can surpass the IFS on both the surface-level and pressure-level variables. Our SwinRDM*, trained for forecasting at \(0.25^{\circ}\) resolution, also shows better performance than the IFS at lead times of 3 days and 5 days. Specifically, at a lead time of 5 days, it outperforms the IFS by 21, 0.27, 0.34, and 0.53 in terms of Z500, T850, T2M, and TP, respectively. Since FourCastNet is only evaluated for the year 2018, we show the comparison results in terms of Z500 and T850 for the year 2018 in Figure 4. The results of FourCastNet are obtained from the original paper. As shown in the figure, our SwinRDM* method outperforms FourCastNet, indicating that our method achieves state-of-the-art performance at \(0.25^{\circ}\) resolution. Note that our SwinRDM* shows slightly lower performance than the IFS at lead times of less than 3 days. This may be attributed to the lower representational power of the encoder, since we find that the capacity of the encoder is important for short-range forecasting performance. How to better extract the historical context information is essential for the encoder, and we leave this for future research. ### Qualitative Illustration Figure 5 shows qualitative results of the proposed SwinRDM*. Our model is initialized on 8 September 2018, 06:00 UTC to forecast the near-surface wind speeds (WS) at lead times of 3 days and 5 days. The wind speeds are computed from the predicted zonal and meridional components of the wind velocity, i.e., \(WS=\sqrt{U_{10}^{2}+V_{10}^{2}}\). The ERA5 fields serve as the ground truth. As shown in the figure, our method can forecast wind speeds for up to 5 days with high resolution and high quality. Specifically, we can see from the zoom-in area in the figure that our method successfully forecasts and tracks Super Typhoon Mangkhut. The ability to forecast this kind of extreme event is highly beneficial for the mitigation of loss of life and property damage. Our method shows high forecasting accuracy and the ability to capture fine-scale dynamics at high resolution. ## Conclusion We propose a high-resolution data-driven medium-range weather forecasting model named SwinRDM by integrating SwinRNN+ with a diffusion-based super-resolution model. Our SwinRNN+ improves upon SwinRNN by trading the multi-scale design for higher feature dimensions and adding a feature aggregation layer, which achieves superior performance compared to the operational NWP model on all key variables at \(1.40625^{\circ}\) resolution and lead times of up to 5 days. Our SwinRDM uses a diffusion-based super-resolution model conditioned on the forecasting results of SwinRNN+ to achieve high-resolution forecasting.
The diffusion model helps generate high-resolution and high-quality forecasting results and can also perform ensemble forecasting. The experimental results show that our method achieves SOTA performance at \(0.25^{\circ}\) resolution. Figure 5: Qualitative illustration of a global near-surface wind forecast generated by our SwinRDM*. The prediction starts at the initial time of September 8, 2018, 06:00 UTC. The zoom-in area shows the beginning of Super Typhoon Mangkhut. Our method successfully forecasts Super Typhoon Mangkhut with high accuracy and rich fine-scale features.
2305.15222
Neural Summarization of Electronic Health Records
Hospital discharge documentation is among the most essential, yet time-consuming, documents written by medical practitioners. The objective of this study was to automatically generate hospital discharge summaries using neural network summarization models. We studied various data preparation and neural network training techniques that generate discharge summaries. Using nursing notes and discharge summaries from the MIMIC-III dataset, we studied the viability of automatically generating various sections of a discharge summary using four state-of-the-art neural network summarization models (BART, T5, Longformer and FLAN-T5). Our experiments indicated that training environments including nursing notes as the source, and discrete sections of the discharge summary as the target output (e.g. "History of Present Illness"), improve language model efficiency and text quality. According to our findings, the fine-tuned BART model improved its ROUGE F1 score by 43.6% against its standard off-the-shelf version. We also found that fine-tuning the baseline BART model with other setups caused different degrees of improvement (up to 80% relative improvement). We also observed that a fine-tuned T5 generally achieves higher ROUGE F1 scores than other fine-tuned models, and a fine-tuned FLAN-T5 achieves the highest ROUGE score overall, i.e., 45.6. For the majority of the fine-tuned language models, summarizing discharge summary report sections separately outperformed summarization of the entire report quantitatively. On the other hand, fine-tuning language models that were previously instruction fine-tuned showed better performance in summarizing entire reports. This study concludes that a focused dataset designed for the automatic generation of discharge summaries by a language model can produce coherent Discharge Summary sections.
Koyena Pal, Seyed Ali Bahrainian, Laura Mercurio, Carsten Eickhoff
2023-05-24T15:05:53Z
http://arxiv.org/abs/2305.15222v1
# Neural Summarization of Electronic Health Records ###### Abstract **Corresponding Authors:** Carsten Eickhoff, PhD 233 Richmond Street Providence, RI 02903 [email protected] (401)-863-9665 **Keywords:** summarization; medical report summarization; electronic health record; dataset design; artificial intelligence; deep learning ## Abstract ### Background Electronic Health Record (EHR) summarization is the process of condensing and extracting relevant information from EHRs to provide key patient health details in a concise manner. Such summaries are helpful in improving healthcare efficiency and decision-making for healthcare providers. ### Objective Hospital discharge documentation is among the most essential, yet time-consuming documents written by medical practitioners. The objective of this study was to automatically generate hospital discharge summaries using neural network summarization models. In particular, we studied various data preparation and neural network training techniques that generate discharge summaries. ### Materials and Methods Using nursing notes and discharge summaries from the MIMIC-III dataset, we studied the viability of the automatic generation of various sections of a discharge summary using four state-of-the-art neural network summarization models (BART, T5, Longformer and FLAN-T5). ### Results Our experiments indicated that training environments including nursing notes as the source, and discrete sections of the discharge summary as the target output (e.g. "History of Present Illness") improve language model efficiency and text quality. According to our findings, the fine-tuned BART model improved its ROUGE F1 score by 43.6% against its standard off-the-shelf version. We also found that fine-tuning the baseline BART model with other setups caused different degrees of improvement (up to 80% relative improvement). We also observed that a fine-tuned T5 generally achieves higher ROUGE F1 scores than other fine-tuned models and a fine-tuned FLAN-T5 achieves the highest ROUGE score overall, i.e., 45.6. ### Discussion This study demonstrates the general viability of the automatic generation of parts of the discharge summary; a key step in reducing the clerical burden on healthcare providers. For majority of the fine-tuned language models, summarizing discharge summary report sections separately outperformed the summarization the entire report quantitatively. On the other hand, fine-tuning language models that were previously instruction fine-tuned showed better performance in summarizing entire reports. ### Conclusion This study concludes that a focused dataset designed for the automatic generation of discharge summaries by a language model can produce coherent Discharge Summary sections. ## Introduction When patients leave the hospital, their discharge summary represents a key document in the Electronic Healthcare Record (EHR) describing relevant medical conditions, events, and planned interventions/treatments during their stay, as shown in Figure 3 and Table 3. Together with various other EHR notes, discharge summaries provide key medical information to future providers. As such, it is unsurprising that multiple studies have shown that medical professionals spend at least twice as much time on EHR documentation than on patient care [1, 2]. At a system level, most hospitals face major challenges around digitally exchanging healthcare information between institutions as well as public health agencies [3]. 
Common reported barriers include lack of capacity (e.g., technical support, staffing), interface-related issues (e.g. cost, complexity), vocabulary inconsistencies, and difficulty in extracting relevant information. To address these barriers, we investigate the effectiveness of modern natural language generation methods and data organization for fine-tuning and automatically generating various sections of a discharge summary report. This work represents a key step towards more efficient text summarization, and hopefully reducing the documentation burden faced by healthcare providers. Previous studies indicate that pre-processing of raw, non-annotated datasets is more complex than that of annotated datasets [4, 5, 6]. Annotated datasets generally include tags to identify concepts or human-written text summaries to describe the data content, while non-annotated datasets do not include human-generated labels. Since non-annotated datasets are more widely available, we utilize them by automatically creating annotated training and testing datasets for improving pre-trained language models. The language models used for text summarization usually have either of the two types of output - extractive summaries or abstractive summaries. The former is created using parts of sentences from original documents while the latter is based on the concepts captured, which is conveyed by the model using its vocabulary span. Neural Network language models are becoming increasingly popular for summarization tasks in both non-medical (e.g., general news [7]) and medical contexts ( [8, 6, 9]). In this study, we compare various data creation setups for training language models and combine them with a variety of different deep learning text summarization techniques to identify the most robust settings in terms of ROUGE metrics for the automatic composition of discharge summary text. We focus on generating text using nursing notes as inputs because they have a notable overlap in content with the information present in the discharge summaries. Concretely, we explore the following research questions: 1. What models and their training setups are best for achieving high-quality medical text summaries? 2. What parts of the discharge summary report can we automatically generate using nursing notes? ### Background and Significance #### 0.0.1 General Text Summarization In this section, we briefly review extractive and abstractive approaches that have been used in non-medical documents. It includes several models central to this paper. Recent works on text summarization algorithms can be broadly classified based on the type of summary generated - extractive and abstractive. Extractive summarization involves taking a subset of phrases and sentences from the input documents and concatenating them to form a summary. On the other hand, abstractive summarization algorithms produce summaries based on their own vocabulary and the concepts they associate with the input document, not necessarily using the exact words of a source article. One of the models in the extractive summarization category is the Luhn summarizer [10]. It selects sentences based on the maximum number of significant words present in a particular sentence. The significance of words is determined through Term Frequency - Inverse Document Frequency, also known as TF-IDF [11]. 
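As a concrete illustration of this kind of extractive scoring, the sketch below ranks sentences by how many "significant" words they contain, where significance is decided by a TF-IDF cut-off. It is a simplified stand-in for the Luhn summarizer discussed above, not a faithful reimplementation: the tokenizer, the significance threshold, and the summary length are arbitrary choices made here for illustration.

```python
import math
import re
from collections import Counter

def luhn_like_summary(documents, doc_index, num_sentences=3, top_fraction=0.2):
    """Return the sentences of documents[doc_index] containing the most high-TF-IDF words."""
    tokenized = [re.findall(r"[a-z']+", d.lower()) for d in documents]
    n_docs = len(documents)
    df = Counter(w for toks in tokenized for w in set(toks))   # document frequency
    tf = Counter(tokenized[doc_index])                         # term frequency in the target document

    tfidf = {w: tf[w] * math.log(n_docs / df[w]) for w in tf}
    if not tfidf:
        return ""
    cutoff_rank = max(1, int(len(tfidf) * top_fraction))
    cutoff = sorted(tfidf.values(), reverse=True)[cutoff_rank - 1]
    significant = {w for w, score in tfidf.items() if score >= cutoff}

    sentences = re.split(r"(?<=[.!?])\s+", documents[doc_index])
    scored = [(sum(w in significant for w in re.findall(r"[a-z']+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    best = sorted(scored, reverse=True)[:num_sentences]
    return " ".join(s for _, _, s in sorted(best, key=lambda item: item[1]))
```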
A common baseline model for extractive summarization is LEAD-3, which is another extractive summarization solution that takes the first three sentences of the input document and sets them as the document's summary. Another sub-category of extractive summary algorithms is topic-based approaches. For example, the model designed by Harabagiu et al. [12] represents topic themes based on events that frequently occur over a set of documentation. They illustrate five ways of determining such frequencies - topic signatures, enhanced topic signatures, thematic signatures, modeling documents' content structure, and templates. There are also graph-based [13, 14] approaches that use text representation in a graph where words or sentences are represented as nodes and semantically-related text elements are connected through edges. Finally, discourse-based approaches [15] integrate linguistic knowledge to represent the connections within and between sentences. A state-of-the-art model for abstractive text summarization is the Bidirectional and Auto-Regressive Transformer (BART) [16]. As the name suggests, BART employs a standard Transformer-based neural machine translation architecture with a bidirectional encoder and a left-to-right decoder. Its encoder behaves similarly to Bidirectional Encoder Representations from Transformers (BERT), another well-known transformer [17], while its decoding nature resembles that of Generative Pre-trained Transformers (GPT) [18, 19, 20]. Other similar recent language models include UniLM [21], MASS [22], and XLNet [23]. Apart from BART, other popular models include Text-To-Text Transfer Transformer (T5) [24] and Longformer [25]. T5 is an encoder-decoder model pre-trained on various text-based language tasks, with input-output definitions converted into a text-to-text format. Longformer also has a transformer architecture; however, it is modified to process lengthy document texts. FLAN-T5 [26] is an instruction-fine tuned T5 model. It is recent state-of-the-art model that was trained on 1000 additional tasks compared to the T5 model. Using these models, we aim to efficiently generate discrete segments of medical discharge summaries from nursing notes using abstractive summarization. In this study, we compare four representative models: BART, T5, FLAN-T5, and Longformer. We employed the BART framework since it has shown superior summarization performance on a number of non-medical benchmark datasets. The T5 model was selected as a candidate because it is an early seq2seq Transformer model which has been trained on a large amount of data for multiple natural language processing applications including summarization. We test the FLAN-T5 model since it has been trained on even more number of tasks than T5. We include both models because we are curious to understand how exactly a large language model such as FLAN-T5 is better than a language model like T5 when applied to the same tasks. Finally, we tested the Longformer because it distinguishes itself from the most Transformer-based models in that its capable of handling also lengthy texts. #### 0.0.2 EHR Summarization Due to information overload and time involved in preparing and utilizing EHR documents [27, 28, 29], healthcare providers report shrinking amounts of interaction with their patients. As a result, there has been an ongoing push for automated integration of clinical reports to produce detailed, yet concise, medical summaries [30, 31]. 
Current approaches within EHR summarization have mostly been extractive in nature, in which summarized text is directly taken from the original medical document. This approach ranges from extracting relevant sentences from the input text to create a summary [32], topic modeling using Bayesian network or Latent Dirichlet allocation [33], creating heuristic rules [34], to utilizing neural networks [8, 35]. On the other hand, there have been works on medical document summarization that are abstractive in nature [36]. Zhang et al. [37], for instance, utilized the findings and background section from chest x-ray radiology reports to generate an assessment section. MacAvaney et al. [38] furthered by including the encoding of an ontology report section to aid the decoding process. By doing so, they were able to create an ontology-aware clinical abstractive summarization model. To capture the complexity of long medical texts, while retaining the ability to generate abstract summaries, recent studies combined these techniques to create an extractive-abstractive pipeline for summarization. Shing et al. [39], for instance, uses a recall-oriented extractor to extract relevant sentences and then an abstractor component to remove irrelevant or duplicated information. Our dataset design setups include a similar pipeline. Instead of having a two component pipeline while summarizing an input text, we utilize raw clinical documents and an extractive approach to create source-target pairs. By doing so, we aim to train the main one-component abstractive model (BART, T5, FLAN-T5, or Longformer) to identify key sections of the source text to produce the intended target pair with additional information the model deems relevant from the nursing notes. There is a very recent work by Searle et. al [40] that focuses on generating Brief Hospital Course (BHC) text, which is a sub-section in the Discharge Summary Report. They use a novel ensemble model that incorporates a medical concept ontology and show that this model outperforms their baseline models. Our work looks into both partial and full discharge summarization with extractive-abstractive summarization pipelines (Setups 2 and 3 described in the following section) as well as only abstractive schemes. ## Methods and Materials ### Data Description The data for this study is from the Medical Information Mart for Intensive Care (MIMIC-III) database [41]. This relational database contains de-identified health data for over 40,000 patients admitted to critical care units in the Beth Israel Deaconess Medical Center between 2001 and 2012. We used the "noteevents" table, which contains notes written by a wide-range of healthcare providers. There are 15 types of notes present, including hospital discharge summaries, echocardiography reports, electrocardiography (ECG) interpretations, nursing notes, and many more. Among them, we focused on discharge summaries and nursing notes. We selected nursing notes as model inputs because they empirically demonstrated a substantial overlap in content with the information present in the discharge summaries. We then designed the following setups for training text summarization models: * Setup 1: For each patient, we gathered all the nursing notes and placed them together under the "source" column. The corresponding 'target' was set to be the most recent discharge summary for that patient. * Setup 2: For each patient, we combined the earliest and latest nursing notes to represent the'source' part of the source-target pair. 
To create the 'target,' we used the Luhn summarizer (reviewed in Section 2) on the patient's most recent discharge summary and set the generated texts as the respective 'target.' * Setup 3: The source setup is the same as setup 2. For the 'target,' we extracted the first three lines of each section within the most recent discharge summary report. * Setup 4: For each patient, the single most recent nursing note was considered as'source,' and the 'History of Present Illness' section in their most recent discharge summary was considered as 'target.' * Setup 5: This setup is similar to setup 4, except that we included the 'History of Present Illness' as well as the 'Discharge Instructions' sections as part of the respective 'target' text. Setups 1, 2, and 3 include \(6,157\) training data points, while setups 4 and 5 contain \(6,132\) and \(5,981\) training data points, respectively. For testing purposes, we withheld \(1,000\) patient data reports for each setup. The Python code written to generate these setups is available here1. The purpose of creating these five setups is to understand the input combination best understood by the models with respect to the expected output; a partial discharge summary. Setup 1 aims at using all the existing data, i.e., the entire history of nursing notes to generate a discharge summary report. This setup involves pre-processing in order to create the training dataset, meaning that the source includes all the raw nursing notes for a set of patients, and the target includes the discharge summary reports for the same set of patients. It features examining the usefulness of the entire history of nursing notes for each patient. At the same time, the generation target is the entire discharge summary. Setups 2 and 3 aim at augmenting the dataset with target summaries as opposed to the entire discharge summary, as an attempt to generate a short summary. In these setups, additional processing is required to populate both the source and the target text. These setups use the first and last nursing notes for each patient as the source document. In setup 2, the target summary is generated using the Luhn summarizer [10], while Setup 3 uses a modified version of the LEAD-3 pipeline, which pulls the first three lines of each section in the discharge summary report rather than the first three lines of the entire document. Both of these summarizers represent extractive summarization algorithms. Therefore, Setups 2 and 3 evaluate whether a combination of extractive and abstractive pipelines can generate better summary reports than a purely abstractive approach. Figure 1: **Training Dataset Creation Overview**: Within each setup, there is a source-target pair for each patient in the training data. The figures describe how each source-target pair is created with an example of how they look like. Setups 4 and 5 aim at narrowing their source texts to only the most recent nursing note. This is based on the assumption that the last nursing note is the most recent note which contains the latest and most relevant information regarding discharging a patient. Then, for Setup 4 the target output is one section of the discharge summary report, namely, 'History of Present Illness.' On the other hand, Setup 5 focuses on automatically generating two sections of the discharge summary report as the target, namely, 'History of Present Illness', as well as, the 'Discharge Instructions'. 
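To make the setup construction concrete, the following sketch builds Setup 4-style source-target pairs from a MIMIC-III NOTEEVENTS export. The column names, note categories, file path, and the regular expression used to pull out the section are assumptions of this illustration; the authors' released code (referenced in the footnote above) should be treated as the authoritative version.

```python
import pandas as pd

SECTION_PATTERN = r"History of Present Illness:\s*([\s\S]*?)(?:\n[A-Z][A-Za-z /]+:|\Z)"

def build_setup4_pairs(noteevents_csv="NOTEEVENTS.csv"):   # placeholder path
    """Pair each patient's most recent nursing note (source) with the
    'History of Present Illness' section of the most recent discharge summary (target)."""
    notes = pd.read_csv(noteevents_csv,
                        usecols=["SUBJECT_ID", "CHARTDATE", "CATEGORY", "TEXT"],
                        parse_dates=["CHARTDATE"])
    notes["CATEGORY"] = notes["CATEGORY"].str.strip()

    nursing = (notes[notes["CATEGORY"].isin(["Nursing", "Nursing/other"])]
               .sort_values("CHARTDATE").groupby("SUBJECT_ID").tail(1))
    discharge = (notes[notes["CATEGORY"] == "Discharge summary"]
                 .sort_values("CHARTDATE").groupby("SUBJECT_ID").tail(1))

    pairs = nursing.merge(discharge, on="SUBJECT_ID", suffixes=("_src", "_tgt"))
    pairs["target"] = pairs["TEXT_tgt"].str.extract(SECTION_PATTERN, expand=False)
    pairs = pairs.dropna(subset=["target"]).rename(columns={"TEXT_src": "source"})
    return pairs[["SUBJECT_ID", "source", "target"]]
```

For Setup 5, the same extraction would additionally pull the "Discharge Instructions" section and append it to the target text.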
The two sections of a discharge summary are the most text-rich sections, therefore, generating them may be easier automated using a language model. ### Metrics To quantitatively assess the summaries generated by each setup and model combination, we use the popular Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics, ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-L-SUM [42]. ROUGE-1 measures the number of matching unigrams between the generated text and the actual text. Meanwhile, ROUGE-2 measures the number of matching bigrams instead of unigrams. Instead of finding fixed n-gram matches, ROUGE-L computes the longest common subsequence (LCS) - i.e., the number of the longest shared word sequence, between the model output and the actual text. ROUGE-L-SUM makes this calculation and then applies it on a summary level. This means it splits the sentences in the text based on newline characters and takes the union LCS matches between actual (reference) text and every model-generated sentence. For these ROUGE scores, we calculate the precision and recall of the overlapped words between the generated and the original discharge summaries. We then measure \(F_{1}\) scores calculated using the following formula: \[F1\_score=\frac{2*Precision*Recall}{Precision+Recall} \tag{1}\] In Tables 1 and 2, we include two bold rows that are populated with ROUGE values. We display ROUGE scores, namely the precision, recall, and \(F_{1}\) scores for all setup and model combinations. Each value is approximated to two decimal places. The first bold row showcases the highest scores achieved in the given setup. The second bold row represents the average values of ROUGE scores for each metric. This is calculated by taking the average over all the mid scores recorded in all the models, i.e., the pre-trained and fine-tuned versions of the BART, Longformer, T5, and FLAN-T5 models. In order to verify our findings' robustness, we conduct statistical significance tests based on bootstrap re-sampling using the official ROUGE package [42]. ### Models To automate the text generation of discharge summary reports, we selected 4 text summarization models that would be trained and tested under the aforementioned setups. The models that are used for this purpose include BART [16], T5 [24], Longformer [25] and FLAN-T5 [26]. The rationale for the above model selection is: (1) BART represents the state of the art in abstractive summarization on different benchmark datasets among a few domains. Since nursing notes often have overlapping but not complete information, the selected summarization models must extract certain key information and simultaneously understand the overall idea described in the nursing notes. Analogous to the other two models, it is built by using a decoder similar to GPT [19] and an encoder similar to BERT [17]. However, it differentiates itself from the other two models by adding noise to the input data which has shown to help the model perform better in summarization tasks. (2) T5 represents a unified transferred learning framework that has been trained on very large datasets. Consequently, T5 can more easily adapt to most tasks than other models, such as text generation. T5 is therefore a preferred model to train and test on a language-based task such as EHR summarization. (3) The Longformer model can encode and decode multiple lengthy text documents. 
The previously-mentioned transformer-based models have a length limitation due to the quadratic increase in scale and complexity caused by their attention mechanisms. However, the Longformer model is specifically designed to process long documents. (4) FLAN-T5 represents an enhanced version of T5 since it is fine-tuned on more than 1000 additional tasks covered in more languages compared to T5. It is a recent open-source large language model that is comparable to performance of GPT [19] models. ## Results ### Quantitative Results In Tables 1 and 2, we evaluate the summaries generated by the four models, i.e. BART, T5, Longformer, and FLAN-T5, in their pre-trained and fine-tuned versions. The term "pre-trained version" signifies loading a pre-trained model (for instance, t5-base for T5), while the term "fine-tuned version" implies that we load a pre-trained model, train it further using our training dataset for a given data setup, and then finally test the newly trained model. For Setup 1, Table 1 indicates that a FLAN-T5 fine-tuned model achieves the highest scores in the ROUGE metric set. To understand whether this result is significant, we compare the FLAN-T5 fine-tuned version's low-percentile \(F_{1}\) score against the high-percentile \(F_{1}\) scores achieved by other models. We find that all these values are significantly higher than all the other models. Between T5 fine-tuned and BART fine-tuned models, the difference in the former's low-percentile scores and the latter's high-percentile scores is not significant at 0.1. This behavior is also reflected in Setups 2. Table 1 reflects the scores achieved by the models when they are trained and tested using Setup 2 data setup. Instead of FLAN-T5's fine-tuned \begin{table} \begin{tabular}{|c|l|l|c|c|c|} \hline **Setup No.** & **Model** & _rouge1_ & _rouge2_ & _rougeL_ & _rougeLsum_ \\ \hline \multirow{4}{*}{(1)} & BART (pre-trained) & 32.86 / 9.00 / 14.43 & 17.22 / 5.16 / 7.60 & 28.45 / 8.42 / 12.35 & 28.39 / 8.40 / 12.33 \\ \cline{2-6} & BART (fine-tuned) & 49.34 / 32.96 / 36.11 & 31.09 / 21.67 / 23.42 & 45.08 / 30.08 / 32.94 & 45.09 / 30.07 / 32.89 \\ \cline{2-6} & Longformer (pre-trained) & 11.97 / 42.3 / 5.60 & 5.72 / 17.66 / 2.40 & 10.62 / 3.68 / 4.89 & 10.62 / 3.67 / 4.87 \\ \cline{2-6} & All NN & Longformer (fine-tuned) & 37.73 / 25.10 / 27.20 & 20.30 / 13.93 / 14.95 & 31.84 / 20.06 / 21.92 & 31.80 / 20.89 / 22.72 \\ \cline{2-6} & Training target: & T5 (pre-trained) & 24.13 / 5.21 / 8.23 & 10.02 / 2.12 / 3.41 & 20.70 / 4.40 / 6.99 & 20.64 / 4.39 / 6.97 \\ \cline{2-6} & T5 (fine-tuned) & 53.81 / 32.30 / 38.31 & 33.67 / 19.97 / 23.84 & 49.08 / 29.38 / 34.90 & 49.06 / 29.33 / 34.88 \\ \cline{2-6} & FLAN-T5 (pre-trained) & 25.03 / 7.69 / 10.93 & 11.56 / 3.67 / 5.31 & 21.74 / 6.48 / 9.33 & 21.65 / 6.47 / 9.31 \\ \cline{2-6} & **FLAN-T5 (fine-tuned)** & **55.13 / 42.85 / 45.55** & **39.09 / 30.28 / 32.26** & **51.25 / 39.80 / 42.36** & **51.23 / 39.82 / 42.35** \\ \cline{2-6} & **Average Scores** & **36.25 / 20.03 / 23.31** & **21.08 / 12.32 / 14.15** & **32.35 / 17.79 / 20.71** & **32.31 / 17.88 / 20.79** \\ \hline \multirow{4}{*}{(2)} & BART (pre-trained) & 31.49 / 99.97 / 14.19 & 16.79 / 52.8 / 7.67 & 27.17 / 8.46 / 12.16 & 27.14 / 8.46 / 12.16 \\ \cline{2-6} & BART (fine-tuned) & 33.72 / 76.33 / 20.79 & 19.25 / 9.04 / 11.86 & 27.98 / 13.25 / 18.26 & 27.95 / 13.23 / 17.25 \\ \cline{2-6} & Longformer (pre-trained) & 11.58 / 4.13 / 5.49 & 5.32 / 1.73 / 2.34 & 10.15 / 3.57 / 4.75 & 10.16 / 3.58 / 4.75 \\ \cline{2-6} & Longformer 
(fine-tuned) & 26.90 / 15.36 / 18.20 & 12.33 / 6.51 / 8.02 & 21.07 / 11.26 / 14.33 & 21.07 / 11.27 / 13.80 \\ \cline{2-6} & Training target: & T5 (pre-trained) & 22.97 / 5.00 / 7.81 & 9.51 / 1.95 / 3.13 & 19.55 / 4.17 / 6.56 & 19.60 / 4.18 / 6.56 \\ \cline{2-6} & Luhm Summarized DS & **T5 (fine-tuned)** & **38.75 / 16.53 / 22.30** & **22.00 / 9.14 / 12.51** & **32.06 / 13.53 / 18.35** & **32.13 / 13.56 / 18.39** \\ \cline{2-6} & FLAN-T5 (pre-trained) & 28.90 / 9.00 / 12.87 & 14.46 / 4.57 / 6.66 & 24.81 / 7.56 / 10.90 & 24.79 / 7.57 / 10.90 \\ \cline{2-6} & Most recent DS & FLAN-T5 (fine-tuned) & 30.91 / 12.56 / 16.72 & 16.38 / 6.58 / 9.00 & 26.30 / 10.53 / 4.14 & 26.37 / 10.55 / 14.18 \\ \cline{2-6} & **Average Scores** & **28.15 / 11.11 / 14.18** & **14.51 / 5.60 / 7.65** & **23.64 / 9.44 / 12.43** & **12.52 / 9.05 / 17.25** \\ \hline \multirow{4}{*}{(3)} & BART (pre-trained) & 31.37 / 9.97 / 14.19 & 16.77 / 52.27 / 7.63 & 27.19 / 8.45 / 12.15 & 27.01 / 8.45 / 12.14 \\ \cline{2-6} & BART (fine-tuned) & 66.69 / 13.47 / 22.10 & 57.19 / 10.91 / 18.07 & 63.89 / 12.99 / 21.28 & 63.85 / 12.99 / 21.29 \\ \cline{2-6} & Longformer (pre-trained) & 11.59 / 4.15 / 5.50 & 5.31 / 1.73 / 2.35 & 10.13 / 3.58 / 4.76 & 10.15 / 3.58 / 4.75 \\ \cline{2-6} & Longformer (fine-tuned) & 66.44 / 13.46 / 22.07 & 56.95 / 10.87 / 18.00 & 63.53 / 12.93 / 21.18 & 63.60 / 12.95 / 21.22 \\ \cline{2-6} & Training target: & T5 (pre-trained) & 22.97 / 5.00 / 7.81 & 9.51 / 1.95 / 3.13 & 19.55 / 4.17 / 6.56 & 19.60 / 4.18 / 6.56 \\ \cline{2-6} & LEAD-3 DS & T5 (fine-tuned) & 63.40 / 13.10 / 22.18 & 53.14 / 10.27 / 16.90 & 60.42 / 15.21 / 20.31 & 60.47 / 12.54 / 20.36 \\ \cline{2-6} & FLAN-T5 (pre-trained) & **28.89 / 9.02 / 12.88** & 14.52 / 4.58 / 6.68 & 24.83 / 7.57 / 10.91 & 24.82 / 7.58 / 10.90 \\ \cline{2-6} & Mean-T5 (fine-tuned) & **71.35 / 10.25 / 15.73** & **61.86 / 6.83 / 10.53** & **68.34 / 9.32 / 14.40 & **68.36 / 9.29 / 1 version, T5's fine-tuned version achieves the highest scores in Setups 2, which are significant compared to all other models except for BART's fine-tuned version. This behavior is reflected in Setup 5 as well as shown in table 2. It means that T5 or BART can be applied to achieve comparatively high and similar ROUGE scores for these data setups. For Setup 3, Table 1 indicates that BART fine-tuned model achieves the highest scores in the ROUGE metric set. This result is significant against all pre-trained versions and FLAN-T5 fine-tuned version. T5 and Longformer fine-tuned models achieve similar results, and the high-percentile \(F_{1}\) ROUGE scores of these models are higher than the low-percentile \(F_{1}\) ROUGE scores of BART's fine-tuned version. This shows that we can interchangeably use any of T5, BART, and Longformer fine-tuned models to achieve similar ROUGE scores in this setup. For Setup 4, Table 2 indicates that BART fine-tuned model also achieves the highest scores in the ROUGE metric set. Similar to Setup 3, this result is significant against all pre-trained versions and mildly significant against the T5 fine-tuned version. However, unlike Setup 3, this model ROUGE values are not significant against FLAN-T5's but are significant against Longformer for ROUGE-1, ROUGE-L, and ROUGE-L-SUM scores, but not in terms of ROUGE-2 scores. Since the majority of scores are significant, the BART fine-tuned model scores can be considered significantly higher than Longformer's fine-tuned version. 
Hence, we can utilize T5, FLAN-T5, or BART to gain high ROUGE scores whilst generating a section of the discharge summary report using Setup 4. In addition to testing model performance for each setup, we evaluate the ease with which models train and generate summary text by comparing the average \(F_{1}\) values attained in each setup. Amongst Setups 1, 2, and 3 (full discharge summary output), Setup 2 achieves the lowest while Setup 1 achieves the highest. Between Setups 4 and 5 (partial discharge summary output) the difference between each setup ROUGE \(F_{1}\) scores are not much different. Hence, both have similar ease of training and summarizing medical texts. \begin{table} \begin{tabular}{l|l} \hline Actual Discharge Summary Report & Generated Text \\ \hline **Service:** MEDICINE & \\ **Allergies:** Penicillins / Latex / Sulfa & \\ (Sulfonamide Antibiotics) / Shellfish Derived & \\ **Major Surgical or Invasive Procedure:** & \\ s/p ERCP with metal stent placement [**2907-3-3**]. & \\ **History of Present Illness:** & \\ Ms. [**Kknown patient lastname 35261**] is a 77 & \\ year old female with pancreatic cancer with liver & \\ and lung metastases with recent failure to & \\ gemcitabine treatment who presents with new & \\ onset jaundice two days ago. She reports feeling & \\ fatigued and having a poor a[** Location **] ie & \\ ovr the last three days, and noted jaundice on & \\ the morning of [**2907-2-28**]. She denies fevers, & \\ but does report night sweats that have been & \\ going on for weeks. She had one episode of & \\ nausea and vomiting after taking a pain pill on & \\ the evening of the [**2907-3-1**]. She & \\ reports [**Location (un) 685**] colored & \\ stool but denies pruritis or dark colored & \\ urine. She has chronic abdominal pain related & \\ to her cancer, but does not report & \\ any change in her abdominal pain. She has never & \\ had an episode like this before.In the ED, & \\ initial vitals were T 99.8, HR 95, BP 180/106, & \\ RR 18, 99\% on RA. Her labs were notable & \\ for leukocytosis, elevated LFTs with an & \\ obstructive pattern. Her [**Location (un) **] u/s & \\ showed a common bile duct dilation with obstruction & \\ at the level of a large pancreatic mass. She was & \\ given Cipro/Flagyl and morphine for pain. ERCP & \\ was consulted in the ED and plans to peform & \\ ERCP on [**2907-3-3**]. Upon arrival to & \\ the [**Hospital Unit Name 2**], she was in & \\ no acute distress. & \\ **Past Medical History:** & \\ 1. Metastatic pancreatic cancer with liver mets, & \\ diagnosed [**10-11**]. She failed gemcitabine. & \\ She is currently enrolled in hospice but is also & \\ interested in considering treatment options. & \\ 2. Lupus - no current treatment & \\ 3. Hypertension. & \\ 4. Hypercholesterolemia. & \\ 5. GERD. & \\ 6. Hypothyroidism. & \\ 7. COPD. & \\ 8. History of prior TIAs. & \\ \(\cdots\) & \\ \hline \end{tabular} \end{table} Table 3: Example of T5’s Generated Text using data Setup 1. The actual Discharge Summary Report is on the left side of this table \begin{table} \begin{tabular}{l|l} \hline \hline Actual Discharge Summary Report & Generated Text \\ \hline Mr. [**Known patient lastname 35644**] is a 28 year old man from [**Country **]. He states had a fall from a skateboard at 6 m [**2784-10-5**] on his way to work (fishing company employee and some question as to if infection happened at work and was exercabated by fall). \\ Keep your incisions clean and dry &... 
\\ Please take all medication as prescribed & \\ If you have any increased redness, drainage, & Please shower daily including washing \\ or swelling, or &. \\ Please take only as directed and do not drive or & no baths or swimming \\ operate any machinery while taking this medication. & Monitor wounds \\ There is a 72 hour (Monday through Friday, & No driving for 1 month \\ 9am to 4pm) response time for prescription & Please call with any questions \\ refil requests. & or concerns [**Telephone/Fax (1) 181**] \\ \hline \hline \end{tabular} \end{table} Table 4: Example of T5’s Generated Text using data Setup 5. The actual Discharge Summary Sections are on the left side of this table Figure 2: **History of Present Illness Section Generation**: For each model that was fine-tuned to generate the β€˜History of Present Illness’ section, we qualitatively compare one of their test outputs amongst each other. Dark green matches the target text, i.e., History of Present Illness section. Light green represents text that match with the source/input text, but not the target text. Orange text is not present in both source and target text. Lastly, Red text is categorically present in the target text, but does not match in number, detail, or other specifics. It also recognizes that certain numbers are important. For instance, it kept some lab result display results such as 99.8, HR 95, and BP 180/160. However, there are other instances where the service type or other information can be wrong. To understand how often an information is accurate, we calculate the accuracy of information for a particular section, namely "Service." Amongst the model's generated texts, 508 out of 1000 have "Service" sections described. 282 out of 508 (55%) have the right service type described. This implies that even if the model recognizes this section, it may not always identify the actual type of service conducted on the patient. These behavior traits also appear in other setup and model combinations. Table 4 takes a closer look at an example taken from one of the generated test outputs by the highest performing model for setup 5, which is T5. This instance illustrates that the model was able to determine the content of two sections, i.e., "History of Present Illness" and "Discharge Instruction" as intended with this particular data setup. While both sections are generally accurate, the latter section has general instructions that may or may not represent the patient's actual discharge instructions. Nevertheless, this example shows promise with respect to generating sections within the Discharge Summary Report. In Figure 2, we compare instances of generated summaries by each of the fine-tuned models, which were fine-tuned to generate "History of Present Illness section." We color sentences that match the source text and/or the target text to visually understand how much overlap there is amongst each text. Between the input (source) and the reference output (target), the former is almost entirely present in the latter's content. This implies that nursing notes do have content that is ultimately utilized in the discharge summary report. As for the model outputs, BART, T5, and FLAN-T5 recognize such content overlap behavior as it copies the most of the source output as part of its generated summary. Amongst all model outputs, Longformer seems to hallucinate the most since we find information that is present in neither of the source and target texts. 
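The precision/recall/F1 triplets reported in Tables 1 and 2 are of the kind produced by standard ROUGE tooling. The paper's significance tests rely on the official ROUGE package [42]; purely for orientation, the snippet below shows how such scores can be obtained with the open-source `rouge-score` library, using placeholder texts (exact values may differ slightly between implementations).

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(
    ["rouge1", "rouge2", "rougeL", "rougeLsum"], use_stemmer=True
)

reference = "Patient admitted with jaundice.\nERCP with stent placement performed."
generated = "Patient was admitted with new onset jaundice.\nERCP was performed."

# rougeLsum expects sentences separated by newlines, as above.
scores = scorer.score(reference, generated)
for name, s in scores.items():
    print(f"{name}: P={s.precision:.3f} R={s.recall:.3f} F1={s.fmeasure:.3f}")
```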
## Discussion ### Clinical Importance and Implications Hospital discharge summaries are a key source of patient information. Writing good discharge summaries requires considerable provider resources. Without automated generation, healthcare providers will continue to face a growing burden of documentation, resulting in delayed report generation as well as decreased face-to-face interaction with patients. Our study benchmarks various language models and their data setups to find those settings that automatically generate key sections of this document with decent ROUGE scores. Based on the results presented in the previous section, we conclude that FLAN-T5 can generate the discharge summary with the highest ROUGE score across all models and setups. As setups 1, 2, and 3 share the same generation target, their corresponding ROUGE performance metrics can be directly compared. This implies that setup 1 is on average a better direction to follow than setups 2 and 3. Moreover, we observe that, contrary to the common perception in standard summarization tasks such as news summarization, where LEAD-3 is a very strong baseline, for EHR summarization as tested in setup 3 the LEAD-3 baseline does not perform well when training a model on it and testing that model against the ground-truth discharge summaries. That is a major difference between summarization of EHRs and other types of documents, such as news articles, where the first few sentences of a document present its main gist. Furthermore, our experiments in setups 4 and 5 demonstrated the viability of generating the history of present illness and the discharge instructions with high quality in terms of ROUGE performance. ### Applying models in practice As shown in the results section, the generated text did not always match the actual circumstances. Since these models were evaluated with the intent of summarizing medical documents, their ROUGE scores indicate promising discharge summaries. However, there is a need for further assessment in terms of factual correctness in all instances. The generated summaries are generally accurate, but there are still cases in which the model's inference is not fully correct. Hence, we should look for ways to add other metrics or algorithms to produce factually correct information more frequently. By doing so, we can learn to trust the generation process more than we do now. ### Limitations and Future Extensions In the experiment setup, we solely focused on nursing notes as inputs to models for producing discharge summaries. However, the MIMIC-III database contains other types of provider documentation - such as physician notes - that may be used in the construction of discharge summaries. Hence, as future extensions, we can explore whether a combination of these notes or some other sets of text improves the overall discharge summary generation with better factual correctness. Furthermore, we plan to extend this work to generate topic-based [43, 44] summaries focusing on one organ at a time to construct a coherent summary. ## Conclusions This article benchmarks various training regimes and models to automatically generate EHR Discharge Summaries. In terms of data training setup, we find that utilizing full nursing notes (setup 1) and focusing on generating specific sections (setups 4 and 5) allows consistent improvement in text summarization by most language models, especially for models such as BART and T5.
Amongst the pre-trained models, we find that FLAN-T5 can be more reliable to produce unseen EHR summarizations. In future work, we encourage the research community to continue its collaboration with medical professionals to create summarization-based medical datasets and even better summarization models to further enable more readable and accurate medical documents. **Admission Date:** [Date of Admission] **Date of Birth:** [Patient's Date of Birth] **Service:** [Specifichospital service or unit that the patient was admitted to during hospitalization] **Allergies:** [Patient's known allergies and their related severity and treatment plans] **Attending:** [Physician primarily responsible for the patient's care during hospitalization] **Chief Complaint:** [Reason why patient's sought medical attention and their related symptoms] **Major Surgical or Invasive Procedure:** [Any significant procedures or surgeries that the patient went through during their hospitalization] **History of Present Illness:** [Detailed account of patient's symptoms and medical conditions up to their current admission to the hospital] **Past Medical History:** [Patient's medical history prior to admission] **Social History:** [Patient's social and lifestyle factors that could be relevant to their medical care] **Family History:** [Patient's family medical history] **Physical Exam:** [Detailed account of patient's physical examination performed by healthcare provider during their hospitalization] **PertinentResults:** [Summary of significant lab tests, imaging studies or other diagnostic tests performed during their hospitalization and are related to their current illness or medical condition] **Brief Hospital Course:** [Summary of patient's hospitalization, including diagnosis, treatment, and treatment response] **Medications on Admission:** [Patient's medication list at the time of admission to the hospital] **Discharge Medications:** [Patient's medication list prescribed at the time of discharge from the hospital] **Discharge Disposition:**[Location or facility that the patient was discharged from the hospital] **Discharge Diagnosis:** [Primary reason for patient's hospitalization and main medication condition for which they received treatment] **Discharge Condition:** [Patient's overall health status at the time of their discharge from the hospital] **Discharge Instructions:** [Information provided to patients upon their discharge from the hospital] **Followup Instructions:** [Recommended follow-up care for the patient after they leave the hospital] **Completed by:** [Name and position of the healthcare provider that completed and signed off the report] Figure 3: **Discharge Summary Report Template:** A general template of Discharge Summary Report present in the MIMIC-III dataset. ## Funding Statement This research is supported in part by the SNSF (P2TIP2_187932) and grant T32DA013911 from the National Institute on Drug Abuse, of the NIH. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of SNSF, or NIH. ## Competing Interests Statement The authors have no competing interests to declare. ## Contributorship Statement C.E. and S.B. contributed to the research idea. K.P. and S.B. worked on designing the methodology and experiments. K.P. implemented the data processing, modeling, and data analysis. K.P., S.B., L.M., and C.E. discussed the results and contributed to the final manuscript. 
## Data Availability The data for this research was gathered from the MIMIC-III dataset. The data organized for all the setups in this article will be released.
2304.02477
Sharp-interface limit of a multi-phase spectral shape optimization problem for elastic structures
We consider an optimization problem for the eigenvalues of a multi-material elastic structure that was previously introduced by Garcke et al. [Adv. Nonlinear Anal. 11 (2022), no. 1, 159--197]. There, the elastic structure is represented by a vector-valued phase-field variable, and a corresponding optimality system consisting of a state equation and a gradient inequality was derived. In the present paper, we pass to the sharp-interface limit in this optimality system by the technique of formally matched asymptotics. Therefore, we derive suitable Lagrange multipliers to formulate the gradient inequality as a pointwise equality. Afterwards, we introduce inner and outer expansions, relate them by suitable matching conditions and formally pass to the sharp-interface limit by comparing the leading order terms in the state equation and in the gradient equality. Furthermore, the relation between these formally derived first-order conditions and results of Allaire and Jouve [Comput. Methods Appl. Mech. Engrg., 194 (2005), pp. 3269--3290] obtained in the framework of classical shape calculus is discussed. Eventually, we provide numerical simulations for a variety of examples. In particular, we illustrate the sharp-interface limit and also consider a joint optimization problem of simultaneous compliance and eigenvalue optimization.
Harald Garcke, Paul HΓΌttl, Christian Kahle, Patrik Knopf
2023-04-05T14:51:25Z
http://arxiv.org/abs/2304.02477v2
# Sharp-Interface Limit ###### Abstract. We consider an optimization problem for the eigenvalues of a multi-material elastic structure that was previously introduced by Garcke et al. [_Adv. Nonlinear Anal._ 11 (2022), no. 1, 159-197]. There, the elastic structure is represented by a vector-valued phase-field variable, and a corresponding optimality system consisting of a state equation and a gradient inequality was derived. In the present paper, we pass to the sharp-interface limit in this optimality system by the technique of formally matched asymptotics. Therefore, we derive suitable Lagrange multipliers to formulate the gradient inequality as a pointwise equality. Afterwards, we introduce inner and outer expansions, relate them by suitable matching conditions and formally pass to the sharp-interface limit by comparing the leading order terms in the state equation and in the gradient equality. Furthermore, the relation between these formally derived first-order conditions and results of Allaire & Jouve [_Comput. Methods Appl. Mech. Engrg._, 194 (2005), pp. 3269-3290] obtained in the framework of classical shape calculus is discussed. Eventually, we provide numerical simulations for a variety of examples. In particular, we illustrate the sharp-interface limit and also consider a joint optimization problem of simultaneous compliance and eigenvalue optimization. Keywords. Shape and topology optimization; structural optimization; eigenvalue problem; sharp-interface limit; formally matched asymptotics; phase-field models; linear elasticity. AMS Subject Classifications. 35C20, 35P05, 35R35, 49Q10, 49R05, 74B05, 74P05, 74P15. ## 1. Introduction The goal of structural shape and topology optimization is to find the optimal distribution of materials in a prescribed region, the so-called design domain. Here, in addition to pure shape optimization, also the topology of the structure is to be optimized. This includes the formation of holes (void regions) in the structure as well as the merging and splitting of connected material components. In many applications, certain properties of the materials (such as their elastic properties) as well as additional side conditions (e.g., volume constraints or support conditions) need to be taken into account within the optimization problem. Besides the optimization of shape and topology, the optimization of eigenvalues is an important task in engineering science to make structures robust against vibrations. It has been observed that structures are less susceptible against vibrations if their principal eigenvalue is large, see [18, Section 2], [4] and also [45] for concrete examples and further references. Heuristically, this can be explained by the fact that larger principal eigenvalues are associated with higher temporal frequencies which correspond to smaller wavelengths of the oscillations. The traditional mathematical tool to handle shape optimization problems is the calculus of shape derivatives based on boundary variations (see, e.g., [5, 66, 35, 59, 67, 6]). However, frequent remeshing leads to high computational costs and it cannot deal with topological changes, see also [62] for a comprehensive discussion. In some situations, it is possible to handle topology changes by means of homogenization methods (see, e.g., [3]) or variants of this approach such as the SIMP method (see, e.g., [18, 26]). 
A drawback of this method occuring in applications to spectral problems is the phenomenon of so-called _localized eigenmodes_ (also often referred to as _spurious eigenmodes_), see [7, 18, 29, 63]. In this context localized eigenmodes are eigenfunctions which are supported only in the void regions and pollute the spectrum with low eigenvalues. Especially in recent times, the level-set method has become a popular approach for topology optimization problems. After the method was developed in [61], it has been used extensively in the literature (see, e.g., [7, 10, 30, 55, 60, 62]). Although the level-set method is capable of dealing with topological changes, difficulties can arise if voids are to be created. In this paper, we consider an optimization problem that was introduced in [45]. There, the authors employed a different method to optimize the shape and the topology as well as a finite selection of eigenvalues of an elastic structure, namely the so-called _(multi-)phase-field approach_. This method for shape and topology optimization was first developed in [26] and subsequently used frequently in the literature. We refer the reader to [11, 19, 20, 22, 27, 31, 32, 34, 36, 37, 38, 58, 64] to at least mention some of the various contributions. In [45], an elastic structure consisting of \(N-1\) materials is described by a _multi-phase-field variable_. This is a vector-valued function \(\boldsymbol{\varphi}:\Omega\to\mathbb{R}^{N}\) whose components \(\varphi^{1},...,\varphi^{N-1}\) represent the volume fractions of the materials, and \(\varphi^{N}\) represents the void (i.e., the region where no material is present). In particular, the components of \(\boldsymbol{\varphi}\) are restricted to attain their values only in the interval \([0,1]\). In most parts of the design domain, the materials are expected to appear in their pure form, meaning that the corresponding component of the multi-phase-field \(\boldsymbol{\varphi}\) attains the value one, whereas all other components are zero. These regions are separated by _diffuse interfaces_, which are thin layers between the pure phases whose thickness is proportional to a small parameter \(\varepsilon>0\). In particular, \(\boldsymbol{\varphi}\) is expected to exhibit a continuous transition between the values zero and one at these diffuse interfaces. The main advantage of the phase-field approach in the context of shape and topology optimization is that topological changes (such as merging or splitting of material components or the creation of holes) during the optimization process can be handled without any problems. The optimization problem in [45] is formulated as a minimization problem for an objective functional which involves a selection of eigenvalues as well as a Ginzburg-Landau type penalisation term for the phase-field. For this problem, the existence of at least one global minimizer was established and a first-order necessary optimality condition for local minimizers was derived. A detailed mathematical formulation of the optimization problem from [45] will be presented in Section 2. The main goal of this paper is to derive the _sharp-interface limit_ of the aforementioned optimization problem from [45]. This means that we want to send the parameter \(\varepsilon\), that is related to the thickness of the diffuse interface, to zero. In this way, we can relate the diffuse-interface approach from [45] to the physically reasonable scenario of sharp-interfaces. 
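For orientation, a Ginzburg-Landau penalisation of the type mentioned above has, in its generic form, the structure \[E_{\varepsilon}(\boldsymbol{\varphi})=\int_{\Omega}\Big(\frac{\varepsilon}{2}\,|\nabla\boldsymbol{\varphi}|^{2}+\frac{1}{\varepsilon}\,\psi(\boldsymbol{\varphi})\Big)\,\mathrm{d}x,\] where \(\psi\) denotes a multi-well potential whose minima correspond to the pure phases; this display is meant only as an illustration, and the precise functional from [45] is recalled in Section 2. Heuristically, as \(\varepsilon\to 0\) this energy concentrates on the phase boundaries and behaves, in the spirit of Modica-Mortola, like a weighted perimeter of the interfaces, which is the intuition behind the sharp-interface limit studied in this paper.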
In particular, one of our key goals is to show that minimizers of the problem in the diffuse-interface framework converge to minimizers of a corresponding sharp-interface optimization problem. Qualitatively, there are two ways to deal with this passage to the limit: the rigorous investigation of the \(\Gamma\)_-limit_ of the involved cost functional, and the formal method of _matched asymptotic expansions_. For a rigorous discussion of the sharp-interface limit of diffuse-interface models describing elastic systems, we refer the reader to [8, 21]. There, the void is modeled as a further material having low but non-degenerate stiffness, which is crucial for the analysis. Up to now, to the best of our knowledge, there is no rigorous \(\Gamma\)-limit analysis for spectral problems in the case of degenerating stiffness in the void regions. As a first step towards the task of dealing with this delicate problem, the sharp-interface \(\Gamma\)-limit for an optimization problem involving a selection of eigenvalues of the Dirichlet Laplacian was rigorously established in [44]. A relation between the minimization of the principal eigenvalue of the Dirichlet Laplacian on the phase-field level and the Faber-Krahn inequality on the sharp-interface level was discussed in [54]. In order to understand the sharp interface limit, we thus intend to apply the technique of _formally matched asymptotic expansions_ on the optimization problem from [45]. This technique has already been employed on different phase-field models (especially of Allen-Cahn or Cahn-Hilliard type), see, e.g., [1, 2, 16, 20, 28, 42, 46, 47, 53]. For comprehensive overviews of this technique, we refer to [39, 41, 56]. The basic strategy of this formal approach is as follows: We assume that the phase-field as well as the corresponding eigenvalues and eigenfunctions each possess an _inner asymptotic expansion_ and an _outer asymptotic expansion_, both given by a power series with respect to the interface parameter \(\varepsilon\). The inner expansions approximate the aforementioned quantities "close" to the diffuse interface where the phase-transition takes place, whereas the outer expansions approximate these quantities in regions that are "far" away from the interface where only the pure phases are present. Plugging the outer expansions into the eigenvalue equation on the diffuse-interface level, a comparison of the leading order terms leads to limit eigenvalue equations on the sharp-interface level. At this point we will include a discussion about localized eigenmodes. As also mentioned above, in numerical simulations the formation of eigenmodes that are supported only in void areas and produce eigenvalues which pollute the low part of the spectrum (which we are interested in) is a major problem. We will see that our asymptotic approach is able to deal with such localized eigenmodes. More precisely, we will see that if such modes appear, then the corresponding eigenvalues will diverge to infinity as \(\varepsilon\to 0\). Thus, if \(\varepsilon>0\) is sufficiently small, localized eigenmodes do not affect the lower part of the spectrum that is considered in our optimization problem. The inner expansions are used to describe the aforementioned quantities in tubular neighborhoods around the interfaces. The distinction between inner and outer regions is needed as we expect the phase-field to change its values rapidly in regions close to the interface. 
This is because the diffuse interface will become infinitesimally thin as \(\varepsilon\to 0\). Here, the main idea is to introduce a rescaled coordinate system which takes the \(\varepsilon\)-scaling of this region into account. After studying these two forms of expansions separately, it is crucial to match both expansions in a so-called intermediate region. This means we compare both expansions by exploiting the two different coordinate systems they are formulated in. Plugging these relations into the optimality system and comparing the leading order terms, we obtain boundary conditions for the previously obtained limit eigenvalue equations. We observe that the boundary condition on the free boundary will essentially be of homogeneous Neumann type. Furthermore, we use the inner expansions to derive a limit equality from the strong formulation of the gradient inequality. The limit eigenvalue equations together with this gradient equality will then constitute the optimality system of a corresponding sharp-interface optimization problem. This will be justified from the viewpoint of classical shape calculus (see, e.g., [7]) by relating the limit of the gradient inequality to the shape derivative of the associated cost functional. However, in order to apply the technique of formally matched asymptotics, we first need to reformulate the gradient inequality on the diffuse-interface level as a pointwise gradient equality by introducing suitable Lagrange multipliers. Under an additional regularity assumption on the involved eigenfunctions, this is achieved by employing a regularization technique following the ideas of [23], in which we eventually pass to the limit. A key benefit of this strategy is that it provides an explicit construction of the Lagrange multipliers arising from the constraints of our optimization problem. This specific knowledge about the Lagrange multipliers will turn out to be essential for the asymptotic analysis. As a byproduct, we also prove that the phase-field variable \(\boldsymbol{\varphi}\) solving the original gradient inequality is actually \(H^{2}\)-regular under the aforementioned assumption of suitably regular eigenfunctions. The present paper is structured as follows. In Section 2, we first introduce the theory that is necessary to formulate the diffuse-interface optimization problem along with its first-order necessary optimality condition. The derivation of the strong formulation of the gradient inequality will then be performed in Section 3. Using the outer expansions, we derive the state equations of the limit problem in Section 4.1. In order to construct inner expansions we first analyze in Section 4.3 how the involved differential operators are reformulated in a suitable rescaled coordinate system. Suitable matching conditions connecting the outer expansions with the inner expansions are then derived in Section 5. In Section 6, we use the inner expansions to derive boundary conditions on the free boundary in the sharp-interface setting as well as the sharp-interface limit of the gradient inequality. Then, in Section 7, we comprehensively state the limit optimality system, and in Section 8, the first-order necessary optimality condition on the sharp-interface level is related to classical shape calculus. Eventually, in Section 9, we present several numerical solutions for concrete optimization problems on the diffuse-interface level. In this context, we also discuss suitable choices of the model parameters. 
In particular, we observe that our results using the phase-field approach compare very well to similar numerical results obtained in [7] by means of the level-set method combined with classical shape calculus on the sharp-interface level.

## 2. Formulation of the problem

In this section, we recall the framework introduced in [45] in order to formulate and understand the optimality system our analysis is based on. Therefore, we first introduce the key assumptions which shall hold throughout the paper.

### General assumptions

(A1) The _design domain_ \(\Omega\subset\mathbb{R}^{d}\) is a bounded Lipschitz domain with \(d\in\mathbb{N}\) and outer unit normal vector field \(\mathbf{n}\). Its boundary is split into two disjoint parts: a homogeneous Dirichlet boundary \(\Gamma_{D}\) with strictly positive \((d-1)\)-dimensional Hausdorff measure and a homogeneous Neumann boundary \(\Gamma_{0}\). We define \[H^{1}_{D}(\Omega;\mathbb{R}^{d})\coloneqq\left\{\boldsymbol{\eta}\in H^{1}(\Omega;\mathbb{R}^{d})\;\middle|\;\boldsymbol{\eta}=\boldsymbol{0}\text{ a.e. on }\Gamma_{D}\right\}.\]

(A2) The potential \(\psi:\mathbb{R}^{N}\to\mathbb{R}_{+}\cup\{+\infty\}\) attains exactly \(N\) global minima, each of value \(0\), at the points \(\boldsymbol{e}_{i}\), i.e., \[\min\psi=\psi(\boldsymbol{e}_{i})=0\quad\text{for all }i\in\{1,...,N\},\] where \(\boldsymbol{e}_{i}\in\mathbb{R}^{N}\) denotes the \(i\)-th standard basis vector in \(\mathbb{R}^{N}\). Additionally, we assume \(\psi\) to be decomposed into \(\psi(\boldsymbol{\varphi})=\psi_{0}(\boldsymbol{\varphi})+I_{\boldsymbol{G}}(\boldsymbol{\varphi})\) with \(\psi_{0}\in C^{1}(\mathbb{R}^{N},\mathbb{R})\) and the indicator functional \[I_{\boldsymbol{G}}(\boldsymbol{\varphi})=\begin{cases}0&\text{if }\boldsymbol{\varphi}\in\boldsymbol{G},\\ +\infty&\text{otherwise},\end{cases}\] where \(\boldsymbol{G}:=\mathbb{R}^{N}_{+}\cap\Sigma^{N}\) with \[\Sigma^{N}:=\left\{\boldsymbol{\xi}\in\mathbb{R}^{N}\;\middle|\;\sum_{i=1}^{N}\xi^{i}=1\right\}, \tag{2.1}\] \[\mathbb{R}^{N}_{+}:=\left\{\boldsymbol{\xi}\in\mathbb{R}^{N}\;\middle|\;\forall i\in\{1,\ldots,N\}:\;\xi^{i}\geq 0\right\}. \tag{2.2}\] The set \(\boldsymbol{G}\) is referred to as the _Gibbs simplex_. A prototype example for the continuous part \(\psi_{0}\) would be \(\psi_{0}(\boldsymbol{\varphi})=\frac{1}{2}(1-\boldsymbol{\varphi}\cdot\boldsymbol{\varphi})\) (cf. [20]).

(A3) The function \(\Psi:\left(\mathbb{R}_{>0}\right)^{l}\to\mathbb{R}\) is continuous.

### The phase-field variable

To describe the material distribution of \((N-1)\) different materials in the design domain \(\Omega\), we introduce the phase-field \(\boldsymbol{\varphi}:\Omega\to\mathbb{R}^{N}\). Its components \(\varphi^{i}\), \(i=1,...,N-1\) represent the materials, whereas \(\varphi^{N}\) represents the void. We expect \(\boldsymbol{\varphi}\) to _continuously_ change its values at the diffuse interface. From a physical point of view, this means that the considered materials can be mixed at the interfacial region. In order for the phase-field to behave in a physically reasonable way we impose suitable constraints. First of all, we fix the total amount of each material by the mean value constraint \[\fint_{\Omega}\boldsymbol{\varphi}\,\mathrm{d}x=\boldsymbol{m}=\big(m^{i}\big)_{i=1}^{N}, \tag{2.3}\] with \(m^{i}\in(0,1)\) and \(\boldsymbol{m}\in\Sigma^{N}\) (cf. (2.1)). The constraint \(\boldsymbol{m}\in\Sigma^{N}\) is a consequence of the physical assumption that the individual volume fractions \(\varphi^{i}\) need to sum up to \(1\) at each point in the domain. 
Furthermore, it is physically reasonable to assume that each volume fraction shall attain its values only in the interval \([0,1]\). This property is incorporated by assuming that any \(\boldsymbol{\varphi}\) belongs to the _set of admissible phase-fields_ \[\boldsymbol{\mathcal{G}}^{\boldsymbol{m}}=\left\{\boldsymbol{\varphi}\in\boldsymbol{\mathcal{G}}\,\middle|\,\fint_{\Omega}\boldsymbol{\varphi}\,\mathrm{d}x=\boldsymbol{m}\right\},\] with \[\boldsymbol{\mathcal{G}}:=\left\{\boldsymbol{\varphi}\in H^{1}(\Omega;\mathbb{R}^{N})\;\middle|\;\boldsymbol{\varphi}(\boldsymbol{x})\in\boldsymbol{G}\;\;\text{for almost all}\;\boldsymbol{x}\in\Omega\right\}.\] Here, \(\boldsymbol{G}\) is the Gibbs simplex that was introduced in (A2).

### The Ginzburg-Landau energy

In order to make our optimization problem well-posed, we need to include a regularizing term for the phase-field in the cost functional. For this purpose, we use the so-called _Ginzburg-Landau_ energy \[E^{\varepsilon}(\boldsymbol{\varphi})=\int_{\Omega}\left(\frac{\varepsilon}{2}\left|\nabla\boldsymbol{\varphi}\right|^{2}+\frac{1}{\varepsilon}\psi(\boldsymbol{\varphi})\right)\,\mathrm{d}x, \tag{2.4}\] for all \(\boldsymbol{\varphi}\in H^{1}(\Omega;\mathbb{R}^{N})\). Here, the parameter \(\varepsilon>0\) is related to the thickness of the diffuse interface and therefore, it is usually chosen very small. In the sharp-interface limit, we intend to (formally) send this parameter to zero. Due to assumption (A2), the potential \(\psi\) enforces the phase-field \(\boldsymbol{\varphi}\) to attain its values only in the Gibbs simplex. However, as we already include the Gibbs simplex constraint in the set of admissible phase-fields, it suffices to merely consider the regular part \(\psi_{0}\) of the potential \(\psi\) in the Ginzburg-Landau energy as long as \(\boldsymbol{\varphi}\in\boldsymbol{\mathcal{G}}\). This means that \[E^{\varepsilon}(\boldsymbol{\varphi})=\int_{\Omega}\left(\frac{\varepsilon}{2}\left|\nabla\boldsymbol{\varphi}\right|^{2}+\frac{1}{\varepsilon}\psi_{0}(\boldsymbol{\varphi})\right)\,\mathrm{d}x\] for all \(\boldsymbol{\varphi}\in\boldsymbol{\mathcal{G}}\).

### The elasticity tensor and the density function

As we intend to consider an elastic structure, we next introduce the two tensors of linear elasticity, which will be used to formulate the state equation. The _strain tensor_ of a vector-valued function \(\boldsymbol{u}\in H^{1}_{D}(\Omega;\mathbb{R}^{d})\) is given as \[\mathcal{E}(\boldsymbol{u})\coloneqq(\nabla\boldsymbol{u})^{\text{sym}}=\frac{1}{2}\left(\nabla\boldsymbol{u}+\nabla\boldsymbol{u}^{T}\right).\] The _elasticity tensor_ \(\mathbb{C}:\mathbb{R}^{N}\to\mathbb{R}^{d\times d\times d\times d}\) is a fourth order tensor with the following properties.

(B1) \(\mathbb{C}_{ijkl}\in C^{1,1}_{\text{loc}}(\mathbb{R}^{N};\mathbb{R})\).

(B2) \(\mathbb{C}\) is symmetric, i.e., \[\mathbb{C}_{ijkl}=\mathbb{C}_{jikl}=\mathbb{C}_{ijlk}=\mathbb{C}_{klij}\,,\] for \(i,j,k,l=1,\ldots,d\).

(B3) \(\mathbb{C}\) is coercive for any fixed \(\varepsilon>0\), i.e., there exists \(\theta_{\varepsilon}>0\) such that \[\theta_{\varepsilon}\left|\mathcal{B}\right|^{2}\leq\mathbb{C}(\boldsymbol{\xi})\,\mathcal{B}:\mathcal{B}\,,\] for all \(\boldsymbol{\xi}\in\mathbb{R}^{N}\) and all symmetric matrices \(\mathcal{B}\in\mathbb{R}^{d\times d}\). For two matrices \(\mathcal{A},\mathcal{B}\in\mathbb{R}^{d\times d}\) this product is defined as \[\mathcal{A}:\mathcal{B}\coloneqq\sum_{i,j=1}^{d}\mathcal{A}_{ij}\mathcal{B}_{ij}\;.\]

The component-specific densities are modeled by a density function \(\rho:\mathbb{R}^{N}\to\mathbb{R}\) with the following properties. 
(C1) \(\rho\in C^{1,1}_{\mathrm{loc}}(\mathbb{R}^{N};\mathbb{R})\,.\)

(C2) \(\rho\) is uniformly positive for any fixed \(\varepsilon>0\), i.e., there is a constant \(\rho_{0,\varepsilon}>0\) such that \(\rho(\boldsymbol{\varphi})\geq\rho_{0,\varepsilon}\) for all \(\boldsymbol{\varphi}\in\mathbb{R}^{N}\).

As in [20], we want \(\mathbb{C}\) and \(\rho\) to possess a decomposition that reflects the material-specific elasticity and density of the \(N-1\) materials. Therefore, for \(\boldsymbol{\varphi}\in\boldsymbol{G}\), we set \[\begin{split}\mathbb{C}(\boldsymbol{\varphi})&=\overline{\mathbb{C}}(\boldsymbol{\varphi})+\tilde{\mathbb{C}}^{N}\varepsilon\varphi^{N}=\sum_{i=1}^{N-1}\mathbb{C}^{i}\varphi^{i}+\tilde{\mathbb{C}}^{N}\varepsilon\varphi^{N},\\ \rho(\boldsymbol{\varphi})&=\widetilde{\rho}(\boldsymbol{\varphi})+\tilde{\rho}^{N}\varepsilon\varphi^{N}=\sum_{i=1}^{N-1}\rho^{i}\varphi^{i}+\tilde{\rho}^{N}\varepsilon\varphi^{N}.\end{split} \tag{2.5}\] This means, for \(i\in\{1,...,N-1\}\), we choose component-specific but constant elasticity tensors \(\mathbb{C}^{i}\in\mathbb{R}^{d\times d\times d\times d}\) and densities \(\rho^{i}>0\). As the void obviously has neither a stiffness nor a density, we approximate the void components by some fixed elasticity tensor \(\tilde{\mathbb{C}}^{N}\in\mathbb{R}^{d\times d\times d\times d}\) and density \(\tilde{\rho}^{N}>0\) that are multiplied by the small interface parameter \(\varepsilon\) that was introduced in Section 2.3 in the context of the Ginzburg-Landau energy. Of course, these constant prefactors need to be chosen such that the assumptions (B2), (B3) and (C2) are satisfied, see [45]. Even though an adequate scaling of the void components \(\tilde{\mathbb{C}}^{N}\) and \(\tilde{\rho}^{N}\) and especially their scaling with respect to \(\varepsilon\) will be crucial for the numerical simulations in order to avoid spurious eigenmodes, we emphasize that our formal analysis works for any kind of decomposition as in (2.5) as long as the void components are scaled with \(\varepsilon^{p}\) for some \(p>0\). Thus, in terms of our analysis, we will stick to the natural linear approximation in (2.5), but we will also justify in the framework of asymptotic expansions how a suitable scaling with respect to \(\varepsilon\) is capable of dealing with localized eigenmodes, see Section 4.2. As in [20] and [45], we extend the definition (2.5) to the whole hyperplane \(\Sigma^{N}\) by introducing a cut-off function for a small parameter \(\omega>0\). We define \[\sigma_{\omega}:\mathbb{R}\to\mathbb{R},\quad s\mapsto\begin{cases}-\omega&\text{if }s\leq-\omega,\\ a_{\omega}(s)&\text{if }-\omega<s<0,\\ s&\text{if }0\leq s\leq 1,\\ b_{\omega}(s)&\text{if }1<s<1+\omega,\\ 1+\omega&\text{if }s\geq 1+\omega,\end{cases} \tag{2.6}\] where \(a_{\omega}\) and \(b_{\omega}\) are monotonically increasing \(C^{1,1}\) functions that are constructed in such a way that \(\sigma_{\omega}\) is also a \(C^{1,1}\) function. Then we consider the extension \[\rho:\mathbb{R}^{N}\to\mathbb{R},\quad\boldsymbol{\varphi}\mapsto\sum_{i=1}^{N-1}\rho^{i}\sigma_{\omega}([P_{\Sigma}(\boldsymbol{\varphi})]^{i})+\tilde{\rho}^{N}\varepsilon\sigma_{\omega}([P_{\Sigma}(\boldsymbol{\varphi})]^{N}), \tag{2.7}\] where \[P_{\Sigma}:\mathbb{R}^{N}\to\Sigma^{N},\quad\boldsymbol{\varphi}\mapsto\operatorname*{arg\,min}_{\boldsymbol{v}\in\Sigma^{N}}\frac{1}{2}\left\|\boldsymbol{\varphi}-\boldsymbol{v}\right\|_{\ell^{2}}\] denotes the \(\ell^{2}\) projection of \(\mathbb{R}^{N}\) onto the convex set \(\Sigma^{N}\). The tensor \(\mathbb{C}\) is extended analogously. 
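To fix ideas, we mention a concrete special case; it is merely an illustration consistent with the above assumptions and not a choice imposed by them. For two isotropic materials and the void, i.e., \(N=3\), one may take \[\mathbb{C}^{i}\mathcal{B}=2\mu_{i}\,\mathcal{B}+\ell_{i}\operatorname{tr}(\mathcal{B})\,\mathcal{I}\quad\text{for }i=1,2,\] with Lamé constants \(\mu_{i}>0\) and \(\ell_{i}\geq 0\) (written \(\ell_{i}\) here to avoid confusion with the eigenvalues), where \(\mathcal{I}\in\mathbb{R}^{d\times d}\) denotes the identity matrix, together with constant densities \(\rho^{1},\rho^{2}>0\). These tensors are symmetric and coercive, and (2.5) then reads \[\mathbb{C}(\boldsymbol{\varphi})=\varphi^{1}\mathbb{C}^{1}+\varphi^{2}\mathbb{C}^{2}+\varepsilon\varphi^{3}\tilde{\mathbb{C}}^{3},\qquad\rho(\boldsymbol{\varphi})=\varphi^{1}\rho^{1}+\varphi^{2}\rho^{2}+\varepsilon\varphi^{3}\tilde{\rho}^{3}.\] In the pure void phase \(\boldsymbol{\varphi}=\boldsymbol{e}_{3}\), both stiffness and density are of order \(\varepsilon\) and hence degenerate in the sharp-interface limit \(\varepsilon\to 0\), which is precisely the scaling discussed above.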
To conclude this subsection, let us introduce some further notation. For \(\mathbf{\varphi}\in L^{\infty}(\Omega;\mathbb{R}^{N})\), we define a weighted scalar product on \(L^{2}(\Omega;\mathbb{R}^{d})\) by \[(\mathbf{f},\mathbf{g})_{\rho(\mathbf{\varphi})}\coloneqq\int_{\Omega}\rho( \mathbf{\varphi})\mathbf{f}\cdot\mathbf{g}\,\mathrm{d}x\quad\text{for all }\mathbf{f},\mathbf{g}\in L^{2}( \Omega;\mathbb{R}^{d}),\] and a weighted scalar product on \(H^{1}_{D}(\Omega;\mathbb{R}^{d})\) by \[\langle\mathcal{E}(\mathbf{u}),\mathcal{E}(\mathbf{v})\rangle_{\mathbb{C}(\mathbf{ \varphi})}\coloneqq\int_{\Omega}\mathbb{C}(\mathbf{\varphi})\mathcal{E}(\mathbf{u}) :\mathcal{E}(\mathbf{v})\,\mathrm{d}x\quad\text{for all }\mathbf{u},\mathbf{v}\in H^{1}_{D}( \Omega;\mathbb{R}^{d}).\] In the following, we write \(L^{2}_{\mathbf{\varphi}}(\Omega;\mathbb{R}^{d})\) in order to emphasize the fact that we equip \(L^{2}(\Omega;\mathbb{R}^{d})\) with the scalar product \((\cdot,\cdot)_{\rho(\mathbf{\varphi})}\). ### The state equation We now introduce the system of equations describing the elastic structure, which will be referred to as the _state equation_. It reads as \[\left\{\begin{array}{rcll}-\nabla\cdot\left[\mathbb{C}(\mathbf{\varphi}) \mathcal{E}(\mathbf{w}^{\varepsilon,\mathbf{\varphi}})\right]&=\lambda^{\varepsilon, \mathbf{\varphi}}\rho(\mathbf{\varphi})\mathbf{w}^{\varepsilon,\mathbf{\varphi}}&\text{in }\Omega,\\ \mathbf{w}^{\varepsilon,\mathbf{\varphi}}&=\mathbf{0}&\text{on }\Gamma_{D},\\ \left[\mathbb{C}(\mathbf{\varphi})\mathcal{E}(\mathbf{w}^{\varepsilon,\mathbf{\varphi}}) \right]\mathbf{n}&=\mathbf{0}&\text{on }\Gamma_{0},\end{array}\right.\] ( \[SE^{\varepsilon}\] ) and its weak formulation is given by \[\langle\mathcal{E}\left(\mathbf{w}^{\varepsilon,\mathbf{\varphi}}\right),\mathcal{E} \left(\mathbf{\eta}\right)\rangle_{\mathbb{C}(\mathbf{\varphi})}=\lambda^{\varepsilon,\mathbf{\varphi}}\left(\mathbf{w}^{\varepsilon,\mathbf{\varphi}},\mathbf{\eta}\right)_{\rho( \mathbf{\varphi})} \tag{2.8}\] for all \(\mathbf{\eta}\in H^{1}_{D}(\Omega;\mathbb{R}^{d})\). In [45], using classical spectral theory, it was shown that for any \(\mathbf{\varphi}\in L^{\infty}(\Omega,\mathbb{R}^{N})\), there exists a sequence of eigenvalues (with multiple eigenvalues being repeated according to their multiplicity) which can be ordered as \[0<\lambda_{1}^{\varepsilon,\mathbf{\varphi}}\leq\lambda_{2}^{\varepsilon,\mathbf{ \varphi}}\leq\lambda_{3}^{\varepsilon,\mathbf{\varphi}}\leq\cdots\to\infty. \tag{2.9}\] This comprises all eigenvalues of (2.8). Moreover, the corresponding eigenfunctions \[\{\mathbf{w}_{1}^{\varepsilon,\mathbf{\varphi}},\mathbf{w}_{2}^{\varepsilon,\mathbf{\varphi} },...\}\subset H^{1}_{D}(\Omega;\mathbb{R}^{d})\] can be chosen as an orthonormal basis of \(L^{2}_{\mathbf{\varphi}}(\Omega;\mathbb{R}^{d})\), meaning that \[(\mathbf{w}_{i},\mathbf{w}_{j})_{\rho(\mathbf{\varphi})}=\int_{\Omega}\rho( \mathbf{\varphi})\,\mathbf{w}_{i}\cdot\mathbf{w}_{j}\,\mathrm{d}x=\delta_{ij} \tag{2.10}\] for all \(i,j\in\mathbb{N}\). This property will be crucial when considering the formal asymptotics of the eigenfunctions. In the following, when we talk about eigenvalues and eigenfunctions, we will always refer to the pairs \((\lambda_{i}^{\varepsilon,\mathbf{\varphi}},\mathbf{w}_{i}^{\varepsilon,\mathbf{\varphi}})\) with \(i\in\mathbb{N}\), which have the aforementioned properties. 
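Although we will not use it in this explicit form, it may be helpful to recall the classical variational characterization behind the ordering (2.9). By the Courant-Fischer min-max principle, which applies verbatim to the weighted eigenvalue problem (2.8), \[\lambda_{k}^{\varepsilon,\boldsymbol{\varphi}}=\min_{\begin{subarray}{c}V\subset H^{1}_{D}(\Omega;\mathbb{R}^{d})\\ \dim V=k\end{subarray}}\;\max_{\boldsymbol{v}\in V\setminus\{\boldsymbol{0}\}}\;\frac{\langle\mathcal{E}(\boldsymbol{v}),\mathcal{E}(\boldsymbol{v})\rangle_{\mathbb{C}(\boldsymbol{\varphi})}}{(\boldsymbol{v},\boldsymbol{v})_{\rho(\boldsymbol{\varphi})}}\qquad\text{for all }k\in\mathbb{N}.\] In particular, pointwise bounds on \(\mathbb{C}(\boldsymbol{\varphi})\) and \(\rho(\boldsymbol{\varphi})\) translate directly into two-sided bounds on every eigenvalue; this is the mechanism behind the comparison with the one-material eigenvalues \(\lambda_{k}^{M}\) recalled in Remark 2.1 below.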
### The optimization problem and the gradient inequality

Finally, we are in a position to state the optimization problem \[\left\{\begin{array}{ll}\min&J_{l}^{\varepsilon}(\boldsymbol{\varphi}),\\ \text{over}&\boldsymbol{\varphi}\in\boldsymbol{\mathcal{G}}^{\boldsymbol{m}},\\ \text{s.t.}&\lambda_{n_{1}}^{\varepsilon,\boldsymbol{\varphi}},\ldots,\lambda_{n_{l}}^{\varepsilon,\boldsymbol{\varphi}}\text{ are eigenvalues of the state equation (2.8),}\end{array}\right.\] ( \(\mathcal{P}_{l}^{\varepsilon}\) ) with \[J_{l}^{\varepsilon}(\boldsymbol{\varphi})\coloneqq\Psi(\lambda_{n_{1}}^{\varepsilon,\boldsymbol{\varphi}},\ldots,\lambda_{n_{l}}^{\varepsilon,\boldsymbol{\varphi}})+\gamma E^{\varepsilon}(\boldsymbol{\varphi}),\] for some \(l\in\mathbb{N}\), where \(n_{1},\ldots,n_{l}\in\mathbb{N}\) indicate a selection of eigenvalues. Here, \(\gamma>0\) is a fixed constant related to surface tension. **Remark 2.1**.: It is worth mentioning that we do not need any boundedness assumption on \(\Psi\) in order to prove the existence of a minimizer to \((\mathcal{P}_{l}^{\varepsilon})\) in the same way as in [45, Theorem 6.1]. In analogy to [44, Lemma 3.7], one can show that there are constants \(C_{1,\varepsilon},\,C_{2,\varepsilon}>0\) depending only on the choice of \(\mathbb{C}_{\varepsilon}\) and \(\rho_{\varepsilon}\) such that \[C_{1,\varepsilon}\lambda_{k}^{M}\leq\lambda_{k}^{\varepsilon,\boldsymbol{\varphi}}\leq C_{2,\varepsilon}\lambda_{k}^{M}\quad\text{for all }\boldsymbol{\varphi}\in\boldsymbol{\mathcal{G}}.\] Here, \(\lambda_{k}^{M}\) denotes the \(k\)-th eigenvalue of the problem (2.8) with \(\mathbb{C}\equiv\mathrm{Id}\) and \(\rho\equiv 1\). Qualitatively speaking, \(\lambda_{k}^{M}\) denotes an eigenvalue in the situation when the whole design domain is occupied by one material. In [45, Theorem 6.2], the following first-order necessary optimality condition was derived. **Theorem 2.2**.: _Let \(\boldsymbol{\varphi}\in\boldsymbol{\mathcal{G}}^{\boldsymbol{m}}\) be a local minimizer of \((\mathcal{P}_{l}^{\varepsilon})\), i.e., there exists \(\delta>0\) such that \(J_{l}^{\varepsilon}(\boldsymbol{\varphi})\leq J_{l}^{\varepsilon}(\boldsymbol{\zeta})\) for all \(\boldsymbol{\zeta}\in\boldsymbol{\mathcal{G}}^{\boldsymbol{m}}\) with \(\|\boldsymbol{\zeta}-\boldsymbol{\varphi}\|_{H^{1}(\Omega,\mathbb{R}^{N})\cap L^{\infty}(\Omega,\mathbb{R}^{N})}<\delta\). We further assume that the eigenvalues \(\lambda_{n_{1}}^{\varepsilon,\boldsymbol{\varphi}},\ldots,\lambda_{n_{l}}^{\varepsilon,\boldsymbol{\varphi}}\) are simple. 
Then the gradient inequality_ \[\sum_{r=1}^{l} \left\{[\partial_{\lambda_{n}}\Psi]\big{(}\lambda_{n_{1}}^{ \varepsilon,\mathbf{\varphi}},\ldots,\lambda_{n_{l}}^{\varepsilon,\mathbf{\varphi}} \big{)}\right.\] \[\left.\cdot\left(\langle\mathcal{E}(\mathbf{w}_{n_{r}}^{ \varepsilon,\mathbf{\varphi}}):\mathcal{E}(\mathbf{w}_{n_{r}}^{\varepsilon,\mathbf{ \varphi}})\rangle_{C^{\prime}(\mathbf{\varphi})(\tilde{\mathbf{\varphi}}-\mathbf{\varphi} )}-\lambda_{n_{r}}^{\varepsilon,\mathbf{\varphi}}\int_{\Omega}\rho^{\prime}(\mathbf{ \varphi})\big{(}\tilde{\mathbf{\varphi}}-\mathbf{\varphi}\big{)}\big{|}\mathbf{w}_{n_{r}} ^{\varepsilon,\mathbf{\varphi}}\big{|}^{2}\,\mathrm{d}x\right)\right\}\] ( \[GI^{\varepsilon}\] \[+\gamma\varepsilon\int_{\Omega}\nabla\mathbf{\varphi}:\nabla(\tilde{ \mathbf{\varphi}}-\mathbf{\varphi})\,\mathrm{d}x+\frac{\gamma}{\varepsilon}\int_{ \Omega}\psi_{0}^{\prime}(\mathbf{\varphi})(\tilde{\mathbf{\varphi}}-\mathbf{\varphi})\, \mathrm{d}x\;\geq\;0\] _holds for all \(\tilde{\mathbf{\varphi}}\in\mathbf{\mathcal{G}}^{\mathbf{m}}\)._ The upcoming sharp-interface analysis will be concerned with passing to the limit in the state equation \((SE^{\varepsilon})\) as well as in the gradient inequality \((GI^{\varepsilon})\). ## 3. Analysis of the gradient inequality In this section, we will show under a suitable regularity assumption on the eigenfunctions involved in \((GI^{\varepsilon})\) that there exists a solution of the above gradient inequality possessing even the regularity \(\mathbf{\varphi}\in H^{2}(\Omega;\mathbb{R}^{N})\). This will be carried out by applying a regularization process to the non-smooth potential \(\psi\), which was employed in a similar fashion in [40, 25, 23, 15]. Our approach mainly follows the ideas of [23]. We regularize the gradient inequality in order to deal with the indicator functional \(I_{\mathbf{G}}\) contained in the definition of the potential \(\psi\). This will yield a sequence of \(H^{2}\)-regular approximating phase-fields \((\mathbf{\varphi}_{\delta})_{\delta>0}\) solving regularized equations and converging to the desired phase-field \(\mathbf{\varphi}\). Another convenient aspect of this procedure is that it will generate Lagrange multipliers that will allow us to transform the gradient inequality into an equality. This strong formulation of \((GI^{\varepsilon})\) will be the starting point for our asymptotic analysis in Section 6. ### Regularization of the potential \(\psi\) and rewriting the constraints We notice that \(\boldsymbol{\varphi}\in\boldsymbol{\mathcal{G}}^{m}\) needs to satisfy the constraint \[\varphi^{i}(x)\geq 0\] for almost every \(x\in\Omega\) and \(i=1,\ldots,N\). To deal with this constraint we regularize the potential appearing in the Ginzburg-Landau energy which was initially given as \[\psi(\boldsymbol{\varphi})=\psi_{0}(\boldsymbol{\varphi})+I_{\boldsymbol{G}}( \boldsymbol{\varphi}).\] **Definition 3.1**.: _For \(\delta>0\) we define the regularized potential_ \[\psi_{\delta}:\mathbb{R}^{N}\to\mathbb{R},\quad\psi_{\delta}(\boldsymbol{ \varphi})=\psi_{0}(\boldsymbol{\varphi})+\frac{1}{\delta}\hat{\psi}( \boldsymbol{\varphi}), \tag{3.1}\] _where_ \[\hat{\psi}(\boldsymbol{\varphi})\coloneqq\sum_{i=1}^{N}\big{(}\min(\varphi^{ i},0)\big{)}^{2}. \tag{3.2}\] **Remark 3.2**.: We see that the regularization now approximates the indicator functional \(I_{\mathbb{R}^{N}_{+}}\) by the function \(\frac{1}{\delta}\hat{\psi}\). 
For \(\delta\searrow 0\), exactly the negative parts of the components of \(\boldsymbol{\varphi}\) are penalized. To deal with the remaining constraints hidden in \(\boldsymbol{\mathcal{G}}^{\boldsymbol{m}}\), namely the integral constraint \(\fint_{\Omega}\boldsymbol{\varphi}\,\mathrm{d}x=\boldsymbol{m}\) and the pointwise constraint \(\sum_{i=1}^{N}\varphi^{i}=1\) a.e. in \(\Omega\), we introduce suitable orthogonal projections. **Definition 3.3**.: _Let us define the linear orthogonal projections_ \[P_{\int}:L^{2}(\Omega;\mathbb{R}^{N})\to L^{2}_{0}(\Omega;\mathbb{R}^{N}),\quad\boldsymbol{\eta}\mapsto\boldsymbol{\eta}-\fint_{\Omega}\boldsymbol{\eta}\,\mathrm{d}x,\] \[P_{T\Sigma}:L^{2}(\Omega;\mathbb{R}^{N})\to L^{2}_{T\Sigma}(\Omega;\mathbb{R}^{N}),\quad\boldsymbol{\eta}\mapsto\boldsymbol{\eta}-\left[\frac{1}{N}\sum_{k=1}^{N}\eta^{k}\right]\boldsymbol{1},\] _onto the closed subspaces_ \[L^{2}_{0}(\Omega;\mathbb{R}^{N}):=\left\{\boldsymbol{\eta}\in L^{2}(\Omega;\mathbb{R}^{N})\;\middle|\;\int_{\Omega}\boldsymbol{\eta}\,\mathrm{d}x=\boldsymbol{0}\right\},\qquad L^{2}_{T\Sigma}(\Omega;\mathbb{R}^{N}):=\left\{\boldsymbol{\eta}\in L^{2}(\Omega;\mathbb{R}^{N})\;\middle|\;\boldsymbol{\eta}(x)\in T\Sigma\text{ for a.e. }x\in\Omega\right\},\] _where \(T\Sigma:=\{\boldsymbol{\xi}\in\mathbb{R}^{N}\,|\,\sum_{i=1}^{N}\xi^{i}=0\}\) denotes the tangent space of the affine hyperplane \(\Sigma^{N}\) and \(\boldsymbol{1}:=(1,\ldots,1)^{T}\in\mathbb{R}^{N}\). Moreover, we set \(P:=P_{\int}\circ P_{T\Sigma}=P_{T\Sigma}\circ P_{\int}\)._ 
We now fix a parameter \(\varepsilon>0\) as well as a solution \(\boldsymbol{\varphi}^{\varepsilon}\in\boldsymbol{\mathcal{G}}^{\boldsymbol{m}}\subset H^{1}(\Omega;\mathbb{R}^{N})\cap L^{\infty}(\Omega;\mathbb{R}^{N})\) of \((GI^{\varepsilon})\). For a cleaner presentation we omit the superscript \(\varepsilon\) in the eigenvalues and eigenfunctions. A priori, the term \[\big\langle\mathcal{E}\big(\boldsymbol{w}_{n_{r}}^{\boldsymbol{\varphi}}\big),\mathcal{E}\big(\boldsymbol{w}_{n_{r}}^{\boldsymbol{\varphi}}\big)\big\rangle_{\mathbb{C}^{\prime}(\boldsymbol{\varphi})\boldsymbol{\eta}}=\int_{\Omega}\big[\mathbb{C}^{\prime}(\boldsymbol{\varphi})\boldsymbol{\eta}\big]\mathcal{E}\big(\boldsymbol{w}_{n_{r}}^{\boldsymbol{\varphi}}\big):\mathcal{E}\big(\boldsymbol{w}_{n_{r}}^{\boldsymbol{\varphi}}\big)\,\mathrm{d}x\] is well defined only for \(\boldsymbol{\eta}\in L^{\infty}(\Omega;\mathbb{R}^{N})\) as the expression \(\mathcal{E}(\boldsymbol{w}_{n_{r}}):\mathcal{E}(\boldsymbol{w}_{n_{r}})\) merely belongs to \(L^{1}(\Omega)\). However, in order to consider a suitable regularized problem associated to \((GI^{\varepsilon})\), we need this term to be an element in \(L^{2}(\Omega)\). For this purpose, we require the regularity \(\boldsymbol{w}_{n_{r}}\in W^{1,4}(\Omega;\mathbb{R}^{d})\). Therefore, we now make the following crucial smoothness assumption which shall hold for the rest of this paper. 1. For \(r=1,\ldots,l\), let the eigenfunctions \(\boldsymbol{w}_{n_{r}}\) involved in \((GI^{\varepsilon})\) belong to \(W^{1,4}(\Omega;\mathbb{R}^{d})\). **Remark 3.5**.: Note that there exists a regularity theory for the equations of linear and nonlinear elasticity, see, e.g., [52, 65]. However, due to the fact that the coefficient \(\mathbb{C}(\boldsymbol{\varphi})\) is merely essentially bounded, we could only prove the existence of an (in general arbitrarily small) parameter \(\iota>0\) such that \[\mathcal{E}(\boldsymbol{w}_{n_{r}})\in L^{2+\iota}(\Omega). \tag{3.5}\] Note that there exist counterexamples going back to De Giorgi for linear systems of elliptic PDEs (see, e.g., [17, Section 4.1]) providing _unbounded_ solutions \(\boldsymbol{u}\in W^{1,2}(B;\mathbb{R}^{d})\) for \(d\geq 3\) to a system of the form \[\operatorname{div}(\mathcal{A}(x)D\boldsymbol{u}(x))=\boldsymbol{0}\quad\text{in }B\subset\mathbb{R}^{d},\] where \(\mathcal{A}\) is bounded and coercive and \(B\) denotes the unit ball. In particular, in the physically relevant case \(d=3\), where \(W^{1,4}(\Omega;\mathbb{R}^{d})\hookrightarrow C^{0}(\overline{\Omega};\mathbb{R}^{d})\), the condition \(\boldsymbol{w}_{n_{r}}\in W^{1,4}(\Omega;\mathbb{R}^{d})\) seems to be a real assumption as unbounded eigenfunctions might exist. In the following, let \((\cdot,\cdot)\) denote the classical scalar product on \(L^{2}(\Omega;\mathbb{R}^{N})\). 
Recalling \[\mathbb{C}^{\prime}(\mathbf{\varphi})\mathbf{\eta}=\left(\sum_{m=1}^{N}\partial_{m} \mathbb{C}_{ijkl}(\mathbf{\varphi})\eta^{m}\right)_{i,j,k,l=1}^{d}\] for \(\mathbf{\eta}\in L^{2}(\Omega;\mathbb{R}^{N})\), we have \[\langle\mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\big{)}, \mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\big{)}\rangle_{\mathbb{C}^{ \prime}(\mathbf{\varphi})\mathbf{\eta}} =\int_{\Omega}\Big{(}\sum_{m=1}^{N}[\partial_{m}\mathbb{C}(\mathbf{ \varphi})]\eta^{m}\Big{)}\mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\big{)} :\mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\big{)}\,\mathrm{d}x\] \[=\int_{\Omega}\sum_{m=1}^{N}\Big{[}\Big{(}\mathbb{C}^{\prime}(\bm {\varphi})\mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\big{)}:\mathcal{E} \big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\big{)}\Big{)}\Big{]}_{m}\eta^{m}\,\mathrm{ d}x\] \[=\big{(}\mathbb{C}^{\prime}(\mathbf{\varphi})\mathcal{E}\big{(}\mathbf{w} _{n_{r}}^{\mathbf{\varphi}}\big{)}:\mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}} \big{)},\mathbf{\eta}\big{)}\,.\] Note that the term in the last line is to be understood as \[\mathbb{C}^{\prime}(\mathbf{\varphi})\mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi} }\big{)}:\mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\big{)}=\Big{(}[ \partial_{m}\mathbb{C}(\mathbf{\varphi})]\mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{ \varphi}}\big{)}:\mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\big{)}\Big{)}_ {m=1}^{N}\in L^{2}(\Omega;\mathbb{R}^{N}).\] Thus, the projection of this term is well defined and the \(L^{2}\) regularity of this object is ensured by the assumptions 1 and 2. For later purposes, we point out that a straightforward computation reveals \[P_{T\Sigma}\left[\mathbb{C}^{\prime}(\mathbf{\varphi})\mathcal{E}\big{(}\mathbf{w}_{n_ {r}}^{\mathbf{\varphi}}\big{)}:\mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}} \big{)}\right]=\Big{[}\big{(}P_{T\Sigma}\left[\mathbb{C}^{\prime}_{ijkl}(\mathbf{ \varphi})\right]\big{)}_{i,j,k,l=1}^{d}\Big{]}\,\mathcal{E}\big{(}\mathbf{w}_{n_{r} }^{\mathbf{\varphi}}\big{)}:\mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\big{)},\] where \[\mathbb{C}^{\prime}_{ijkl}(\mathbf{\varphi})=\left(\partial_{m}\mathbb{C}_{ ijkl}\right)_{m=1}^{N}\in L^{2}(\Omega;\mathbb{R}^{N}).\] To have a more concise notation, we will write \[\left\langle\mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\big{)}:\mathcal{E} \big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\big{)}\right\rangle_{P_{T\Sigma}[\mathbb{ C}^{\prime}(\mathbf{\varphi})]}:=P_{T\Sigma}\left[\mathbb{C}^{\prime}(\mathbf{\varphi}) \mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\big{)}:\mathcal{E}\big{(}\mathbf{w }_{n_{r}}^{\mathbf{\varphi}}\big{)}\right].\] Analogously, we use the notation \[\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}},\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\big{)}_{ \rho^{\prime}(\mathbf{\varphi})\mathbf{\eta}}=\big{(}\rho^{\prime}(\mathbf{\varphi})\mathbf{w }_{n_{r}}^{\mathbf{\varphi}}\cdot\mathbf{w}_{n_{r}}^{\mathbf{\varphi}},\mathbf{\eta}\big{)}\] for the density term. 
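For orientation, let us briefly record what these derivatives look like if the cut-off modification (2.6)-(2.7) is ignored, i.e., if the interpolation (2.5) is treated as exactly linear; this is only an illustrative simplification. In that case, for any direction \(\boldsymbol{\eta}\in\mathbb{R}^{N}\) (in particular for \(\boldsymbol{\eta}=\tilde{\boldsymbol{\varphi}}-\boldsymbol{\varphi}\) as in \((GI^{\varepsilon})\)), \[\mathbb{C}^{\prime}(\boldsymbol{\varphi})\boldsymbol{\eta}=\sum_{i=1}^{N-1}\mathbb{C}^{i}\eta^{i}+\varepsilon\tilde{\mathbb{C}}^{N}\eta^{N},\qquad\rho^{\prime}(\boldsymbol{\varphi})\boldsymbol{\eta}=\sum_{i=1}^{N-1}\rho^{i}\eta^{i}+\varepsilon\tilde{\rho}^{N}\eta^{N},\] so the terms entering \((GI^{\varepsilon})\) are simply combinations of the elastic energy densities \(\mathbb{C}^{i}\mathcal{E}(\boldsymbol{w}_{n_{r}}^{\boldsymbol{\varphi}}):\mathcal{E}(\boldsymbol{w}_{n_{r}}^{\boldsymbol{\varphi}})\) and the densities \(\rho^{i}|\boldsymbol{w}_{n_{r}}^{\boldsymbol{\varphi}}|^{2}\), weighted by the components of \(\tilde{\boldsymbol{\varphi}}-\boldsymbol{\varphi}\).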
To reformulate the gradient inequality \((G\mathbb{I}^{\varepsilon})\), we further define the function \[\mathbf{f}^{\mathbf{\varphi}}:=-\sum_{r=1}^{l}\Big{\{} [\partial_{\lambda_{i}^{\varphi}}\Psi]\big{(}\lambda_{n_{1}}^{ \mathbf{\varphi}},\ldots,\lambda_{n_{l}}^{\mathbf{\varphi}}\big{)}\Big{(}\mathbb{C}^{ \prime}(\mathbf{\varphi})\mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\big{)}: \mathcal{E}\big{(}\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\big{)} \tag{3.6}\] \[-\lambda_{n_{r}}^{\mathbf{\varphi}}\rho^{\prime}(\mathbf{\varphi})\mathbf{w }_{n_{r}}^{\mathbf{\varphi}}\cdot\mathbf{w}_{n_{r}}^{\mathbf{\varphi}}\Big{)}\Big{\}}- \frac{\gamma}{\varepsilon}\psi^{\prime}_{0}(\mathbf{\varphi})\,.\] From the above considerations, we infer \(\mathbf{f}^{\mathbf{\varphi}}\in L^{2}(\Omega;\mathbb{R}^{N})\). As \(\mathbf{\varphi}\in\mathbf{\mathcal{G}}^{\mathbf{m}}\) is fixed, we write \(\mathbf{f}=\mathbf{f}^{\mathbf{\varphi}}\) in the following. Using this notation, we obtain: **Proposition 3.6**.: _The gradient inequality \((G\mathbb{I}^{\varepsilon})\) is equivalent to_ \[\gamma\varepsilon\left(\nabla\mathbf{\varphi},\nabla(\tilde{\mathbf{ \varphi}}-\mathbf{\varphi})\right)_{L^{2}}\geq\big{(}\mathbf{f},\tilde{\mathbf{\varphi}}- \mathbf{\varphi}\big{)}_{L^{2}}\quad\text{for all }\tilde{\mathbf{\varphi}}\in\mathbf{ \mathcal{G}}^{\mathbf{m}}. \tag{3.7}\] ### The regularized problem and its limit Now that we have introduced the regularized potential and suitable orthogonal projections, and have made the necessary regularity assumption, we can formulate a regularized problem which will approximate our initially fixed solution \(\mathbf{\varphi}\in\mathbf{\mathcal{G}}^{\mathbf{m}}\) of \((G\mathbb{I}^{\varepsilon})\) in order to provide the desired \(H^{2}\)-regularity of \(\mathbf{\varphi}\). Using all the previously introduced notation, we are now in a position to state the so-called regularized problem. **Definition 3.7**.: _Let_ \[\tilde{\mathbf{\mathcal{G}}}^{\mathbf{m}}\coloneqq\left\{\tilde{\mathbf{ \varphi}}\in\ H^{1}(\Omega;\mathbb{R}^{N})\,\middle|\,\fint_{\Omega}\tilde{\mathbf{ \varphi}}=\mathbf{m}\text{ and }\sum_{i=1}^{N}\tilde{\varphi}^{i}=1\,\text{a.e. in }\Omega \right\}. \tag{3.8}\] _We say that \(\mathbf{\varphi}_{\delta}\in\tilde{\mathbf{\mathcal{G}}}^{\mathbf{m}}\) is a solution to the regularized problem if it solves_ \[\gamma\varepsilon\left(\nabla\mathbf{\varphi}_{\delta},\nabla\mathbf{ \eta}\right)+\frac{\gamma}{\delta\varepsilon}\left(P[\hat{\mathbf{\phi}}(\mathbf{ \varphi}_{\delta})],\mathbf{\eta}\right)=(P\mathbf{f},\mathbf{\eta})\quad\text{for all }\mathbf{\eta}\in H^{1}(\Omega;\mathbb{R}^{N}).\] ( \[RE\] ) Before proving the existence of a solution to (\(RE\)), we recall some properties proven in [23] that will be important for the upcoming analysis. **Proposition 3.8**.: _Let \(\hat{\psi}\) be as defined in (3.2). Then the following properties hold true._ * _The weak derivative fulfills_ \[\nabla\hat{\psi}=\hat{\mathbf{\phi}}\] (3.9) _where, for_ \(\mathbf{\xi}\in\mathbb{R}^{N}\)_,_ \(\hat{\phi}^{i}(\mathbf{\xi})\coloneqq\hat{\phi}^{i}(\xi^{i})\coloneqq 2\left[\xi^{i} \right]_{-}\) _with_ \([s]_{-}\coloneqq\min(s,0)\) _for all_ \(s\in\mathbb{R}\) 2. _Monotonicity:_\(\hat{\mathbf{\phi}}\) _is non-decreasing in each component, i.e.,_ \[0\leq\left(\hat{\phi}^{i}(r)-\hat{\phi}^{i}(s)\right)(r-s)\] (3.10) _for all_ \(r,s\in\mathbb{R}\) _and_ \(i=1,\ldots,N\)_._ 3. 
_Convexity:_ \(\hat{\psi}\) _is convex, i.e.,_ \[(\boldsymbol{\xi}-\boldsymbol{\eta})\cdot\hat{\boldsymbol{\phi}}(\boldsymbol{\eta})\leq\hat{\psi}(\boldsymbol{\xi})-\hat{\psi}(\boldsymbol{\eta}) \tag{3.11}\] _for all_ \(\boldsymbol{\xi},\boldsymbol{\eta}\in\mathbb{R}^{N}\)_._

Using these results, we now prove the well-posedness result for the regularized problem. In order to show \(H^{2}\)-regularity of the solution \(\boldsymbol{\varphi}_{\delta}\), we need the following regularity assumption on the design domain which shall hold for the rest of the paper.

(D) In addition to (A1), we assume that \(\Omega\) has at least one of the following properties: (i) The boundary \(\partial\Omega\) is of class \(C^{1,1}\). (ii) \(\Omega\) is convex.

The well-posedness result for the regularized problem (_RE_) reads as follows. **Lemma 3.9**.: _For \(\delta>0\) there exists a unique solution \(\boldsymbol{\varphi}_{\delta}\in\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}\subset H^{1}(\Omega;\mathbb{R}^{N})\) of (_RE_). The solution possesses the regularity \(\boldsymbol{\varphi}_{\delta}\in H^{2}(\Omega;\mathbb{R}^{N})\) and it satisfies_ \[\begin{split}-\Delta\boldsymbol{\varphi}_{\delta}&=-\frac{1}{\delta\varepsilon^{2}}P[\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})]+\frac{1}{\gamma\varepsilon}P\boldsymbol{f}&\text{a.e. in }\Omega,\\ \nabla\boldsymbol{\varphi}_{\delta}\,\boldsymbol{n}&=\boldsymbol{0}&\text{a.e. on }\partial\Omega.\end{split}\] ( \(PRE\) ) Proof.: First of all, we want to show that there exists at most one solution to (_RE_). To this end, we assume that there are two solutions \(\boldsymbol{\varphi}_{\delta,1},\boldsymbol{\varphi}_{\delta,2}\in\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}\). Then, by subtracting the corresponding equations, we obtain \[\gamma\varepsilon\left(\nabla[\boldsymbol{\varphi}_{\delta,1}-\boldsymbol{\varphi}_{\delta,2}],\nabla\boldsymbol{\eta}\right)+\frac{\gamma}{\varepsilon\delta}\left(P[\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta,1})-\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta,2})],\boldsymbol{\eta}\right)=0\] for all \(\boldsymbol{\eta}\in H^{1}(\Omega;\mathbb{R}^{N})\). Testing with \(\boldsymbol{\varphi}_{\delta,1}-\boldsymbol{\varphi}_{\delta,2}\in L^{2}_{T\Sigma}(\Omega;\mathbb{R}^{N})\cap L^{2}_{0}(\Omega;\mathbb{R}^{N})\), we can drop the projection \(P\) in the second term. Using the monotonicity property (3.10), we infer \[\gamma\varepsilon\left(\nabla[\boldsymbol{\varphi}_{\delta,1}-\boldsymbol{\varphi}_{\delta,2}],\nabla[\boldsymbol{\varphi}_{\delta,1}-\boldsymbol{\varphi}_{\delta,2}]\right)\leq 0.\] This yields \(\boldsymbol{\varphi}_{\delta,1}=\boldsymbol{\varphi}_{\delta,2}\) as these functions have identical mean value. In order to prove the existence of a solution, we consider a suitable minimization problem. Therefore, we define the functional \[I_{\delta}(\boldsymbol{\xi})\coloneqq\frac{\gamma\varepsilon}{2}\int_{\Omega}|\nabla\boldsymbol{\xi}|^{2}\,\mathrm{d}x+\frac{\gamma}{\varepsilon\delta}\int_{\Omega}\hat{\psi}(\boldsymbol{\xi})\,\mathrm{d}x-\int_{\Omega}\boldsymbol{f}\cdot\boldsymbol{\xi}\,\mathrm{d}x \tag{3.12}\] for all \(\boldsymbol{\xi}\in H^{1}(\Omega;\mathbb{R}^{N})\). 
If we can now show that there exists a \(\boldsymbol{\varphi}_{\delta}\in\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}\) that solves the minimization problem \[\min_{\boldsymbol{\xi}\in\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}}I_{\delta}(\boldsymbol{\xi}), \tag{3.13}\] the existence result is proven, since then the Gâteaux derivative \(I^{\prime}_{\delta}(\boldsymbol{\varphi}_{\delta})\) of \(I_{\delta}\) at \(\boldsymbol{\varphi}_{\delta}\), which is given by \[I^{\prime}_{\delta}(\boldsymbol{\varphi}_{\delta})\boldsymbol{\eta}=\gamma\varepsilon\left(\nabla\boldsymbol{\varphi}_{\delta},\nabla\boldsymbol{\eta}\right)+\frac{\gamma}{\varepsilon\delta}\left(\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta}),\boldsymbol{\eta}\right)-(\boldsymbol{f},\boldsymbol{\eta}), \tag{3.14}\] vanishes for all directions \(\boldsymbol{\eta}\in H^{1}(\Omega;\mathbb{R}^{N})\cap L^{2}_{T\Sigma}(\Omega;\mathbb{R}^{N})\cap L^{2}_{0}(\Omega;\mathbb{R}^{N})\). By applying the projections \(P_{T\Sigma}\) and \(P_{\int}\) to any \(\boldsymbol{\eta}\in H^{1}(\Omega;\mathbb{R}^{N})\) and then switching them to the other component in the \(L^{2}\) scalar product, it follows that solving (3.13) is equivalent to solving (_RE_). Note that there is no need to project the gradient term. This is justified as follows. By construction, we have \[P_{T\Sigma}\boldsymbol{\eta}=\boldsymbol{\eta}-\left[\frac{1}{N}\sum_{k=1}^{N}\eta^{k}\right]\boldsymbol{1}.\] On the other hand, we compute \[\nabla\left[\sum_{k=1}^{N}\eta^{k}\boldsymbol{1}\right]=\left(\left[\sum_{k=1}^{N}\partial_{1}\eta^{k}\right]\boldsymbol{1},\ldots,\left[\sum_{k=1}^{N}\partial_{d}\eta^{k}\right]\boldsymbol{1}\right),\] and therefore, the entries in each column are identical. Now, we compute \[\nabla\boldsymbol{\varphi}_{\delta}:\nabla\left[\sum_{k=1}^{N}\eta^{k}\boldsymbol{1}\right]=\sum_{i=1}^{d}\left\{\left[\sum_{k=1}^{N}\partial_{i}\eta^{k}\right]\sum_{j=1}^{N}\partial_{i}\varphi_{\delta}^{j}\right\}. \tag{3.15}\] We see that this term vanishes since, by construction, \(\sum_{i=1}^{N}\varphi_{\delta}^{i}=1\) a.e. in \(\Omega\) because \(\boldsymbol{\varphi}_{\delta}\in\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}\). In other words, \(\partial_{i}\boldsymbol{\varphi}_{\delta}\in L^{2}_{T\Sigma}(\Omega;\mathbb{R}^{N})\). As the gradient term is invariant under addition of constants, we can also omit the projection \(P_{\int}\). It remains to show that there exists a minimizer of (3.13). By construction, \(\hat{\psi}\geq 0\). Furthermore, using Young's inequality, we find a constant \(C>0\) such that \[\int_{\Omega}\left|\nabla\boldsymbol{\xi}\right|^{2}\,\mathrm{d}x+\int_{\Omega}\boldsymbol{f}\cdot\boldsymbol{\xi}\,\mathrm{d}x\geq-C\quad\text{for all }\boldsymbol{\xi}\in\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}. \tag{3.16}\] This is obtained by absorbing the quantity \(\left\|\boldsymbol{\xi}\right\|_{L^{2}}^{2}\) by the term \(\left\|\nabla\boldsymbol{\xi}\right\|_{L^{2}}^{2}\), which controls the whole \(H^{1}(\Omega;\mathbb{R}^{N})\)-norm as all \(\boldsymbol{\xi}\in\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}\) have a fixed mean value. Hence, \(I_{\delta}\) is bounded from below on \(\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}\) and thus, the infimum exists. 
Consequently, we find a minimizing sequence \((\boldsymbol{\varphi}_{\delta,k})_{k\in\mathbb{N}}\subset\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}\) such that \[\lim_{k\to\infty}I_{\delta}(\boldsymbol{\varphi}_{\delta,k})=\inf_{\boldsymbol{\varphi}\in\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}}I_{\delta}(\boldsymbol{\varphi}).\] In particular, \(\|\boldsymbol{\varphi}_{\delta,k}\|_{H^{1}(\Omega;\mathbb{R}^{N})}\) remains bounded and thus, there exists \(\boldsymbol{\varphi}_{\delta}\in H^{1}(\Omega;\mathbb{R}^{N})\) such that the convergences \[\boldsymbol{\varphi}_{\delta,k}\rightharpoonup\boldsymbol{\varphi}_{\delta}\quad\text{in }H^{1}(\Omega;\mathbb{R}^{N}),\] \[\boldsymbol{\varphi}_{\delta,k}\to\boldsymbol{\varphi}_{\delta}\quad\text{in }L^{2}(\Omega;\mathbb{R}^{N}),\] \[\boldsymbol{\varphi}_{\delta,k}\to\boldsymbol{\varphi}_{\delta}\quad\text{a.e. in }\Omega\] hold along a non-relabeled subsequence. From these convergences, we deduce \(\boldsymbol{\varphi}_{\delta}\in\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}\). Noticing that \(\hat{\psi}(\boldsymbol{\varphi}_{\delta,k})\leq|\boldsymbol{\varphi}_{\delta,k}|^{2}\) a.e. in \(\Omega\) and that \(\hat{\psi}\) is continuous, we apply the generalized majorized convergence theorem of Lebesgue, see [69, p. 1015], to deduce \[\int_{\Omega}\hat{\psi}(\boldsymbol{\varphi}_{\delta,k})\,\mathrm{d}x\to\int_{\Omega}\hat{\psi}(\boldsymbol{\varphi}_{\delta})\,\mathrm{d}x\] for \(k\to\infty\). The weak lower semi-continuity of norms yields \[\int_{\Omega}\left|\nabla\boldsymbol{\varphi}_{\delta}\right|^{2}\mathrm{d}x\leq\liminf_{k\to\infty}\int_{\Omega}\left|\nabla\boldsymbol{\varphi}_{\delta,k}\right|^{2}\mathrm{d}x,\] and it further holds \[\lim_{k\to\infty}\int_{\Omega}\boldsymbol{f}\cdot\boldsymbol{\varphi}_{\delta,k}\,\mathrm{d}x=\int_{\Omega}\boldsymbol{f}\cdot\boldsymbol{\varphi}_{\delta}\,\mathrm{d}x.\] Altogether, we thus have \[I_{\delta}(\boldsymbol{\varphi}_{\delta})\leq\liminf_{k\to\infty}I_{\delta}(\boldsymbol{\varphi}_{\delta,k})=\inf_{\boldsymbol{\varphi}\in\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}}I_{\delta}(\boldsymbol{\varphi}).\] This implies that \(\boldsymbol{\varphi}_{\delta}\in\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}\) is a minimizer. Now that we have shown the existence of a solution \(\boldsymbol{\varphi}_{\delta}\in H^{1}(\Omega;\mathbb{R}^{N})\) to (_RE_), it remains to prove that it possesses the desired regularity \(H^{2}(\Omega;\mathbb{R}^{N})\). Since \(\boldsymbol{\varphi}_{\delta}\) is a weak solution of (_RE_), it can be interpreted as a weak solution of \[\begin{cases}-\Delta\boldsymbol{\varphi}_{\delta}=\boldsymbol{\mathcal{F}}&\quad\text{in }\Omega,\\ \nabla\boldsymbol{\varphi}_{\delta}\,\boldsymbol{n}=\boldsymbol{0}&\quad\text{on }\partial\Omega,\end{cases} \tag{3.17}\] with \[\boldsymbol{\mathcal{F}}=\boldsymbol{\mathcal{F}}(\boldsymbol{\varphi}_{\delta})=-\frac{1}{\delta\varepsilon^{2}}P[\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})]+\frac{1}{\gamma\varepsilon}P\boldsymbol{f}\in L^{2}(\Omega;\mathbb{R}^{N}). \tag{3.18}\] Due to assumption (D), elliptic regularity theory (see, e.g., [49, Theorem 2.4.2.7] in the case (D)(i) or [49, Theorem 3.2.3.1] in the case (D)(ii), respectively) yields \(\boldsymbol{\varphi}_{\delta}\in H^{2}(\Omega;\mathbb{R}^{N})\). In particular, using this regularity, we conclude from (_RE_) that \(\boldsymbol{\varphi}_{\delta}\) satisfies (_PRE_). As we want to pass to the limit in the regularized equation, we need some uniform bounds to apply classical compactness results. 
**Lemma 3.10**.: _Let \(\boldsymbol{\varphi}_{\delta}\in H^{2}(\Omega;\mathbb{R}^{N})\) be the solution of (_RE_). Then there exists a constant \(C>0\) such that_ \[\big\|\boldsymbol{\varphi}_{\delta}\big\|_{H^{2}(\Omega;\mathbb{R}^{N})}\leq C, \tag{3.19}\] \[\big\|\big[\boldsymbol{\varphi}_{\delta}\big]_{-}\big\|_{L^{2}(\Omega;\mathbb{R}^{N})}\leq C\delta^{\frac{1}{2}}, \tag{3.20}\] \[\frac{1}{\delta}\big\|P[\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})]\big\|_{L^{2}(\Omega;\mathbb{R}^{N})}\leq C, \tag{3.21}\] _for all \(\delta>0\)._ Proof.: By the previous lemma, we know that \(\boldsymbol{\varphi}_{\delta}\) minimizes \(I_{\delta}\) (see (3.12)) over \(\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}\) (see (3.8)). Thus, we have \[I_{\delta}(\boldsymbol{\varphi}_{\delta})\leq I_{\delta}(\boldsymbol{\xi})\quad\text{for all }\boldsymbol{\xi}\in\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}.\] If we now choose any \(\boldsymbol{\xi}\in\boldsymbol{\mathcal{G}}^{\boldsymbol{m}}\subset\tilde{\boldsymbol{\mathcal{G}}}^{\boldsymbol{m}}\), we know that it is additionally componentwise non-negative and therefore \(\hat{\psi}(\boldsymbol{\xi})=0\) a.e. in \(\Omega\). In view of definition (3.12), this yields \[\frac{\gamma\varepsilon}{2}\int_{\Omega}\big|\nabla\boldsymbol{\varphi}_{\delta}\big|^{2}\,\mathrm{d}x+\frac{\gamma}{\varepsilon\delta}\int_{\Omega}\hat{\psi}(\boldsymbol{\varphi}_{\delta})\,\mathrm{d}x-\int_{\Omega}\boldsymbol{f}\cdot\boldsymbol{\varphi}_{\delta}\,\mathrm{d}x\leq\frac{\gamma\varepsilon}{2}\int_{\Omega}\left|\nabla\boldsymbol{\xi}\right|^{2}\,\mathrm{d}x-\int_{\Omega}\boldsymbol{f}\cdot\boldsymbol{\xi}\,\mathrm{d}x\leq C, \tag{3.22}\] where \(C>0\) is a constant independent of \(\delta\). Recalling the absorption trick (3.16), we obtain \[\big\|\boldsymbol{\varphi}_{\delta}\big\|_{H^{1}(\Omega;\mathbb{R}^{N})}\leq C, \tag{3.23}\] which will be needed at the end of the proof. Furthermore, using the definition of \(\hat{\psi}\) (see (3.2)), we deduce that \[\sum_{i=1}^{N}\big\|\big[\varphi_{\delta}^{i}\big]_{-}\big\|_{L^{2}}^{2}\leq C\delta,\] which directly leads to (3.20). We notice that \(\frac{1}{\delta}\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})\) is weakly differentiable (cf. [48, Lemma 7.6]) and belongs to \(H^{1}(\Omega;\mathbb{R}^{N})\). In order to prove (3.21), we test (_RE_) with \(\boldsymbol{\eta}=\frac{1}{\delta}\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})\). We obtain \[\frac{\gamma\varepsilon}{\delta}\left(\nabla\boldsymbol{\varphi}_{\delta},\nabla\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})\right)+\frac{\gamma}{\delta^{2}\varepsilon}\int_{\Omega}\big|P[\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})]\big|^{2}\,\mathrm{d}x=\frac{1}{\delta}\int_{\Omega}\boldsymbol{f}\cdot P[\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})]\,\mathrm{d}x. \tag{3.24}\] Applying [48, Lemma 7.6] to \(\hat{\boldsymbol{\phi}}\), we further deduce \[\frac{\gamma\varepsilon}{\delta}\left(\nabla\boldsymbol{\varphi}_{\delta},\nabla\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})\right)\geq 0,\] since for a.e. \(x\) in \(\Omega\) either \(\nabla\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})(x)=\boldsymbol{0}\) or \(\nabla\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})(x)=\nabla\boldsymbol{\varphi}_{\delta}(x)\). 
Applying Hölder's inequality in (3.24), we thus infer \[\frac{\gamma}{\delta^{2}\varepsilon}\big\|P[\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})]\big\|_{L^{2}}^{2}\leq\frac{C}{\delta}\big\|P[\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})]\big\|_{L^{2}},\] and thus, \[\frac{1}{\delta}\big\|P\big[\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})\big]\big\|_{L^{2}}\leq C.\] As we now have bounded both the right-hand side of (_PRE_) and \(\boldsymbol{\varphi}_{\delta}\) itself in \(L^{2}(\Omega;\mathbb{R}^{N})\) uniformly in \(\delta\) (see (3.23)), we can again apply elliptic regularity theory (see [49, Theorem 2.3.1.5] or [49, Theorem 3.2.3.1]) to deduce (3.19). In order to reformulate (_PRE_) by means of Lagrange multipliers that are expected to converge in the weak sense, we need to get rid of the projection in (3.21). This is done analogously to [23, Theorem 2.1] and therefore, we only present the statement of the result without a proof. **Lemma 3.11**.: _There exists a constant \(C>0\) such that_ \[\frac{1}{\delta}\big\|\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})\big\|_{L^{2}}\leq C\quad\text{for all }\delta>0. \tag{3.25}\] Now, we introduce suitable Lagrange multipliers and pass to the limit in the regularized equation. **Theorem 3.12**.: _The initially chosen solution \(\boldsymbol{\varphi}\in\boldsymbol{\mathcal{G}}^{\boldsymbol{m}}\) of (_GI\({}^{\varepsilon}\)_) possesses the regularity \(\boldsymbol{\varphi}\in H^{2}(\Omega;\mathbb{R}^{N})\). Furthermore, there are Lagrange multipliers \(\boldsymbol{\Lambda},\boldsymbol{\mu}\in L^{2}(\Omega;\mathbb{R}^{N})\) and \(\boldsymbol{\vartheta}\in\mathbb{R}^{N}\) such that_ \[\begin{split}-\gamma\varepsilon\Delta\boldsymbol{\varphi}&=\frac{1}{\varepsilon}(\boldsymbol{\Lambda}+\boldsymbol{\vartheta}+\boldsymbol{\mu})+P_{T\Sigma}\boldsymbol{f}^{\boldsymbol{\varphi}}&\text{a.e. in }\Omega,\\ \nabla\boldsymbol{\varphi}\,\boldsymbol{n}&=\boldsymbol{0}&\text{on }\partial\Omega.\end{split}\] ( \(GS^{\varepsilon}\) ) _with_ \[\Lambda^{i}=\Lambda^{j}\quad\text{for all }i,j\in\{1,\ldots,N\}, \tag{3.26}\] \[\mu^{i}\geq 0\text{ and }\mu^{i}\varphi^{i}=0\quad\text{a.e. in }\Omega\text{ for all }i\in\{1,\ldots,N\}, \tag{3.27}\] \[\sum_{i=1}^{N}\vartheta^{i}=0. \tag{3.28}\] Proof.: From (3.19) we deduce the existence of a function \(\overline{\boldsymbol{\varphi}}\in H^{2}(\Omega;\mathbb{R}^{N})\) such that \[\boldsymbol{\varphi}_{\delta}\rightharpoonup\overline{\boldsymbol{\varphi}}\quad\text{in }H^{2}(\Omega;\mathbb{R}^{N}), \tag{3.29}\] \[\boldsymbol{\varphi}_{\delta}\to\overline{\boldsymbol{\varphi}}\quad\text{in }H^{1}(\Omega;\mathbb{R}^{N}),\] \[\boldsymbol{\varphi}_{\delta}\to\overline{\boldsymbol{\varphi}}\quad\text{a.e. in }\Omega,\] \[\hat{\boldsymbol{\phi}}(\boldsymbol{\varphi}_{\delta})\to 0\quad\text{in }L^{2}(\Omega;\mathbb{R}^{N}),\] 
Recalling the definition of \(\mathbf{f}\) in (3.6), we now define the Lagrange multipliers of the regularized problem as \[\mathbf{\Lambda}_{\delta} \rightharpoonup\frac{1}{N}\sum_{i=1}^{N}\left(\frac{\gamma}{ \delta}\hat{\phi}^{i}(\mathbf{\varphi}_{\delta})\right)\mathbf{1}, \tag{3.30}\] \[\mathbf{\vartheta}_{\delta} \rightharpoonup\fint_{\Omega}P_{T\Sigma}\left(\frac{\gamma}{ \delta}\mathbf{\hat{\phi}}(\mathbf{\varphi}_{\delta})-\varepsilon\mathbf{f}\right)\, \mathrm{d}x,\] \[\mathbf{\mu}_{\delta} \rightharpoonup\frac{-\gamma}{\delta}\mathbf{\hat{\phi}}(\mathbf{\varphi }_{\delta}).\] The reason why we do not reformulate the projection term \(P_{T\Sigma}\mathbf{f}\) by means of a Lagrange multiplier is that this is a term depending on \(x\), which will produce terms of order \(\mathcal{O}(\frac{1}{\varepsilon^{2}})\) when we consider the inner expansions in Section 6 due to the involved derivative of eigenfunctions. Recalling Definition 3.3, we have \[\mathbf{\Lambda}_{\delta}+\mathbf{\vartheta}_{\delta}+\mathbf{\mu}_{\delta}=-\frac{\gamma} {\delta}P_{T\Sigma}[\mathbf{\hat{\phi}}(\mathbf{\varphi}_{\delta})]-\varepsilon\fint _{\Omega}P_{T\Sigma}\mathbf{f}\,\mathrm{d}x.\] Hence, we can write (_RE_) as \[\gamma\varepsilon\left(\nabla\mathbf{\varphi}_{\delta},\nabla\mathbf{\eta}\right)- \frac{1}{\varepsilon}\left(\mathbf{\Lambda}_{\delta}+\mathbf{\vartheta}_{\delta}+\mathbf{ \mu}_{\delta},\mathbf{\eta}\right)=(P_{T\Sigma}\mathbf{f},\mathbf{\eta})\quad\text{for all }\mathbf{\eta}\in H^{1}(\Omega;\mathbb{R}^{N}). \tag{3.31}\] We point out that the Lagrange multipliers are constructed in such a way that the factor \(\frac{1}{\varepsilon}\) corresponding to the scaling of the original potential \(\psi\) is still present. This will be important in the next sections for the sharp-interface asymptotics. We know from Lemma 3.11 that \(\mathbf{\Lambda}_{\delta},\mathbf{\mu}_{\delta}\in L^{2}(\Omega;\mathbb{R}^{N})\) and \(\mathbf{\vartheta}_{\delta}\in\mathbb{R}^{N}\) are bounded uniformly in \(\delta\). Hence, we find a subsequence and \(\mathbf{\Lambda},\mathbf{\mu}\in L^{2}(\Omega;\mathbb{R}^{N})\) and \(\mathbf{\vartheta}\in\mathbb{R}^{N}\) such that \[\mathbf{\Lambda}_{\delta} \rightharpoonup\mathbf{\Lambda} \text{in }L^{2}(\Omega;\mathbb{R}^{N}), \tag{3.32}\] \[\mathbf{\vartheta}_{\delta} \to\mathbf{\vartheta} \text{in }\mathbb{R}^{N},\] \[\mathbf{\mu}_{\delta} \rightharpoonup\mathbf{\mu} \text{in }L^{2}(\Omega;\mathbb{R}^{N})\] as \(\delta\to 0\). We additionally know from the definition of \(\mathbf{\hat{\phi}}\) in (3.9) that \(\mathbf{\mu}\geq 0\) componentwise as weak convergence in \(L^{2}(\Omega;\mathbb{R}^{N})\) preserves non-negativity. Furthermore, from the construction in (3.30) we directly deduce (3.26) and (3.28). Passing to the limit in (3.31), we infer \[\gamma\varepsilon\left(\nabla\overline{\mathbf{\varphi}},\nabla\mathbf{\eta}\right)- \frac{1}{\varepsilon}\left(\mathbf{\Lambda}+\mathbf{\vartheta}+\mathbf{\mu},\mathbf{\eta} \right)=(P_{T\Sigma}\mathbf{f}^{\mathbf{\varphi}},\mathbf{\eta})\,,\quad\text{for all }\mathbf{\eta}\in H^{1}(\Omega;\mathbb{R}^{N}). \tag{3.33}\] Thus, the regularity \(\overline{\mathbf{\varphi}}\in H^{2}(\Omega;\mathbb{R}^{N})\) and integration by parts yield the equation \[-\gamma\varepsilon\Delta\overline{\mathbf{\varphi}}=\frac{1}{\varepsilon}(\mathbf{ \Lambda}+\mathbf{\vartheta}+\mathbf{\mu})+P_{T\Sigma}\mathbf{f}^{\mathbf{\varphi}}\qquad\text{ a.e. in }\Omega. 
\tag{3.34}\] If we can now show that for our initially fixed solution \(\mathbf{\varphi}\in\mathbf{\mathcal{G}}^{m}\) of \((GI^{\varepsilon})\) it holds \(\mathbf{\varphi}=\overline{\mathbf{\varphi}}\), the proof is complete. Let us consider the test function \(\mathbf{\eta}\coloneqq\overline{\mathbf{\varphi}}-\mathbf{\varphi}\in H^{1}(\Omega;\mathbb{ R}^{N})\cap L^{\infty}(\Omega;\mathbb{R}^{N})\). Due to (3.26), we have \(\big{(}\mathbf{\Lambda}_{\delta},\mathbf{\eta}\big{)}=0\), as \(\sum_{i=1}^{N}\eta^{i}=0\) because of \(\overline{\mathbf{\varphi}},\mathbf{\varphi}\in\mathbf{\mathcal{G}}^{m}\). In view of (3.28) we know that \(\big{(}\mathbf{\vartheta},\mathbf{\eta}\big{)}=0\), because by construction \(\int_{\Omega}\mathbf{\eta}\,\mathrm{d}x=0\). As already mentioned, we have \(\mathbf{\mu}_{\delta}\geq 0\). Hence, using the monotonicity (3.10), we infer \[(\mathbf{\mu}_{\delta},\mathbf{\varphi}_{\delta})=-\frac{1}{\delta}\left(\hat{\mathbf{ \phi}}(\mathbf{\varphi}_{\delta}),\mathbf{\varphi}_{\delta}\right)\leq 0.\] Using the convergences (3.29) and (3.32), we deduce \((\mathbf{\mu},\overline{\mathbf{\varphi}})\leq 0\). Recalling \(\mathbf{\mu}\geq 0\) and that \(\overline{\mathbf{\varphi}}\in\mathbf{\mathcal{G}}^{m}\) is component wise non-negative, we already deduce \((\mathbf{\mu},\overline{\mathbf{\varphi}})=0\). As also \(\mathbf{\varphi}\in\mathbf{\mathcal{G}}^{m}\) and \(\mathbf{\varphi}\) is component wise non-negative, we have \((\mathbf{\mu},\mathbf{\varphi})\geq 0\). Combining these results and testing (3.33) with our particular choice \(\mathbf{\eta}=\overline{\mathbf{\varphi}}-\mathbf{\varphi}\), we get \[\gamma\varepsilon\left(\nabla\overline{\mathbf{\varphi}},\nabla[\overline{\mathbf{ \varphi}}-\mathbf{\varphi}]\right)=-\frac{1}{\varepsilon}(\mathbf{\mu},\mathbf{\varphi})+ \left(P_{T\Sigma}\mathbf{f},\overline{\mathbf{\varphi}}-\mathbf{\varphi}\right)\leq(P_{T \Sigma}\mathbf{f},\overline{\mathbf{\varphi}}-\mathbf{\varphi}).\] Considering the gradient inequality (3.7) tested with \(\tilde{\mathbf{\varphi}}=\overline{\mathbf{\varphi}}\in\mathbf{\mathcal{G}}^{m}\), we have \[\gamma\varepsilon\left(\nabla\mathbf{\varphi},\nabla[\overline{\mathbf{\varphi}}- \mathbf{\varphi}]\right)\geq(\mathbf{f},\overline{\mathbf{\varphi}}-\mathbf{\varphi})=(P_{T \Sigma}\mathbf{f},\overline{\mathbf{\varphi}}-\mathbf{\varphi}).\] Hence, by subtracting both inequalities, we infer \[\gamma\varepsilon\left(\nabla[\overline{\mathbf{\varphi}}-\mathbf{\varphi}],\nabla[ \overline{\mathbf{\varphi}}-\mathbf{\varphi}]\right)\leq 0.\] As \(\int_{\Omega}\overline{\mathbf{\varphi}}-\mathbf{\varphi}\,\mathrm{d}x=0\), this gives us the desired identity \(\mathbf{\varphi}=\overline{\mathbf{\varphi}}\in H^{2}(\Omega;\mathbb{R}^{N})\). From the previous reasoning we know \[\sum_{i=1}^{N}\int_{\Omega}\mu^{i}\varphi^{i}\,\mathrm{d}x=(\mathbf{\mu},\mathbf{ \varphi})_{L^{2}}=0,\] Furthermore, we know that \(\mathbf{\mu},\mathbf{\varphi}\geq 0\) componentwise and thus, each summand in above equality has to be identical to \(0\). This verifies (3.27). In the following, we use the above knowledge to show that our asymptotic expansions will produce a state equation and a gradient equality in the sharp-interface limit. ## 4. Asymptotic expansions As mentioned above, we will now perform the procedure of sharp-interface asymptotics. Therefore, we start by analyzing outer and inner expansions approximating the quantities involved in our problem. 
The outer expansions are used to approximate these quantities in regions far away from the interfacial layers. They will be used to derive the state equation in the sharp-interface limit. The inner expansions are used in regions close to the interfacial layers where the phase transition takes place. They will provide boundary conditions for the equations obtained in the sharp-interface limit. As these layers are expected to scale proportionally to \(\varepsilon\), a rescaling is needed here. By comparing the leading order equations, we will obtain jump conditions at the phase interfaces within the design domain and a sharp-interface version of the gradient equality \((GS^{\varepsilon})\). In the following, we choose \(\left(\mathbf{\varphi}^{\varepsilon}\right)_{\varepsilon>0}\subset\mathbf{\mathcal{G}}^{m}\) as a sequence of minimizers of the optimization problem \((\mathcal{P}_{l}^{\varepsilon})\). For \(r=1,\ldots,l\), \(\left(\mathbf{w}_{n_{r}}^{\varepsilon},\lambda_{n_{r}}^{\varepsilon}\right)_{\varepsilon>0}\subset H^{1}_{D}(\Omega;\mathbb{R}^{d})\times\mathbb{R}\) denotes the corresponding sequence of \(L^{2}_{\mathbf{\varphi}}(\Omega;\mathbb{R}^{d})\)-normalized eigenfunctions and eigenvalues, which are non-trivial solutions of the state equation \((SE^{\varepsilon})\) involved in the optimization problem \((\mathcal{P}_{l}^{\varepsilon})\).

### Outer expansions

As in [20], we first consider the asymptotic expansions in regions "far" away from the interface. Therefore, we assume expansions of the form
\[\begin{split}\boldsymbol{\varphi}^{\varepsilon}(x)&=\sum_{k=0}^{\infty}\varepsilon^{k}\boldsymbol{\varphi}_{k}(x),\\ \lambda_{n_{r}}^{\varepsilon}&=\sum_{k=0}^{\infty}\varepsilon^{k}\lambda_{k,n_{r}},\\ \boldsymbol{w}_{n_{r}}^{\varepsilon}(x)&=\sum_{k=0}^{\infty}\varepsilon^{k}\boldsymbol{w}_{k,n_{r}}(x)\end{split} \tag{4.1}\]
for all \(x\in\Omega\). Furthermore, we demand for all \(x\in\Omega\) that \(\boldsymbol{\varphi}_{0}(x)\in\boldsymbol{G}\), \(\boldsymbol{\varphi}_{k}(x)\in T\Sigma\), \(\int\boldsymbol{\varphi}_{0}=\boldsymbol{m}\) and \(\int\boldsymbol{\varphi}_{k}=\boldsymbol{0}\) for \(k\geq 1\), in order to be compatible with the constraints on the phase-field formulated in Section 2.2. As we are concerned with a formal limit process, we assume all the appearing quantities to possess a suitable regularity such that we can write the state equation (\(SE^{\varepsilon}\)) in its strong formulation. Using standard arguments relying on the \(\Gamma\)-convergence of the Ginzburg-Landau energy in [14], we can partition the domain as
\[\Omega=\bigcup_{i=1}^{N}\Omega_{i}\cup\mathcal{N}\quad\text{with}\quad\Omega_{i}:=\left\{\boldsymbol{\varphi}_{0}=\boldsymbol{e}_{i}\right\}, \tag{4.2}\]
where \(\mathcal{N}\subset\Omega\) is a Lebesgue null set. In general, the sets \(\Omega_{i}\) are only finite perimeter sets. This follows from the boundedness of the Ginzburg-Landau energy, the inequality [14, (3.1)] and [14, Proposition 2.2]. Nevertheless, for our asymptotic analysis we assume them to be smooth enough. With this knowledge, we are in a position to derive the limit state equation resulting from (\(SE^{\varepsilon}\)) in the framework of outer asymptotic expansions.
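Before turning to the precise statement, the following minimal symbolic sketch, which is not part of the original argument, may illustrate the purely algebraic bookkeeping behind such outer expansions. It uses sympy (a choice made only for this illustration) and scalar stand-ins for the tensor-valued quantities; all names are hypothetical placeholders and spatial derivatives are suppressed, so only the sorting by powers of \(\varepsilon\) is mimicked, not the differential operators. The point is that, to leading order, only the bulk material data survive, which is exactly the mechanism producing the limit equations stated in Claim 4.1 below.

```python
# Minimal symbolic sketch (illustrative only): substitute truncated outer expansions
# into scalar stand-ins for C(phi^eps) and rho(phi^eps) and sort by powers of eps.
# All names are hypothetical placeholders; spatial derivatives are suppressed.
import sympy as sp

eps = sp.symbols("varepsilon", positive=True)
phi0, phi1, w0, w1, lam0, lam1 = sp.symbols("phi0 phi1 w0 w1 lambda0 lambda1")
C_mat, C_void, rho_mat, rho_void = sp.symbols("C_mat C_void rho_mat rho_void", positive=True)

phi = phi0 + eps * phi1            # truncated expansion of the phase field
w   = w0 + eps * w1                # truncated expansion of the eigenfunction
lam = lam0 + eps * lam1            # truncated expansion of the eigenvalue

# linear interpolation between a material coefficient and an eps-scaled void
# coefficient, in the spirit of the decomposition (4.3) stated in Claim 4.1 below
C   = C_mat * phi + eps * C_void * (1 - phi)
rho = rho_mat * phi + eps * rho_void * (1 - phi)

residual = sp.expand(C * w - lam * rho * w)   # stand-in for "elastic term - lam*rho*w"

for k in range(3):
    print(f"O(eps^{k}):", sp.factor(residual.coeff(eps, k)))
```

At order \(\mathcal{O}(1)\) only \(C_{\mathrm{mat}}\) and \(\rho_{\mathrm{mat}}\) appear, mirroring the fact that the void contributions, being scaled by \(\varepsilon\), drop out of the leading order equations.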
**Claim 4.1**.: Recall the scaling of \(\mathbb{C}\) and \(\rho\) in (2.5), i.e.,
\[\begin{split}\mathbb{C}(\boldsymbol{\varphi})=\overline{\mathbb{C}}(\boldsymbol{\varphi})+\tilde{\mathbb{C}}^{N}\varepsilon\varphi^{N}&=\sum_{i=1}^{N-1}\mathbb{C}^{i}\varphi^{i}+\tilde{\mathbb{C}}^{N}\varepsilon\varphi^{N},\\ \rho(\boldsymbol{\varphi})=\overline{\rho}(\boldsymbol{\varphi})+\tilde{\rho}^{N}\varepsilon\varphi^{N}&=\sum_{i=1}^{N-1}\rho^{i}\varphi^{i}+\tilde{\rho}^{N}\varepsilon\varphi^{N},\end{split} \tag{4.3}\]
for \(\boldsymbol{\varphi}\in\boldsymbol{G}\). Then, for \(r\in\{1,\ldots,l\}\), we obtain that the pair \((\lambda_{0,n_{r}},\boldsymbol{w}_{0,n_{r}})\) fulfills the eigenvalue equations in the material regions
\[\left\{\begin{array}{rcl}-\nabla\cdot\left[\mathbb{C}^{i}\mathcal{E}(\boldsymbol{w}_{0,n_{r}})\right]&=\lambda_{0,n_{r}}\rho^{i}\boldsymbol{w}_{0,n_{r}}&\text{in }\Omega_{i},\\ \boldsymbol{w}_{0,n_{r}}&=\boldsymbol{0}&\text{on }\Gamma_{D}\cap\partial\Omega_{i},\\ \left[\mathbb{C}^{i}\mathcal{E}(\boldsymbol{w}_{0,n_{r}})\right]\boldsymbol{n}&=\boldsymbol{0}&\text{on }\Gamma_{0}\cap\partial\Omega_{i},\end{array}\right.\]
(\(SE_{0}^{i}\)) for \(i=1,\ldots,N-1\). Furthermore, the normalization condition (2.10) is transferred to the limit eigenfunction \(\boldsymbol{w}_{0,n_{r}}\) meaning that
\[1=\int_{\Omega}\overline{\rho}(\boldsymbol{\varphi}_{0})\left|\boldsymbol{w}_{0,n_{r}}\right|^{2}\,\mathrm{d}x=\sum_{i=1}^{N-1}\int_{\Omega_{i}}\overline{\rho}(\boldsymbol{\varphi}_{0})\left|\boldsymbol{w}_{0,n_{r}}\right|^{2}\,\mathrm{d}x. \tag{4.4}\]
In particular, the eigenfunction \(\boldsymbol{w}_{0,n_{r}}\) is non-trivial in \(\Omega_{i}\) for at least one index \(i\in\{1,\ldots,N-1\}\). Thus, \(\boldsymbol{w}_{0,n_{r}}\) cannot be a localized eigenmode as it is not supported only in the void region \(\Omega_{N}\).

**Remark 4.2**.:
(a) Of course, the eigenvalue \(\lambda_{n_{r}}^{\varepsilon}\) could degenerate in the limit, i.e., \(\lambda_{0,n_{r}}=0\). This is no contradiction to the normalization (4.4) because \(\boldsymbol{w}_{0,n_{r}}\) could potentially be a non-trivial constant in each material region \(\Omega_{i}\). If each material region \(\Omega_{i}\) shares a sufficiently nice part of the boundary with \(\Gamma_{D}\), one can use Korn's inequality (see, e.g., [33, Theorem 6.15-4] or [69, Theorem 62.13]) to deduce that \(\boldsymbol{w}_{0,n_{r}}=0\) in each \(\Omega_{i}\), which would then indeed contradict (4.4). The inner expansions will provide us with boundary conditions that allow us to refine this statement, see Section 7 (especially Remark 7.1).
(b) In the case \(\lambda_{0,n_{r}}>0\), even though the limit eigenvalue equations \((SE_{0}^{i})\) hold for all \(i\in\{1,\ldots,N-1\}\), the eigenfunction \(\boldsymbol{w}_{0,n_{r}}\) could potentially be non-trivial only in one particular material region \(\Omega_{i}\) but vanish in all other material regions \(\Omega_{j}\) with \(j\in\{1,\ldots,N-1\}\setminus\{i\}\). This means that a non-trivial equation might hold only in one single material region.

Let us show Claim 4.1 assuming that outer expansions of the form (4.1) exist. For the sake of a cleaner presentation, we will now fix the index \(n_{r}\in\mathbb{N}\) and in the following, we omit the subscript \(n_{r}\).
In the spirit of formal asymptotics, we consider the state equation \((SE^{\varepsilon})\), i.e., \[-\nabla\cdot[\mathbb{C}(\boldsymbol{\varphi}^{\varepsilon})\mathcal{E}( \boldsymbol{w}^{\varepsilon})]=\lambda^{\varepsilon}\rho(\boldsymbol{\varphi} ^{\varepsilon})\boldsymbol{w}^{\varepsilon}\quad\text{a.e. in }\Omega,\] and the normalization condition \[1=\int_{\Omega}\rho(\boldsymbol{\varphi}^{\varepsilon})\left|\boldsymbol{w}^{ \varepsilon}\right|^{2}\,\mathrm{d}x \tag{4.5}\] resulting from (2.10). Then, we plug in the asymptotic expansions (4.1) and consider each resulting order in \(\varepsilon\) separately. We deduce that (4.5) reads to order \(\mathcal{O}(1)\) \[1=\int_{\Omega}\overline{\rho}(\boldsymbol{\varphi}_{0})\left|\boldsymbol{w} _{0}\right|^{2}\,\mathrm{d}x=\sum_{i=1}^{N-1}\int_{\Omega_{i}}\rho^{i}\left| \boldsymbol{w}_{0}\right|^{2}\,\mathrm{d}x,\] which proves (4.4). As a consequence, \(\boldsymbol{w}_{0}\) has to be non-trivial in in \(\Omega_{i}\) for at least one index \(i\in\{1,\ldots,N-1\}\). Eventually, we compare the contributions of order \(\mathcal{O}(1)\) in the state equation. We obtain \[-\nabla\cdot\left[\overline{\mathbb{C}}(\boldsymbol{\varphi}_{0})\mathcal{E} (\boldsymbol{w}_{0})\right]=\lambda_{0}\overline{\rho}(\boldsymbol{\varphi}_{0 })\boldsymbol{w}_{0}\quad\text{a.e. in }\Omega, \tag{4.6}\] which reads for each phase \[-\nabla\cdot\left[\mathbb{C}^{i}\mathcal{E}(\boldsymbol{w}_{0})\right]= \lambda_{0}\rho^{i}\boldsymbol{w}_{0}\quad\text{a.e. in }\Omega_{i}\] for \(i=1,\ldots,N-1\). The remaining boundary conditions on the _outer_ boundary \(\Gamma\) follow directly by plugging in the asymptotic expansion into \((SE^{\varepsilon})\). This completes the argumentation. ### Intermezzo on spurious eigenmodes As already mentioned in the introduction, we now want to analytically justify the model that will be chosen for the numerical computations in order to avoid spurious eigenmodes. As we have seen in the above reasoning, assuming outer expansions of the form (4.1) and a decomposition of \(\mathbb{C}\) and \(\rho\) as in (4.3), we recover the desired limit system. Furthermore, we see that in (4.3), it is only important to scale the void contributions \(\tilde{\mathbb{C}}^{N}\) and \(\tilde{\rho}^{N}\) with \(\varepsilon^{p}\) for _some_\(p>0\), but the specific choice of \(p>0\) does not affect the line of argument. Therefore, we keep the linear scaling in \(\varepsilon\) for the analysis in the subsequent sections, noting that also for all subsequent steps any other scaling of the void contributions would work. However, in numerical simulations the phenomenon of spurious eigenmodes is a serious issue, see [7, 18, 29, 63]. The problem is that if the model parameters are not chosen correctly, eigenmodes that are supported only in the void region can actually emerge. Of course, the associated eigenvalues are _unphysical_ as the void should not contribute to the resonance behaviour of the structure. Nevertheless, even though spurious eigenmodes might not be avoided in numerical simulations, they do not pose any problem if their associated eigenvalues are large since then, they do not affect the part of the spectrum that is involved in the optimization problem (\(\mathcal{P}_{l}^{\varepsilon}\)). 
For this reason, as also observed in the aforementioned literature, the key idea is to choose the scaling in (4.3) in such a way that spurious eigenmodes will only produce large eigenvalues or, more precisely, eigenvalues \(\lambda^{\varepsilon}\) with \(\lambda^{\varepsilon}\to\infty\) as \(\varepsilon\to 0\). In particular, this means that by using an adequate scaling, spurious eigenmodes will not enter the sharp interface limit as their eigenvalues leave the considered part of the spectrum. In order to allow for spurious eigenmodes in our asymptotic expansions, we have to include terms of _negative_ order in \(\varepsilon\).

**Claim 4.3**.: Assume the following outer asymptotic expansions
\[\begin{split}\boldsymbol{\varphi}^{\varepsilon}(x)&=\sum_{k=0}^{\infty}\varepsilon^{k}\boldsymbol{\varphi}_{k}(x),\\ \lambda_{n_{r}}^{\varepsilon}&=\sum_{k=-m}^{\infty}\varepsilon^{k}\lambda_{k,n_{r}},\\ \boldsymbol{w}_{n_{r}}^{\varepsilon}(x)&=\sum_{k=-m}^{\infty}\varepsilon^{k}\boldsymbol{w}_{k,n_{r}}(x),\end{split} \tag{4.7}\]
for an arbitrary \(m\in\mathbb{N}\). Let \(\mathbb{C}\) and \(\rho\) be given as
\[\begin{split}\mathbb{C}(\boldsymbol{\varphi})&=\overline{\mathbb{C}}(\boldsymbol{\varphi})+\tilde{\mathbb{C}}^{N}\varepsilon(\varphi^{N})^{2}=\sum_{i=1}^{N-1}\mathbb{C}^{i}(\varphi^{i})^{2}+\tilde{\mathbb{C}}^{N}\varepsilon(\varphi^{N})^{2},\\ \rho(\boldsymbol{\varphi})&=\overline{\rho}(\boldsymbol{\varphi})+\tilde{\rho}^{N}\varepsilon^{2}(\varphi^{N})^{2}=\sum_{i=1}^{N-1}\rho^{i}(\varphi^{i})^{2}+\tilde{\rho}^{N}\varepsilon^{2}(\varphi^{N})^{2},\end{split} \tag{4.8}\]
for \(\boldsymbol{\varphi}\in\boldsymbol{G}\). Then, for \(r\in\{1,\ldots,l\}\), we obtain \(\boldsymbol{w}_{k,n_{r}}=\boldsymbol{0}\) and \(\lambda_{k,n_{r}}=0\) for \(k<-1\) and the pair \((\lambda_{-1,n_{r}},\boldsymbol{w}_{-1,n_{r}})\) fulfills
\[\left\{\begin{array}{rll}-\nabla\cdot\left[\tilde{\mathbb{C}}^{N}\mathcal{E}(\boldsymbol{w}_{-1,n_{r}})\right]&=\lambda_{-1,n_{r}}\left[\tilde{\rho}^{N}+\overline{\rho}(\boldsymbol{\varphi}_{1})\right]\boldsymbol{w}_{-1,n_{r}}&\text{in }\Omega_{N},\\ \boldsymbol{w}_{-1,n_{r}}&=\boldsymbol{0}&\text{on }\Gamma_{D}\cap\partial\Omega_{N},\\ \left[\tilde{\mathbb{C}}^{N}\mathcal{E}(\boldsymbol{w}_{-1,n_{r}})\right]\boldsymbol{n}&=\boldsymbol{0}&\text{on }\Gamma_{0}\cap\partial\Omega_{N}.\end{array}\right.\]

**Remark 4.4**.: The asymptotic analysis in the following argumentation is crucially based on the interplay of the _quadratic_ interpolation of \(\mathbb{C}\) and \(\rho\) and the scaling of the void components in (4.8). Note that these two features are also important for our numerical experiments in Section 9, where the quadratic interpolation of \(\mathbb{C}\) and \(\rho\) as well as the relatively lower scaling in \(\varepsilon\) of the void contribution of \(\rho\) compared to the void contribution of \(\mathbb{C}\) are crucial to obtain meaningful results. It has also already been observed in the literature that a relatively lower scaling of mass compared to stiffness is an appropriate choice to deal with localized eigenmodes, see [7, 29, 63].

We now argue why Claim 4.3 is true. Therefore, we consider the state equation (\(SE^{\varepsilon}\)) and the normalization (4.5).
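Before carrying out the expansion, the following self-contained numerical sketch may help to illustrate the role of the relative scaling of stiffness and density discussed in Claim 4.3 and Remark 4.4. It is not taken from the paper: it uses a one-dimensional scalar model problem, linear finite elements and a lumped mass matrix, all of which are choices made only for this illustration. With the scaling of (4.8), where the void stiffness decays like \(\varepsilon\) and the void density like \(\varepsilon^{2}\), the modes localized in the void are pushed out of the lower spectrum, whereas reversing the relative scaling floods the lower spectrum with spurious void modes.

```python
# Illustrative sketch only: 1D clamped-free rod on [0,2], "material" on [0,1] and
# "void" on (1,2].  We compare the scaling of (4.8) (void stiffness ~ eps, void
# density ~ eps^2) with the reversed scaling.  Linear finite elements, lumped mass.
import numpy as np
from scipy.linalg import eigh

def spectrum(c_void, rho_void, n_el=400):
    h = 2.0 / n_el
    x_mid = (np.arange(n_el) + 0.5) * h
    c   = np.where(x_mid <= 1.0, 1.0, c_void)      # elementwise stiffness
    rho = np.where(x_mid <= 1.0, 1.0, rho_void)    # elementwise density
    n = n_el + 1
    K = np.zeros((n, n))
    M = np.zeros((n, n))
    for e in range(n_el):
        K[e:e + 2, e:e + 2] += c[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        M[e, e] += rho[e] * h / 2.0
        M[e + 1, e + 1] += rho[e] * h / 2.0
    K, M = K[1:, 1:], M[1:, 1:]                    # Dirichlet condition at x = 0
    lam, vec = eigh(K, M)                          # eigenvectors are M-orthonormal
    x_nodes = np.arange(1, n) * h
    void_share = (np.diag(M)[:, None] * vec**2)[x_nodes > 1.0, :].sum(axis=0)
    return lam, void_share

for eps in (1e-2, 1e-3):
    for label, cv, rv in (("(4.8)-type", eps, eps**2), ("reversed  ", eps**2, eps)):
        lam, share = spectrum(cv, rv)
        print(f"eps={eps:<6g} {label}: lam[:3] = {np.round(lam[:3], 4)}, "
              f"void share = {np.round(share[:3], 2)}")
# With the (4.8)-type scaling the three smallest eigenvalues approach those of the
# clamped-free material rod, ((2k-1)*pi/2)^2 ~ 2.47, 22.2, 61.7, and carry almost
# no energy in the void; with the reversed scaling they are O(eps) void modes.
```

In this toy setting the void wave speed squared behaves like \(c_{\mathrm{void}}/\rho_{\mathrm{void}}\), so the scaling of (4.8) sends void-localized eigenvalues to infinity like \(1/\varepsilon\), in agreement with the leading order term \(\lambda_{-1,n_{r}}/\varepsilon\) appearing in Claim 4.3.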
First of all, we note that plugging in the asymptotic expansion of \(\boldsymbol{\varphi}^{\varepsilon}\) into (4.8) yields \[\mathbb{C}(\boldsymbol{\varphi}^{\varepsilon}) =\overline{\mathbb{C}}(\boldsymbol{\varphi}_{0})+\varepsilon \tilde{\mathbb{C}}^{N}(\boldsymbol{\varphi}_{0}^{N})^{2}+\varepsilon^{2} \overline{\mathbb{C}}(\boldsymbol{\varphi}_{1})+\mathcal{O}(\varepsilon^{3}) \tag{4.9}\] \[\rho(\boldsymbol{\varphi}^{\varepsilon}) =\overline{\rho}(\boldsymbol{\varphi}_{0})+\varepsilon^{2}( \overline{\rho}(\boldsymbol{\varphi}_{1})+\tilde{\rho}^{N}(\boldsymbol{ \varphi}_{0}^{N})^{2})+\mathcal{O}(\varepsilon^{3}).\] As a first step let us show that \(\mathbf{w}_{k}=\mathbf{0}\) in \(\Omega\) for \(k=-m,-m+1,\ldots,-2\). Therefore let us start with the contribution of lowest order \(\mathcal{O}(\varepsilon^{-2m})\) in (4.5), which reads as \[0=\int_{\Omega}\overline{\rho}(\mathbf{\varphi}_{0})\left|\mathbf{w}_{-m}\right|^{2}\, \mathrm{d}x. \tag{4.10}\] This implies that \(\mathbf{w}_{-m}=0\) in \(\Omega\backslash\Omega_{N}\), or in other words, \(\mathbf{w}_{-m}\) is localized in the void region. Now, we consider (4.5) to the order \(\mathcal{O}(\varepsilon^{-2m+2})\). We have \[0=\int_{\Omega}\overline{\rho}(\mathbf{\varphi}_{0})\left|\mathbf{w}_{-m+1}\right|^{2 }+2\overline{\rho}(\mathbf{\varphi}_{0})\mathbf{w}_{-m}\cdot\mathbf{w}_{-m+2}+(\overline{ \rho}(\mathbf{\varphi}_{1})+\tilde{\rho}^{N}(\mathbf{\varphi}_{0}^{N})^{2})\left|\mathbf{w }_{-m}\right|^{2}\,\mathrm{d}x. \tag{4.11}\] Here, we used that \(-2m+2<0\). As \(\mathbf{w}_{-m}\) is localized in the void we infer \[0=\int_{\Omega}2\overline{\rho}(\mathbf{\varphi}_{0})\mathbf{w}_{-m}\cdot\mathbf{w}_{-m+2 }\,\mathrm{d}x.\] Thus, due to the non-negativity of the first summand in (4.11) we deduce \[0=\int_{\Omega}(\overline{\rho}(\mathbf{\varphi}_{1})+\tilde{\rho}^{N}(\mathbf{ \varphi}_{0}^{N})^{2})\left|\mathbf{w}_{-m}\right|^{2}\,\mathrm{d}x. \tag{4.12}\] In the light of (4.8), we have \(\overline{\rho}(\mathbf{\varphi}_{1})\geq 0\). Moreover, \(\varphi_{0}=\mathbf{e}_{N}\) in \(\Omega_{N}\) and we thus deduce \[0=\int_{\Omega_{N}}\tilde{\rho}^{N}\left|\mathbf{w}_{-m}\right|^{2}\,\mathrm{d}x. \tag{4.13}\] Hence, since \(\tilde{\rho}^{N}\) is positive, we infer \(\mathbf{w}_{-m}=\mathbf{0}\) in \(\Omega\). These steps can now be repeated until the critical order \(\mathcal{O}(1)\) is reached because up to this order, the normalization equation (4.5) possesses a trivial left hand side. This shows \(\mathbf{w}_{k}=0\) for \(k=-m,-m+1,\ldots,-2\). As in (4.10), we additionally conclude that \(\mathbf{w}_{-1}=\mathbf{0}\) in \(\Omega\backslash\Omega_{N}\). With this knowledge, we are in a position to show \(\lambda_{k}=0\) for \(k=-m,-m+1,\ldots,-2\). Therefore let us consider the energy associated with \((SE^{\varepsilon})\), i.e., \[\lambda^{\varepsilon}=\int_{\Omega}\mathbb{C}(\mathbf{\varphi}^{\varepsilon}) \mathcal{E}(\mathbf{w}^{\varepsilon}):\mathcal{E}(\mathbf{w}^{\varepsilon})\,\mathrm{ d}x. \tag{4.14}\] Due to the fact that \(\mathbf{w}_{k}=\mathbf{0}\) in \(\Omega\) for \(k=-m,-m+1,\ldots,-2\) and \(\mathbf{w}_{-1}=\mathbf{0}\) in \(\Omega\backslash\Omega_{N}\), we deduce that the right hand side is of leading order \(\mathcal{O}(\varepsilon^{-1})\). 
This directly implies \(\lambda_{k}=0\) for \(k=-m,-m+1,\ldots,-2\) as well as
\[\lambda_{-1}=\int_{\Omega_{N}}\tilde{\mathbb{C}}^{N}\mathcal{E}(\mathbf{w}_{-1}):\mathcal{E}(\mathbf{w}_{-1})\,\mathrm{d}x.\]
It remains to show that \((\lambda_{-1},\mathbf{w}_{-1})\) solves the desired limit eigenvalue problem. Therefore, we consider the state equation \((SE^{\varepsilon})\) to order \(\mathcal{O}(1)\):
\[-\nabla\cdot\left[\tilde{\mathbb{C}}^{N}\mathcal{E}(\mathbf{w}_{-1})+\overline{\mathbb{C}}(\mathbf{\varphi}_{0})\mathcal{E}(\mathbf{w}_{0})\right]=\lambda_{1}\overline{\rho}(\mathbf{\varphi}_{0})\mathbf{w}_{-1}+\lambda_{0}\overline{\rho}(\mathbf{\varphi}_{0})\mathbf{w}_{0}+\lambda_{-1}\overline{\rho}(\mathbf{\varphi}_{0})\mathbf{w}_{1}+\lambda_{-1}(\tilde{\rho}^{N}+\overline{\rho}(\mathbf{\varphi}_{1}))\mathbf{w}_{-1}.\]
In \(\Omega_{N}\) this simplifies to
\[-\nabla\cdot\left[\tilde{\mathbb{C}}^{N}\mathcal{E}(\mathbf{w}_{-1})\right]=\lambda_{-1}(\tilde{\rho}^{N}+\overline{\rho}(\mathbf{\varphi}_{1}))\mathbf{w}_{-1}\quad\text{in }\Omega_{N}.\]
Summing up this intermezzo, we have now seen that even if spurious eigenmodes are not excluded, the appropriate choice of the model parameters will force the associated eigenvalues to leave the spectrum in the limit \(\varepsilon\to 0\). Hence, the spurious modes do not affect our optimization problem as they leave the considered part of the spectrum.

### Inner expansions

In the interfacial regions, i.e., in layers separating two outer regions, we need to rescale our coordinate system in order to take into account that \(\mathbf{\varphi}^{\varepsilon}\) changes rapidly in directions perpendicular to the interface. Therefore, for all \(i,j=1,\ldots,N\) with \(i\neq j\), we write \(\Gamma=\Gamma_{ij}\) to denote the sharp interface separating \(\Omega_{i}\) and \(\Omega_{j}\). Moreover, let \(\mathbf{n}_{\Gamma_{ij}}\) denote the unit normal vector field on \(\Gamma\) pointing from \(\Omega_{i}\) to \(\Omega_{j}\). In the following, we omit these indices to provide a cleaner presentation. We now introduce a suitable coordinate system that fits the geometry of the interface. The following discussion can be found, e.g., in [1] and thus we only give the key steps needed for our analysis. Let us choose a local parametrization
\[\mathbf{\gamma}:U\to\mathbb{R}^{d},\quad\mathbf{\gamma}(U)\subseteq\Gamma \tag{4.15}\]
of \(\Gamma\), where \(U\) is an open subset of \(\mathbb{R}^{d-1}\). We further define \(\mathbf{\nu}:=\mathbf{n}_{\Gamma}\circ\mathbf{\gamma}\). As we want to describe a whole neighborhood surrounding the local part of the interface \(\mathbf{\gamma}(U)\subset\Gamma\), we introduce the signed distance function relative to \(\Omega_{i}\) which satisfies \(d(x)>0\) if \(x\in\Omega_{j}\) and \(d(x)<0\) if \(x\in\Omega_{i}\). For more details concerning the signed distance function we refer the reader to [48, Sec. 14.6]. By introducing the rescaled distance coordinate \(z(x)\coloneqq\frac{1}{\varepsilon}d(x)\in\mathbb{R}\) we define for fixed \(z\in\mathbb{R}\) and sufficiently small \(\varepsilon>0\) the \((d-1)\)-dimensional submanifold
\[\Gamma_{\varepsilon z}\coloneqq\big\{\mathbf{\gamma}(\mathbf{s})+\varepsilon z\mathbf{\nu}(\mathbf{s})\,\big|\,\mathbf{s}\in U\big\},\]
which describes a translation of \(\Gamma\) in the direction \(\mathbf{\nu}\).
Here, for \(x\) belonging to a sufficiently thin tubular neighbourhood around \(\gamma(U)\), \(\mathbf{s}(x)\) is the unique point in \(U\) such that \(\gamma(\mathbf{s})\in\Gamma\) is the orthogonal projection of \(x\) onto \(\Gamma\). The summand \[\varepsilon z(x)\mathbf{\nu}\big{(}\mathbf{s}(x)\big{)}=d(x)\mathbf{n}_{\Gamma}\big{(}\bm {\gamma}\big{(}\mathbf{s}(x)\big{)}\big{)}\] shifts the point \(\mathbf{\gamma}\big{(}\mathbf{s}(x)\big{)}\) back onto \(x\). Hence, a sufficiently thin tubular neighborhood around \(\mathbf{\gamma}(U)\) can be expressed by the coordinate system \((\mathbf{s},z)\). Now we can express the transformation of differential operators with respect to the coordinate transformation \(x\mapsto(\mathbf{s}(x),z(x))\). Therefore, let us consider an arbitrary scalar function \[b(x)=\hat{b}(\mathbf{s}(x),z(x)).\] It holds \[\nabla_{x}b=\nabla_{\Gamma_{\varepsilon z}}\hat{b}+\frac{1}{ \varepsilon}\left(\partial_{z}\hat{b}\right)\mathbf{\nu}=\frac{1}{\varepsilon} \left(\partial_{z}\hat{b}\right)\mathbf{\nu}+\nabla_{\Gamma}\hat{b}+\mathcal{O}( \varepsilon), \tag{4.16}\] where \(\nabla_{\Gamma_{\varepsilon z}}\) stands for the surface gradient on \(\Gamma_{\varepsilon z}\). Proceeding analogously, we deduce that the divergence of a vector-valued function \(\mathbf{j}(x)=\hat{\mathbf{j}}(\mathbf{s}(x),z(x))\) can be expressed as \[\nabla_{x}\cdot\mathbf{j}=\nabla_{\Gamma_{\varepsilon z}}\cdot\hat{\mathbf{j}}+\frac {1}{\varepsilon}\partial_{z}\hat{\mathbf{j}}\cdot\mathbf{\nu}=\frac{1}{\varepsilon} \partial_{z}\hat{\mathbf{j}}\cdot\mathbf{\nu}+\nabla_{\Gamma}\cdot\hat{\mathbf{j}}+ \mathcal{O}(\varepsilon), \tag{4.17}\] where \(\nabla_{\Gamma_{\varepsilon z}}\cdot\hat{\mathbf{j}}\) stands for the surface divergence on \(\Gamma_{\varepsilon z}\). Furthermore the full gradient of a vector-valued function is given by \[\nabla_{x}\mathbf{j}=\frac{1}{\varepsilon}\partial_{z}\hat{\mathbf{j}} \otimes\mathbf{\nu}+\nabla_{\Gamma}\hat{\mathbf{j}}+\mathcal{O}(\varepsilon), \tag{4.18}\] where \(\otimes\) denotes the dyadic product that is defined as \(\mathbf{a}\otimes\mathbf{b}=(a_{i}b_{j})_{i,j=1}^{d}\) for all \(\mathbf{a},\mathbf{b}\in\mathbb{R}^{d}\). Analogously, for a matrix-valued function \[\mathcal{A}(x)=\left(a_{ij}(x)\right)_{i,j=1}^{d}=\hat{\mathcal{A}}(\mathbf{s}(x),z(x)),\] we apply formula (4.17) to each component of the row-wise defined divergence \(\nabla_{x}\cdot\mathcal{A}\). We obtain \[\nabla_{x}\cdot\mathcal{A}=\nabla_{\Gamma}\cdot\hat{\mathcal{A}}+\frac{1}{ \varepsilon}\partial_{z}\hat{\mathcal{A}}\boldsymbol{\nu}+\mathcal{O}( \varepsilon). \tag{4.19}\] For the Laplacian we obtain the representation \[\Delta_{x}b=\Delta_{\Gamma_{xz}}\hat{b}+\frac{1}{\varepsilon}\left(\Delta_{x}d \right)\partial_{z}\hat{b}+\frac{1}{\varepsilon^{2}}\partial_{zz}\hat{b} \quad=\frac{1}{\varepsilon^{2}}\partial_{zz}\hat{b}-\frac{1}{\varepsilon} \left(\hat{\kappa}+\varepsilon z\big{|}\hat{\mathcal{W}}\big{|}^{2}\right) \partial_{z}\hat{b}+\Delta_{\Gamma}\hat{b}+\mathcal{O}(\varepsilon). \tag{4.20}\] Here \(\mathcal{W}\) denotes the _Weingarten map_ associated with \(\Gamma\) that is given by \[\mathcal{W}(x)\coloneqq-\nabla_{\Gamma}\boldsymbol{n}_{\Gamma}(x)\in\mathbb{ R}^{d\times d}\quad\text{for all }x\in\Gamma, \tag{4.21}\] see, e.g., [39, Appendix B]. 
Its non-trivial eigenvalues \(\kappa_{1},\ldots,\kappa_{d-1}\) are the principal curvatures of \(\Gamma\) and its spectral norm can be expressed as \[|\mathcal{W}|=\sqrt{\kappa_{1}^{2}+\ldots\kappa_{d-1}^{2}}.\] Furthermore \(\kappa\) denotes the mean curvature which is defined as the sum of the principal curvatures of \(\Gamma\). Note that in view of (4.21), \(\kappa\) can be expressed as \[\kappa(x)=-\nabla_{\Gamma}\cdot\boldsymbol{n}_{\Gamma}(x)\quad\text{for all }x\in\Gamma, \tag{4.22}\] which will be important for later purposes. To conclude this section, we introduce the inner expansions that we will work with in the next section. Therefore, we make the ansatz \[\boldsymbol{w}_{n_{r}}^{\varepsilon}(x) =\sum_{k=0}^{\infty}\varepsilon^{k}\,\mathbf{W}_{k,n_{r}}\big{(} \boldsymbol{s}(x),z(x)\big{)}, \tag{4.23}\] \[\boldsymbol{\varphi}^{\varepsilon}(x) =\sum_{k=0}^{\infty}\varepsilon^{k}\,\boldsymbol{\Phi}_{k}\big{(} \boldsymbol{s}(x),z(x)\big{)},\] where we assume \(\boldsymbol{\Phi}_{0}(\boldsymbol{s}(x),z(x))\in\boldsymbol{G}\) and \(\boldsymbol{\Phi}_{k}(\boldsymbol{s}(x),z(x))\in T\Sigma^{N}\) for all \(k\geq 1\). In the next section, we will relate these inner expansions to the outer expansions that were introduced before. **Remark 4.5**.: Note that the eigenvalues \(\lambda_{n_{r}}^{\varepsilon}\) do not depend locally on \(x\in\Omega\) and thus, their inner expansion simply equals their outer expansion. ## 5. The matching conditions So far, we have constructed outer expansions which are supposed to hold inside the material regions \(\Omega_{i}\) for \(i=1,\ldots,N\) as well as inner expansions which are supposed to hold in a tubular neighborhood around the sharp-interfaces \(\Gamma_{ij}\). Note that due to the construction in the previous section, the thickness of this tubular neighborhood is proportional to \(\varepsilon\). In order to be compatible, both expansions must match in a suitable intermediate region by suitable matching conditions. This region is approximately given by all points \(x\in\Omega\) with the property \(\operatorname{dist}(x,\Gamma)\leq\varepsilon^{\theta}\) for some fixed \(\theta\in(0,1)\). This means we stretch the tubular neighborhood the inner expansions were constructed on from a thickness proportional to \(\varepsilon\) to a thickness proportional to \(\varepsilon^{\theta}\) and relate both expansions in this region. These matching conditions will be expressed as limit conditions for the inner expansions when \(\varepsilon\to 0\) or equivalently \(z\to\pm\infty\) depending on which side we approach the interface from. This procedure is again standard in the context of formally matched asymptotics and we only state the matching conditions, for the computations see [41]. 
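As an informal motivation for the conditions stated below (this heuristic is standard in formally matched asymptotics and is added here only as an illustration), one may evaluate the outer expansion at a point \(x=\boldsymbol{\gamma}(\boldsymbol{s})+\varepsilon z\,\boldsymbol{n}_{\Gamma}\) close to the interface on the \(\Omega_{j}\)-side (i.e., for \(z>0\)) and Taylor-expand around \(\boldsymbol{\gamma}(\boldsymbol{s})\). Writing \((\cdot)_{j}\) for the one-sided limit from the \(\Omega_{j}\)-side, as made precise in (5.1) below, this gives
\[\boldsymbol{\varphi}^{\varepsilon}\big(\boldsymbol{\gamma}(\boldsymbol{s})+\varepsilon z\,\boldsymbol{n}_{\Gamma}\big)=(\boldsymbol{\varphi}_{0})_{j}+\varepsilon\Big[(\boldsymbol{\varphi}_{1})_{j}+z\,(\nabla\boldsymbol{\varphi}_{0})_{j}\,\boldsymbol{n}_{\Gamma}\Big]+\mathcal{O}(\varepsilon^{2}).\]
Comparing this with the inner expansion \(\boldsymbol{\Phi}_{0}(\boldsymbol{s},z)+\varepsilon\boldsymbol{\Phi}_{1}(\boldsymbol{s},z)+\ldots\) for large \(z\) suggests precisely the limit behavior prescribed in (5.2) and (5.3) below; the analogous computation for \(z<0\) yields the conditions from the \(\Omega_{i}\)-side.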
Using the notation
\[(\mathbf{v})_{j}(x)\coloneqq\lim_{\delta\searrow 0}\mathbf{v}\big(x+\delta\mathbf{n}_{\Gamma}(x)\big),\qquad(\mathbf{v})_{i}(x)\coloneqq\lim_{\delta\searrow 0}\mathbf{v}\big(x-\delta\mathbf{n}_{\Gamma}(x)\big), \tag{5.1}\]
for the lowest order term we have the matching condition
\[\mathbf{\Phi}_{0}(\mathbf{s},z)\to\begin{cases}(\mathbf{\varphi}_{0})_{j}(x)=\mathbf{e}_{j}&\quad\text{as $z\to+\infty$},\\ (\mathbf{\varphi}_{0})_{i}(x)=\mathbf{e}_{i}&\quad\text{as $z\to-\infty$},\end{cases} \tag{5.2}\]
\[\partial_{z}\mathbf{\Phi}_{0}(\mathbf{s},z)\to\mathbf{0}\quad\text{as $z\to\pm\infty$}.\]
For the term of order \(\mathcal{O}(\varepsilon)\) we have
\[\mathbf{\Phi}_{1}(\mathbf{s},z)\approx\begin{cases}(\mathbf{\varphi}_{1})_{j}(x)+\big(\nabla\mathbf{\varphi}_{0}\big)_{j}(x)\,\mathbf{n}_{\Gamma}(x)\,z&\quad\text{as $z\to+\infty$},\\ (\mathbf{\varphi}_{1})_{i}(x)+\big(\nabla\mathbf{\varphi}_{0}\big)_{i}(x)\,\mathbf{n}_{\Gamma}(x)\,z&\quad\text{as $z\to-\infty$}\end{cases} \tag{5.3}\]
for all \(x=\mathbf{\gamma}(\mathbf{s})\in\Gamma=\Gamma_{ij}\). Note that here the symbol \(\,\approx\,\) means that the difference of the left-hand side and the right-hand side as well as all of its derivatives with respect to \(z\) tend to zero as \(z\to\pm\infty\). In particular, (5.3) provides us with
\[\partial_{z}\mathbf{\Phi}_{1}(\mathbf{s},z)\to\begin{cases}(\nabla\mathbf{\varphi}_{0})_{j}(x)\,\mathbf{n}_{\Gamma}(x)&\quad\text{as $z\to+\infty$},\\ (\nabla\mathbf{\varphi}_{0})_{i}(x)\,\mathbf{n}_{\Gamma}(x)&\quad\text{as $z\to-\infty$}.\end{cases} \tag{5.4}\]
The analogous relations also hold true for the expansions of \(\mathbf{w}_{n_{r}}^{\varepsilon}\). In the following, we will also see that the _jump_ across the interfaces \(\Gamma_{ij}\) is an important quantity. It is defined by
\[[\mathbf{v}]_{i}^{j}(x)\coloneqq\lim_{\delta\searrow 0}\Big(\mathbf{v}\big(x+\delta\mathbf{n}_{\Gamma}(x)\big)-\mathbf{v}\big(x-\delta\mathbf{n}_{\Gamma}(x)\big)\Big), \tag{5.5}\]
for any \(x=\mathbf{\gamma}(\mathbf{s})\in\Gamma=\Gamma_{ij}\). Now, we have made all the necessary computations to analyze the state equations and the gradient equality near the interfaces \(\Gamma_{ij}\). In particular, we are able to investigate their behavior as \(\varepsilon\to 0\).

## 6. Comparison of the leading order terms

Now, we want to apply our knowledge about the inner and outer expansions to the optimality system consisting of \((SE^{\varepsilon})\) and \((GS^{\varepsilon})\). This means we apply the formulas for the differential operators discussed in Section 4.3 to the optimality system, compare the terms of the same order in \(\varepsilon\) and apply the matching conditions. In this section, we will suppress the index \(n_{r}\) to provide a clearer notation.

### Comparison of the leading order terms in the state equation

We point out that our state equation \((SE^{\varepsilon})\) differs from the one in [20] only in terms of the right-hand side. In contrast to [20], where the right-hand side is just a given function \(\mathbf{f}\), our right-hand side is given by
\[\lambda^{\varepsilon,\mathbf{\varphi}}\rho(\mathbf{\varphi})\mathbf{w}^{\varepsilon,\mathbf{\varphi}}. \tag{6.1}\]
In particular, it depends on the phase-field \(\mathbf{\varphi}\) as well as the corresponding eigenvalue \(\lambda^{\varepsilon,\mathbf{\varphi}}\) and its associated eigenfunction \(\mathbf{w}^{\varepsilon,\mathbf{\varphi}}\). Recall that the inner expansion of \(\lambda^{\varepsilon,\mathbf{\varphi}}\) equals its outer expansion (cf. Remark 4.5) as the eigenvalue does not depend locally on \(x\in\Omega\).
As no derivatives of \(\rho\), or \(\mathbf{\varphi}\) are involved, we conclude that the inner expansion of (6.1) possesses only summands of non-negative orders in \(\varepsilon\). As the discussion of the left-hand side of the state equation works exactly as in [20], we can thus proceed in a completely analogous manner. We will therefore only summarize the most important results. For the functions \(\mathbf{W}_{0}\) and \(\mathbf{W}_{1}\) involved in the inner expansion of the eigenfunction, we deduce the following relations: \[\partial_{z}\mathbf{W}_{0}(\mathbf{s},z)\to\mathbf{0} \text{as }z\to\pm\infty, \tag{6.2}\] \[\partial_{z}\mathbf{W}_{0}=\mathbf{0} \text{around }\Gamma_{ij},\] (6.3) \[\partial_{z}\left[\overline{\mathbb{C}}(\mathbf{\Phi}_{0})\left( \partial_{z}\mathbf{W}_{1}\otimes\mathbf{\nu}+\nabla_{\Gamma}\mathbf{W}_{0} \right)^{\text{sym}}\mathbf{\nu}\right]=\mathbf{0} \text{around }\Gamma_{ij}, \tag{6.4}\] \[\mathbf{W}_{0}(\mathbf{s},z)\to\begin{cases}\left(\mathbf{w}_{0}\right)_{j}(x)&\text {as }z\to+\infty,\\ \left(\mathbf{w}_{0}\right)_{i}(x)&\text{as }z\to-\infty,\end{cases} \tag{6.5}\] \[\nabla_{\Gamma}\mathbf{W}_{0}(\mathbf{s},z)+\partial_{z}\mathbf{W}_{1}(\mathbf{s},z) \otimes\mathbf{\nu}(\mathbf{s})\to\begin{cases}\left(\nabla_{x}\mathbf{w}_{0}\right)_{j}( x)&\text{as }z\to+\infty,\\ \left(\nabla_{x}\mathbf{w}_{0}\right)_{i}(x)&\text{as }z\to-\infty,\end{cases} \tag{6.6}\] for all \(x=\mathbf{\gamma}(\mathbf{s})\in\Gamma_{ij}\). Here and in the remainder of this paper, the expression "around \(\Gamma_{ij}\)" means that the statement is valid in a sufficiently thin tubular neighborhood around \(\Gamma_{ij}\) where our inner expansions hold. We thus arrive at the jump condition \[\left[\mathbf{w}_{0}\right]_{i}^{j}=\mathbf{0}\quad\text{for all }i,j=1,\ldots,N. \tag{6.7}\] However, we point out that the jump condition on an interface between a material region and a void region (i.e., \(i=N\) or \(j=N\)) is negligible as we do not have any information about the behavior of \(\mathbf{w}_{0}\) in the void. In other words, we will obtain a closed system of PDEs forming the state equations of the sharp-interface problem in Section 7 without needing this additional jump condition at the material-void boundary. For the function \(\overline{\mathbb{C}}(\mathbf{\Phi}_{0}(z))\), where \(\mathbf{\Phi}_{0}\) is the lowest order term of the inner expansion of the phase-field, we obtain: \[\overline{\mathbb{C}}(\mathbf{\Phi}_{0}(\mathbf{s},z))\to\begin{cases} \mathbb{C}^{j}&\text{as }z\to+\infty,\\ \mathbb{C}^{i}&\text{as }z\to-\infty,\end{cases}\quad\text{if }i,j\neq N, \tag{6.8}\] \[\overline{\mathbb{C}}(\mathbf{\Phi}_{0}(\mathbf{s},z))\to 0 \text{as }z\to+\infty,\quad\text{if }j=N, \tag{6.9}\] \[\overline{\mathbb{C}}(\mathbf{\Phi}_{0}(\mathbf{s},z))\to 0 \text{as }z\to-\infty,\quad\text{if }i=N\] Here, the convergence (6.9) follows due to the additional factor \(\varepsilon\) in the void contribution of \(\overline{\mathbb{C}}(\mathbf{\varphi}^{\varepsilon})\) (see (4.3)). 
Eventually, we obtain that
\[\mathbb{C}^{i}\mathcal{E}_{i}(\mathbf{w}_{0})\mathbf{n}_{\Gamma}=\begin{cases}\mathbf{0}&\text{if }j=N,\\ \mathbb{C}^{j}\mathcal{E}_{j}(\mathbf{w}_{0})\mathbf{n}_{\Gamma}&\text{if }j\neq N,\end{cases} \tag{6.10}\]
holds on each \(\Gamma_{ij}\) with \(i\neq N\), where
\[\mathcal{E}_{i}(\mathbf{w}_{0})\coloneqq\lim_{\delta\searrow 0}\mathcal{E}(\mathbf{w}_{0})(x-\delta\mathbf{n}_{\Gamma})\quad\text{and}\quad\mathcal{E}_{j}(\mathbf{w}_{0})\coloneqq\lim_{\delta\searrow 0}\mathcal{E}(\mathbf{w}_{0})(x+\delta\mathbf{n}_{\Gamma}).\]

### Comparison of the leading order terms in the gradient equality

Now, we want to analyse the gradient equality \((GS^{\varepsilon})\), which reads as
\[\begin{split}&\sum_{r=1}^{l}\Big\{[\partial_{\lambda_{n_{r}}}\Psi]\,\big(\lambda_{n_{1}}^{\varepsilon},\ldots,\lambda_{n_{l}}^{\varepsilon}\big)\Big[\langle\mathcal{E}(\mathbf{w}_{n_{r}}^{\varepsilon}),\mathcal{E}(\mathbf{w}_{n_{r}}^{\varepsilon})\rangle_{P_{T\Sigma}[\mathbb{C}^{\prime}(\mathbf{\varphi}^{\varepsilon})]}-\lambda_{n_{r}}^{\varepsilon}\big(\mathbf{w}_{n_{r}}^{\varepsilon},\mathbf{w}_{n_{r}}^{\varepsilon}\big)_{P_{T\Sigma}[\rho^{\prime}(\mathbf{\varphi}^{\varepsilon})]}\Big]\Big\}\\&\quad=\gamma\varepsilon\Delta\mathbf{\varphi}^{\varepsilon}+\frac{1}{\varepsilon}(\mathbf{\Lambda}^{\varepsilon}+\mathbf{\vartheta}^{\varepsilon}+\mathbf{\mu}^{\varepsilon})-\frac{\gamma}{\varepsilon}P_{T\Sigma}\left[\psi_{0}^{\prime}(\mathbf{\varphi}^{\varepsilon})\right].\end{split} \tag{6.11}\]
Here, we recall that the Lagrange multipliers were constructed in Theorem 3.12 in such a way that their sum appearing in the gradient equality (6.11) is scaled by the factor \(\frac{1}{\varepsilon}\). We now assume the Lagrange multipliers to have the following inner asymptotic expansions:
\[\mathbf{\Lambda}^{\varepsilon}(x)=\sum_{k=0}^{\infty}\varepsilon^{k}\mathbf{\Lambda}_{k}(\mathbf{s},z),\quad\mathbf{\vartheta}^{\varepsilon}=\sum_{k=0}^{\infty}\varepsilon^{k}\mathbf{\vartheta}_{k},\quad\mathbf{\mu}^{\varepsilon}(x)=\sum_{k=0}^{\infty}\varepsilon^{k}\mathbf{\mu}_{k}(\mathbf{s},z). \tag{6.12}\]
Furthermore, in order to deal with the nonlinear terms in (6.11) involving \(\mathbb{C}^{\prime}\), \(\rho^{\prime},\psi_{0}^{\prime},\partial_{\lambda_{n_{r}}}\Psi\), we perform a (componentwise) Taylor expansion around the leading order term \(\mathbf{\Phi}_{0}\) to obtain the inner expansions
\[\mathbb{C}^{\prime}(\mathbf{\varphi}^{\varepsilon})=\mathbb{C}^{\prime}(\mathbf{\Phi}_{0})+\mathcal{O}(\varepsilon),\qquad\rho^{\prime}(\mathbf{\varphi}^{\varepsilon})=\rho^{\prime}(\mathbf{\Phi}_{0})+\mathcal{O}(\varepsilon),\qquad\psi_{0}^{\prime}(\mathbf{\varphi}^{\varepsilon})=\psi_{0}^{\prime}(\mathbf{\Phi}_{0})+\mathcal{O}(\varepsilon),\]
\[[\partial_{\lambda_{n_{r}}}\Psi]\big(\lambda_{n_{1}}^{\varepsilon},\ldots,\lambda_{n_{l}}^{\varepsilon}\big)=[\partial_{\lambda_{n_{r}}}\Psi]\big(\lambda_{0,n_{1}},\ldots,\lambda_{0,n_{l}}\big)+\mathcal{O}(\varepsilon).\]
We now take a closer look at the quantities \(\mathbb{C}^{\prime}(\mathbf{\Phi}_{0})\) and \(\rho^{\prime}(\mathbf{\Phi}_{0})\).
To this end, we recall the definition of \(\rho\) in (2.7), which reads as \[\rho:\mathbb{R}^{N}\to\mathbb{R},\quad\mathbf{\varphi}\mapsto\sum_{i=1}^{N-1}\rho ^{i}\sigma_{\omega}(P_{\Sigma}(\mathbf{\varphi})_{i})+\hat{\rho}^{N}\varepsilon \sigma_{\omega}(P_{\Sigma}(\mathbf{\varphi})_{N}).\] Note that we can write the projection \(P_{\Sigma}\) as \[P_{\Sigma}(\mathbf{\varphi})=\mathbf{\varphi}-\left(\frac{1}{N}\sum_{i=1}^{N}\varphi^ {i}\right)\mathbf{1}+\frac{1}{N}\mathbf{1},\] for \(\mathbf{\varphi}\in\mathbb{R}^{N}\), where \(\mathbf{1}=(1,...,1)^{T}\in\mathbb{R}^{N}\). For the partial derivatives with respect to \(\varphi^{j}\) with \(j\in\{1,...,N\}\), we thus obtain \[(\partial_{j}P_{\Sigma})(\mathbf{\varphi})=\mathbf{e}_{j}-\frac{1}{N}\mathbf{1}\] and therefore, \[(\partial_{j}\rho)(\mathbf{\varphi})=\sum_{i=1}^{N}\rho^{i}\sigma_{\omega}^{\prime }(P_{\Sigma}(\mathbf{\varphi})_{i})\left(\delta_{ij}-\frac{1}{N}\right),\] where \(\delta_{ij}\) denotes the Kronecker delta and \(\rho^{N}:=\varepsilon\hat{\rho}^{N}\) to simplify the notation. Inserting \(\mathbf{\Phi}_{0}\) (which belongs pointwise to \(\Sigma^{N}\) and thus, no projection is necessary) and recalling that \(\sigma_{\omega}\) is the identity on \([0,1]\) (cf. (2.6)), we arrive at \[\rho^{\prime}(\mathbf{\Phi}_{0})=((\partial_{j}\rho)(\mathbf{\Phi}_{0}))_{j=1}^{N}= \big{(}\rho^{j}-\frac{1}{N}\sum_{i=1}^{N}\rho^{i}\big{)}_{j=1}^{N}. \tag{6.13}\] Keeping in mind that \(\rho^{N}=\varepsilon\tilde{\rho}^{N}\) still produces terms of order \(\mathcal{O}(\varepsilon)\), considering (6.13) to the lowest order \(\mathcal{O}(1)\) gives \[\overline{\rho}^{\prime}(\mathbf{\Phi}_{0})=\left(\rho^{1}-\frac{1}{N}\sum_{i= 1}^{N-1}\rho^{i},\;\ldots\;,\;\rho^{N-1}-\frac{1}{N}\sum_{i=1}^{N-1}\rho^{i},- \frac{1}{N}\sum_{i=1}^{N-1}\rho^{i}\right)^{T}. \tag{6.14}\] Thus, it obviously holds \(\overline{\rho}^{\prime}(\mathbf{\Phi}_{0})\in T\Sigma^{N}\). The function \(\mathbb{C}^{\prime}(\mathbf{\Phi}_{0})\) can be expressed analogously. Altogether, this allows us to drop the projection acting on the left-hand side in (6.11) when considering only the lowest order contributions. Now that we have considered all the quantities appearing in (6.11), we begin with our formal asymptotics. First of all, applying formula (4.18) on the lowest order contribution \(\mathbf{W}_{0}\) of the inner expansion of \(\mathbf{w}^{\varepsilon}\), we find that \[\mathcal{E}(\mathbf{w}^{\varepsilon})=\left(\nabla_{x}\mathbf{w}^{\varepsilon}\right) ^{\text{sym}}=\left(\nabla_{\Gamma}\mathbf{W}_{0}+\frac{1}{\varepsilon} \partial_{z}\mathbf{W}_{0}\otimes\mathbf{\nu}\right)^{\text{sym}}+\mathcal{O}( \varepsilon). \tag{6.15}\] Comparing the contributions of order \(\mathcal{O}(\varepsilon^{-2})\) in (6.11), we use (6.15) to obtain \[\mathbf{0}=\sum_{r=1}^{l} [\partial_{\lambda_{n}}\!\Psi]\left(\lambda_{0,n_{1}},\ldots, \lambda_{0,n_{l}}\right)\] \[\cdot\left[\overline{\mathbb{C}}^{\prime}(\mathbf{\Phi}_{0}) \!\left(\partial_{z}\mathbf{W}_{0,n_{r}}\otimes\mathbf{\nu}\right)^{\text{sym}}: \left(\partial_{z}\mathbf{W}_{0,n_{r}}\otimes\mathbf{\nu}\right)^{\text{sym}}\right]\!,\] This equation is obviously fulfilled since \(\partial_{z}\mathbf{W}_{0}\) vanishes according to (6.3). Let us now consider (6.11) to order \(\mathcal{O}(\varepsilon^{-1})\). First of all, we infer from (6.3) and (6.15) that the left-hand side has no contribution of order \(\mathcal{O}(\varepsilon^{-1})\). 
We thus have \[\mathbf{0}=\gamma\partial_{zz}\mathbf{\Phi}_{0}+(\mathbf{\Lambda}_{0}+\mathbf{ \vartheta}_{0}+\mathbf{\mu}_{0})-\gamma P_{T\Sigma}\left[\psi_{0}^{\prime}( \mathbf{\Phi}_{0})\right], \tag{6.16}\] where we used the formula (4.20) to compute the Laplacian. Multiplying (6.16) by \(\partial_{z}\mathbf{\Phi}_{0}\) and integrating with respect to \(z\) from \(-\infty\) to \(\infty\), we deduce \[-\int_{-\infty}^{\infty}(\mathbf{\Lambda}_{0}+\mathbf{\vartheta}_{0} +\mathbf{\mu}_{0})\cdot\partial_{z}\mathbf{\Phi}_{0}\,\mathrm{d}z \tag{6.17}\] \[=\gamma\int_{-\infty}^{\infty}\partial_{zz}\mathbf{\Phi}_{0}\cdot \partial_{z}\mathbf{\Phi}_{0}\,\mathrm{d}z-\gamma\int_{-\infty}^{\infty}P_{T \Sigma}\left[\psi_{0}^{\prime}(\mathbf{\Phi}_{0})\right]\partial_{z}\mathbf{ \Phi}_{0}\,\mathrm{d}z.\] Now, we consider each of the terms in (6.17) separately. First of all, we see \[\int_{-\infty}^{\infty}\partial_{zz}\mathbf{\Phi}_{0}\cdot \partial_{z}\mathbf{\Phi}_{0}\,\mathrm{d}z=\int_{-\infty}^{\infty}\frac{1}{2 }\frac{\mathrm{d}}{\mathrm{d}z}\left|\partial_{z}\mathbf{\Phi}_{0}\right|^{2} \,\mathrm{d}z \tag{6.18}\] \[=\frac{1}{2}\left(\lim_{z\to+\infty}\partial_{z}\mathbf{\Phi}_{0} (z)-\lim_{z\to-\infty}\partial_{z}\mathbf{\Phi}_{0}(z)\right)=\mathbf{0},\] where the last equality follows from the matching condition (5.2). As \(\mathbf{\Phi}_{0}\in\mathbf{G}\) pointwise, we know that \(\partial_{z}\mathbf{\Phi}_{0}\in T\Sigma^{N}\) pointwise. Hence, we obtain \[\int_{-\infty}^{\infty}P_{T\Sigma}\left[\psi_{0}^{\prime}(\mathbf{ \Phi}_{0})\right]\partial_{z}\mathbf{\Phi}_{0}\,\mathrm{d}z=\int_{-\infty}^{ \infty}\psi_{0}^{\prime}(\mathbf{\Phi}_{0})\partial_{z}\mathbf{\Phi}_{0}\, \mathrm{d}z \tag{6.19}\] \[=\int_{-\infty}^{\infty}\frac{\mathrm{d}}{\mathrm{d}z}\left[\psi_{ 0}(\mathbf{\Phi}_{0})\right]\,\mathrm{d}z=\lim_{z\to+\infty}\!\psi_{0}(\mathbf{ \Phi}_{0}(z))-\lim_{z\to-\infty}\!\psi_{0}(\mathbf{\Phi}_{0}(z))=\mathbf{0}.\] For the last equality, we used the fact that \(\psi_{0}\) vanishes on \(\mathbf{e}_{i}\) for \(i=1,\ldots,N\) along with the matching condition (5.2). We have thus shown that the right-hand side of (6.17) vanishes. Recall from (3.26) that \(\mathbf{\Lambda}^{\varepsilon}\) is identical in each component. It is therefore natural to assume that every term in the inner expansion of \(\mathbf{\Lambda}^{\varepsilon}\) also has this property. Thus, recalling that \(\partial_{z}\mathbf{\Phi}_{0}\in T\Sigma^{N}\) pointwise, we infer \[\int_{-\infty}^{\infty}\mathbf{\Lambda}_{0}\cdot\partial_{z}\mathbf{\Phi}_{0}\,\mathrm{ d}z=\int_{-\infty}^{\infty}\Lambda_{0}\sum_{i=1}^{N}[\partial_{z}\mathbf{\Phi}_{0}] ^{i}\,\mathrm{d}z=0, \tag{6.20}\] where \(\Lambda_{0}\) denotes an arbitrary component of \(\mathbf{\Lambda}_{0}\). Recall from Theorem 3.12 that \(\mathbf{\vartheta}^{\varepsilon}\in\mathbb{R}^{N}\) is constant. Thus, assuming that this property is transferred to the inner expansion, \(\mathbf{\vartheta}_{0}\) is independent of \(z\), we infer by means of the matching condition (5.2) that \[\int_{-\infty}^{\infty}\mathbf{\vartheta}_{0}\cdot\partial_{z}\mathbf{\Phi}_{0}\, \mathrm{d}z=\int_{-\infty}^{\infty}\frac{\mathrm{d}}{\mathrm{d}z}[\mathbf{ \vartheta}_{0}\cdot\mathbf{\Phi}_{0}]\,\mathrm{d}z=\mathbf{\vartheta}_{0}\cdot(\mathbf{e }_{j}-\mathbf{e}_{i})\,. \tag{6.21}\] Eventually, we want to justify that the remaining Lagrange multiplier fulfills \[\int_{-\infty}^{\infty}\mathbf{\mu}_{0}\cdot\partial_{z}\mathbf{\Phi}_{0}\,\mathrm{d}z =0. 
\tag{6.22}\] Therefore, we recall (3.27) which tells us for \(i=1,\ldots,N\) that \[\mu_{i}^{\varepsilon}=0\quad\text{a.e. in }\Omega_{i}^{+}=\big{\{}\mathbf{x}\in \Omega\,\big{|}\,\varphi_{i}^{\varepsilon}(\mathbf{x})>0\big{\}}=\Omega\backslash \big{\{}\mathbf{x}\in\Omega\,\,\big{|}\,\varphi_{i}^{\varepsilon}(\mathbf{x})=0\big{\}}.\] Using [48, Lemma 7.7], we infer that for all \(i\in\{1,\ldots,N\}\), \[\mu_{i}^{\varepsilon}\,\nabla_{x}\varphi_{i}^{\varepsilon}=\mathbf{0}. \tag{6.23}\] Using (4.16) and comparing the terms of order \(\mathcal{O}(\varepsilon^{-1})\), we deduce \[\mu_{0}^{i}\,\partial_{z}\Phi_{0}^{i}\,\mathbf{\nu}=\mathbf{0} \tag{6.24}\] for all \(i\in\{1,\ldots,N\}\). In particular, by multiplying with \(\mathbf{\nu}\) and integrating with respect to \(z\) from \(-\infty\) to \(\infty\), we arrive at \[\int_{-\infty}^{\infty}\mu_{0}^{i}(z)\partial_{z}\Phi_{0}^{i}(z)\,\mathrm{d}z =0.\] for all \(i\in\{1,\ldots,N\}\). This proves (6.22). Combining (6.18)-(6.22), we conclude from (6.17) that \[\mathbf{\vartheta}_{0}\cdot(\mathbf{e}_{j}-\mathbf{e}_{i})=0,\] for all \(i,j=1,\ldots,N\), meaning that all components of \(\mathbf{\vartheta}_{0}\) are equal. Since \(\mathbf{\vartheta}^{\varepsilon}\in T\Sigma^{N}\) in (3.28), we also assume \(\mathbf{\vartheta}_{0}\in T\Sigma^{N}\). This implies that \(\mathbf{\vartheta}_{0}=\mathbf{0}\) and thus, (6.16) can be rewritten as \[\mathbf{0}=-\gamma\partial_{zz}\mathbf{\Phi}_{0}+\gamma P_{T\Sigma}\left[\psi_{0}^{ \prime}(\mathbf{\Phi}_{0})\right]-\mathbf{\Lambda}_{0}-\mathbf{\mu}_{0}. \tag{6.25}\] Let now \(\tilde{z}\in\mathbb{R}\) be arbitrary. Multiplying (6.25) by \(\partial_{z}\mathbf{\Phi}_{0}\) and integrating with respect to \(\tilde{z}\) from \(-\infty\) to \(\tilde{z}\), we obtain \[\int_{0}^{\tilde{z}}\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}z}\left|\partial_{z }\mathbf{\Phi}_{0}\right|^{2}\,\mathrm{d}z=\int_{0}^{\tilde{z}}\frac{\mathrm{d}}{ \mathrm{d}z}\left[\psi_{0}(\mathbf{\Phi}_{0})\right]\,\mathrm{d}z-\frac{1}{\gamma} \int_{0}^{\tilde{z}}(\mathbf{\Lambda}_{0}+\mathbf{\mu}_{0})\cdot\partial_{z}\mathbf{\Phi}_{0 }\,\mathrm{d}z.\] Here, the last equality holds because of (6.20) and (6.22). By the fundamental theorem of calculus, we thus have \[|\partial_{z}\mathbf{\Phi}_{0}(\tilde{z})|^{2}-2\psi_{0}(\mathbf{\Phi}_{0}(\tilde{z}))= |\partial_{z}\mathbf{\Phi}_{0}(0)|^{2}-2\psi_{0}(\mathbf{\Phi}_{0}(0))\] for all \(\tilde{z}\in\mathbb{R}\). We further know from the matching condition (5.2) that the left-hand side vanishes as \(\tilde{z}\to\pm\infty\). This entails \[|\partial_{z}\boldsymbol{\Phi}_{0}(0)|^{2}-2\psi_{0}(\boldsymbol{\Phi}_{0}(0))=0, \tag{6.26}\] and thus, we arrive at \[\left|\partial_{z}\boldsymbol{\Phi}_{0}(z)\right|^{2}=2\psi_{0}(\boldsymbol{ \Phi}_{0}(z))\quad\text{for all }z\in\mathbb{R}. \tag{6.27}\] In order to obtain further information, we next show that (6.25) can be interpreted as the first-order optimality condition of a particular optimization problem that is similar to the minimization of the one-dimensional Ginzburg-Landau energy. 
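As a simple illustration of the equipartition identity (6.27), consider the classical scalar example (not the vector-valued obstacle-type setting used in this paper): for the smooth double-well potential \(\psi_{0}(u)=\tfrac{1}{2}(1-u^{2})^{2}\), the profile \(\Phi_{0}(z)=\tanh(z)\) connects the two minima \(\pm 1\), solves the profile equation \(\partial_{zz}\Phi_{0}=\psi_{0}^{\prime}(\Phi_{0})\) (the scalar analogue of (6.25) without multipliers), and satisfies
\[|\partial_{z}\Phi_{0}(z)|^{2}=\big(1-\tanh^{2}(z)\big)^{2}=2\psi_{0}(\Phi_{0}(z))\quad\text{for all }z\in\mathbb{R},\]
with transition energy \(\int_{\mathbb{R}}|\partial_{z}\Phi_{0}|^{2}\,\mathrm{d}z=\int_{\mathbb{R}}\operatorname{sech}^{4}(z)\,\mathrm{d}z=\tfrac{4}{3}\). The multi-well analogue of this construction is exactly what is used below to define the constants \(\sigma_{ij}\).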
Therefore, we first assume that \[\sigma_{ij}:=\inf\left\{\int_{-1}^{1}\sqrt{2\psi_{0}(\boldsymbol{\theta}(t))} \left|\boldsymbol{\theta}^{\prime}(t)\right|\,\mathrm{d}t\,\left|\begin{array} []{l}\boldsymbol{\theta}\in C^{0,1}([0,1];\mathbb{R}^{N}),\;\boldsymbol{ \theta}\in\boldsymbol{G}\text{ pointwise},\\ \boldsymbol{\theta}(1)=\boldsymbol{e}_{j}\;\;\text{and}\;\;\boldsymbol{ \theta}(-1)=\boldsymbol{e}_{i}\end{array}\right\} \tag{6.28}\] possesses a minimizer, which we call \(\boldsymbol{\theta}_{ij}\). This means that \(\boldsymbol{\theta}_{ij}\) is a geodesic with respect to the degenerate metric induced by the potential \(\psi_{0}\) that connects the values \(\boldsymbol{e}_{i}\) and \(\boldsymbol{e}_{j}\). Now, proceeding as in [68, proof of formula (15)], this geodesic can be used to construct a minimizer \(\boldsymbol{\Phi}\) of the problem \[\inf\left\{\int_{-\infty}^{+\infty}\left|\partial_{z}\boldsymbol{\Phi}\right| ^{2}+2\psi_{0}(\boldsymbol{\Phi})\,\mathrm{d}z\,\left|\begin{array}{l} \boldsymbol{\Phi}\in C^{0,1}([0,1];\mathbb{R}^{N}),\;\boldsymbol{\Phi}\in \boldsymbol{G}\text{ pointwise},\\ \lim_{z\to\infty}\boldsymbol{\Phi}(z)=\boldsymbol{e}_{j}\;\;\text{and}\;\; \lim_{z\to-\infty}\boldsymbol{\Phi}(z)=\boldsymbol{e}_{i}\end{array}\right.\right\}. \tag{6.29}\] This means that \(\boldsymbol{\Phi}\) describes an optimal transition between the values \(\boldsymbol{e}_{i}\) and \(\boldsymbol{e}_{j}\). As in [68, proof of formula (15)], we further see that \(\boldsymbol{\Phi}\) solves (6.25) and (6.27), where \(\boldsymbol{\Lambda}_{0}+\boldsymbol{\mu}_{0}\) is the Lagrange multiplier for the Gibbs-Simplex constraint. Consequently, choosing \(\boldsymbol{\Phi}_{0}=\boldsymbol{\Phi}\) we have found a solution of (6.25) and (6.27). Moreover, [68, formula (15)] states that \(2\sigma_{ij}\) is exactly the value of the minimum sought in (6.29). As the minimizer \(\boldsymbol{\Phi}_{0}=\boldsymbol{\Phi}\) of (6.29) satisfies (6.27), we further conclude \[\sigma_{ij}=\int_{-\infty}^{\infty}\left|\partial_{z}\boldsymbol{\Phi}_{0} \right|^{2}\,\mathrm{d}z=2\int_{-\infty}^{\infty}\psi_{0}(\boldsymbol{\Phi}_ {0})\,\mathrm{d}z<\infty, \tag{6.30}\] which will be important for later purposes. Finally, we now consider (6.11) to the order \(\mathcal{O}\left(1\right)\). Using (4.20) to reformulate the term \(\gamma\varepsilon\Delta\boldsymbol{\varphi}\), employing (6.3), and recalling that (6.14) holds analogously for \(\overline{\mathbb{C}}^{\prime}(\boldsymbol{\Phi}_{0})\), we conclude \[\begin{split}&\frac{1}{\gamma}(\boldsymbol{\Lambda}_{1}+ \boldsymbol{\vartheta}_{1}+\boldsymbol{\mu}_{1})+\partial_{zz}\boldsymbol{ \Phi}_{1}-P_{T\Sigma}\left[\psi_{0}^{\prime\prime}(\boldsymbol{\Phi}_{0}) \boldsymbol{\Phi}_{1}\right]\\ &\quad=\hat{\kappa}\partial_{z}\boldsymbol{\Phi}_{0}+\frac{1}{ \gamma}\sum_{r=1}^{l}\Big{\{}[\partial_{\lambda_{n_{r}}}\Psi](\lambda_{0,n_{1 }},\ldots,\lambda_{0,n_{l}})\\ &\quad\quad\cdot\Big{[}\overline{\mathbb{C}}^{\prime}( \boldsymbol{\Phi}_{0})(\nabla_{\Gamma}\mathbf{W}_{0,n_{r}}+\partial_{z} \mathbf{W}_{1,n_{r}}\otimes\boldsymbol{\nu})^{\text{sym}}:(\nabla_{\Gamma} \mathbf{W}_{0,n_{r}}+\partial_{z}\mathbf{W}_{1,n_{r}}\otimes\boldsymbol{\nu})^ {\text{sym}}\\ &\qquad\quad-\lambda_{0,n_{r}}\overline{\rho}^{\prime}( \boldsymbol{\Phi}_{0})\left|\mathbf{W}_{0,n_{r}}\right|^{2}\Big{]}\Big{\}}. \end{split} \tag{6.31}\] We now multiply this equation by \(\partial_{z}\boldsymbol{\Phi}_{0}\) and integrate with respect to \(z\) from \(-\infty\) to \(\infty\). 
Let us consider each term of the resulting equation separately. Analogously to (6.20) and (6.21), we obtain \[\int_{-\infty}^{\infty}\boldsymbol{\Lambda}_{1}\cdot\partial_{z} \boldsymbol{\Phi}_{0}\,\mathrm{d}z =0\quad\text{and} \tag{6.32}\] \[\int_{-\infty}^{\infty}\boldsymbol{\vartheta}_{1}\cdot\partial_{z} \boldsymbol{\Phi}_{0}\,\mathrm{d}z =\boldsymbol{\vartheta}_{1}\cdot(\boldsymbol{e}_{j}-\boldsymbol{e}_{i})\,. \tag{6.33}\] Considering the Lagrange multiplier \(\mathbf{\mu}\), we recall (6.23) \[\mu_{i}^{\varepsilon}\,\nabla_{x}\varphi_{i}^{\varepsilon}=\mathbf{0}. \tag{6.34}\] Due to (4.16), its contribution of leading order \(\mathcal{O}(1)\) in inner coordinates is given by \[\mu_{1}^{i}\partial_{z}\Phi_{0}^{i}\mathbf{\nu}+\mu_{0}^{i}\nabla_{\Gamma}\Phi_{0}^ {i}+\mu_{0}^{i}\partial_{z}\Phi_{1}^{i}\mathbf{\nu}=\mathbf{0}\] for all \(i\in\{1,\ldots,N\}\). Multiplying this identity by \(\mathbf{\nu}\) and integrating the resulting equation with respect to \(z\), we infer \[\int_{-\infty}^{\infty}\mathbf{\mu}_{1}\cdot\partial_{z}\mathbf{\Phi}_{0}\,\mathrm{d}z =-\int_{-\infty}^{\infty}\mathbf{\mu}_{0}\cdot\partial_{z}\mathbf{\Phi}_{1}\, \mathrm{d}z. \tag{6.35}\] Furthermore, applying integration by parts twice and using that due to the matching condition (5.2) all derivatives of \(\mathbf{\Phi}_{0}\) with respect to \(z\) tend to \(0\) as \(z\to\pm\infty\), we obtain \[\int_{-\infty}^{\infty}\partial_{zz}\mathbf{\Phi}_{1}\cdot\partial_{z}\mathbf{\Phi}_{ 0}\,\mathrm{d}z=\int_{-\infty}^{\infty}\partial_{zz}\left(\partial_{z}\mathbf{ \Phi}_{0}\right)\cdot\mathbf{\Phi}_{1}\,\mathrm{d}z. \tag{6.36}\] As \(\partial_{z}\mathbf{\Phi}_{0}\) attains its values only in \(T\Sigma^{N}\), we deduce \[\int_{-\infty}^{\infty}P_{T\Sigma}\left[\psi_{0}^{\prime\prime}(\mathbf{\Phi}_{0}) \mathbf{\Phi}_{1}\right]\cdot\partial_{z}\mathbf{\Phi}_{0}\,\mathrm{d}z=\int_{-\infty }^{\infty}\psi_{0}^{\prime\prime}(\mathbf{\Phi}_{0})\,\partial_{z}\mathbf{\Phi}_{0} \cdot\mathbf{\Phi}_{1}\,\mathrm{d}z \tag{6.37}\] due to the symmetry of the Hessian matrix. 
Moreover, recalling that \(\mathbf{W}_{0}\) is independent of \(z\) due to (6.3), a simple computation yields \[\begin{split}\int_{-\infty}^{\infty}\overline{\rho}^{\prime}( \mathbf{\Phi}_{0})\partial_{z}\mathbf{\Phi}_{0}\left|\mathbf{W}_{0}\right|^{2}\, \mathrm{d}z&=\int_{-\infty}^{\infty}\left[\frac{\mathrm{d}}{ \mathrm{d}z}\overline{\rho}(\mathbf{\Phi}_{0})\right]\left|\mathbf{W}_{0}\right|^ {2}\,\mathrm{d}z\\ &=\int_{-\infty}^{\infty}\frac{\mathrm{d}}{\mathrm{d}z}\left[ \overline{\rho}(\mathbf{\Phi}_{0})\left|\mathbf{W}_{0}\right|^{2}\right]\, \mathrm{d}z.\end{split} \tag{6.38}\] Furthermore, by the definition of the dyadic product, it holds \[\begin{split}&\left(\nabla_{\Gamma}\mathbf{W}_{0}+\partial_{z} \mathbf{W}_{1}\otimes\mathbf{\nu}\right)^{\mathrm{sym}}\mathbf{\nu}\cdot\partial_{ zz}\mathbf{W}_{1}\\ &\quad=\left(\nabla_{\Gamma}\mathbf{W}_{0}+\partial_{z}\mathbf{W }_{1}\otimes\mathbf{\nu}\right)^{\mathrm{sym}}:\left(\partial_{zz}\mathbf{W}_{1} \otimes\mathbf{\nu}\right)^{\mathrm{sym}}.\end{split}\] Now we use (6.3) (which directly entails \(\partial_{z}\nabla_{\Gamma}\mathbf{W}_{0}=\mathbf{0}\)), (6.4) and \(\partial_{z}\mathbf{\nu}=\mathbf{0}\) to deduce \[\begin{split}&\int_{-\infty}^{\infty}\overline{\mathcal{C}}^{ \prime}(\mathbf{\Phi}_{0})\partial_{z}\mathbf{\Phi}_{0}\left(\nabla_{\Gamma}\mathbf{W} _{0}+\partial_{z}\mathbf{W}_{1}\otimes\mathbf{\nu}\right)^{\mathrm{sym}}:\left( \nabla_{\Gamma}\mathbf{W}_{0}+\partial_{z}\mathbf{W}_{1}\otimes\mathbf{\nu} \right)^{\mathrm{sym}}\,\mathrm{d}z\\ &\quad=\int_{-\infty}^{\infty}\left[\frac{\mathrm{d}}{\mathrm{d}z }\overline{\mathbb{C}}(\mathbf{\Phi}_{0})\right]\left(\nabla_{\Gamma}\mathbf{W}_{ 0}+\partial_{z}\mathbf{W}_{1}\otimes\mathbf{\nu}\right)^{\mathrm{sym}}:\left( \nabla_{\Gamma}\mathbf{W}_{0}+\partial_{z}\mathbf{W}_{1}\otimes\mathbf{\nu} \right)^{\mathrm{sym}}\,\mathrm{d}z\\ &\quad=\int_{-\infty}^{\infty}\frac{\mathrm{d}}{\mathrm{d}z} \left[\overline{\mathbb{C}}(\mathbf{\Phi}_{0})\left(\nabla_{\Gamma}\mathbf{W}_{0}+ \partial_{z}\mathbf{W}_{1}\otimes\mathbf{\nu}\right)^{\mathrm{sym}}:\left( \nabla_{\Gamma}\mathbf{W}_{0}+\partial_{z}\mathbf{W}_{1}\otimes\mathbf{\nu} \right)^{\mathrm{sym}}\right]\,\mathrm{d}z\\ &\quad\quad-2\int_{-\infty}^{\infty}\frac{\mathrm{d}}{\mathrm{d}z }\left[\overline{\mathbb{C}}(\mathbf{\Phi}_{0})\left(\nabla_{\Gamma}\mathbf{W}_{0}+ \partial_{z}\mathbf{W}_{1}\otimes\mathbf{\nu}\right)^{\mathrm{sym}}\mathbf{\nu} \cdot\partial_{z}\mathbf{W}_{1}\right]\,\mathrm{d}z\end{split} \tag{6.39}\] by means of the product rule and integration by parts. Collecting (6.32)-(6.39) and recalling (6.30), we eventually obtain \[\boldsymbol{\vartheta}_{1}\cdot\left(\boldsymbol{e}_{j}-\boldsymbol {e}_{i}\right)\] \[\quad+\int_{-\infty}^{\infty}\left(\partial_{zz}\left(\partial_{z }\boldsymbol{\Phi}_{0}\right)-\psi_{0}^{\prime\prime}\left(\boldsymbol{\Phi}_ {0}\right)\partial_{z}\boldsymbol{\Phi}_{0}\right)\cdot\boldsymbol{\Phi}_{1} \,\mathrm{d}z-\int_{-\infty}^{\infty}\boldsymbol{\mu}_{0}\cdot\partial_{z} \boldsymbol{\Phi}_{1}\,\mathrm{d}z\] \[\quad+\sigma_{ij}\hat{\kappa}+\frac{1}{\gamma}\sum_{r=1}^{l} \left\{\left[\partial_{\lambda_{n_{r}}}\!\Psi\right]\left(\lambda_{0,n_{1}}, \ldots,\lambda_{0,n_{l}}\right)\right. 
\tag{6.40}\] \[\qquad\qquad\qquad\left.\cdot\left[\int_{-\infty}^{\infty}\frac{\mathrm{d}}{\mathrm{d}z}\Big{(}\overline{\mathbb{C}}(\boldsymbol{\Phi}_{0})(...)^{\mathrm{sym}}:(...)^{\mathrm{sym}}\Big{)}\,\mathrm{d}z\right.\right.\] \[\qquad\qquad\qquad\left.\left.-2\int_{-\infty}^{\infty}\frac{\mathrm{d}}{\mathrm{d}z}\left(\overline{\mathbb{C}}(\boldsymbol{\Phi}_{0})(...)^{\mathrm{sym}}\boldsymbol{\nu}\cdot\partial_{z}\mathbf{W}_{1,n_{r}}\right)\,\mathrm{d}z\right]\right\}\] on \(\Gamma_{ij}\), where \((...)^{\mathrm{sym}}\) abbreviates \(\left(\nabla_{\Gamma}\mathbf{W}_{0,n_{r}}+\partial_{z}\mathbf{W}_{1,n_{r}}\otimes\boldsymbol{\nu}\right)^{\mathrm{sym}}\). Next, we want to show that \[\int_{-\infty}^{\infty}\left(\partial_{zz}(\partial_{z}\boldsymbol{\Phi}_{0})-\psi_{0}^{\prime\prime}(\boldsymbol{\Phi}_{0})\partial_{z}\boldsymbol{\Phi}_{0}\right)\cdot\boldsymbol{\Phi}_{1}\,\mathrm{d}z-\int_{-\infty}^{\infty}\boldsymbol{\mu}_{0}\cdot\partial_{z}\boldsymbol{\Phi}_{1}\,\mathrm{d}z=0. \tag{6.41}\] Differentiating (6.25) with respect to \(z\), multiplying by \(\boldsymbol{\Phi}_{1}\) and integrating the resulting equation with respect to \(z\), we deduce \[\int_{-\infty}^{\infty}\left(\partial_{zz}(\partial_{z}\boldsymbol{\Phi}_{0})-\psi_{0}^{\prime\prime}(\boldsymbol{\Phi}_{0})\partial_{z}\boldsymbol{\Phi}_{0}\right)\cdot\boldsymbol{\Phi}_{1}\,\mathrm{d}z=-\int_{-\infty}^{\infty}\left[\partial_{z}(\boldsymbol{\Lambda}_{0}+\boldsymbol{\mu}_{0})\right]\cdot\boldsymbol{\Phi}_{1}\,\mathrm{d}z.\] Thus, in order to prove (6.41), it suffices to show \[\int_{-\infty}^{\infty}\left[\partial_{z}(\boldsymbol{\Lambda}_{0}+\boldsymbol{\mu}_{0})\right]\cdot\boldsymbol{\Phi}_{1}\,\mathrm{d}z+\int_{-\infty}^{\infty}\boldsymbol{\mu}_{0}\cdot\partial_{z}\boldsymbol{\Phi}_{1}\,\mathrm{d}z=0. \tag{6.42}\] By means of the product rule, the left-hand side can be reformulated as \[\int_{-\infty}^{\infty}\left[\partial_{z}(\boldsymbol{\Lambda}_{0}+\boldsymbol{\mu}_{0})\right]\cdot\boldsymbol{\Phi}_{1}\,\mathrm{d}z=-\int_{-\infty}^{\infty}(\boldsymbol{\Lambda}_{0}+\boldsymbol{\mu}_{0})\cdot\partial_{z}\boldsymbol{\Phi}_{1}\,\mathrm{d}z+\int_{-\infty}^{\infty}\frac{\mathrm{d}}{\mathrm{d}z}\left[(\boldsymbol{\Lambda}_{0}+\boldsymbol{\mu}_{0})\cdot\boldsymbol{\Phi}_{1}\right]\mathrm{d}z.\] Since, analogously to (6.32), all contributions involving \(\boldsymbol{\Lambda}_{0}\) vanish, the left-hand side of (6.42) reduces to \(\int_{-\infty}^{\infty}\frac{\mathrm{d}}{\mathrm{d}z}\left[\boldsymbol{\mu}_{0}\cdot\boldsymbol{\Phi}_{1}\right]\mathrm{d}z\). Hence, in order to establish (6.42), it suffices to show \[\mu_{0}^{i}(z)\,\Phi_{1}^{i}(z)=0\quad\text{for all }i\in\{1,\ldots,N\}\text{ and }z\in\mathbb{R}. \tag{6.43}\] To this end, we expand the complementarity condition \(\mu_{i}^{\varepsilon}\,\varphi_{i}^{\varepsilon}=0\) in inner coordinates. To the orders \(\mathcal{O}(1)\) and \(\mathcal{O}(\varepsilon)\), this yields \[\mu_{0}^{i}\,\Phi_{0}^{i}=0\quad\text{and}\quad\mu_{1}^{i}\,\Phi_{0}^{i}+\mu_{0}^{i}\,\Phi_{1}^{i}=0, \tag{6.44}\] respectively, for all \(i\in\{1,\ldots,N\}\) and \(z\in\mathbb{R}\). Now, the first equation in (6.44) implies that for any \(z\in\mathbb{R}\) with \(\Phi_{0}^{i}(z)\neq 0\), we have \(\mu_{0}^{i}(z)=0\) and thus also \(\mu_{0}^{i}(z)\Phi_{1}^{i}(z)=0\).
On the other hand, for all \(z\in\mathbb{R}\) with \(\Phi_{0}^{i}(z)=0\), we infer from the second equation in (6.44) that \(\mu_{0}^{i}(z)\Phi_{1}^{i}(z)=0\). Combining both statements, we conclude \[\mu_{0}^{i}(z)\,\Phi_{1}^{i}(z)=0\quad\text{for all $i\in\{1,\ldots,N\}$ and $z\in\mathbb{R}$}.\] This proves (6.43). By the above considerations, this verifies (6.42) which in turn implies equation (6.41). To conclude this section, we recall the definition of the _jump_, see (5.5). Moreover, we recall from (4.22) that the mean curvature of \(\Gamma_{ij}\) is given by \(\kappa_{ij}=-\nabla_{\Gamma_{ij}}\cdot\boldsymbol{n}_{\Gamma_{ij}}\). Using the matching conditions (5.4), (6.5) and (6.6), we finally infer from (6.40) that \[\begin{split}\left(\vartheta_{1}^{j}-\vartheta_{1}^{i}\right)&=\sigma_{ij}\kappa_{ij}-\frac{1}{\gamma}\sum_{r=1}^{l}[\partial_{\lambda_{n_{r}}}\!\Psi]\left(\lambda_{0,n_{1}},\ldots,\lambda_{0,n_{l}}\right)\lambda_{0,n_{r}}\left[\overline{\rho}\left|\boldsymbol{w}_{0,n_{r}}\right|^{2}\right]_{i}^{j}\\ &\quad+\frac{1}{\gamma}\sum_{r=1}^{l}\left\{\left[\partial_{\lambda_{n_{r}}}\!\Psi\right]\left(\lambda_{0,n_{1}},\ldots,\lambda_{0,n_{l}}\right)\right.\\ &\qquad\qquad\left.\cdot\left(\left[\overline{\mathbb{C}}\mathcal{E}(\boldsymbol{w}_{0,n_{r}}):\mathcal{E}(\boldsymbol{w}_{0,n_{r}})\right]_{i}^{j}-2\left[\overline{\mathbb{C}}\mathcal{E}(\boldsymbol{w}_{0,n_{r}})\boldsymbol{\nu}\cdot\nabla\boldsymbol{w}_{0,n_{r}}\boldsymbol{\nu}\right]_{i}^{j}\right)\right\}\end{split} \tag{6.45}\] on \(\Gamma_{ij}\) for all \(i,j=1,\ldots,N-1\). Proceeding analogously on the interfaces \(\Gamma_{iN}\), we obtain \[\begin{split}\left(\vartheta_{1}^{N}-\vartheta_{1}^{i}\right)&=\sigma_{iN}\kappa_{iN}+\frac{1}{\gamma}\sum_{r=1}^{l}[\partial_{\lambda_{n_{r}}}\!\Psi]\left(\lambda_{0,n_{1}},\ldots,\lambda_{0,n_{l}}\right)\lambda_{0,n_{r}}\,\rho^{i}\left|(\boldsymbol{w}_{0,n_{r}})_{i}\right|^{2}\\ &\quad-\frac{1}{\gamma}\sum_{r=1}^{l}[\partial_{\lambda_{n_{r}}}\!\Psi]\left(\lambda_{0,n_{1}},\ldots,\lambda_{0,n_{l}}\right)\mathbb{C}^{i}\mathcal{E}_{i}(\boldsymbol{w}_{0,n_{r}}):\mathcal{E}_{i}(\boldsymbol{w}_{0,n_{r}})\end{split} \tag{6.46}\] on \(\Gamma_{iN}\) for all \(i=1,\ldots,N-1\). Moreover, at any triple junction point \(m_{ijk}\), we obtain the condition \[\sigma_{ij}\boldsymbol{n}_{\Gamma_{ij}}+\sigma_{jk}\boldsymbol{n}_{\Gamma_{jk}}+\sigma_{ki}\boldsymbol{n}_{\Gamma_{ki}}=0. \tag{6.47}\] ### The sharp-interface limit of the state equation We recall that the domain \(\Omega\) is partitioned into \(N\) regions \(\Omega_{i}\) for \(i=1,\ldots,N\) representing the presence of the \(i\)-th material (\(i<N\)) or void (\(i=N\)) in its pure form.
Those regions are separated by interfaces \(\Gamma_{ij}\). Furthermore we have chosen \(\boldsymbol{\eta}_{\Gamma_{ij}}\) to be the unit normal vector field on \(\Gamma_{ij}\) pointing from \(\Omega_{i}\) into the region \(\Omega_{j}\). This means that \[x+\delta\boldsymbol{\eta}_{\Gamma_{ij}}(x)\in\Omega_{j}\quad\text{and}\quad x -\delta\boldsymbol{\eta}_{\Gamma_{ij}}(x)\in\Omega_{i}\quad x\in\Gamma_{ij} \text{ and }\delta>0.\] To capture the behavior of a function \(\boldsymbol{v}\) across the interface \(\Gamma_{ij}\), we defined its _jump_ by \[[\boldsymbol{v}]_{i}^{j}(x)\coloneqq\lim_{\delta\searrow 0}\Big{(} \boldsymbol{v}\big{(}x+\delta\boldsymbol{\eta}_{\Gamma_{ij}}(x)\big{)}- \boldsymbol{v}\big{(}x-\delta\boldsymbol{\eta}_{\Gamma_{ij}}(x)\big{)}\Big{)},\] for all \(x\in\Gamma_{ij}\), see (5.5). Combining the equations (\(SE_{0}^{i}\)) derived in Claim 4.1 and the jump conditions obtained in (6.5) and (6.10), we obtain the system \[\left\{\begin{array}{rcl}-\nabla\cdot\big{(}\mathbb{C}^{i} \mathcal{E}(\boldsymbol{w}_{0,n_{r}})\big{)}&=\lambda_{0,n_{r}}\rho^{i} \boldsymbol{w}_{0,n_{r}}&\text{in }\Omega_{i},\\ \big{[}\mathbb{C}\mathcal{E}(\boldsymbol{w}_{0,n_{r}})\boldsymbol{n}_{\Gamma_ {ij}}\big{]}_{i}^{j}&=\boldsymbol{0}&\text{on }\Gamma_{ij},\\ \big{[}\boldsymbol{w}_{0,n_{r}}\big{]}_{i}^{j}&=\boldsymbol{0}&\text{on }\Gamma_{ ij},\\ \mathbb{C}^{i}\mathcal{E}_{i}(\boldsymbol{w}_{0,n_{r}})\boldsymbol{n}_{\Gamma_ {iN}}&=\boldsymbol{0}&\text{on }\Gamma_{iN},\\ \mathbb{C}^{i}\mathcal{E}(\boldsymbol{w}_{0,n_{r}})\boldsymbol{n}&=\boldsymbol{ 0}&\text{on }\Gamma_{0}\cap\partial\Omega_{i},\\ \boldsymbol{w}_{0,n_{r}}&=\boldsymbol{0}&\text{on }\Gamma_{D}\cap\partial\Omega_{i}, \end{array}\right.\] ( \[SE_{r}^{ij}\] ) for \(i,j=1,\ldots,N-1\) and \(r=1,\ldots,l\), as the _sharp-interface limit of the state equation_ (\(SE^{\varepsilon}\)). Here, \(\boldsymbol{w}_{0,n_{r}}\) is normalized in the material regions, i.e., \[1=\sum_{i=1}^{N-1}\int_{\Omega_{i}}\rho^{i}\left|\boldsymbol{w}_{0,n_{r}} \right|^{2}\,\mathrm{d}x. \tag{7.1}\] Furthermore, we infer from (6.10) that \[[\boldsymbol{w}_{0,n_{r}}]_{i}^{N}=\boldsymbol{0}\quad\text{on }\Gamma_{iN} \tag{7.2}\] for all \(i\in\{1,\ldots,N-1\}\) and each \(r\in\{1,\ldots,l\}\). However, this condition does not provide any additional information as we do not know how \(\boldsymbol{w}_{0,n_{r}}\) behaves in the void region. In particular, we see that by interpreting (\(SE_{r}^{ij}\)) as one system of PDEs in the material region \(\bigcup_{i=1}^{N-1}\Omega_{i}\), the homogeneous Neumann boundary condition in the fourth line of (\(SE_{r}^{ij}\)) is enough to obtain a closed system. Combining the Neumann type jump condition on \(\Gamma_{ij}\) stated in the second line of (\(SE_{r}^{ij}\)) with the normality condition (7.1), we are able to obtain the relation \[\int_{\Omega^{M}}\mathbb{C}_{M}\,\mathcal{E}(\boldsymbol{w}_{0,n_{r}}): \mathcal{E}(\boldsymbol{w}_{0,n_{r}})\,\mathrm{d}x=\lambda_{0,n_{r}}, \tag{7.3}\] with \[\Omega^{M}:=\bigcup_{i=1}^{N-1}\Omega_{i}\quad\text{and}\quad\mathbb{C}^{M}:= \left(\sum_{i=1}^{N-1}\mathbb{C}^{i}\,\mathds{1}_{\Omega_{i}}\right),\] where \(\mathds{1}_{\Omega_{i}}\) denotes the characteristic function on \(\Omega_{i}\). This means that the eigenvalue \(\lambda_{0,n_{r}}\) in the sharp-interface setting is indeed solely determined by an eigenvalue equation on the material region \(\Omega^{M}\) but does not have any contribution from the void region. 
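The following is a minimal numpy sketch (an illustration added here, not part of the original analysis) of the discrete analogue of (7.3): for a generalized eigenvalue problem \(K\boldsymbol{w}=\lambda M\boldsymbol{w}\) with the normalization \(\boldsymbol{w}^{T}M\boldsymbol{w}=1\), one has \(\boldsymbol{w}^{T}K\boldsymbol{w}=\lambda\). The 1D rod discretization, mesh size and lumped mass matrix are arbitrary choices. The second part of the sketch anticipates Remark 7.1 below: without attachment to a Dirichlet boundary, a rigid translation yields a (numerically) zero eigenvalue.

```python
import numpy as np
from scipy.linalg import eigh

# Minimal 1D stand-in for the elasticity eigenvalue problem:
# a rod with n nodes, stiffness K (linear elements) and lumped mass M.
n, h = 50, 1.0 / 50

def assemble(fixed_left: bool):
    K = np.zeros((n, n))
    for i in range(n - 1):  # element stiffness (1/h) * [[1,-1],[-1,1]]
        K[i:i+2, i:i+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    M = np.eye(n) * h       # lumped mass matrix
    if fixed_left:          # impose w(0) = 0 by removing the first node
        return K[1:, 1:], M[1:, 1:]
    return K, M

# Case 1: rod attached to a fixed (Dirichlet) boundary on the left.
K, M = assemble(fixed_left=True)
lam, W = eigh(K, M)                              # K w = lam M w
w = W[:, 0] / np.sqrt(W[:, 0] @ M @ W[:, 0])     # normalize: w^T M w = 1
print("smallest eigenvalue:", lam[0])            # strictly positive
print("w^T K w            :", w @ K @ w)         # equals lam[0], cf. (7.3)

# Case 2: free-free rod (no attachment): the rigid translation gives
# a (numerically) zero eigenvalue, cf. Remark 7.1 below.
K2, M2 = assemble(fixed_left=False)
lam2 = eigh(K2, M2, eigvals_only=True)
print("smallest eigenvalue without attachment:", lam2[0])  # ~ 0
```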
To verify (7.3), we test \((SE^{ij}_{r})\) with \(\boldsymbol{w}_{0,n_{r}}\) and integrate by parts. This yields \[\int_{\Omega_{i}}\mathbb{C}^{i}\mathcal{E}(\boldsymbol{w}_{0,n_{r}}):\mathcal{E}(\boldsymbol{w}_{0,n_{r}})\,\mathrm{d}x-\int_{\partial\Omega_{i}}\mathbb{C}^{i}\mathcal{E}(\boldsymbol{w}_{0,n_{r}})\boldsymbol{n}_{\Gamma_{i}}\cdot\boldsymbol{w}_{0,n_{r}}\,\mathrm{d}\Gamma=\lambda_{0,n_{r}}\int_{\Omega_{i}}\rho^{i}\left|\boldsymbol{w}_{0,n_{r}}\right|^{2}\,\mathrm{d}x, \tag{7.4}\] for all \(i\in\{1,\ldots,N-1\}\), where \(\boldsymbol{n}_{\Gamma_{i}}\) stands for the outer unit normal vector field of \(\partial\Omega_{i}\). Noticing that the outer unit normal vector simply switches its sign on neighboring boundaries, we now use the second and the fourth line of \((SE^{ij}_{r})\) to infer \[\sum_{i=1}^{N-1}\int_{\partial\Omega_{i}}\mathbb{C}^{i}\mathcal{E}(\boldsymbol{w}_{0,n_{r}})\boldsymbol{n}_{\Gamma_{i}}\cdot\boldsymbol{w}_{0,n_{r}}\,\mathrm{d}\Gamma=0.\] Thus, summing the equations (7.4) from \(i=1\) to \(N-1\) and using property (7.1), we conclude \[\sum_{i=1}^{N-1}\int_{\Omega_{i}}\mathbb{C}^{i}\mathcal{E}(\boldsymbol{w}_{0,n_{r}}):\mathcal{E}(\boldsymbol{w}_{0,n_{r}})\,\mathrm{d}x=\lambda_{0,n_{r}}.\] By the linearity of the integral, this directly proves (7.3). **Remark 7.1**.: As a refinement of Remark 4.21, we now see that as long as at least one of the material regions \(\Omega_{1},\ldots,\Omega_{N-1}\) shares a sufficiently nice part of its boundary with \(\Gamma_{D}\), we can apply Korn's inequality in order to deduce that all \(\lambda_{0,n_{r}}\) are strictly positive. From a physical point of view, this is reasonable since, if the material region \(\Omega^{M}\) of the structure is not attached to some fixed boundary, the shape can freely move within the design domain just by translation without exhibiting any vibrations. ### The sharp-interface limit of the first-order optimality condition Now let us turn to the limit of the gradient inequality \((GI^{\varepsilon})\). For the sake of completeness, let us restate our final results from the previous section, i.e., (6.45) and (6.46).
We have \[\begin{split} 0=\gamma\sigma_{ij}\kappa_{ij}-\sum_{r=1}^{l}[\partial_{\lambda_{n_{r}}}\!\Psi]\left(\lambda_{0,n_{1}},\ldots,\lambda_{0,n_{l}}\right)\lambda_{0,n_{r}}\left[\overline{\rho}\left|\boldsymbol{w}_{0,n_{r}}\right|^{2}\right]_{i}^{j}+\gamma\big{(}\vartheta_{1}^{i}-\vartheta_{1}^{j}\big{)}\\ +\sum_{r=1}^{l}\Bigl{\{}\left[\partial_{\lambda_{n_{r}}}\!\Psi\right]\left(\lambda_{0,n_{1}},\ldots,\lambda_{0,n_{l}}\right)\\ \qquad\cdot\left(\left[\overline{\mathbb{C}}\mathcal{E}(\boldsymbol{w}_{0,n_{r}}):\mathcal{E}(\boldsymbol{w}_{0,n_{r}})\right]_{i}^{j}-2\bigl{[}\overline{\mathbb{C}}\mathcal{E}(\boldsymbol{w}_{0,n_{r}})\boldsymbol{\nu}\cdot\nabla\boldsymbol{w}_{0,n_{r}}\boldsymbol{\nu}\bigr{]}_{i}^{j}\right)\Bigr{\}}\end{split} \tag{7.5}\] on \(\Gamma_{ij}\) for all \(i,j=1,\ldots,N-1\), and \[\begin{split} 0=\gamma\sigma_{iN}\kappa_{iN}+\sum_{r=1}^{l}[\partial_{\lambda_{n_{r}}}\!\Psi]\left(\lambda_{0,n_{1}},\ldots,\lambda_{0,n_{l}}\right)\lambda_{0,n_{r}}\ \rho^{i}\left|(\boldsymbol{w}_{0,n_{r}})_{i}\right|^{2}+\gamma\big{(}\vartheta_{1}^{i}-\vartheta_{1}^{N}\big{)}\\ -\sum_{r=1}^{l}[\partial_{\lambda_{n_{r}}}\!\Psi]\left(\lambda_{0,n_{1}},\ldots,\lambda_{0,n_{l}}\right)\mathbb{C}^{i}\mathcal{E}_{i}(\boldsymbol{w}_{0,n_{r}}):\mathcal{E}_{i}(\boldsymbol{w}_{0,n_{r}})\end{split} \tag{7.6}\] on \(\Gamma_{iN}\) for all \(i=1,\ldots,N-1\) if \(j=N\). Here \(\sigma_{ij}\) is defined as in (6.28) and stands for the total energy of a transition across the interface \(\Gamma_{ij}\). The vector \(\boldsymbol{\vartheta}_{1}\in\mathbb{R}^{N}\) denotes the \(\mathcal{O}(\varepsilon)\)-contribution of the Lagrange-multiplier resulting from the integral constraint \(\int_{\Omega}\boldsymbol{\varphi}^{\varepsilon}\,\mathrm{d}x=\boldsymbol{m}\) that is hidden in the condition \(\boldsymbol{\varphi}^{\varepsilon}\in\boldsymbol{\mathcal{G}}^{\boldsymbol{m}}\) (cf. Theorem 3.12). Recalling (6.47), we additionally have the triple junction condition at any junction point \(m_{ijk}\) with pairwise distinct \(i,j,k\in\{1,\ldots,N\}\) \[\sigma_{ij}\boldsymbol{n}_{\Gamma_{ij}}+\sigma_{jk}\boldsymbol{n}_{\Gamma_{jk}}+\sigma_{ki}\boldsymbol{n}_{\Gamma_{ki}}=0\quad\text{in }m_{ijk}.\] ### The sharp-interface optimality system in the case of only one material We now want to state the above equations for the simplest case of only one single material (i.e., \(N=2\)) as this is the scenario we further study in the subsequent sections. In this case, we have \(\Omega=\Omega^{M}\cup\Omega^{V}\), where \(\Omega^{M}\) and \(\Omega^{V}\) denote the material and the void parts of the domain, respectively. We now denote the interface separating the two phases by \(\Gamma_{MV}\), its outer unit normal vector field by \(\boldsymbol{n}_{\Gamma_{MV}}\) and its mean curvature by \(\kappa_{MV}=-\nabla_{\Gamma_{MV}}\cdot\boldsymbol{n}_{\Gamma_{MV}}\).
Using the notation \(\Gamma_{D}^{M}\coloneqq\Gamma_{D}\cap\partial\Omega^{M}\) and \(\Gamma_{0}^{M}\coloneqq\Gamma_{0}\cap\partial\Omega^{M}\), we obtain from (\(SE_{r}^{ij}\)), (7.1) and (7.2) the state equation \[\left\{\begin{array}{rcl}-\nabla\cdot\left(\mathbb{C}^{M}\mathcal{E}( \boldsymbol{w}_{0,n_{r}})\right)&=\lambda_{0,n_{r}}\rho^{M}\boldsymbol{w}_{0,n _{r}}&\text{in }\Omega^{M},\\ \mathbb{C}^{M}\mathcal{E}_{M}(\boldsymbol{w}_{0,n_{r}})\;\boldsymbol{n}_{ \Gamma_{MV}}&=\boldsymbol{0}&\text{on }\Gamma_{MV},\\ \boldsymbol{w}_{0,n_{r}}&=\boldsymbol{0}&\text{on }\Gamma_{D}^{M},\\ \mathbb{C}^{M}\mathcal{E}(\boldsymbol{w}_{0,n_{r}})\;\boldsymbol{n}&= \boldsymbol{0}&\text{on }\Gamma_{0}^{M},\end{array}\right.\] ( \[SE_{r}^{MV}\] ) for \(r=1,\ldots,l\), along with the first-order necessary optimality condition \[\begin{split} 0&=\gamma\,\sigma_{MV}\,\kappa_{MV}+\sum_{r=1}^{l} [\partial_{\lambda_{n_{r}}}\!\Psi]\,(\lambda_{0,n_{1}},\ldots,\lambda_{0,n_{l} })\,\lambda_{0}^{n_{r}}\rho^{M}\,|(\boldsymbol{w}_{0,n_{r}})_{M}|^{2}\\ &\quad-\sum_{r=1}^{l}[\partial_{\lambda_{n_{r}}}\!\Psi]\,(\lambda _{0,n_{1}},\ldots,\lambda_{0,n_{l}})\,\mathbb{C}^{M}\mathcal{E}_{M}( \boldsymbol{w}_{0,n_{r}}):\mathcal{E}_{M}(\boldsymbol{w}_{0,n_{r}})+\gamma \left(\vartheta_{1}^{1}-\vartheta_{1}^{2}\right)\end{split} \tag{7.7}\] on \(\Gamma_{MV}\). This means that the functions \(\boldsymbol{w}_{0,n_{r}}\) are eigenfunctions to the eigenvalues \(\lambda_{0,n_{r}}\) which essentially solve the eigenvalue problem for the elasticity equation subject to a homogeneous Neumann boundary condition on the shape \(\Omega^{M}\). **Remark 7.2**.: Note that, in general, one cannot predict the behavior of solutions to (\(SE_{r}^{MV}\)). If \(\Omega^{M}\) is merely a set of finite perimeter that does not have a Lipschitz boundary or if \(\Gamma_{MV}\cap\Gamma_{D}^{M}=\emptyset\), the classical spectral theory (as applied in Section 2.5) does not provide us with an infinite sequence of positive eigenvalues. Nevertheless, as we want to consider a well posed minimization problem and want to calculate shape derivatives associated to this problem, we assume that these issues do not occur. In particular, we always assume \(\Omega^{M}\) to be sufficiently smooth and \(\partial\Omega^{M}\) to have a suitably nice intersection with \(\Gamma_{D}^{M}\) such that an infinite sequence of positive eigenvalues actually exists (see also Remark 7.1). ## 8. Relating the first-order optimality condition to classical shape calculus We now want to compare the above results, especially (7.7), to the results in [7], which were obtained using shape calculus. Our goal is to justify that the gradient equality (7.7) is indeed the first-order condition of a sharp-interface eigenvalue optimization problem, which is formally the limit of the diffuse-interface problem we started with. Therefore, we need to fit the notation of [7] to our setting. As above consider the situation \(N=2\), i.e., \(\Omega=\Omega^{M}\cup\Omega^{V}\). Denote with \(P_{\Omega}(\Omega^{M})\) the perimeter of the shape \(\Omega^{M}\)_within_ the design domain \(\Omega\), which is given by the Hausdorff measure \(\mathcal{H}^{d-1}(\partial\Omega^{M}\cap\Omega)\) provided that \(\Omega^{M}\) is non-empty and sufficiently smooth. Furthermore, we consider a prescribed mass \(m=|\Omega^{M}|<|\Omega|\). In order to be consistent with the notation used in the previous chapters, we choose \(\mathbf{m}=(m_{1},m_{2})^{T}\in\Sigma^{2}\) with \(m_{1}=m|\Omega|^{-1}\) and \(m_{2}=1-m_{1}\). 
Then the sharp-interface structural optimization problem that we intend to approximate via our diffuse-interface problem (\(\mathcal{P}^{\varepsilon}_{l}\)) reads as \[\begin{cases}\min&J(\Omega^{M})\coloneqq\Psi(\lambda_{n_{1}},\ldots,\lambda_{ n_{l}})+\gamma\,\sigma_{MV}\,P_{\Omega}(\Omega^{M}),\\ \text{over}&\mathcal{U}^{\text{ad}}=\left\{\Omega^{M}\subset\Omega:\big{|} \Omega^{M}\big{|}=m\right\},\\ \\ \text{s.t.}&(SE^{MV}_{r})\left\{\begin{array}{rcl}-\nabla\cdot\left(\mathbb{C }^{M}\mathcal{E}(\mathbf{w}_{n_{r}})\right)&=\lambda_{n_{r}}\rho^{M}\mathbf{w}_{n_{r}}& \text{in }\Omega^{M},\\ \mathbb{C}^{M}\mathcal{E}_{M}(\mathbf{w}_{n_{r}})\,\mathbf{n}_{\Gamma_{MV}}&=\mathbf{0}& \text{on }\Gamma_{MV},\\ \mathbb{C}^{M}\mathcal{E}(\mathbf{w}_{n_{r}})\,\mathbf{n}&=\mathbf{0}&\text{on }\Gamma_{0}^{M},\\ \mathbf{w}_{n_{r}}&=\mathbf{0}&\text{on }\Gamma_{D}^{M},\\ \end{array}\right.\\ \text{for all }r\in\{1,\ldots,l\}.\end{cases}\] ( \[\mathcal{P}^{0}_{l}\] ) This system is the sharp-interface limit problem associated to the diffuse-interface problem (\(\mathcal{P}^{\varepsilon}_{l}\)), where the side condition is exactly the sharp-interface state equation (\(SE^{MV}_{r}\)) and the perimeter \(\sigma_{MV}P_{\Omega}(\Omega^{M})\) is the rigorous \(\Gamma\)-limit of the Ginzburg-Landau energy, see [14]. We recall that the constant \(\sigma_{MV}\) we obtained in (6.28) is exactly the one obtained in [14] in terms of the rigorous \(\Gamma\)-limit, which is denoted by \(d(\mathbf{e}_{i},\mathbf{e}_{j})\) there. In particular, \(\sigma_{MV}\) is independent of the shape \(\Omega^{M}\). In case an ambiguity might arise, we indicate the shape dependency explicitly in the eigenfunctions and eigenvalues, i.e., we write \((\lambda_{n_{r}}(\Omega^{M}),\mathbf{w}_{n_{r}}(\Omega^{M}))\) for \(r=1,\ldots,l\). Now, we want to apply the calculus of shape derivatives from [7, Thm. 2.5] to our situation. We obtain the following statement. **Theorem 8.1**.: _Let \(\Omega^{M}\) be a smooth bounded open set and let \(\mathbf{\theta}\in W^{1,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\) with \(\mathbf{\theta}\cdot\mathbf{n}_{\Gamma_{\partial\Omega^{M}}}=0\) on \(\partial\Omega^{M}\backslash\Gamma_{MV}\). 
We further assume that for \(r=1,\ldots,l\), the eigenfunctions \(\mathbf{w}_{n_{r}}(\Omega^{M})\) in \((SE^{MV}_{r})\) are sufficiently smooth, say \(\mathbf{w}_{n_{r}}(\Omega^{M})\in H^{2}(\Omega^{M};\mathbb{R}^{d})\)._ _Then, if the involved eigenvalues \(\lambda_{n_{r}}\) for \(r=1,\ldots,l\) are all simple, the shape derivative of \(J\) at the shape \(\Omega^{M}\) in the direction \(\mathbf{\theta}\) fulfills the equation_ \[\begin{split} J^{\prime}(\Omega^{M})(\mathbf{\theta})=\sum_{r=1}^{l}\Bigg{\{}&[\partial_{\lambda_{n_{r}}}\Psi](\lambda_{n_{1}}(\Omega^{M}),\ldots,\lambda_{n_{l}}(\Omega^{M}))\\ &\cdot\Bigg{[}\int_{\Gamma_{MV}}\mathbb{C}^{M}\mathcal{E}(\mathbf{w}_{n_{r}}(\Omega^{M})):\mathcal{E}(\mathbf{w}_{n_{r}}(\Omega^{M}))\mathbf{\theta}\cdot\mathbf{n}_{\Gamma_{MV}}\,\mathrm{d}\mathcal{H}^{d-1}\\ &-\lambda_{n_{r}}(\Omega^{M})\int_{\Gamma_{MV}}\rho^{M}\big{|}\mathbf{w}_{n_{r}}(\Omega^{M})\big{|}^{2}\mathbf{\theta}\cdot\mathbf{n}_{\Gamma_{MV}}\,\mathrm{d}\mathcal{H}^{d-1}\Bigg{]}\Bigg{\}}\\ &-\int_{\Gamma_{MV}}\gamma\sigma_{MV}\,\kappa_{MV}\,\mathbf{\theta}\cdot\mathbf{n}_{\Gamma_{MV}}\,\mathrm{d}\mathcal{H}^{d-1}.\end{split} \tag{8.1}\] _Here, the shape derivative of \(J\) at a shape \(\Omega^{M}\) is defined as the Frechet derivative of the functional_ \[W^{1,\infty}(\mathbb{R}^{d};\mathbb{R}^{d})\to\mathbb{R},\quad\mathbf{\zeta}\mapsto J\big{(}(\mathrm{Id}+\mathbf{\zeta})\Omega^{M}\big{)}\] _evaluated at \(\mathbf{\zeta}=\mathbf{0}\)._ **Remark 8.2**.: 1. Note that the simplicity of eigenvalues is crucial here. Only then is it guaranteed that the eigenvalues and eigenfunctions depend on the domain \(\Omega^{M}\) in a differentiable way. For a comprehensive overview of the differentiability of spectral quantities with respect to the domain, we refer to [50, Section 5.7]. 2. For \(\mathbf{\zeta}\in W^{1,\infty}(\mathbb{R}^{d};\mathbb{R}^{d})\), the map \[T_{\mathbf{\zeta}}:\mathbb{R}^{d}\to\mathbb{R}^{d},\quad x\mapsto(\mathrm{Id}+\mathbf{\zeta})(x),\] is invertible if \(\left\|\boldsymbol{\zeta}\right\|_{W^{1,\infty}}<1\), and it holds \((\mathrm{Id}+\boldsymbol{\zeta})^{-1}-\mathrm{Id}\in W^{1,\infty}(\mathbb{R}^{d};\mathbb{R}^{d})\) with \[\left\|(\mathrm{Id}+\boldsymbol{\zeta})^{-1}-\mathrm{Id}\right\|_{W^{1,\infty}}\leq\left\|\boldsymbol{\zeta}\right\|_{W^{1,\infty}}(1-\left\|\boldsymbol{\zeta}\right\|_{W^{1,\infty}})^{-1}.\] This means the family \((T_{\boldsymbol{\zeta}})_{\boldsymbol{\zeta}\in W^{1,\infty}}\) describes diffeomorphic perturbations of \(\Omega^{M}\) "close" to \(\Omega^{M}\) if \(\left\|\boldsymbol{\zeta}\right\|_{W^{1,\infty}}\) is small, motivating the definition of the shape derivative above. For a detailed discussion of this concept, we refer to [50, Section 5.2]. Proof.: We proceed analogously to [7, Theorem 2.5]. In the following, \(\Omega_{\boldsymbol{\zeta}}=(\mathrm{Id}+\boldsymbol{\zeta})(\Omega^{M})\) denotes the perturbation of \(\Omega^{M}\) associated with a sufficiently small \(\boldsymbol{\zeta}\in W^{1,\infty}(\mathbb{R}^{d};\mathbb{R}^{d})\).
First of all, for \(\boldsymbol{v}_{n_{r}}\in H^{1}(\mathbb{R}^{d};\mathbb{R}^{d})\) with \(r=1,\ldots,l\), we introduce the Lagrangian \[\mathcal{L}(\Omega_{\boldsymbol{\zeta}},\boldsymbol{v}_{n_{1}}, \ldots,\boldsymbol{v}_{n_{l}})\] \[\quad=\Psi\left(\frac{\int_{\Omega_{\boldsymbol{\zeta}}}\mathbb{ C}^{M}\mathcal{E}(\boldsymbol{v}_{n_{1}}):\mathcal{E}(\boldsymbol{v}_{n_{1}}) \,\mathrm{d}x}{\int_{\Omega_{\boldsymbol{\zeta}}}\rho^{M}\left|\boldsymbol{v} _{n_{1}}\right|^{2}\,\mathrm{d}x},\ldots,\frac{\int_{\Omega_{\boldsymbol{ \zeta}}}\mathbb{C}^{M}\mathcal{E}(\boldsymbol{v}_{n_{l}}):\mathcal{E}( \boldsymbol{v}_{n_{l}})\,\mathrm{d}x}{\int_{\Omega_{\boldsymbol{\zeta}}} \rho^{M}\left|\boldsymbol{v}_{n_{l}}\right|^{2}\,\mathrm{d}x}\right)\] \[\qquad\qquad+\gamma\sigma_{MV}P(\Omega_{\boldsymbol{\zeta}})\, \mathrm{d}s.\] For the partial Frechet derivatives of the Lagrangian with respect to \(\boldsymbol{v}_{n_{r}}\) for \(r=1,\ldots,l\) at the point \((\Omega_{\boldsymbol{\zeta}},\boldsymbol{w}_{n_{1}}(\Omega_{\boldsymbol{\zeta }}),\ldots,\boldsymbol{w}_{n_{l}}(\Omega_{\boldsymbol{\zeta}}))\), we obtain \[\partial_{\boldsymbol{v}_{n_{r}}}\mathcal{L}\big{(}\Omega_{\boldsymbol{\zeta }},\boldsymbol{w}_{n_{1}}(\Omega_{\boldsymbol{\zeta}}),\ldots,\boldsymbol{w} _{n_{l}}(\Omega_{\boldsymbol{\zeta}})\big{)}=0. \tag{8.2}\] This is simply due to the fact, that the derivative of the Rayleigh quotient \[\mathcal{R}_{\boldsymbol{\zeta}}:H^{1}(\mathbb{R}^{d};\mathbb{R}^{d})\to \mathbb{R},\quad\boldsymbol{v}\mapsto\frac{\int_{\Omega_{\boldsymbol{\zeta}}} \mathbb{C}^{M}\mathcal{E}(\boldsymbol{v}):\mathcal{E}(\boldsymbol{v})\, \mathrm{d}x}{\int_{\Omega_{\boldsymbol{\zeta}}}\rho^{M}\left|\boldsymbol{v} \right|^{2}\,\mathrm{d}x},\] evaluated at an eigenfunction \(\boldsymbol{w}_{n}=\boldsymbol{w}_{n}(\Omega_{\boldsymbol{\zeta}})\) reads as \[\mathcal{R}^{\prime}_{\boldsymbol{\zeta}}(\boldsymbol{w}_{n}) \boldsymbol{v} =\frac{2\int_{\Omega_{\boldsymbol{\zeta}}}\mathbb{C}^{M}\mathcal{ E}(\boldsymbol{w}_{n}):\mathcal{E}(\boldsymbol{v})\,\mathrm{d}x\int_{\Omega_{ \boldsymbol{\zeta}}}\rho^{M}\left|\boldsymbol{w}_{n}\right|^{2}\,\mathrm{d}x}{ \left(\int_{\Omega_{\boldsymbol{\zeta}}}\rho^{M}\left|\boldsymbol{w}_{n}\right| ^{2}\,\mathrm{d}x\right)^{2}}\] \[\quad-\frac{2\int_{\Omega_{\boldsymbol{\zeta}}}\mathbb{C}^{M} \mathcal{E}(\boldsymbol{w}_{n}):\mathcal{E}(\boldsymbol{w}_{n})\,\mathrm{d}x \int_{\Omega_{\boldsymbol{\zeta}}}\rho^{M}\boldsymbol{w}_{n}\cdot\boldsymbol{v }\,\mathrm{d}x}{\left(\int_{\Omega_{\boldsymbol{\zeta}}}\rho^{M}\left| \boldsymbol{w}_{n}\right|^{2}\,\mathrm{d}x\right)^{2}}\] and this vanishes due to \((SE^{MV}_{r})\). On the other hand, recalling the definition of \(J\) in \((\mathcal{P}^{0}_{l})\), we obviously have \[J(\Omega_{\boldsymbol{\zeta}})=\mathcal{L}(\Omega_{\boldsymbol{\zeta}}, \boldsymbol{w}_{n_{1}}(\Omega_{\boldsymbol{\zeta}}),\ldots,\boldsymbol{w}_{n_ {l}}(\Omega_{\boldsymbol{\zeta}}))\] as the eigenvalues can be expressed by the corresponding Rayleigh quotients. Note that due to the differentiability of eigenfunctions as discussed in Remark 8.2, we can now apply the chain rule. 
Thus, using (8.2) we infer that the shape derivative is given by \[J^{\prime}(\Omega^{M}) =\frac{\mathrm{d}}{\mathrm{d}\boldsymbol{\zeta}}[J((\mathrm{Id}+ \boldsymbol{\zeta})(\Omega^{M}))]_{\boldsymbol{\zeta}=\boldsymbol{0}}\] \[=\frac{\mathrm{d}}{\mathrm{d}\boldsymbol{\zeta}}[\mathcal{L}(( \mathrm{Id}+\boldsymbol{\zeta})(\Omega^{M}),\boldsymbol{w}_{n_{1}}(\Omega^{M}),\ldots,\boldsymbol{w}_{n_{l}}(\Omega^{M}))]_{\boldsymbol{\zeta}=\boldsymbol{0}}\] Applying the formulas for shape derivatives in [7, Lemma 2.3], we deduce \[J^{\prime}(\Omega^{M})(\boldsymbol{\theta}) =\sum_{r=1}^{l}\Bigg{\{}[\partial_{\lambda_{n}}\Psi](\lambda_{n_{1} }(\Omega^{M}),\ldots,\lambda_{n_{l}}(\Omega^{M}))\] \[\qquad\qquad\cdot\Bigg{[}\int_{\partial\Omega^{M}}\mathbb{C}^{M} \mathcal{E}(\boldsymbol{w}_{n_{r}}(\Omega^{M})):\mathcal{E}(\boldsymbol{w}_{n_ {r}}(\Omega^{M}))\boldsymbol{\theta}\cdot\boldsymbol{n}_{\partial\Omega^{M}} \,\mathrm{d}\mathcal{H}^{d-1}\] \[\qquad\qquad-\lambda_{n_{r}}(\Omega^{M})\int_{\partial\Omega^{M} }\rho^{M}\big{|}\boldsymbol{w}_{n_{r}}(\Omega^{M})\big{|}^{2}\boldsymbol{\theta }\cdot\boldsymbol{n}_{\partial\Omega^{M}}\,\mathrm{d}\mathcal{H}^{d-1}\Bigg{]} \Bigg{\}}\] \[-\int_{\partial\Omega^{M}}\gamma\sigma_{MV}\,\kappa_{M}\, \boldsymbol{\theta}\cdot\boldsymbol{n}_{\partial\Omega^{M}}\,\mathrm{d} \mathcal{H}^{d-1},\] where \(\kappa_{M}\) denotes the mean curvature of \(\partial\Omega^{M}\). By the assumption \(\boldsymbol{\theta}\cdot\boldsymbol{n}_{\partial\Omega^{M}}=0\) on \(\partial\Omega^{M}\backslash\Gamma_{MV}\), the boundary integrals vanish on \(\partial\Omega^{M}\backslash\Gamma_{MV}\) and we thus arrive at (8.1). Note that in [7], the mean curvature is defined as \(\kappa=\nabla_{\partial\Omega^{M}}\cdot\boldsymbol{n}_{\partial\Omega^{M}}\), whereas (in accordance with (4.22)) our mean curvature is given by \(\kappa=-\nabla_{\partial\Omega^{M}}\cdot\boldsymbol{n}_{\partial\Omega^{M}}\). This explains the negative sign of our term involving \(\kappa_{M}\). **Remark 8.3**.: The preceding theorem shows that using the approach of classical shape calculus and additionally taking the volume constraint \(\big{|}\Omega^{M}\big{|}=m\) into account, we recover the gradient equality (7.7) since the volume constraint produces a Lagrange multiplier as in our previous analysis. This justifies our formal approach from the viewpoint of classical shape calculus since (7.7) can be interpreted as the first-order necessary optimality condition of the shape optimization problem \((\mathcal{P}^{0}_{l})\). ## 9. Numerical Examples In the following, we present numerical results that illustrate the applicability of our approach to find optimal topologies. After a brief introduction of the numerical method, we investigate the dependence of solutions on the parameter \(\varepsilon\) in Section 9.1. Therefore, we study a particular setting of an elastic beam that is known from literature (cf. [7]). In Section 9.2, we consider a joint optimization of \(\lambda_{1}\) and \(\lambda_{2}\) for this beam setup, and in Section 9.3, we investigate an extended optimization problem to not only optimize the shape and topology of this beam with respect to its first eigenvalue but also its compliance. As in Subsection 7.3 and Section 8, we restrict ourselves to the case of only two phases, i.e., material and void. 
In this situation, the vector-valued phase-field \(\boldsymbol{\varphi}=(\varphi^{1},\varphi^{2})\) can be represented by a scalar order parameter \[\varphi:=\varphi^{1}-\varphi^{2}\in H^{1}(\Omega)\cap L^{\infty}( \Omega).\] This means that \(\varphi\) attains its values in \([-1,1]\), where "\(1\)" represents the material and "\(-1\)" represents the void. The elastic tensor \(\mathbb{C}(\varphi)\) now is defined as \[\mathbb{C}(\varphi)\mathcal{E}(w):=\alpha(\varphi)\big{(}2\mu\, \mathcal{E}(w)+\ell\operatorname{tr}\big{(}\mathcal{E}(w)\big{)}\,\mathcal{I }\big{)} \tag{9.1}\] for Lame parameters \(\mu,\ell>0\) and the quadratic interpolation function \(\alpha(\varphi)\) satisfying \(\alpha(1)=1\), \(\alpha(-1)=\underline{\alpha}\varepsilon^{2}\), and \(\alpha^{\prime}(-1)=0\) for some constant \(\underline{\alpha}\). The eigenvalue equation is given by \[-\nabla\cdot[\mathbb{C}(\varphi)\mathcal{E}(w)]=\lambda\,\beta( \varphi)\rho\,w, \tag{9.2}\] with the quadratic interpolation function \(\beta(\varphi)\) satisfying \(\beta(+1)=1\), \(\beta(-1)=\underline{\beta}\varepsilon^{2}\) and \(\beta^{\prime}(-1)=0\) as well as an additional density function \(\rho\) that might depend on the spatial variable. If not stated differently, we use \(\underline{\alpha}=10^{-2}\) and \(\underline{\beta}=10^{-4}\). **Remark 9.1**.: Let us recall the discussion about spurious eigenmodes in Section 4.2 which motivates the choice of the model in this numerical section. A slight difference compared to the setting proposed in Claim 4.3 is the scaling of the void components \(\tilde{\mathbb{C}}^{N}\) and \(\hat{\rho}^{N}\), which are now represented by \(\alpha(-1)\) and \(\beta(-1)\), respectively. Here, the relatively lower scaling of \(\alpha\) versus \(\beta\) in void regions is guaranteed by the prefactors \(\underline{\alpha}\) and \(\underline{\beta}\). For the concrete choice \(\varepsilon=10^{-2}\) (that is used in most of the computations below), one could absorb one \(\varepsilon\) into \(\underline{\alpha}\). The redefined parameters would then read as \[\underline{\alpha}=\underline{\beta}=10^{-4}\quad\text{and}\quad\alpha(-1)= \underline{\alpha}\varepsilon,\quad\beta(-1)=\underline{\beta}\varepsilon^{2},\] which fits into the setting of Section 4.2. Numerical Solution Method.The numerical implementation is based on linear finite elements for all functions provided by the finite element package FEniCs [9, 57] together with the PETSc linear algebra backend [12, 13]. For the eigenvalue problem, we use the package SLEPc [51]. The optimization problem is solved by the VMPT method that is proposed in [24]. In our case, it can be understood as an extension of the projected gradient method into the space \(H^{1}(\Omega)\cap L^{\infty}(\Omega)\). We refer to [24, 43, 44] for more details. ### Numerical investigation of the sharp-interface limit \(\varepsilon\to 0\) In this section, to illustrate the sharp-interface limit, we present numerical results for a sequence of decreasing values of \(\varepsilon\). We use the setup from [7, Sec. 7.1] to find a cantilever beam with maximal first eigenvalue, i.e., we choose \(\Psi(\lambda_{1})=-\lambda_{1}\). Our computational domain is given by \(\Omega=(0,2)\times(0,1)\). The Young's modulus is \(E=1\) and Poisson's ratio is \(\nu=0.3\) leading to \(\mu\approx 0.38\) and \(\ell\approx 0.58\). We define the subset \(\Omega_{\rho}=(1.9,2.0)\times(0.45,0.55)\) and set \(\rho(x)=1\) if \(x\not\in\Omega_{\rho}\) and \(\rho(x)=100\) if \(x\in\Omega_{\rho}\). 
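As a small illustration of the material parameters and interpolation functions used in this setup (a sketch added for concreteness; the quadratic ansatz below is only one possible choice satisfying the stated conditions \(\alpha(1)=1\), \(\alpha(-1)=\underline{\alpha}\varepsilon^{2}\), \(\alpha^{\prime}(-1)=0\), and analogously for \(\beta\) — the actual implementation may differ), the following reproduces the values \(\mu\approx 0.38\) and \(\ell\approx 0.58\) and checks the endpoint conditions:

```python
import numpy as np

# Lamé parameters from Young's modulus E = 1 and Poisson's ratio nu = 0.3
E, nu = 1.0, 0.3
mu = E / (2.0 * (1.0 + nu))                     # ~ 0.38
ell = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))  # ~ 0.58
print(f"mu = {mu:.4f}, ell = {ell:.4f}")

# One possible quadratic interpolation with q(1) = 1, q(-1) = lower, q'(-1) = 0
# (ansatz a*(s+1)^2 + lower; the paper only prescribes these three conditions).
def quad_interp(s, lower):
    return (1.0 - lower) / 4.0 * (s + 1.0) ** 2 + lower

eps = 1e-2
alpha_min, beta_min = 1e-2 * eps**2, 1e-4 * eps**2   # underline{alpha}, underline{beta} scalings
for s in (-1.0, 1.0):
    print(s, quad_interp(s, alpha_min), quad_interp(s, beta_min))
```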
We also fix \(\varphi(x)=1\) for all \(x\in\Omega_{\rho}\). The beam is supposed to be attached to the wall at the left boundary of \(\Omega\), i.e., at \(\Gamma_{D}=\{(0,\eta)\mid\eta\in(0,1)\}\subset\partial\Omega\). This leads to the boundary condition \(w=0\) on \(\Gamma_{D}\). We further set \(\Gamma_{0}=\partial\Omega\setminus\Gamma_{D}\) and we fix \(\gamma=10^{-4}\) and \(\int_{\Omega}\varphi=0\). Similarly to [7, Sec. 7.1], we start our optimization process with a checkerboard type initial function given by \(\varphi_{0}(x)=\operatorname{sign}\big{(}v(x)\big{)}\left|v(x)\right|^{0.3}\) with \(v(x)=\cos(3\pi x_{1})\cos(4\pi x_{2})\) for all \(x\in\Omega\). We want to emphasize that this problem is expected to have many local minima and thus, the choice of initial function can significantly influence the shape and topology of the local minimizer found by our numerical method. We now solve the optimization problem for a decreasing sequence of values of \(\varepsilon\). In Table 1, we present the values of \(\varepsilon\) together with the corresponding value of the Ginzburg-Landau energy \(E^{\varepsilon}(\varphi)=\int_{\Omega}\frac{\varepsilon}{2}|\nabla\varphi|^{2}+\frac{1}{2\varepsilon}(1-\varphi^{2})\,\mathrm{d}x\) (cf. (2.4) with \(\psi_{0}(s)=\frac{1}{2}(1-s^{2})\)) and the eigenvalue \(\lambda_{1}\). Recall here that the values of the Ginzburg-Landau energy converge to a weighted perimeter of the shape in the sharp interface limit \(\varepsilon\to 0\). In Figure 1, we present the zero level lines of the (locally optimal) shapes we obtain for different values of \(\varepsilon\). Here we started with \(\varepsilon=0.08\) and used the local optimum as initial value for subsequent simulations.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \(\varepsilon\) & \(80\cdot 10^{-3}\) & \(40\cdot 10^{-3}\) & \(20\cdot 10^{-3}\) & \(10\cdot 10^{-3}\) & \(5\cdot 10^{-3}\) & \(2.5\cdot 10^{-3}\) & \(1.25\cdot 10^{-3}\) \\ \hline \(\gamma E^{\varepsilon}(\varphi)\) & \(0.00119\) & \(0.00120\) & \(0.00117\) & \(0.00115\) & \(0.00114\) & \(0.00114\) & \(0.00114\) \\ \(\lambda_{1}\) & \(0.01577\) & \(0.01626\) & \(0.01658\) & \(0.01678\) & \(0.01692\) & \(0.01699\) & \(0.01703\) \\ \end{tabular} \end{table} Table 1: Scaled Ginzburg–Landau energy \(\gamma E^{\varepsilon}(\varphi)\) and principal eigenvalue \(\lambda_{1}\) of the optimal beam shape for decreasing values of \(\varepsilon\). This indicates that the values \(E^{\varepsilon}(\varphi)\) and \(\lambda_{1}\) converge as \(\varepsilon\) decreases.

### Optimization of a beam As a first test, we illustrate the influence of the regularization strength \(\gamma\) on the found structure. The parameter \(\gamma\) acts as a weight for the penalization of the length of the interface between void and material. Thus a smaller value of \(\gamma\) is expected to lead to thinner structures which contain more braces. Using the same setup as before, we solve again the optimization problem for the cantilever beam, but this time we fix \(\varepsilon=0.02\). We perform two simulations with \(\gamma\in\{10^{-4},10^{-5}\}\). The smaller \(\gamma\) is chosen, the finer the structures we expect to obtain. We also expect that we reach a larger value for \(\lambda_{1}\), because less regularization is used. In Figure 2, we present the found structures for these parameters. On the left we present the result for \(\gamma=10^{-4}\) and on the right for \(\gamma=10^{-5}\).
As expected, it is clearly visible that the structure obtained for the smaller value of \(\gamma\) is finer and contains more braces. Additionally, decreasing \(\gamma\) also leads to sharper corners. In a second test for the beam setup, we compare the numerical results for different choices of \(\Psi(\lambda_{1},\lambda_{2})\) as a linear combination of \(\lambda_{1}\) and \(\lambda_{2}\). We set \(\gamma=10^{-4}\) and use the solution shown in Figure 2 as the initialization of the optimization method. In Figure 3, we present numerical results for this setting with the choice \(\Psi(\lambda_{1},\lambda_{2})=-\lambda_{1}-\alpha\lambda_{2}\) for \(\alpha\in\{10^{-2},2\cdot 10^{-2},6\cdot 10^{-2},10^{-1}\}\). Moreover, in Table 2 we list the corresponding values of \(\lambda_{1}\) and \(\lambda_{2}\). Here, \(\alpha=0\) corresponds to the result shown in Figure 2 on the left.

Figure 1: The zero level lines of the beam for the \(\varepsilon\to 0\) test for all tested \(\varepsilon\). The darker the line is, the smaller is \(\varepsilon\). We observe that the interface seems to stabilize with decreasing values of \(\varepsilon\) and that it only mildly depends on \(\varepsilon\).

### Joint optimization of compliance and principal eigenvalue In this subsection, we extend the problem by using a linear combination of compliance and the first eigenvalue as objective. The compliance problem is to find a displacement field \(\mathbf{u_{c}}\in H^{1}(\Omega;\mathbb{R}^{d})\) satisfying \[-\nabla\cdot(\mathbb{C}(\varphi)\mathcal{E}(\mathbf{u_{c}}))=\mathbf{0}\quad\text{in }\Omega, \tag{9.3}\] \[\mathbf{u_{c}}=\mathbf{0}\quad\text{on }\Gamma_{D}\subset\partial\Omega,\] \[\left[\mathbb{C}(\varphi)\mathcal{E}(\mathbf{u_{c}})\right]\cdot\mathbf{n}=\mathbf{g}\quad\text{on }\Gamma_{g}\subset\partial\Omega,\] \[\left[\mathbb{C}(\varphi)\mathcal{E}(\mathbf{u_{c}})\right]\cdot\mathbf{n}=\mathbf{0}\quad\text{on }\Gamma_{0}\subset\partial\Omega,\] which minimizes the objective \(\int_{\Gamma_{g}}\mathbf{g}\cdot\mathbf{u_{c}}\). Combining this with our eigenvalue optimization problem for \(\Psi(\lambda_{1})=-\alpha\lambda_{1}\) for some \(\alpha>0\), we arrive at \[\left\{\begin{aligned} \min\ &J(\mathbf{u_{c}},\lambda_{1})=-\alpha\lambda_{1}+\int_{\Gamma_{g}}\mathbf{g}\cdot\mathbf{u_{c}}+\gamma E^{\varepsilon}(\varphi)\\ \text{s.t. }&\mathbf{u_{c}}\text{ solves the compliance equation (9.3)},\\ &\lambda_{1}\text{ is the first eigenvalue of the eigenvalue equation (9.2)}.\end{aligned}\right.\]
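To indicate how the two state problems enter this combined objective, here is a schematic sketch (our own 1D stand-in, not the FEniCS-based implementation used in this section; the load vector and the value of the Ginzburg–Landau term are placeholders): it solves a compliance problem and an eigenvalue problem on the same stiffness matrix and evaluates a discrete version of \(J\).

```python
import numpy as np
from scipy.linalg import eigh, solve

# Schematic 1D discretization: rod fixed on the left, loaded on the right end.
n, h = 50, 1.0 / 50
K = np.zeros((n, n))
for i in range(n - 1):
    K[i:i+2, i:i+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
M = np.eye(n) * h
K, M = K[1:, 1:], M[1:, 1:]          # Dirichlet condition at the left node

# Compliance state: K u = g with a unit end load (placeholder for the traction g).
g = np.zeros(n - 1); g[-1] = 1.0
u = solve(K, g)
compliance = g @ u                    # discrete analogue of the boundary functional

# Eigenvalue state: smallest eigenvalue of K w = lambda M w.
lam1 = eigh(K, M, eigvals_only=True)[0]

# Combined objective J = -alpha*lambda_1 + compliance + gamma*E_eps(phi);
# the Ginzburg-Landau energy is a placeholder value here.
alpha_w, gamma, E_eps = 1.0, 1e-4, 0.0
J = -alpha_w * lam1 + compliance + gamma * E_eps
print(f"lambda_1 = {lam1:.4f}, compliance = {compliance:.4f}, J = {J:.4f}")
```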
## Acknowledgment Harald Garcke, Paul Huttl and Patrik Knopf were partially supported by the RTG 2339 "Interfaces, Complex Structures, and Singular Limits" of the German Science Foundation (DFG). The support is gratefully acknowledged.
2307.04556
Hairy Kiselev Black Hole Solutions
In the realm of astrophysics, black holes exist within nonvacuum cosmological backgrounds, making it crucial to investigate how these backgrounds influence the properties of black holes. In this work, we first introduce a novel static spherically-symmetric exact solution of Einstein field equations representing a surrounded hairy black hole. This solution represents a generalization of the hairy Schwarzschild solution recently derived using the extended gravitational decoupling method. Then, we discuss how the new induced modification terms attributed to the primary hairs and various background fields affect the geodesic motion in comparison to the conventional Schwarzschild case. Although these modifications may appear insignificant in most cases, we identify specific conditions where they can be comparable to the Schwarzschild case for some particular background fields.
Yaghoub Heydarzade, Maxim Misyura, Vitalii Vertogradov
2023-07-10T13:43:42Z
http://arxiv.org/abs/2307.04556v2
# **Hairy Kiselev Black Hole Solutions** ###### Abstract In the realm of astrophysics, black holes exist within non-vacuum cosmological backgrounds, making it crucial to investigate how these backgrounds influence the properties of black holes. In this work, we first introduce a novel static spherically-symmetric exact solution of Einstein field equations representing a surrounded hairy black hole. This solution represents a generalization of the hairy Schwarzschild solution recently derived using the extended gravitational decoupling method. Then, we discuss how the new induced modification terms attributed to the primary hairs and various background fields affect the geodesic motion in comparison to the conventional Schwarzschild case. Although these modifications may appear insignificant in most cases, we identify specific conditions where they can be comparable to the Schwarzschild case for some particular background fields. **Keywords:** Gravitational decoupling, Kiselev black hole, hairy black hole, cosmological fields, geodesics ## 1 Introduction In 2019 the Event Horizon Telescope Collaboration unveiled the very first image of a black hole located at the center of the massive elliptical galaxy M87 [1, 2, 3]. More recently, scientists have successfully observed the shadow of the supermassive black hole located in the center of our own galaxy [4]. These direct observations provide compelling evidence that black holes are not merely abstract mathematical solutions of the Einstein field equations but real astrophysical objects. Black holes possess a range of miraculous properties. For instance, they allow for the extraction of energy from their rotation and electric fields [5, 6, 7, 8]. In the vicinity of the black hole's event horizon, particles can possess negative energy [5, 7, 9, 10, 11, 12], and black holes can even function as particle accelerators [14, 15, 16, 17, 18, 19, 20]. In the realm of astrophysics, black holes are not isolated objects and they inhabit non-vacuum backgrounds. Some research has focused on investigating the direct local effects of cosmic backgrounds on the known black hole solutions. For instance, Babichev et al. [13] have shown that in a universe whose expansion is driven by a phantom scalar field, the mass of a black hole decreases as a result of the accretion of particles of the phantom field into the central black hole. However, one notes that this is a global impact. To explore the local changes in the spacetime geometry near the central black hole, one should consider a modified metric that incorporates the surrounding spacetime. In this context, an analytical static spherically symmetric solution to the Einstein field equations has been presented by Kiselev [21]. This solution generalizes the usual Schwarzschild black hole to a non-vacuum background, and is characterized by an effective equation of state parameter of the surrounding field of the black hole. Hence, it can encompass a wide range of possibilities including quintessence, cosmological constant, radiation and dust-like fields. Several properties of the Kiselev black hole have been extensively investigated in the literature [85-90]. Later, this solution has been generalized to the dynamical Vaidya type solutions [22, 23, 24]. Such generalizations are well justified due to the non-isolated nature of real-world black holes and their existence in non-vacuum backgrounds.
Black hole solutions coupled to matter fields, such as the Kiselev solution, are particularly relevant for the study of astrophysical black holes with distortions [25, 26, 27, 28]. They also play a significant role in investigating the no-hair theorem [29, 30, 31, 32]. This theorem states that a black hole can be described by only three charges (i.e., mass \(M\), electric charge \(Q\) and angular momentum \(a\)), and it relies on a crucial assumption that the black hole is isolated, meaning that the spacetime is asymptotically flat and free from other sources. However, real-world astrophysical situations do not meet this assumption. For instance, one may refer to black holes in binary systems, black holes surrounded by plasma, or those accompanied by accretion disks or jets in their vicinity. Such situations imply that a black hole may put on different types of wigs, and hence the applicability of the standard no-hair theorem for isolated black holes to these cases becomes questionable [30, 31, 32, 33, 34]. Recently, the minimal geometrical deformations [35, 36, 37] and the extended gravitational decoupling methods [38, 39, 40] have been utilized to derive new solutions from the known seed solutions of the Einstein field equations. These techniques have been particularly effective in investigating the violation of the no-hair theorem, the emergence of novel types of hairy black holes, and the exploration of alternative theories of gravity. Using the extended gravitational decoupling method, Ovalle et al. [41] have introduced a generalization of the Schwarzschild black hole which is surrounded by an anisotropic fluid and possesses primary hairs. This new solution has motivated substantial further research generalizing it to hairy Kerr [42], Vaidya and generalized Vaidya [43], regular hairy black holes [44, 45] and many others. Indeed, the gravitational decoupling method represents a novel and powerful tool for obtaining new solutions to the Einstein equations. In the present work, we introduce a novel class of exact solutions to the Einstein field equations, which describe a surrounded hairy Schwarzschild black hole. This solution serves as a generalization of the previously obtained hairy Schwarzschild solution using the extended gravitational decoupling method. Then, in order to analyze the properties of the solution, we investigate the effect of the new modification terms, attributed to the primary hairs and various surrounding fields, on the timelike geodesic motion. Specifically, we compare the effects of the modification terms to the conventional Schwarzschild case. While these modifications may seem negligible in most scenarios, we identify specific situations where they can be comparable to the Schwarzschild case, particularly when specific surrounding fields are present. This analysis sheds light on the significance of these modifications in certain situations, providing insights into the behavior of geodesic motion around real astrophysical black holes. The structure of the present paper is as follows. In Section 2, we briefly discuss the hairy Schwarzschild solution by the minimal geometrical deformations and the extended gravitational decoupling method. In Section 3, we solve the Einstein field equations in order to obtain the surrounded hairy Schwarzschild black hole. In Section 4, we analyze the timelike geodesic motion. In Section 5, we summarize the new findings and implications of the study. The system of units \(c=G=1\) will be used throughout the paper.
## 2 Gravitational decoupling and hairy Schwarzschild black hole The gravitational decoupling method states that one can solve the Einstein field equations with the matter source \[\tilde{T}_{ik}=T_{ik}+\Theta_{ik}\,, \tag{1}\] where \(T_{ik}\) represents the energy-momentum tensor of a system for which the Einstein field equations are \[G_{ik}=8\pi T_{ik}\,. \tag{2}\] The solution of the equations (2) is supposed to be known and represents the seed solution. Then \(\Theta_{ik}\) represents an extra matter source which causes additional geometrical deformations. The Einstein equations for this new matter source are \[\bar{G}_{ik}=\alpha\Theta_{ik}\,, \tag{3}\] where \(\alpha\) is a coupling constant and \(\bar{G}_{ik}\) is the Einstein tensor of the deformed metric only. The gravitational decoupling method states that despite the non-linear nature of the Einstein equations, a straightforward superposition of these two solutions (2) and (3), \[\tilde{G}_{ik}\equiv G_{ik}+\bar{G}_{ik}=8\pi T_{ik}+\alpha\Theta_{ik}\equiv \tilde{T}_{ik}\,, \tag{4}\] is also a solution of the Einstein field equations. Now, we briefly describe this method. Let us consider the Einstein field equations \[G_{ik}=R_{ik}-\frac{1}{2}g_{ik}R=8\pi T_{ik}\,. \tag{5}\] Let the solution of (5) be a static spherically-symmetric spacetime of the form \[ds^{2}=-e^{\nu(r)}dt^{2}+e^{\lambda(r)}dr^{2}+r^{2}d\Omega^{2}\,. \tag{6}\] Here \(d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\varphi^{2}\) is the metric on the unit two-sphere, \(\nu(r)\) and \(\lambda(r)\) are functions of the \(r\) coordinate only and they are supposed to be known. The metric (6) is termed the seed metric. Now, we seek the geometrical deformation of (6) by introducing two new functions \(\xi=\xi(r)\) and \(\eta=\eta(r)\) by: \[e^{\nu(r)} \to e^{\nu(r)+\alpha\xi(r)},\] \[e^{\lambda(r)} \to e^{\lambda(r)}+\alpha\eta(r). \tag{7}\] Here \(\alpha\) is a coupling constant. Functions \(\xi\) and \(\eta\) are associated with the geometrical deformations of \(g_{00}\) and \(g_{11}\) of the metric (6), respectively. These deformations are caused by the new matter source \(\Theta_{ik}\). If one puts \(\xi(r)\equiv 0\), then only the \(g_{11}\) component is deformed, leaving \(g_{00}\) unperturbed; this is known as the minimal geometrical deformation. It has some drawbacks, for example, if one considers the existence of a stable black hole possessing a well-defined event horizon [37]. Deforming both the \(g_{00}\) and \(g_{11}\) components is the arena of the extended gravitational decoupling. One should note that this extended decoupling works only for the vacuum seed solutions of the Einstein equations and fails for the region where we have the matter source due to the violation of the Bianchi identities, except for several cases. For example, if one opts for deformations of the Vaidya solution then the extended gravitational decoupling still works, but it fails for the generalized Vaidya due to the presence of an energy exchange between the two matter sources [43]. Substituting (7) into (6), one obtains \[ds^{2}=-e^{\nu+\alpha\xi}dt^{2}+\left(e^{\lambda}+\alpha\eta\right)dr^{2}+r^{2}d\Omega^{2}\,.
\tag{8}\] The Einstein equations for (8) as \[\tilde{G}_{ik}=8\pi\tilde{T}_{ik}=8\pi(T_{ik}+\Theta_{ik})\,, \tag{9}\] give \[8\pi(T_{0}^{0}+\Theta_{0}^{0}) =-\frac{1}{r^{2}}+e^{-\beta}\left(\frac{1}{r^{2}}-\frac{\beta^{ \prime}}{r}\right),\] \[8\pi(T_{1}^{1}+\Theta_{1}^{1}) =-\frac{1}{r^{2}}+e^{-\beta}\left(\frac{1}{r^{2}}+\frac{\nu^{ \prime}+\alpha\xi^{\prime}}{r}\right),\] \[8\pi(T_{2}^{2}+\Theta_{2}^{2}) =\frac{1}{4}e^{-\beta}\left(2(\nu^{\prime\prime}+\alpha\xi^{ \prime\prime})+(\nu^{\prime}+\alpha\xi^{\prime})^{2}-\beta^{\prime}(\nu^{ \prime}+\alpha\xi^{\prime})+2\frac{\nu^{\prime}+\alpha\xi^{\prime}-\beta^{ \prime}}{r}\right),\] \[e^{\beta}\equiv e^{\lambda}+\alpha\eta. \tag{10}\] Here the prime sign denotes the partial derivative with respect to the radial coordinate \(r\), and we have \(8\pi\left(T_{2}^{2}+\Theta_{2}^{2}\right)=8\pi\left(T_{3}^{3}+\Theta_{3}^{3}\right)\) due to the spherical symmetry. From (10) one can define the effective energy density \(\tilde{\rho}\), effective radial and tangential \(\tilde{P}_{r}\), \(\tilde{P}_{t}\) pressures as \[\tilde{\rho}=-(T_{0}^{0}+\Theta_{0}^{0}),\] \[\tilde{P}_{r}=T_{1}^{1}+\Theta_{1}^{1},\] \[\tilde{P}_{t}=T_{2}^{2}+\Theta_{2}^{2}. \tag{11}\] From (11) one can introduce the anisotropy parameter \(\Pi\) as \[\Pi=\tilde{P}_{t}-\tilde{P}_{r}\,, \tag{12}\] where if \(\Pi\neq 0\) then it indicates the anisotropic behaviour of fluid \(\tilde{T}_{ik}\). The equations (10) can be decoupled into two parts1: the Einstein equations corresponding to the seed solution (6) and the one corresponding to the geometrical deformations. If we consider the vacuum solution i.e. \(T_{ik}\equiv 0\) - Schwarzschild solution then, by solving the Einstein field equations which correspond the geometrical deformations, one obtains the hairy Schwarzschild solution [41] Footnote 1: One should remember that it always works for \(T_{ik}\equiv 0\) i.e. the vacuum solution and for special cases of \(T_{ik}\) if one opts for Bianchi identities \(\nabla_{i}T^{ik}=\nabla_{i}\Theta^{ik}=0\) with respect to the metric (8) otherwise there is an energy exchange i.e. \(\nabla_{i}\tilde{T}^{ik}=0\Rightarrow\nabla_{i}T^{ik}=-\nabla_{i}\Theta^{ik}\neq 0\). \[ds^{2}=-\left(1-\frac{2M}{r}+\alpha e^{-\frac{r}{M-\frac{\alpha l }{2}}}\right)dt^{2}+\left(1-\frac{2M}{r}+\alpha e^{-\frac{r}{M-\frac{\alpha l }{2}}}\right)^{-1}dr^{2}+r^{2}d\Omega^{2}\,, \tag{13}\] where \(\alpha\) is the coupling constant, \(l\) is a new parameter with length dimension and associated with a primary hair of a black hole. Here \(M\) is the mass of the black hole in relation with the Schwarzschild mass \(\mathcal{M}\) as \[M=\mathcal{M}+\frac{\alpha l}{2}\,. \tag{14}\] The impact of \(\alpha\) and \(l\) on the geodesic motion, gravitational lensing, energy extraction and the thermodynamics has been studied in Refs. [52, 53, 54, 55, 56]. ## 3 Surrounded hairy Schwarzschild black hole Recently, the hairy Schwarzschild black hole has been introduced in [41] by using the gravitational decoupling method. This solution in the Eddington-Finkelstein coordinates takes the form \[ds^{2}=-\left(1-\frac{2M}{r}+\alpha e^{-\frac{r}{M-\frac{\alpha}{2}}}\right)dv^{ 2}+2\varepsilon dvdr+r^{2}d\Omega^{2}\,. \tag{15}\] Here \(v\) is the advanced (\(\varepsilon=+1\)) or retarded (\(\varepsilon=-1\)) Eddington time. 
In this section, using the approach in [21, 22, 57], we obtain the generalization of this solution representing a hairy Schwarzschild solution surrounded by some particular fields motivated by cosmology, as in the following theorem. **Theorem:**_Considering the extended gravitational decoupling [39] and the principle of additivity and linearity in the energy-momentum tensor [21], which allows one to get correct limits to the known solutions, the Einstein field equations admit the following solution in the Eddington-Finkelstein coordinates_ \[ds^{2}=-\left(1-\frac{2M}{r}-\frac{N}{r^{3\omega+1}}+\alpha e^{-\frac{2r}{2M-\alpha l}}\right)dv^{2}+2\varepsilon dvdr+r^{2}d\Omega^{2}\,, \tag{16}\] _where \(M=\mathcal{M}+\frac{\alpha l}{2}\), in which \(\mathcal{M}\) and \(N\) are integration constants. The metric represents a surrounded hairy Schwarzschild solution or, equivalently, a hairy Kiselev solution._ We summarize our proof as follows. Let us consider the general spherically-symmetric spacetime in the form \[ds^{2}=-f(r)dv^{2}+2\varepsilon dvdr+r^{2}d\Omega^{2}\,. \tag{17}\] The Einstein tensor components for the metric (17) are given by \[G_{0}^{0}=G_{1}^{1}=\frac{1}{r^{2}}\left(f^{\prime}r-1+f\right)\,,\] \[G_{2}^{2}=G_{3}^{3}=\frac{1}{r^{2}}\left(rf^{\prime}+\frac{1}{2}r^{2}f^{\prime\prime}\right)\,, \tag{18}\] where the prime sign represents the derivative with respect to the radial coordinate \(r\). The total energy-momentum tensor should be a combination of \(\Theta_{ik}\), associated to the geometrical deformations, and \(T_{ik}\), associated to the surrounding fluid, as \[\tilde{T}_{ik}=\alpha\Theta_{ik}+T_{ik}\,. \tag{19}\] One should note that here we don't demand the fulfilment of the conditions \(\Theta_{;k}^{ik}=T_{;k}^{ik}=0\). Instead, we demand that \(\tilde{T}_{;k}^{ik}=0\), which follows from the Bianchi identity. The total energy-momentum tensor \(\tilde{T}_{ik}\) has the same symmetries as the Einstein tensor (18) for (17), i.e. \(\tilde{T}_{0}^{0}=\tilde{T}_{1}^{1}\) and \(\tilde{T}_{2}^{2}=\tilde{T}_{3}^{3}\). An appropriate general expression for the energy-momentum tensor \(T_{ik}\) of the surrounding fluid can be [21] \[T_{0}^{0}=-\rho(r)\,,\] \[T_{k}^{i}=-\rho(r)\left[-\xi\left(1+3\zeta\right)\frac{r^{i}r_{k}}{r^{n}r_{n}}+\zeta\delta_{k}^{i}\right]\,. \tag{20}\] From the form of the energy-momentum tensor (20), one can see that the spatial profile is proportional to the time component, describing the energy density \(\rho\) with arbitrary constants \(\xi\) and \(\zeta\) depending on the internal structure of the surrounding fields. The isotropic averaging over the angles results in \[<T_{k}^{i}>=\frac{\xi}{3}\rho\delta_{k}^{i}=P\delta_{k}^{i}\,, \tag{21}\] since we considered \(<r^{i}r_{k}>=\frac{1}{3}\delta^{i}{}_{k}r_{n}r^{n}\). Then, we have a barotropic equation of state for the surrounding fluid as \[P(r)=\omega\rho(r)\,,\quad\omega=\frac{\xi}{3}\,, \tag{22}\] where \(P(r)\) and \(\omega\) are the pressure and the constant equation-of-state parameter of the surrounding field, respectively. Here, one notes that the source \(T_{ik}\) associated to the surrounding fluid should possess the same symmetries as \(\tilde{T}_{ik}\), because \(\Theta_{ik}\), associated to the geometrical deformations, has the same symmetries as 2 Footnote 2: One should note that the hairy Schwarzschild solution is supported by an anisotropic fluid \(\Theta_{k}^{i}\) \[\Theta_{0}^{0}=-\bar{\rho}\,,\Theta_{1}^{1}=\bar{P}_{r}\,,\Theta_{2}^{2}=\Theta_{3}^{3}=\bar{P}_{t}\,.
\tag{23}\] Where the non-vanishing parameter \(\Pi=\bar{P}_{t}-\bar{P}_{r}\) indicates on the anisotropic nature of the energy momentum tensor. So, in order to satisfy the condition \(\Theta_{0}^{0}=\Theta_{1}^{1}\) the anisotropic fluid should be satisfied with the equation of the state \(P_{r}=-\bar{\rho}\). \[\Theta_{0}^{0}=\Theta_{1}^{1}=-\bar{\rho},\] \[\Theta_{2}^{2}=\Theta_{3}^{3}=\bar{P}_{t}. \tag{24}\] It means that \(T_{0}^{0}=T_{1}^{1}\) and \(T_{2}^{2}=T_{3}^{3}\). These exactly provide the so-called principle of additivity and linearity considered in [21] in order to determine the free parameter \(\zeta\) of the energy-momentum tensor \(T_{ik}\) of surrounding fluid as \[\zeta=-\frac{1+3\omega}{6\omega}\,. \tag{25}\] Now, substituting (22) and (25) into (20), the non-vanishing components of the surrounding energy-momentum tensor \(T_{ik}\) become \[T_{0}^{0}=T_{1}^{1}=-\rho,\] \[T_{2}^{2}=T_{3}^{3}=\frac{1}{2}\left(1+3\omega\right)\rho\,. \tag{26}\] Now, we know the Einstein tensor components (18) and the total energy-momentum tensor (19). Putting all these equations together, the \(G_{0}^{0}=\tilde{T}_{0}^{0}\) and \(G_{1}^{1}=\tilde{T}_{1}^{1}\) give us the following equation \[\frac{1}{r^{2}}\left(f^{\prime}r-1+f\right)=-\rho-\alpha\bar{\rho}\,. \tag{27}\] Similarly, the \(G_{2}^{2}=\tilde{T}_{2}^{2}\) and \(G_{3}^{3}=\tilde{T}_{3}^{3}\) components yields \[\frac{1}{r^{2}}\left(rf^{\prime}+\frac{1}{2}f^{\prime\prime}r^{2}\right)= \frac{1}{2}\left(1+3\omega\right)\rho+\bar{P}\,. \tag{28}\] Thus, there are four unknown functions \(f(r)\), \(\rho(r)\), \(\bar{\rho}(r)\) and \(\bar{P}\) that can be determined analytically by the differential equations (27) and (28) with the following ansatz \[f(r)=g(r)-\frac{\alpha l}{r}+\alpha e^{-\frac{2r}{2M-\alpha l}}\,. \tag{29}\] Then, by substituting (29) into (27) and (28) and using (24) one obtains the following system of linear differential equations 3 Footnote 3: Here we apply the Einstein equation \(\hat{G}^{i}_{k}=\alpha\Theta^{i}_{k}\) to eliminate \(\tilde{\rho}\) and \(\tilde{P}\). \(\hat{G}^{i}_{k}\) is the Einstein tensor for the spacetime \[ds^{2}=-\left(1-\frac{\alpha l}{r}+\alpha e^{-\frac{2r}{2M-\alpha l}}\right) dv^{2}+2\varepsilon dvdr+r^{2}d\Omega^{2}.\] for unknowns \(\rho(r)\) and \(g(r)\) \[\frac{1}{r^{2}}\left(g^{\prime}r-1+g\right)=-\rho,\] \[\frac{1}{r^{2}}\left(rg^{\prime}+\frac{1}{2}g^{\prime\prime}r^{2} \right)=\frac{1}{2}\left(1+3\omega\right)\rho\,. \tag{30}\] This second order linear system can be integrated to give the metric function \(g(r)\) as \[g(r)=1-\frac{2\mathcal{M}}{r}-\frac{N}{r^{3\omega+1}}\,, \tag{31}\] and the energy density \(\rho(r)\) of the surrounding field as \[\rho(r)=-\frac{3\omega N}{r^{3(\omega+1)}}\,. \tag{32}\] Here \(\mathcal{M}\) and \(N\) are constants of integration representing the Schwarzschild mass and the surrounding field structure parameter, respectively. By putting all these solutions together, we arrive at the _surrounded hairy Schwarzschild solution_ or equivalently _hairy Kiselev solution_ as \[ds^{2}=-\left(1-\frac{2M}{r}-\frac{N}{r^{3\omega+1}}+\alpha e^{-\frac{2r}{2M- \alpha l}}\right)dv^{2}+2\varepsilon dvdr+r^{2}d\Omega^{2}\,, \tag{33}\] where \(M=\mathcal{M}+\frac{\alpha l}{2}\). From (32), one can see that the weak energy condition demands that parameters \(\omega\) and \(N\) have different signs. 
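As a quick consistency check of the integration step above, the following SymPy sketch (an added illustration, not part of the original derivation; symbol names are only illustrative) verifies symbolically that the metric function (31) and the density (32) satisfy the system (30) for arbitrary \(\mathcal{M}\), \(N\) and \(\omega\).

```python
# Symbolic check that g(r) = 1 - 2*M/r - N/r**(3*w+1) and
# rho(r) = -3*w*N/r**(3*(w+1)) solve the system (30).
import sympy as sp

r, M, N, w = sp.symbols('r M N omega', positive=True)

g = 1 - 2*M/r - N/r**(3*w + 1)        # metric function (31), M standing for mathcal{M}
rho = -3*w*N/r**(3*(w + 1))           # surrounding-field density (32)

# First equation of (30): (g' r - 1 + g)/r^2 = -rho
eq1 = (sp.diff(g, r)*r - 1 + g)/r**2 + rho
# Second equation of (30): (r g' + r^2 g''/2)/r^2 = (1 + 3w) rho / 2
eq2 = (r*sp.diff(g, r) + sp.Rational(1, 2)*r**2*sp.diff(g, r, 2))/r**2 \
      - sp.Rational(1, 2)*(1 + 3*w)*rho

print(sp.simplify(eq1))  # expected output: 0
print(sp.simplify(eq2))  # expected output: 0
```

Setting \(\omega=\frac{1}{3}\) with \(N=-Q^{2}\), or \(\omega=-1\), reproduces the Reissner-Nordstrom-like and de Sitter-like limits of (33) discussed later in the paper.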
## 4 Timelike geodesics Considering the geodesic motion in spherically-symmetric spacetime, without loss of generality, one can consider the equatorial plane \(\theta=\frac{\pi}{2}\). The geodesic equations for the metric (17) can be obtained by varying the following action \[S=\int\mathcal{L}d\tau=\frac{1}{2}\int\left(-f\dot{v}^{2}+2\varepsilon\dot{v} \dot{r}+r^{2}\dot{\varphi}^{2}\right)d\tau\,, \tag{34}\] where the dot sign means the derivative with respect to the proper time \(\tau\). The spacetime (33) is spherically-symmetric and hence in addition to the time-translation Killing vector \(\frac{\partial}{\partial t}\), there exists another Killing vector \(\varphi^{i}=\frac{\partial}{\partial\varphi}\) and the corresponding conserved quantity, the angular momentum per mass, is given by \[\varphi^{i}u_{i}=\frac{\partial\mathcal{L}}{\partial\dot{\varphi}}=r^{2}\dot{ \varphi}=L\,. \tag{35}\] Taking into account (34) and (35), one obtains the following three geodesic equations \[\dot{\varphi}=\frac{L}{r^{2}}\,, \tag{36}\] \[-\frac{1}{2}f^{\prime}\dot{v}^{2}+r\dot{\varphi}^{2}-\varepsilon\ddot{v}=0\,, \tag{37}\] \[\varepsilon\ddot{r}=f\ddot{v}+f^{\prime}\dot{v}\dot{r}\,, \tag{38}\] where the prime sign denotes the derivative with respect to the radial coordinate \(r\). Substituting (36) into (37), one obtains \[f\ddot{v}=\frac{\varepsilon fL^{2}}{r^{3}}-\frac{1}{2}\varepsilon ff^{\prime} \dot{v}^{2}\,. \tag{39}\] Now, by applying the timelike geodesic condition \(g_{ik}u^{i}u^{k}=-1\) into the equation above, we find \[f^{\prime}\dot{v}\dot{r}=-\frac{1}{2}\varepsilon f^{\prime}+\frac{1}{2} \varepsilon ff^{\prime}-\frac{1}{2}\varepsilon f^{\prime}\frac{L^{2}}{r^{2}} \dot{v}^{2}\,. \tag{40}\] Substituting the equation (40) into (38) we arrive at the following general equation of motion in terms of the metric function \(f\) for the radial coordinate \[\ddot{r}=-\frac{1}{2}\left(1+\frac{L^{2}}{r^{2}}\right)f^{\prime}+f\frac{L^{2} }{r^{3}}\,. \tag{41}\] Hence, using the obtained metric function (33), one obtains the geodesic equation in the form \[\ddot{r} = \left(-\frac{M}{r^{2}}+\frac{L^{2}}{r^{3}}-\frac{3ML^{2}}{r^{4}} \right)_{sch} \tag{42}\] \[+\left(-\gamma\frac{N}{2r^{\gamma+1}}-\left(\gamma+2\right) \frac{NL^{2}}{2r^{\gamma+3}}\right)_{s}\] \[+\left(\frac{\alpha}{2M-\alpha l}e^{-\frac{2r}{2M-\alpha l}}+ \frac{\alpha L^{2}}{\left(2M-\alpha l\right)r^{2}}e^{-\frac{2r}{2M-\alpha l} }-\frac{\alpha L^{2}}{r^{3}}e^{-\frac{2r}{2M-\alpha l}}\right)_{h}\,,\] where \(\gamma=3\omega+1\). From (42), one can observe the following interesting points. 1. The three terms in the first line are the same as that of the standard Schwarzschild black hole in which the first term represents the Newtonian gravitational force, the second term represents the repulsive centrifugal force, and the third term is the relativistic correction of Einstein's general relativity which accounts for the perihelion precession. 2. The terms in the second line are new correction terms due to the presence of the background field which surrounds the hairy Schwarzschild black hole, in which its first term is similar to the term of the gravitational potential in the first brackets, while its second term is similar to the relativistic correction of general relativity. 
Then, regarding (42) one realizes that for the more realistic non-empty backgrounds, the geodesic equation of any object depends strictly not only on the mass of the central object of the system and the conserved angular momentum of the orbiting body, but also on the background field nature. The new correction terms may be small in general in comparison to their Schwarzschild counterparts (the first and the third term in the first brackets). However, one can show that, there are possibilities that these terms are comparable to them. One also can observe, by using the equation (32), that for \(\omega\in(-\frac{1}{3},\,0)\) the Newtonian gravitational force is strengthened by corrections caused by the surrounding field, on the other hand, for other values of \(\omega\) the force is weakened. If we consider the same question regarding the second term, which corresponds to the relativistic correction of Einstein's general relativity, then for values \(\omega\in(-1,\,0)\) the force is strengthened and this is while this force is weakened for other values \(\omega\). The surrounding fluid doesn't have any contributions to the repulsive centrifugal force. 3. The terms in the third line represent modifications by the primary hairs \(\alpha\) and \(l\). The second term here corresponds to the relativistic correction of Einstein's general relativity. The third term here represents a new correction by the primary hairs to the repulsive centrifugal force. One can define the effective distance \(D\) to find out where this force disappears by relation \(\frac{A_{1}}{A_{r}}\approx 1\) where \(A_{r}\) is the Schwarzschild black hole repulsive centrifugal force, and \(A_{1}\) is the correction to this force caused by primary hairs. So the distance is given by \[D=\left(M-\frac{\alpha l}{2}\right)\ln\alpha\,.\] (43) Considering a minimal geometrical deformations, \(\alpha\) must be negligible, i.e \(\alpha\ll 1\). So according to (43), the correction caused by primary hairs can weaken the repulsive centrifugal force but it can't cancel it, and hence this correction is negligible in general. The first term in (42) contributes a correction to the Newtonian potential. This can be seen using the effective potential \(V_{eff}(r)\). One can write the geodesic equations in the form \[V_{eff}(r)=\Phi(r)+\frac{L^{2}}{2r^{2}}+\Phi(r)\frac{L^{2}}{r^{2}}\,,\] (44) where \(\Phi(r)\) is related to \(g_{00}\) metric component via relation \[g_{00}=-\left(1+2\Phi\right)\,.\] (45) By comparing this with (33), we come to the conclusion that \[\Phi(r)=-\frac{M}{r}+\frac{N}{2r^{3\omega+1}}-\alpha e^{-\frac{r}{2M-\alpha l }}\,.\] (46) Now, taking the derivative of \(V_{eff}\) in (44) with respect to \(r\) \[\frac{d^{2}r}{d\tau^{2}}=-\frac{dV_{eff}}{dr}\,, \tag{47}\] we arrive at the equation of motion (42). In order to better understand the nature of the solution obtained in (33), one can consider the following two groups of forces and investigate their behaviour for various set of surrounding fields and primary hair parameters. \[G\equiv\frac{M}{r^{2}}+\gamma\frac{N}{2r^{\gamma+1}}-\frac{\alpha}{2M-\alpha l }e^{-\frac{2r}{2M-\alpha l}}, \tag{48}\] \[H\equiv\frac{3ML^{2}}{r^{4}}+\left(\gamma+2\right)\frac{NL^{2}}{2r^{\gamma+3}} -\frac{\alpha L^{2}}{(2M-\alpha l)r^{2}}e^{-\frac{2r}{2M-\alpha l}}\,, \tag{49}\] where \(G\) group represents the Newtonian gravitational force with its modifications and \(H\) group corresponds to the relativistic corrections of the general relativity. 
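The two groups are easy to evaluate numerically. The following Python sketch (an illustration added here, not code from the original work) implements \(f(r)\) from (33), the groups \(G\) and \(H\) of (48)-(49), and the radial acceleration of (41); the default parameters follow Figure 1(a) of the stiff-fluid case, while the value of \(L\) is an arbitrary choice.

```python
# Numerical helpers for the surrounded hairy Schwarzschild metric (33):
# the metric function f(r), the force groups G(r), H(r) of Eqs. (48)-(49),
# and the radial acceleration of Eq. (41).
import numpy as np

def metric_f(r, Mc=1.0, alpha=0.5, l=1.514, N=-4.972, omega=1.0):
    """Metric function of (33); Mc is the Schwarzschild mass, M = Mc + alpha*l/2."""
    M = Mc + alpha*l/2
    return 1 - 2*M/r - N/r**(3*omega + 1) + alpha*np.exp(-2*r/(2*M - alpha*l))

def metric_fprime(r, Mc=1.0, alpha=0.5, l=1.514, N=-4.972, omega=1.0):
    """Analytic derivative f'(r) of the metric function."""
    M = Mc + alpha*l/2
    return (2*M/r**2 + (3*omega + 1)*N/r**(3*omega + 2)
            - 2*alpha/(2*M - alpha*l)*np.exp(-2*r/(2*M - alpha*l)))

def G(r, Mc=1.0, alpha=0.5, l=1.514, N=-4.972, omega=1.0):
    """Newtonian-type force group, Eq. (48), with gamma = 3*omega + 1."""
    M, g = Mc + alpha*l/2, 3*omega + 1
    return M/r**2 + g*N/(2*r**(g + 1)) - alpha/(2*M - alpha*l)*np.exp(-2*r/(2*M - alpha*l))

def H(r, L=4.0, Mc=1.0, alpha=0.5, l=1.514, N=-4.972, omega=1.0):
    """Relativistic-correction force group, Eq. (49)."""
    M, g = Mc + alpha*l/2, 3*omega + 1
    return (3*M*L**2/r**4 + (g + 2)*N*L**2/(2*r**(g + 3))
            - alpha*L**2/((2*M - alpha*l)*r**2)*np.exp(-2*r/(2*M - alpha*l)))

def radial_acceleration(r, L=4.0, **pars):
    """Right-hand side of Eq. (41): -(1/2)(1 + L^2/r^2) f' + f L^2/r^3."""
    return -0.5*(1 + L**2/r**2)*metric_fprime(r, **pars) + metric_f(r, **pars)*L**2/r**3

r = np.linspace(2.0, 10.0, 9)
print("G(r):", np.round(G(r), 4))                  # sign of the modified Newtonian force
print("H(r):", np.round(H(r), 4))                  # sign of the relativistic correction
print("r_ddot at r=6:", radial_acceleration(6.0))  # radial acceleration from (41)
```

Scanning the signs of \(G\) and \(H\) over \(r\) for different \(\omega\), \(N\) and \(l\) reproduces the qualitative behaviour discussed in the subsections below.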
One can ask for the possibilities if the new modifications caused by surrounding fields and primary hairs can cancel the original forces or change their effect, i.e. change their sign. Hence, we are interested in possible cases in which for set of parameters \(\omega\), \(\alpha\) and \(l\), the \(G\) and \(H\) functions are getting negligible values or they change their signs. In the following subsections, we consider some specific fields possessing particular equations of state motivated by cosmology. However, we can note the following facts which we can derive from (49). Let's consider the first two terms: for \(-1<\omega<0\) these two terms are always positive. However, the second term is negative for positive \(\omega\) and we can expect the sign change of \(H\). Let's consider two particular cases: * the radiation \(\omega=\frac{1}{3}\). In this case, \(|N|\leq M^{2}\) and the first two terms become negative in the region \(0\leq r\leq 2M/3\) which is inside the event horizon. Because the third term in (49) is negligible we can conclude that \(H\) is always positive outside the event horizon region. * The stiff fluid \(\omega=1\). In this case we can put \(N=-M^{4}\) then \(f(r=M)>0\). Thus, in this case the event horizon location at the radius which is less than \(M\). However, the first two terms in (49) become negative at \(r=M\) and \(H<0\) outside the event horizon region. ### Stiff Fluid We begin our analysis of timelike geodesics with the surrounding fluid having the average equation of the state of a stiff fluid as \[P=\rho\,\Leftrightarrow\,\omega=1. \tag{50}\] As mentioned previously, the presence of the surrounding field has a weakening effect on the forces given by (48) and (49). From (32), one observes that \(N\) must be negative to maintain a positive energy density for the surrounding fluid. Our objective is to determine whether the corrections by the surrounding field and primary hairs can cancel out the initial Schwarzschild forces or potentially can change their sign, and thereby altering the direction of the forces. In Figure 1(a), we plotted three curves corresponding the usual Schwarzschild, Kiselev and hairy Kiselev black holes. We observe that the function \(G\) for the hairy Kiselev black hole is negligible but positive near the event horizon \(r=2\mathcal{M}\) for the given specific set of parameters. However, in the case of purely Kiselev black hole (i.e. \(\alpha=0\)), the function \(G\) is negative in the interval \(2\leq r\leq 2.15\). One notes that in the purely Kiselev case, we have a naked singularity (NS) (i.e. \(g_{00}\neq 0\)).4 Footnote 4: For this set of parameters \(g_{00}\) is always negative, i.e. there are not positive roots of the equation \(g_{00}=0\) for \(r\in(0,+\infty)\). On this reason, we have concluded that \(r=0\) represents a NS because the Kretschmann scalar diverges at \(r=0\). By NS we mean that \(r=0\) singularity is not covered with the event horizon. The question about future-directed non-spacelike geodesics, which terminate at this singularity in the past, hasn’t been considered within this paper. Figure 1(b) shows that the function \(G\) becomes negative in the vicinity of the event horizon (i.e. in the region \(2\leq r\leq 2.02\)) for the hairy Kiselev black hole for the set of parameters \(N=-5.186,\,l=1.567\). 
To have a bigger distance from the event horizon, where the function \(G\) can become negative, one should increase \(|N|\) and \(l\), however, in this case, \(\mathcal{M}\sim\alpha l/2\) and it will not anymore be a minimal geometrical deformation in (15). So we can conclude that \(G\) might be negative outside the event horizon but only in its vicinity. Figure 1(c) compares the function \(H\) for the Schwarzschild, Kiselev and hairy Kiselev cases for the values considered in Figure 1(b). In order to understand better the influence of a primary hair on a geodesic motion we put \(\alpha=0.1\) in order to consider bigger values of \(l\). Figures 2(a) and 2(b) show how Figure 1: Plot (a) shows the function \(G\) versus the distance \(r\) for \(N=-4.972,\,l=1.514,\,\alpha=0.5\) and \(\mathcal{M}=1\). Plot(b) shows the function \(G\) versus the distance \(r\) for \(N=-5.186,\,l=1.567,\alpha=0.5\) and \(\mathcal{M}=1\), the small picture shows the function \(G\) of hairy Kiselev black hole in the horizon vicinity. Plot (c) shows the function \(H\) versus the distance \(r\) for \(N=-5.186,\,l=1.567,\alpha=0.5\) and \(\mathcal{M}=1\). The red, blue and green curves represent the Schwarzschild, Kiselev, and hairy Kiselev cases, respectively. \(G\) changes with different values of \(l\) and \(N\). One can see that there are regions where it becomes negative. However, from these pictures one can't realize if they deal with a black hole or a naked singularity. For this purpose one should impose the condition of existence of an event horizon. The Figure 2(c) shows how \(G\) changes in this case. ### Radiation Here we consider the surrounding field having the average equation of state of radiation field as \[P=\frac{\rho}{3}\,\Leftrightarrow\,\omega=\frac{1}{3}\,. \tag{51}\] In this case, the \(N\) parameter must be negative, and akin to the previous case, the surrounding radiation field and primary hairs weaken the forces in (48) and (49). Figure 3(a) shows three curves in the pure Schwarzschild, Kiselev and hairy Kiselev black holes for the parameter values \(N=-3.729\) and \(l=4\). For the case of surrounding radiation-like field, one observes that the spacetime is akin to the hairy Reissner-Nordstrom black hole such that the parameter \(N\) plays the role of black hole's electric charge, i.e. \(N=-Q^{2}\). So, in purely Reissner-Nordstrom case, the curve corresponds to the naked singularity because \(\mathcal{M}^{2}<Q^{2}\). In comparison to the stiff fluid case, one notes that the parameters \(l\) and \(N\) are taken greater values to ensure that the function \(G\) is negligible. In Figure 3(b), we plotted curves in order to show that hairs can affect the geodesic motion and hence \(G\) can become negative in the event horizon vicinity (in the region \(2\leq r\leq 2.042\)). In this case, we set \(N=-3.889\) and \(l=4.16\). One can see that the smaller values of \(\omega\) we take, the bigger values of \(l\) are required to ensure the negative values of \(G\). For example, if we take this value of \(l\) (i.e. \(l=4.16\)), then, in the case of stiff fluid, we have \(N=-15.557\) (we obtain this value by demanding that the event horizon is located at \(r={\cal M}\)) then the \(G\) function is negative in the region \(2\leq r\leq 2.534\). Thus, one can see that the region, where negative values of \(G\) are possible, shrinks when \(\omega\) tends to zero. Figure 3(c) denotes the function \(H\) with the values of \(N\) and \(l\) as in the previous figure. 
Similar to the stiff fluid case, we have several plots for \(\alpha=0.1\). Figures 4(a) and 4(b) show that \(G\) becomes negative at the larger distances in comparison to the stiff fluid case. This apparently contradicts our previous statement that the smaller \(\omega\) we consider, the region where \(G\) becomes negative becomes smaller. However, one notes that this is a case of the naked singularity because if one imposes an extra condition of the event horizon existence, then for this case (\(\alpha=0.1\)) the \(G\) function is always positive outside the horizon as can be seen from Figure 4(c). ### Dust For a dust-like field we have \[P=0\,\Leftrightarrow\,\omega=0\,, \tag{52}\] and we can show analytically that the function \(G\) is positive near the event horizon as follows. We have \[\frac{2M+N}{r}=1+\alpha e^{-\frac{r}{\cal M}}\,. \tag{53}\] Substituting this into (48) and considering the event horizon at \(r=2{\cal M}\), one obtains \[\frac{1}{4{\cal M}}-\frac{\alpha}{4{\cal M}e^{2}}>0\,. \tag{54}\] So, for physically relevant values of \(\alpha,\,l\) and \(N\), the function \(G\) is positive outside the event horizon. Figure 3: Plot(a) shows the function \(G\) versus the distance \(r\) for \(N=-3.729,\,l=4,\,\alpha=0.5\) and \({\cal M}=1\). Plot(b) shows \(G\) versus \(r\) for \(N=-3.889,\,l=4.16,\,\alpha=0.5\) and \({\cal M}=1\), the small picture shows the function \(G\) of hairy Kiselev black hole in the horizon vicinity. Plot(c) shows \(H\) versus \(r\) for \(N=-3.889,\,l=4.16,\,\alpha=0.5\) and \({\cal M}=1\). The red, blue and green curves correspond to the Schwarzschild, Kiselev, and hairy Kiselev cases, respectively. Figure 5(a) compares three curves of a hairy Kiselev black hole, purely Kiselev when \(\alpha=0\), and Schwarzschild case when \(\alpha=0\) and \(N=0\). These curves are plotted for \(l=0.5\,,N=-0.115\). Figure 5(b) is plotted for the same values of black hole parameters and shows the behaviour of the function \(H\). For \(\omega\geq 0\) the function \(H\) is positive, and its behaviour is shown in the Figure 5(c). For other values of \(\omega\) we could not find the condition (at small values of \(\alpha\)) where \(H\) becomes negative. Figure 4: Plot(a) shows the function \(G\) versus the parameters \(N\in[-7,-4],\,l\in[4,8]\) for \(r=2.1,\,\alpha=0.1\) and \(\mathcal{M}=1\). Plot(b) shows the function \(G\) versus the parameters \(N\in[-7,-4],\,l\in[4,8]\) for \(r=2.5,\,\alpha=0.1\) and \(\mathcal{M}=1\). Plot(c) shows the function \(G\) versus \(r,\,l\) for \(N\in[-1.546,-0.746],\,\alpha=0.1\) and \(\mathcal{M}=1\). The event horizon, located at \(r=2\mathcal{M}\), must satisfy the condition \(N=-0.2l+0.4e^{-2}\). Figure 5: Plot(a) shows the function \(G\) versus the distance \(r\) for \(N=-0.115,\,l=0.5,\alpha=0.5\) and \(\mathcal{M}=1\). Plot(b) shows the function \(H\) versus the distance \(r\) for \(N=-0.115,\,l=0.5,\,\alpha=0.5\) and \(\mathcal{M}=1\). The red, blue and green curves correspond to the Schwarzschild, Kiselev, and hairy Kiselev cases, respectively. Plot(c) shows the function \(H\) versus \(r\,,l\) for the values \(N\in[-0.773,-0.373],\,l\in[4,8],\,r\in[2,5],\,\alpha=0.1\) and \(\mathcal{M}=1\). The event horizon, located at \(r=2\mathcal{M}\), must satisfy the condition \(N=-0.1l+0.2e^{-2}\). ### Quintessence For a quintessence-like field, the equation of the state is \[P=-\frac{2}{3}\rho\,\Leftrightarrow\,\omega=-\frac{2}{3}\,. \tag{55}\] In this case, the parameter \(N\) must be positive as one can see from (32). 
The function \(G\) can be negligible in the vicinity of the horizon only if either \(N\) or \(l\) is negative. However, \(G\) can take negative values, but only at large distances from the event horizon. As can be seen from Figure 6(a), at the values \(l=0.05\,,N=0.028\) the function \(G\) for the Kiselev black hole becomes negative at \(r>8.553\). The effect of \(N\) and \(\alpha\) on the function \(H\) for these values is negligible and becomes considerable only at large distances, as one can see from Figure 6(b). ### De Sitter background In this case, the surrounding fluid has the effective equation of state \[P=-\rho\,\Leftrightarrow\,\omega=-1\,. \tag{56}\] As in the previous case, the parameter \(N\) must be positive, and the function \(G\) is positive near the event horizon. Figure 7(a) shows that the function \(G\) for \(N=0.016\,,l=0.01\) becomes negative for \(r>3.841\). The function \(H\) behaves very similarly in all three cases, as can be seen in Figure 7(b). Figure 6: Plot(a) shows the function \(G\) versus the distance \(r\) for \(N=0.028\), \(l=0.05,\alpha=0.5\) and \({\cal M}=1\). Plot(b) shows the function \(H\) versus the distance \(r\) for \(N=0.028\), \(l=0.05\), \(\alpha=0.5\) and \({\cal M}=1\). The red, blue and green curves correspond to the Schwarzschild, Kiselev, and hairy Kiselev cases, respectively. Figure 8 shows the behaviour of \(G\) at \(\alpha=0.1\) and with the extra condition of the event horizon existence. Here 8(a) is plotted for a positive cosmological constant, while 8(b) corresponds to a negative cosmological constant, i.e. the anti-de Sitter case. ### Phantom field The equation of state of a phantom-like field is given by \[P=-\frac{4}{3}\rho\,\Leftrightarrow\,\omega=-\frac{4}{3}\,. \tag{57}\] Figure 7: Plot(a) shows the function \(G\) versus the distance \(r\) for \(N=0.016\), \(l=0.01\), \(\alpha=0.5\) and \(\mathcal{M}=1\). Plot(b) shows the function \(H\) versus the distance \(r\) for the same values of parameters. The red, blue and green curves correspond to the Schwarzschild, Kiselev, and hairy Kiselev cases, respectively. The parameter \(N\) must be positive, and as can be seen in Figure 9(a), the function \(G\) takes negative values in the region \(r>3.056\) for \(l=0.05\,,N=0.007\). Figure 9(b) shows that for the same values of \(l\) and \(N\), the function \(H\) can be negative in the region \(r>5.433\). ## 5 Conclusion Inspired by the fact that black holes inhabit non-vacuum cosmological backgrounds, we present a new solution to the Einstein field equations representing a surrounded hairy Schwarzschild black hole. This solution takes into account both the primary hair and the surrounding fields (represented by an energy-momentum tensor following the linearity and additivity condition [21]) which affect the properties of the black hole. The effect of the corresponding contributions on timelike geodesics is discussed. We find that the new induced modifications can be considerable in certain cases. In particular, we investigate how the specified surrounding fields and primary hairs affect the Newtonian and perihelion precession terms. Our observations are as follows. * The surrounding fields with \(-\frac{1}{3}<\omega<0\) contribute positively to the Newtonian term, i.e. strengthening the gravitational attraction.
* The new corrections to the Newtonian term might be of the same order or even greater for all other cases if one considers a naked singularity.5 Figure 9: Plot(a) shows the function \(G\) versus the distance \(r\) for \(N=0.007\,,l=0.05,\alpha=0.5\) and \(\mathcal{M}=1\). Plot(b) shows the function \(H\) versus the distance \(r\) for \(N=0.007\,,l=0.05,\alpha=0.5\) and \(\mathcal{M}=1\). The red, blue and green curves correspond to the Schwarzschild, Kiselev, and hairy Kiselev cases, respectively. * In the case that the solution represents a black hole, the new corrections can be of the same order or even greater than the Newtonian term in the event horizon vicinity for \(\omega>0\). * For \(\omega<-\frac{1}{3}\), i.e. for effectively repulsive fluids akin to dark energy models, the correction terms dominate far from the event horizon and mainly near the cosmological horizon. The Schwarzschild black hole is an idealized vacuum solution, and it is important to consider how it gets deformed in the presence of matter fields. Another crucial factor to consider is the impact of the surrounding environment, particularly the shadow of a black hole in the cosmological background, which serves as a potential cosmological ruler [58]. The solution presented in this work can be further investigated to study the shadow of a hairy Schwarzschild black hole in various cosmological backgrounds in order to find out how an anisotropic fluid can affect the observational properties, which is the plan of our upcoming investigations. It is worthwhile to mention that, applying the Newman-Janis [59] and Azreg-Ainou [60, 61] algorithms, one can obtain the rotating version of the solution presented here. Also, the investigation of quasi-normal modes, thermodynamic properties, the accretion process and gravitational lensing of these solutions can help us to better understand the nature of these objects. **Acknowledgments**: V. Vertogradov and M. Misyura thank the RSF grant No. 22-20-00112 for financial support. The work was performed within the SAO RAS state assignment in the part "Conducting Fundamental Science Research".
2303.08843
The magnificent ACT of flavor-specific neutrino self-interaction
We revisit the cosmology of neutrino self-interaction and use the latest cosmic microwave background data from the Atacama Cosmology Telescope (ACT) and the Planck experiment to constrain the interaction strength. In both flavor-universal and nonuniversal coupling scenarios, we find that the ACT data prefers strong neutrino self-interaction that delays neutrino free streaming until just before the matter-radiation equality. When combined with the Planck 2018 data, the preference for strong interaction decreases due to the Planck polarization data. For the combined dataset, the flavor-specific interaction still provides a better fit to the CMB data than $\Lambda$CDM. This trend persists even when neutrino mass is taken into account and extra radiation is added. We also study the prospect of constraining such strong interaction by future terrestrial and space telescopes, and find that the upcoming CMB-S4 experiment will improve the upper limit on neutrino self-interaction by about a factor of three.
Anirban Das, Subhajit Ghosh
2023-03-15T18:00:03Z
http://arxiv.org/abs/2303.08843v2
# The magnificent ACT of flavor-specific neutrino self-interaction ###### Abstract We revisit the cosmology of neutrino self-interaction and use the latest cosmic microwave background data from the Atacama Cosmology Telescope (ACT) and the Planck experiment to constrain the interaction strength. In both flavor-universal and nonuniversal coupling scenarios, we find that the ACT data prefers strong neutrino self-interaction that delays neutrino free streaming until just before the matter-radiation equality. When combined with the Planck 2018 data, the preference for strong interaction decreases due to the Planck polarization data. For the combined dataset, the flavor-specific interaction still provides a better fit to the CMB data than \(\Lambda\)CDM. This trend persists even when neutrino mass is taken into account and extra radiation is added. We also study the prospect of constraining such strong interaction by future terrestrial and space telescopes, and find that the upcoming CMB-S4 experiment will improve the upper limit on neutrino self-interaction by about a factor of three.
2305.05031
Fast randomized algorithms for computing the generalized tensor SVD based on the tubal product
This work deals with developing two fast randomized algorithms for computing the generalized tensor singular value decomposition (GTSVD) based on the tubal product (t-product). The random projection method is utilized to compute the important actions of the underlying data tensors and use them to get small sketches of the original data tensors, which are easier to be handled. Due to the small size of the sketch tensors, deterministic approaches are applied to them to compute their GTSVDs. Then, from the GTSVD of the small sketch tensors, the GTSVD of the original large-scale data tensors is recovered. Some experiments are conducted to show the effectiveness of the proposed approach.
Salman Ahmadi-Asl, Ugochukwu Ugwu
2023-05-08T20:19:44Z
http://arxiv.org/abs/2305.05031v5
# Fast randomized algorithms for computing the generalized tensor SVD based on the tubal product ###### Abstract This work deals with developing two fast randomized algorithms for computing the generalized tensor singular value decomposition (GTSVD) based on the tubal product (t-product). The random projection method is utilized to compute the important actions of the underlying data tensors and use them to get small sketches of the original data tensors, which are easier to handle. Due to the small size of the sketch tensors, deterministic approaches are applied to them to compute their GTSVDs. Then, from the GTSVD of the small sketch tensors, the GTSVD of the original large-scale data tensors is recovered. Some experiments are conducted to show the effectiveness of the proposed approaches. keywords: Randomized algorithms, generalized tensor SVD, tubal product Msc: 15A69, 46N40, 15A23 ## 1 Introduction The Singular Value Decomposition (SVD) is one of the key matrix factorizations, which has been widely used in many applications such as signal processing and machine learning [1]. It can compute the best low-rank approximation of a matrix in the least-squares sense for any invariant matrix norm. This property is one of the main reasons for its wide range of applications. The SVD is applied to a single matrix and can capture orthonormal bases for its four fundamental subspaces. The idea of generalizing the SVD to a pair of matrices was first proposed in [2; 3] and is called the generalized SVD (GSVD), with applications in solving inverse problems [4] and genetics [5; 6]. The classical SVD or GSVD is prohibitively expensive for computing low-rank approximations of large-scale data matrices. The randomization approach is highly effective at addressing this issue when computing the SVD or GSVD of big matrices [7; 8; 9]. The principal idea of the randomization framework is to first capture the range of a given data matrix by either multiplying it with random matrices or sampling some of its actual columns. Then, using an orthonormal basis for the captured range, a small sketch of the matrix is computed, which can be processed and analyzed easily. Finally, the SVD/GSVD of the original big data matrices is recovered from the SVD/GSVD of the small sketches. This idea leads to a significant speed-up in the computations, and it also lends itself to parallel implementation. It is known that randomized algorithms can provide stable approximations, and this, together with the other benefits mentioned above, has made them ubiquitous tools in numerical linear algebra. Due to the promising results achieved by the randomized algorithms for matrices, they have been extended to tensors and to different types of tensor decompositions; see [10; 11; 12; 13]. In this work, we focus on the tensor SVD (t-SVD) proposed in [14; 15]. The t-SVD has properties similar to the classical SVD. In particular, in contrast to the Tucker decomposition [16; 17] or the canonical polyadic decomposition [18; 19], its truncated version provides the best tubal-rank approximation in the least-squares sense. The GSVD was generalized to tensors based on the t-SVD in [20; 21], which we call GTSVD. Motivated by the works [8; 9], we develop two fast randomized algorithms for computing the GTSVD. The key contributions of this work are as follows: * Developing two fast randomized algorithms for the computation of the GTSVD based on the t-product.
* Providing computer simulations convincing the applicability of the proposed randomized algorithm. The rest of the paper is structured as follows. In Section 2, the preliminaries are presented. Here, we also introduce the t-SVD model and the necessary algorithms for its computations. In Section 3, the generalized SVD (GSVD) and its extension to tensors (GTSVD) are outlined and it is shown how to compute these decompositions. Two randomized algorithms are proposed in Section 4 and the computer simulation results are reported in Section 6. Finally, a conclusion is given in Section 7. ## 2 Basic definitions and concepts We adopt the same notations used in [22] in this paper. So, to represent a tensor, a matrix, and a vector we use an underlined bold capital letter, a bold capital letter, and a bold lower letter. Slices are important subtensors that are generated by fixing all but two modes. In particular, for a third-order tensor \(\underline{\mathbf{X}}\), the three types slices \(\underline{\mathbf{X}}(:,:,k),\)\(\underline{\mathbf{X}}(:,j,:),\)\(\underline{\mathbf{X}}(i,:,:)\) are called frontal, lateral and horizontal slices. For convenience, sometimes in the paper, we use the equivalent notations \(\underline{\mathbf{X}}_{i}\equiv\underline{\mathbf{X}}(:,:,i)\). Fibers are generated by fixing all but one mode, so they are vectors. For a third-order tensor \(\underline{\mathbf{X}}\), the fiber \(\underline{\mathbf{X}}(i,j,:)\) is called a tube. The notation "conj" denotes the component-wise complex conjugate of a matrix. The Frobenius norm of matrices or tensors is denoted by \(\|.\|\). The notation \(\|.\|_{2}\) stands for the Euclidean norm of vectors and the spectral norm of matrices. For a positive definite matrix \(\mathbf{S}\), the weighted inner product norm is defined as \(\|\mathbf{X}\|_{\mathbf{S}}=\sqrt{\mathrm{Tr}(\mathbf{X}^{T}\mathbf{S} \underline{\mathbf{X}})}\) where "\(\mathrm{Tr}\)" is the trace operator. The mathematical expectation is represented by \(\mathbb{E}\). We now present the next definitions, which we need in our presentation. 
**Definition 1**.: (t-product) Let \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) and \(\underline{\mathbf{Y}}\in\mathbb{R}^{I_{2}\times I_{4}\times I_{3}}\), the tubal product (t-product) \(\underline{\mathbf{X}}*\underline{\mathbf{Y}}\in\mathbb{R}^{I_{1}\times I_{4} \times I_{3}}\) is defined as follows \[\underline{\mathbf{C}}=\underline{\mathbf{X}}*\underline{\mathbf{Y}}=\mathrm{ fold}\left(\mathrm{circ}\left(\underline{\mathbf{X}}\right)\mathrm{unfold}\left( \underline{\mathbf{Y}}\right)\right), \tag{1}\] where \[\mathrm{circ}\left(\underline{\mathbf{X}}\right)=\begin{bmatrix}\underline{ \mathbf{X}}(:,:,1)&\underline{\mathbf{X}}(:,:,I_{3})&\cdots&\underline{\mathbf{ X}}(:,:,2)\\ \underline{\mathbf{X}}(:,:,2)&\underline{\mathbf{X}}(:,:,1)&\cdots&\underline{ \mathbf{X}}(:,:,3)\\ \vdots&\vdots&\ddots&\vdots\\ \underline{\mathbf{X}}(:,:,I_{3})&\underline{\mathbf{X}}(:,:,I_{3}-1)&\cdots& \underline{\mathbf{X}}(:,:,1)\end{bmatrix},\] and \[\mathrm{unfold}(\underline{\mathbf{Y}})=\begin{bmatrix}\underline{\mathbf{Y} }(:,:,1)\\ \underline{\mathbf{Y}}(:,:,2)\\ \vdots\\ \underline{\mathbf{Y}}(:,:,I_{3})\end{bmatrix},\quad\underline{\mathbf{Y}}= \mathrm{fold}\left(\mathrm{unfold}\left(\underline{\mathbf{Y}}\right)\right).\] **Definition 2**.: (Transpose) The transpose of a tensor \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) is denoted by \(\underline{\mathbf{X}}^{T}\in\mathbb{R}^{I_{2}\times I_{1}\times I_{3}}\) produced by applying the transpose to all frontal slices of the tensor \(\underline{\mathbf{X}}\) and reversing the order of the transposed frontal slices from the second till the last one. **Definition 3**.: (Identity tensor) Identity tensor \(\underline{\textbf{I}}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) is a tensor whose first frontal slice is an identity matrix of size \(I_{1}\times I_{1}\) and all other frontal slices are zero. It is easy to show \(\underline{\textbf{I}}*\underline{\textbf{X}}=\underline{\textbf{X}}\) and \(\underline{\textbf{X}}*\underline{\textbf{I}}=\underline{\textbf{X}}\) for all tensors of conforming sizes. **Definition 4**.: (Orthogonal tensor) We all that a tensor \(\underline{\textbf{X}}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) is orthogonal if \(\underline{\textbf{X}}^{T}*\underline{\textbf{X}}=\underline{\textbf{X}}* \underline{\textbf{X}}^{T}=\underline{\textbf{I}}\). **Definition 5**.: (f-diagonal tensor) If all frontal slices of a tensor are diagonal then the tensor is called an f-diagonal tensor. **Definition 6**.: (Moore-Penrose pseudoinverse of a tensor) Let \(\underline{\textbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) be given. The Moore-Penrose (MP) pseudoinverse of the tensor \(\underline{\textbf{X}}\) is denoted by \(\underline{\textbf{X}}^{\dagger}\in\mathbb{R}^{I_{2}\times I_{1}\times I_{3}}\) is a unique tensor satisfying the following four equations: \[\begin{array}{c}\underline{\textbf{X}}^{\dagger}*\underline{\textbf{X}}* \underline{\textbf{X}}^{\dagger}=\underline{\textbf{X}}^{\dagger},\quad \underline{\textbf{X}}*\underline{\textbf{X}}^{\dagger}*\underline{\textbf{X} }=\underline{\textbf{X}},\\ (\underline{\textbf{X}}*\underline{\textbf{X}}^{\dagger})^{T}=\underline{ \textbf{X}}*\underline{\textbf{X}}^{\dagger},\quad(\underline{\textbf{X}}^{ \dagger}*\underline{\textbf{X}})^{T}=\underline{\textbf{X}}^{\dagger}* \underline{\textbf{X}}.\end{array}\] The MP pseudoinverse of a tensor can also be computed in the Fourier domain and this is shown in Algorithm 2. 
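As a concrete illustration of Definition 1, the t-product can be computed by transforming along the third mode, multiplying the corresponding frontal slices, and transforming back, which is exactly the strategy of the Fourier-domain algorithms below. The following minimal NumPy sketch (a Python illustration added here, not the paper's MATLAB-style pseudocode) loops over all \(I_{3}\) slices for simplicity instead of exploiting the conjugate symmetry used in Algorithm 1.

```python
# Minimal NumPy sketch of the t-product of Definition 1, computed in the
# Fourier domain along the third mode.
import numpy as np

def t_product(X, Y):
    """t-product of X (I1 x I2 x I3) and Y (I2 x I4 x I3); returns C (I1 x I4 x I3)."""
    I3 = X.shape[2]
    Xh = np.fft.fft(X, axis=2)                 # DFT of the tubes of X
    Yh = np.fft.fft(Y, axis=2)                 # DFT of the tubes of Y
    Ch = np.empty((X.shape[0], Y.shape[1], I3), dtype=complex)
    for i in range(I3):                        # frontal-slice products in the Fourier domain
        Ch[:, :, i] = Xh[:, :, i] @ Yh[:, :, i]
    return np.real(np.fft.ifft(Ch, axis=2))    # back-transform; real for real inputs

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3, 5))
Y = rng.standard_normal((3, 2, 5))
print(t_product(X, Y).shape)                   # (4, 2, 5)
```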
The inverse of a tensor is a special case of the MP pseudoinverse of tensors. The inverse of \(\underline{\textbf{X}}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\), denoted by \(\underline{\textbf{X}}^{-1}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\), is the unique tensor satisfying \(\underline{\textbf{X}}*\underline{\textbf{X}}^{-1}=\underline{\textbf{X}}^{-1}*\underline{\textbf{X}}=\underline{\textbf{I}}\), where \(\underline{\textbf{I}}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}}\) is the identity tensor. The inverse of a tensor can also be computed in the Fourier domain by replacing the MATLAB command "inv" in Line 3 of Algorithm 2.

```
Input : Two data tensors \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), \(\underline{\mathbf{Y}}\in\mathbb{R}^{I_{2}\times I_{4}\times I_{3}}\)
Output : t-product \(\underline{\mathbf{C}}=\underline{\mathbf{X}}\ast\underline{\mathbf{Y}}\in\mathbb{R}^{I_{1}\times I_{4}\times I_{3}}\)
1  \(\widehat{\underline{\mathbf{X}}}=\mathrm{fft}\left(\underline{\mathbf{X}},[],3\right)\);
2  \(\widehat{\underline{\mathbf{Y}}}=\mathrm{fft}\left(\underline{\mathbf{Y}},[],3\right)\);
3  for \(i=1,2,\ldots,\lceil\frac{I_{3}+1}{2}\rceil\) do
4      \(\widehat{\underline{\mathbf{C}}}\left(:,:,i\right)=\widehat{\underline{\mathbf{X}}}\left(:,:,i\right)\,\widehat{\underline{\mathbf{Y}}}\left(:,:,i\right)\);
5
6  end for
7  for \(i=\lceil\frac{I_{3}+1}{2}\rceil+1,\ldots,I_{3}\) do
8      \(\widehat{\underline{\mathbf{C}}}\left(:,:,i\right)=\mathrm{conj}(\widehat{\underline{\mathbf{C}}}\left(:,:,I_{3}-i+2\right))\);
9
10 end for
11 \(\underline{\mathbf{C}}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{C}}},[],3\right)\);
```
**Algorithm 1** The t-product of two tensors [15, 23]

```
Input : The data tensor \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\)
Output : Moore-Penrose pseudoinverse \(\underline{\mathbf{X}}^{\dagger}\in\mathbb{R}^{I_{2}\times I_{1}\times I_{3}}\)
1  \(\widehat{\underline{\mathbf{X}}}=\mathrm{fft}\left(\underline{\mathbf{X}},[],3\right)\);
2  for \(i=1,2,\ldots,\lceil\frac{I_{3}+1}{2}\rceil\) do
3      \(\widehat{\underline{\mathbf{C}}}\left(:,:,i\right)=\mathrm{pinv}\left(\widehat{\underline{\mathbf{X}}}(:,:,i)\right)\);
4
5  end for
6  for \(i=\lceil\frac{I_{3}+1}{2}\rceil+1,\ldots,I_{3}\) do
7      \(\widehat{\underline{\mathbf{C}}}\left(:,:,i\right)=\mathrm{conj}(\widehat{\underline{\mathbf{C}}}\left(:,:,I_{3}-i+2\right))\);
8
9  end for
10 \(\underline{\mathbf{X}}^{\dagger}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{C}}},[],3\right)\);
```
**Algorithm 2** Fast Moore-Penrose pseudoinverse computation of the tensor \(\underline{\mathbf{X}}\)

### Tensor SVD (t-SVD), Tensor QR (t-QR) decomposition and Tensor LU (t-LU) decomposition

The classical matrix decompositions such as QR, LU, and SVD can be straightforwardly generalized to tensors based on the t-product. Given a tensor \(\underline{\textbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\), the tensor QR (t-QR) decomposition represents the tensor \(\underline{\textbf{X}}\) as \(\underline{\textbf{X}}=\underline{\textbf{Q}}*\underline{\textbf{R}}\) and can be computed through Algorithm 3. By a slight modification of Algorithm 3, the tensor LU (t-LU) decomposition and the tensor SVD (t-SVD) can be computed. More precisely, in line 4 of Algorithm 3, we apply the LU decomposition or the SVD to the frontal slices \(\widehat{\underline{\textbf{X}}}(:,:,i),\)\(i=1,2,\ldots,I_{3},\) instead of the QR decomposition. The tensor SVD (t-SVD) expresses a tensor as the t-product of three tensors. The first and last tensors are orthogonal, while the middle tensor is an f-diagonal tensor. Let \(\underline{\textbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\); then the t-SVD gives the following model: \[\underline{\textbf{X}}\approx\underline{\textbf{U}}*\underline{\textbf{S}}*\underline{\textbf{V}}^{T},\] where \(\underline{\mathbf{U}}\in\mathbb{R}^{I_{1}\times R\times I_{3}},\,\underline{\mathbf{S}}\in\mathbb{R}^{R\times R\times I_{3}},\) and \(\underline{\mathbf{V}}\in\mathbb{R}^{I_{2}\times R\times I_{3}}\). The tensors \(\underline{\mathbf{U}}\) and \(\underline{\mathbf{V}}\) are orthogonal, while the tensor \(\underline{\mathbf{S}}\) is f-diagonal. The generalization of the t-SVD to tensors of order higher than three is done in [24]. The t-SVD can be computed via Algorithm 4.
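In the same spirit, the t-SVD can be sketched in a few lines of Python/NumPy by taking the SVD of every frontal slice in the Fourier domain; this is an illustrative simplification of Algorithm 4 (again processing all slices rather than only the first \(\lceil\frac{I_{3}+1}{2}\rceil\) of them).

```python
# Minimal NumPy sketch of the t-SVD: an SVD of every frontal slice in the
# Fourier domain.
import numpy as np

def t_svd(X):
    """Return U, S, V with X = U * S * V^T in the t-product sense."""
    I1, I2, I3 = X.shape
    R = min(I1, I2)
    Xh = np.fft.fft(X, axis=2)
    Uh = np.empty((I1, R, I3), dtype=complex)
    Sh = np.zeros((R, R, I3), dtype=complex)
    Vh = np.empty((I2, R, I3), dtype=complex)
    for i in range(I3):
        u, s, vh = np.linalg.svd(Xh[:, :, i], full_matrices=False)
        Uh[:, :, i] = u
        Sh[:, :, i] = np.diag(s)
        Vh[:, :, i] = vh.conj().T
    to_real = lambda T: np.real(np.fft.ifft(T, axis=2))
    # Combined with the t-product (Definition 1) and the tensor transpose
    # (Definition 2), U * S * V^T reconstructs X.
    return to_real(Uh), to_real(Sh), to_real(Vh)

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4, 5))
U, S, V = t_svd(X)
print(U.shape, S.shape, V.shape)   # (6, 4, 5) (4, 4, 5) (4, 4, 5)
```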
```
Input : The data tensor \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\)
Output : The t-QR decomposition of the tensor \(\underline{\mathbf{X}}\)
1  \(\widehat{\underline{\mathbf{X}}}=\mathrm{fft}\left(\underline{\mathbf{X}},[],3\right)\);
2  for \(i=1,2,\ldots,\lceil\frac{I_{3}+1}{2}\rceil\) do
3      \([\widehat{\underline{\mathbf{Q}}}\left(:,:,i\right),\widehat{\underline{\mathbf{R}}}\left(:,:,i\right)]=\mathrm{qr}\left(\widehat{\underline{\mathbf{X}}}(:,:,i),0\right)\);
4  end for
5  for \(i=\lceil\frac{I_{3}+1}{2}\rceil+1,\ldots,I_{3}\) do
6      \(\widehat{\underline{\mathbf{Q}}}\left(:,:,i\right)=\mathrm{conj}(\widehat{\underline{\mathbf{Q}}}\left(:,:,I_{3}-i+2\right))\);
7      \(\widehat{\underline{\mathbf{R}}}\left(:,:,i\right)=\mathrm{conj}(\widehat{\underline{\mathbf{R}}}\left(:,:,I_{3}-i+2\right))\);
8  end for
9  \(\underline{\mathbf{Q}}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{Q}}},[],3\right)\);
10 \(\underline{\mathbf{R}}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{R}}},[],3\right)\);
```
**Algorithm 3** The t-QR decomposition of the tensor \(\underline{\mathbf{X}}\)

## 3 Generalized singular value decomposition (GSVD) and its extension to tensors based on the t-product (GTSVD)

In this section, the GSVD and its extension to tensors based on the t-product are introduced. The GSVD is a generalized version of the classical SVD, which is applied to a pair of matrices. The SVD was generalized in [2] from two different perspectives. More precisely, from the SVD, it is known that each matrix \(\mathbf{X}\in\mathbb{R}^{I_{1}\times I_{2}}\) can be decomposed in the form \(\mathbf{X}=\mathbf{U}\mathbf{S}\mathbf{V}^{T}\), where \(\mathbf{U}\in\mathbb{R}^{I_{1}\times I_{1}}\) and \(\mathbf{V}\in\mathbb{R}^{I_{2}\times I_{2}}\) are orthogonal matrices and \(\mathbf{S}=\mathrm{diag}(\sigma_{1},\sigma_{2},\ldots,\sigma_{\min\{I_{1},I_{2}\}})\in\mathbb{R}^{I_{1}\times I_{2}}\) is a diagonal matrix with singular values \(\sigma_{1}\geq\sigma_{2}\geq\ldots\geq\sigma_{R}>\sigma_{R+1}=\ldots=\sigma_{\min\{I_{1},I_{2}\}}=0\) and \(\mathrm{rank}(\mathbf{X})=R.\) Denoting the set of singular values of the matrix \(\mathbf{X}\) as \(\sigma(\mathbf{X})=\{\sigma_{1},\sigma_{2},\ldots,\sigma_{\min\{I_{1},I_{2}\}}\}\), it is known that \[\sigma_{i}\in\sigma(\mathbf{X})\longrightarrow\det(\mathbf{X}^{T}\mathbf{X}-\sigma_{i}^{2}\mathbf{I})=0, \tag{2}\] \[\sigma_{i}\in\sigma(\mathbf{X})\longrightarrow\sigma_{i}\,\text{is a stationary point of}\,\frac{\|\mathbf{X}\mathbf{z}\|_{2}}{\|\mathbf{z}\|_{2}}. \tag{3}\] Based on (2) and (3), the SVD was generalized in the following straightforward ways: \[\text{Find}\,\sigma\geq 0\,\text{such that}\,\det(\mathbf{X}^{T}\mathbf{X}-\sigma^{2}\mathbf{Y}^{T}\mathbf{Y})=0, \tag{4}\] \[\text{Find the stationary values of}\,\frac{\|\mathbf{X}\mathbf{z}\|_{\mathbf{S}}}{\|\mathbf{z}\|_{\mathbf{T}}}, \tag{5}\] where \(\mathbf{Y}\in\mathbb{R}^{I_{3}\times I_{2}}\) is an arbitrary matrix and \(\mathbf{S}\in\mathbb{R}^{I_{2}\times I_{2}}\) and \(\mathbf{T}\in\mathbb{R}^{I_{1}\times I_{1}}\) are positive definite matrices. Let us denote the set of all points satisfying (4) as \(\sigma(\mathbf{X},\mathbf{Y})=\{\sigma|\sigma\geq 0,\,\det(\mathbf{X}^{T}\mathbf{X}-\sigma^{2}\mathbf{Y}^{T}\mathbf{Y})=0\},\) which are called \(\mathbf{Y}\)-singular values of the matrix \(\mathbf{X}\).
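Formulation (4) also suggests a simple numerical recipe: the squared \(\mathbf{Y}\)-singular values of \(\mathbf{X}\) are the generalized eigenvalues of the symmetric pencil \((\mathbf{X}^{T}\mathbf{X},\mathbf{Y}^{T}\mathbf{Y})\). The short Python sketch below (an added illustration with randomly generated matrices, not code from the paper) computes them with SciPy and checks the determinant condition.

```python
# Y-singular values of X in the sense of (4): sigma >= 0 with
# det(X^T X - sigma^2 Y^T Y) = 0, i.e. sigma^2 are the generalized
# eigenvalues of the symmetric pencil (X^T X, Y^T Y).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))       # X in R^{I1 x I2}
Y = rng.standard_normal((6, 4))       # Y in R^{I3 x I2}

A = X.T @ X
B = Y.T @ Y                           # positive definite when Y has full column rank
sigma2 = eigh(A, B, eigvals_only=True)
sigma = np.sqrt(np.clip(sigma2, 0, None))
print("Y-singular values of X:", np.sort(sigma))

# Consistency check of the determinant condition in (4) for one value:
s = sigma[0]
print(np.linalg.det(A - s**2 * B))    # ~0 up to round-off
```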
It was shown in [2] that given \(\mathbf{X}\in\mathbb{R}^{I_{1}\times I_{2}},\ I_{1}\geq I_{2}\) and \(\mathbf{Y}\in\mathbb{R}^{I_{3}\times I_{2}},\,I_{3}\geq I_{2}\), there exist orthogonal matrices \(\mathbf{U}\in\mathbb{R}^{I_{1}\times I_{1}},\,\mathbf{V}\in\mathbb{R}^{I_{3}\times I_{3}}\) and a nonsingular matrix \(\mathbf{Z}\in\mathbb{R}^{I_{2}\times I_{2}}\) such that \[\mathbf{U}^{T}\mathbf{X}\mathbf{Z} = \mathrm{diag}(\alpha_{1},\cdots,\alpha_{I_{2}}),\quad\alpha_{i}\in[0,1] \tag{6}\] \[\mathbf{V}^{T}\mathbf{Y}\mathbf{Z} = \mathrm{diag}(\beta_{1},\cdots,\beta_{I_{2}}),\quad\beta_{i}\in[0,1] \tag{7}\] where \(\alpha_{i}^{2}+\beta_{i}^{2}=1\), with the ratios \(\beta_{i}/\alpha_{i}\) in nondecreasing order for \(i=1,2,\ldots,I_{2}\). A more general SVD was proposed in [3], where a more computationally stable algorithm was also developed to compute it. In the following, the latter GSVD is introduced, which will be considered in our paper. **Theorem 1**.: [3] Let two matrices \(\mathbf{X}\in\mathbb{R}^{I_{1}\times I_{2}}\) and \(\mathbf{Y}\in\mathbb{R}^{I_{3}\times I_{2}}\) be given and assume that the SVD of the matrix \(\mathbf{C}=\begin{bmatrix}\mathbf{X}\\ \mathbf{Y}\end{bmatrix}\) is \[\mathbf{E}^{T}\mathbf{C}\mathbf{Z}=\begin{bmatrix}\mathbf{\Gamma}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{bmatrix}, \tag{8}\] with the unitary matrices \(\mathbf{E}\in\mathbb{R}^{(I_{1}+I_{3})\times(I_{1}+I_{3})},\,\mathbf{Z}\in\mathbb{R}^{I_{2}\times I_{2}}\) and a diagonal matrix \(\mathbf{\Gamma}\in\mathbb{R}^{k\times k}\). Then, there exist matrices \(\mathbf{U}\in\mathbb{R}^{I_{1}\times I_{1}},\,\mathbf{V}\in\mathbb{R}^{I_{3}\times I_{3}}\) and \(\mathbf{W}\in\mathbb{R}^{k\times k}\) such that \[\mathbf{U}^{T}\mathbf{X}\mathbf{Z}=\mathbf{\Sigma}_{\mathbf{X}}(\mathbf{W}^{T}\mathbf{\Gamma},\mathbf{0}),\quad\mathbf{V}^{T}\mathbf{Y}\mathbf{Z}=\mathbf{\Sigma}_{\mathbf{Y}}(\mathbf{W}^{T}\mathbf{\Gamma},\mathbf{0}), \tag{9}\] where \(\mathbf{\Sigma}_{\mathbf{X}}\in\mathbb{R}^{I_{1}\times k}\) and \(\mathbf{\Sigma}_{\mathbf{Y}}\in\mathbb{R}^{I_{3}\times k}\) are defined as follows: \[\mathbf{\Sigma}_{X}=\begin{bmatrix}\mathbf{I}_{\mathbf{X}}&&\\ &\mathbf{S}_{\mathbf{X}}&\\ &&\mathbf{0}_{\mathbf{X}}\end{bmatrix},\quad\mathbf{\Sigma}_{Y}=\begin{bmatrix}\mathbf{0}_{\mathbf{Y}}&&\\ &\mathbf{S}_{\mathbf{Y}}&\\ &&\mathbf{I}_{\mathbf{Y}}\end{bmatrix}. \tag{10}\] Here, \(\mathbf{I}_{\mathbf{X}}\in\mathbb{R}^{c\times c}\) and \(\mathbf{I}_{\mathbf{Y}}\in\mathbb{R}^{(k-c-d)\times(k-c-d)}\) are identity matrices, \(\mathbf{0}_{\mathbf{X}}\in\mathbb{R}^{(I_{1}-c-d)\times(k-c-d)}\) and \(\mathbf{0}_{\mathbf{Y}}\in\mathbb{R}^{(I_{3}-k+c)\times c}\) are zero matrices that may have no columns/rows, and \(\mathbf{S}_{\mathbf{X}}\in\mathbb{R}^{d\times d},\,\mathbf{S}_{\mathbf{Y}}\in\mathbb{R}^{d\times d}\) are diagonal matrices with diagonal elements \(1>\alpha_{c+1}\geq\cdots\geq\alpha_{c+d}>0\) and \(0<\beta_{c+1}\leq\cdots\leq\beta_{c+d}<1\), respectively, with \(\alpha_{i}^{2}+\beta_{i}^{2}=1\) for \(c+1\leq i\leq c+d\). Note that \(c\) and \(d\) are internally defined by the matrices \(\mathbf{X}\) and \(\mathbf{Y}\).
It is not difficult to check that (9) is reduced to \[\mathbf{U}^{T}\mathbf{X}\mathbf{R}^{-1}=(\mathbf{\Sigma}_{\mathbf{X}},\mathbf{0}),\quad\mathbf{V}^{T}\mathbf{Y}\mathbf{R}^{-1}=(\mathbf{\Sigma}_{\mathbf{Y}},\mathbf{0}), \tag{11}\] for \(\mathbf{R}^{-1}\) defined as follows \[\mathbf{R}^{-1}=\mathbf{Z}\begin{bmatrix}\mathbf{\Gamma}^{-1}\mathbf{W}&\mathbf{0}\\ \mathbf{0}&\mathbf{I}_{I_{2}-k}\end{bmatrix},\] and if the matrix \(\mathbf{C}\) is of full rank, then the zero blocks on the right-hand sides of (11) are removed. As we see, the first formulation (4) of the GSVD deals with two matrices \(\mathbf{X}\) and \(\mathbf{Y}\), and provides a decomposition of the form (11) with the same right-hand side matrix \(\mathbf{R}^{-1}\). There is a different GSVD corresponding to (5) given two positive definite matrices \(\mathbf{S}\) and \(\mathbf{T}\); see [2] for details. In this paper, we only consider the GSVD formulated in (4) and its extension to the tensor case based on the t-product.

The GSVD can be analogously extended to tensors based on the t-product [20; 21]. Let \(\underline{\mathbf{X}}\), \(\underline{\mathbf{Y}}\) be two given tensors with the same number of lateral slices. Then the _Generalized tensor SVD_ (GTSVD) decomposes the tensors \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}},\ I_{1}\geq I_{2},\ \underline{\mathbf{Y}}\in\mathbb{R}^{I_{4}\times I_{2}\times I_{3}},\ I_{4}\geq I_{2},\) jointly in the following form: \[\underline{\mathbf{X}} = \underline{\mathbf{U}}*\underline{\mathbf{C}}*\underline{\mathbf{Z}}, \tag{12}\] \[\underline{\mathbf{Y}} = \underline{\mathbf{V}}*\underline{\mathbf{S}}*\underline{\mathbf{Z}}, \tag{13}\] where \(\underline{\mathbf{U}}\in\mathbb{R}^{I_{1}\times I_{1}\times I_{3}},\)\(\underline{\mathbf{V}}\in\mathbb{R}^{I_{4}\times I_{4}\times I_{3}},\)\(\underline{\mathbf{C}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}},\)\(\underline{\mathbf{S}}\in\mathbb{R}^{I_{4}\times I_{2}\times I_{3}},\)\(\underline{\mathbf{Z}}\in\mathbb{R}^{I_{2}\times I_{2}\times I_{3}}\). Note that the tensors \(\underline{\mathbf{C}}\) and \(\underline{\mathbf{S}}\) are f-diagonal, the tensors \(\underline{\mathbf{U}}\) and \(\underline{\mathbf{V}}\) are orthogonal, and \(\underline{\mathbf{Z}}\) is nonsingular. The procedure of the computation of the GTSVD is presented in Algorithm 5. We need to apply the classical GSVD (lines 3-5) to the first \(\lceil\frac{I_{3}+1}{2}\rceil\) frontal slices of the tensors \(\underline{\mathbf{X}}\) and \(\underline{\mathbf{Y}}\) in the Fourier domain, and the rest of the slices are computed easily (Lines 6-12).

The computation of the GSVD or the GTSVD for large-scale matrices/tensors involves the computation of the SVD of some large matrices. So, it is computationally demanding and requires huge memory and resources. In recent years, the idea of randomization has been utilized to accelerate the computation of the GSVD. In [8], a randomized algorithm is proposed for the GSVD of the form (4), while [9] proposes a randomized algorithm for the GSVD of the form (5). The key idea is to employ the random projection method for fast computation of the SVD, which is required in the process of computing the GSVD.

## 4 Proposed fast randomized algorithms for computation of the GTSVD

In this section, we propose two randomized variants of the GTSVD Algorithm 5. The first proposed randomized algorithm is a naive modification of Algorithm 5 where we can replace the deterministic GSVD with the randomized counterpart developed in [8].
This idea is presented in Algorithm 6. Here, we can use the oversampling and the power iteration methods to improve the accuracy when the singular values of the frontal slices do not decay sufficiently fast [7].

```
Input : The data tensors \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) and \(\underline{\mathbf{Y}}\in\mathbb{R}^{I_{4}\times I_{2}\times I_{3}}\)
Output : The generalized t-SVD (GTSVD) of \(\underline{\mathbf{X}}\) and \(\underline{\mathbf{Y}}\) as \(\underline{\mathbf{X}}=\underline{\mathbf{U}}*\underline{\mathbf{C}}*\underline{\mathbf{Z}}\) and \(\underline{\mathbf{Y}}=\underline{\mathbf{V}}*\underline{\mathbf{S}}*\underline{\mathbf{Z}}\)
1 \(\widehat{\underline{\mathbf{X}}}=\mathrm{fft}\left(\underline{\mathbf{X}},[],3\right)\);
2 \(\widehat{\underline{\mathbf{Y}}}=\mathrm{fft}\left(\underline{\mathbf{Y}},[],3\right)\);
3 for \(i=1,2,\ldots,\lceil\frac{I_{3}+1}{2}\rceil\) do
4   \([\widehat{\underline{\mathbf{U}}}_{i},\,\widehat{\underline{\mathbf{V}}}_{i},\,\widehat{\underline{\mathbf{Z}}}_{i},\,\widehat{\underline{\mathbf{C}}}_{i},\,\widehat{\underline{\mathbf{S}}}_{i}]=\mathrm{GSVD}\left(\widehat{\underline{\mathbf{X}}}(:,:,i),\widehat{\underline{\mathbf{Y}}}(:,:,i)\right)\);
5 end for
6 for \(i=\lceil\frac{I_{3}+1}{2}\rceil+1,\ldots,I_{3}\) do
7   \(\widehat{\underline{\mathbf{U}}}_{i}=\mathrm{conj}(\widehat{\underline{\mathbf{U}}}_{I_{3}-i+2})\);
8   \(\widehat{\underline{\mathbf{V}}}_{i}=\mathrm{conj}(\widehat{\underline{\mathbf{V}}}_{I_{3}-i+2})\);
9   \(\widehat{\underline{\mathbf{Z}}}_{i}=\mathrm{conj}(\widehat{\underline{\mathbf{Z}}}_{I_{3}-i+2})\);
10  \(\widehat{\underline{\mathbf{C}}}_{i}=\widehat{\underline{\mathbf{C}}}_{I_{3}-i+2}\);
11  \(\widehat{\underline{\mathbf{S}}}_{i}=\widehat{\underline{\mathbf{S}}}_{I_{3}-i+2}\);
12 end for
13 \(\underline{\mathbf{U}}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{U}}},[],3\right)\); \(\underline{\mathbf{V}}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{V}}},[],3\right)\); \(\underline{\mathbf{Z}}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{Z}}},[],3\right)\); \(\underline{\mathbf{C}}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{C}}},[],3\right)\); \(\underline{\mathbf{S}}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{S}}},[],3\right)\);
```
**Algorithm 5** Generalized t-SVD (GTSVD) of \(\underline{\mathbf{X}}\) and \(\underline{\mathbf{Y}}\)

The second proposed randomized algorithm is presented in Algorithm 7, where we first make a reduction on the given data tensors by multiplying (t-product) them with random tensors to capture their important actions. Similar to Algorithm 6, if the singular values of the frontal slices of a given tensor do not decay fast, the power iteration technique and the oversampling method should be used to better capture their ranges. More precisely, the random projection stages in Algorithm 7 (Lines 1-2) are replaced with the following computations \[\underline{\mathbf{W}}_{1} =(\underline{\mathbf{X}}\ast\underline{\mathbf{X}}^{T})^{q}\ast\underline{\mathbf{X}}\ast\underline{\mathbf{\Omega}}_{1}, \tag{14}\] \[\underline{\mathbf{W}}_{2} =(\underline{\mathbf{Y}}\ast\underline{\mathbf{Y}}^{T})^{q}\ast\underline{\mathbf{Y}}\ast\underline{\mathbf{\Omega}}_{2}. \tag{15}\] In practice, for the computations (14)-(15) to be stable, we employ the t-QR decomposition or the t-LU decomposition or a combination of them [7; 25; 13]. Then, by applying the t-QR algorithm to the mentioned compressed tensors, we can obtain orthonormal bases for them, which are used to get small sketch tensors by projecting the original data tensors onto the compressed tensors.
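To make the reduction stage concrete, the following NumPy sketch forms the t-product of a data tensor with a random tensor, orthonormalizes the result by a slice-wise QR in the Fourier domain (a t-QR), and projects the data tensor onto the resulting basis. It is only an illustrative sketch: the helper names are ours, the power iteration of (14)-(15) is omitted, and it is not the MATLAB implementation used in the experiments of this paper.

```python
import numpy as np

def t_product(A, B):
    """t-product of A (m x p x n) and B (p x l x n): slice-wise products in the Fourier domain."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)
    return np.real(np.fft.ifft(Ch, axis=2))

def t_transpose(A):
    """Tensor transpose: transpose each frontal slice and reverse the order of slices 2..n."""
    return np.concatenate([A[:, :, :1], A[:, :, 1:][:, :, ::-1]], axis=2).transpose(1, 0, 2)

def t_qr_q(A):
    """Orthogonal factor Q of the (reduced) t-QR of A, computed slice-wise in the Fourier domain."""
    Ah = np.fft.fft(A, axis=2)
    Qh = np.zeros((A.shape[0], min(A.shape[0], A.shape[1]), A.shape[2]), dtype=complex)
    for i in range(A.shape[2]):
        Qh[:, :, i], _ = np.linalg.qr(Ah[:, :, i])
    return np.real(np.fft.ifft(Qh, axis=2))

rng = np.random.default_rng(0)
I1, I2, I3, r, p = 60, 40, 8, 5, 5
# A test tensor with tubal rank at most r.
X = t_product(rng.standard_normal((I1, r, I3)), rng.standard_normal((r, I2, I3)))

Omega = rng.standard_normal((I2, r + p, I3))   # random tensor of the reduction stage
W = t_product(X, Omega)                        # W = X * Omega
Q = t_qr_q(W)                                  # orthonormal basis for the range of W
sketch = t_product(t_transpose(Q), X)          # small sketch Q^T * X
err = np.linalg.norm(X - t_product(Q, sketch)) / np.linalg.norm(X)
print(f"relative error of the projection: {err:.2e}")
```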
Since the sizes of the sketch tensors are smaller than the original ones, the deterministic algorithms can be used to compute their GTSVDs. Finally, the GTSVD of the original data tensors can be recovered from the GTSVD of the compressed tensors. Note that Algorithms 6 and 7 need the tubal rank as input; however, this can be numerically estimated for a given approximation error bound. For example, we can use the randomized fixed-precision algorithm developed in [26], and for the case of tensors, the randomized rank-revealing algorithm proposed in [27; 28; 25] is applicable.

## 5 Error Analysis

We provide the average/expected error bounds of the approximations obtained by the proposed randomized algorithms in this section. Let us first partition the GSVD in (11), where \(\mathbf{Z}=\mathbf{R}^{-1}\), in the following form \[\mathbf{X}=\mathbf{U}\begin{bmatrix}\mathbf{\Sigma}_{\mathbf{X}_{1}}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{\Sigma}_{\mathbf{X}_{2}}&\mathbf{0}\end{bmatrix}\begin{bmatrix}\mathbf{Z}_{1}\\ \mathbf{Z}_{2}\\ \mathbf{Z}_{3}\end{bmatrix},\quad\mathbf{Y}=\mathbf{V}\begin{bmatrix}\mathbf{\Sigma}_{\mathbf{Y}_{1}}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{\Sigma}_{\mathbf{Y}_{2}}&\mathbf{0}\end{bmatrix}\begin{bmatrix}\widehat{\mathbf{Z}}_{1}\\ \widehat{\mathbf{Z}}_{2}\\ \mathbf{Z}_{3}\end{bmatrix} \tag{16}\] where \(\mathbf{\Sigma}_{\mathbf{X}_{1}}\in\mathbb{R}^{r_{1}\times r_{1}},\,\mathbf{\Sigma}_{\mathbf{X}_{2}}\in\mathbb{R}^{(I_{1}-r_{1})\times(k-r_{1})},\,\mathbf{\Sigma}_{\mathbf{Y}_{1}}\in\mathbb{R}^{(I_{3}-r_{2})\times(k-r_{2})},\,\mathbf{\Sigma}_{\mathbf{Y}_{2}}\in\mathbb{R}^{r_{2}\times r_{2}},\,\mathbf{Z}_{1}\in\mathbb{R}^{r_{1}\times I_{2}},\,\mathbf{Z}_{2}\in\mathbb{R}^{(k-r_{1})\times I_{2}},\,\widehat{\mathbf{Z}}_{1}\in\mathbb{R}^{(k-r_{2})\times I_{2}},\,\widehat{\mathbf{Z}}_{2}\in\mathbb{R}^{r_{2}\times I_{2}}\) and \(\mathbf{Z}_{3}\in\mathbb{R}^{(I_{2}-k)\times I_{2}}\). Also, consider the standard Gaussian matrices \(\mathbf{\Phi}\in\mathbb{R}^{I_{2}\times(r_{1}+p_{1})}\) and \(\mathbf{\Psi}\in\mathbb{R}^{I_{2}\times(r_{2}+p_{2})}\) for the given target ranks \(r_{1},\,r_{2}\) and the oversampling parameters \(p_{1},\,p_{2}\). We start with the following theorem, which gives the average error bound of an approximation yielded by the random projection method for the computation of the GSVD [8].
```
Input : The data tensors \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) and \(\underline{\mathbf{Y}}\in\mathbb{R}^{I_{4}\times I_{2}\times I_{3}}\), standard Gaussian random matrices with oversampling parameters \(p_{1}\) and \(p_{2}\)
Output : The generalized t-SVD (GTSVD) of \(\underline{\mathbf{X}}\) and \(\underline{\mathbf{Y}}\) as \(\underline{\mathbf{X}}=\underline{\mathbf{U}}*\underline{\mathbf{C}}*\underline{\mathbf{Z}}\) and \(\underline{\mathbf{Y}}=\underline{\mathbf{V}}*\underline{\mathbf{S}}*\underline{\mathbf{Z}}\)
1 \(\widehat{\underline{\mathbf{X}}}=\mathrm{fft}\left(\underline{\mathbf{X}},[],3\right)\);
2 \(\widehat{\underline{\mathbf{Y}}}=\mathrm{fft}\left(\underline{\mathbf{Y}},[],3\right)\);
3 for \(i=1,2,\ldots,\lceil\frac{I_{3}+1}{2}\rceil\) do
4   \([\widehat{\underline{\mathbf{U}}}_{i},\,\widehat{\underline{\mathbf{V}}}_{i},\,\widehat{\underline{\mathbf{Z}}}_{i},\,\widehat{\underline{\mathbf{C}}}_{i},\,\widehat{\underline{\mathbf{S}}}_{i}]=\mathrm{Randomized\,GSVD}\left(\widehat{\underline{\mathbf{X}}}(:,:,i),\widehat{\underline{\mathbf{Y}}}(:,:,i)\right)\);
5 end for
6 for \(i=\lceil\frac{I_{3}+1}{2}\rceil+1,\ldots,I_{3}\) do
7   \(\widehat{\underline{\mathbf{U}}}_{i}=\mathrm{conj}(\widehat{\underline{\mathbf{U}}}_{I_{3}-i+2})\);
8   \(\widehat{\underline{\mathbf{V}}}_{i}=\mathrm{conj}(\widehat{\underline{\mathbf{V}}}_{I_{3}-i+2})\);
9   \(\widehat{\underline{\mathbf{Z}}}_{i}=\mathrm{conj}(\widehat{\underline{\mathbf{Z}}}_{I_{3}-i+2})\);
10  \(\widehat{\underline{\mathbf{C}}}_{i}=\widehat{\underline{\mathbf{C}}}_{I_{3}-i+2}\);
11  \(\widehat{\underline{\mathbf{S}}}_{i}=\widehat{\underline{\mathbf{S}}}_{I_{3}-i+2}\);
12 end for
13 \(\underline{\mathbf{U}}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{U}}},[],3\right)\); \(\underline{\mathbf{V}}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{V}}},[],3\right)\); \(\underline{\mathbf{Z}}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{Z}}},[],3\right)\); \(\underline{\mathbf{C}}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{C}}},[],3\right)\); \(\underline{\mathbf{S}}=\mathrm{ifft}\left(\widehat{\underline{\mathbf{S}}},[],3\right)\);
```
**Algorithm 6** The proposed randomized GTSVD I

**Theorem 2**.: [8] Given two matrices \(\mathbf{X}\in\mathbb{R}^{I_{1}\times I_{2}},\,\mathbf{Y}\in\mathbb{R}^{I_{3}\times I_{2}}\), consider the target ranks \(r_{1},\;r_{2}\) and the oversampling parameters \(p_{1},\;p_{2}\) (\(r_{1}+p_{1}\leq\min(I_{1},I_{2}),\;r_{2}+p_{2}\leq\min(I_{3},I_{2})\)).
Assume that two standard random matrices \(\mathbf{\Phi}\in\mathbb{R}^{I_{2}\times(r_{1}+p_{1})}\) and \(\mathbf{\Psi}\in\mathbb{R}^{I_{2}\times(r_{2}+p_{2})}\) are used to capture the ranges of the matrices \(\mathbf{X},\,\mathbf{Y}\) using \(\mathbf{M}=\mathbf{X}\Phi,\,\mathbf{N}=\mathbf{Y}\Psi\); then \[\mathbb{E}(\|(\mathbf{I}-\mathbf{M}\mathbf{M}^{\dagger})\mathbf{X}\|)\leq\eta_{1}^{\mathbf{X}}\alpha_{r_{1}+1}+\eta_{2}^{\mathbf{X}}\sqrt{\sum_{j>r_{1}}\alpha_{j}^{2}}, \tag{17}\] \[\mathbb{E}(\|(\mathbf{I}-\mathbf{N}\mathbf{N}^{\dagger})\mathbf{Y}\|)\leq\eta_{1}^{\mathbf{Y}}\beta_{k-r_{2}}+\eta_{2}^{\mathbf{Y}}\sqrt{\sum_{j\leq k-r_{2}}\beta_{j}^{2}}, \tag{18}\] where \[\eta_{1}^{\mathbf{X}}=\|\mathbf{Z}\|\left(1+\frac{\sigma_{1}(\mathbf{Z}_{2})}{\sigma_{r_{1}}(\mathbf{Z}_{1})}\right)+\sqrt{\frac{r_{1}}{p_{1}-1}\sum_{j=1}^{r_{1}}\frac{\sigma_{1}^{2}(\mathbf{Z}_{2})}{\sigma_{j}^{2}(\mathbf{Z}_{1})}},\;\eta_{2}^{\mathbf{X}}=\|\mathbf{Z}\|\frac{\|\mathbf{Z}_{2}\|_{F}}{\sigma_{r_{1}}(\mathbf{Z}_{1})}\frac{e\sqrt{r_{1}+p_{1}}}{p_{1}}, \tag{19}\] \[\eta_{1}^{\mathbf{Y}}=\|\mathbf{Z}\|\left(1+\frac{\sigma_{1}(\widehat{\mathbf{Z}}_{1})}{\sigma_{r_{2}}(\widehat{\mathbf{Z}}_{2})}\right)+\sqrt{\frac{r_{2}}{p_{2}-1}\sum_{j=1}^{r_{2}}\frac{\sigma_{1}^{2}(\widehat{\mathbf{Z}}_{1})}{\sigma_{j}^{2}(\widehat{\mathbf{Z}}_{2})}},\;\eta_{2}^{\mathbf{Y}}=\|\mathbf{Z}\|\frac{\|\widehat{\mathbf{Z}}_{1}\|_{F}}{\sigma_{r_{2}}(\widehat{\mathbf{Z}}_{2})}\frac{e\sqrt{r_{2}+p_{2}}}{p_{2}}. \tag{20}\]

The average error bounds of the approximations obtained by Algorithms 6 and 7 are provided in Theorem 3.

**Theorem 3**.: Let \(\underline{\mathbf{X}}\in\mathbb{R}^{I_{1}\times I_{2}\times I_{3}}\) and \(\underline{\mathbf{Y}}\in\mathbb{R}^{I_{4}\times I_{2}\times I_{3}}\) be given data tensors. Assume we use the standard random tensors \(\underline{\mathbf{\Omega}}_{1}\in\mathbb{R}^{I_{2}\times(r_{1}+p_{1})\times I_{3}}\), \(\underline{\mathbf{\Omega}}_{2}\in\mathbb{R}^{I_{2}\times(r_{2}+p_{2})\times I_{3}}\) for the reduction stage and let their compressed tensors be \[\underline{\mathbf{W}}_{1}=\underline{\mathbf{X}}\ast\underline{\mathbf{\Omega}}_{1},\quad\underline{\mathbf{W}}_{2}=\underline{\mathbf{Y}}\ast\underline{\mathbf{\Omega}}_{2},\] where \(r_{1},\,r_{2}\) are the target tubal ranks and \(p_{1},\,p_{2}\) are the oversampling parameters.
Then, the GTSVDs computed by Algorithms 6-7 provide the solutions with the following accuracies \[\mathbb{E}(\|(\mathbf{I}-\underline{\mathbf{W}}_{1}\ast\underline{\mathbf{W}}_{1}^{\dagger})\ast\underline{\mathbf{X}}\|)\leq\left(\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\eta_{1}^{\widehat{\mathbf{X}}^{(i)}}\alpha_{r_{1}+1}^{i}+\eta_{2}^{\widehat{\mathbf{X}}^{(i)}}\sqrt{\sum_{j>r_{1}}(\alpha_{j}^{i})^{2}}\right)^{1/2}, \tag{21}\] \[\mathbb{E}(\|(\mathbf{I}-\underline{\mathbf{W}}_{2}\ast\underline{\mathbf{W}}_{2}^{\dagger})\ast\underline{\mathbf{Y}}\|)\leq\left(\frac{1}{I_{3}}\sum_{i=1}^{I_{3}}\eta_{1}^{\widehat{\mathbf{Y}}^{(i)}}\beta_{k-r_{2}}^{i}+\eta_{2}^{\widehat{\mathbf{Y}}^{(i)}}\sqrt{\sum_{j\leq k-r_{2}}(\beta_{j}^{i})^{2}}\right)^{1/2}, \tag{22}\] where, according to (16), the GSVDs of the frontal slices \(\widehat{\mathbf{X}}^{(i)}=\widehat{\mathbf{X}}(:,:,i)\) and \(\widehat{\mathbf{Y}}^{(i)}=\widehat{\mathbf{Y}}(:,:,i),\,\,i=1,2,\ldots,I_{3}\) are partitioned as \[\widehat{\mathbf{X}}^{(i)}=\mathbf{U}^{(i)}\begin{bmatrix}\mathbf{\Sigma}_{\mathbf{X}_{1}^{(i)}}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{\Sigma}_{\mathbf{X}_{2}^{(i)}}&\mathbf{0}\end{bmatrix}\begin{bmatrix}\mathbf{Z}_{1}^{(i)}\\ \mathbf{Z}_{2}^{(i)}\\ \mathbf{Z}_{3}^{(i)}\end{bmatrix},\,\,\widehat{\mathbf{Y}}^{(i)}=\mathbf{V}^{(i)}\begin{bmatrix}\mathbf{\Sigma}_{\mathbf{Y}_{1}^{(i)}}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{\Sigma}_{\mathbf{Y}_{2}^{(i)}}&\mathbf{0}\end{bmatrix}\begin{bmatrix}\widehat{\mathbf{Z}}_{1}^{(i)}\\ \widehat{\mathbf{Z}}_{2}^{(i)}\\ \mathbf{Z}_{3}^{(i)}\end{bmatrix}, \tag{23}\] where \(\widehat{\mathbf{X}}=\mathrm{fft}\,(\underline{\mathbf{X}},[],3)\,,\,\,\widehat{\mathbf{Y}}=\mathrm{fft}\,(\underline{\mathbf{Y}},[],3)\). Here, the quantities \(\eta_{1}^{\widehat{\mathbf{X}}^{(i)}},\,\eta_{2}^{\widehat{\mathbf{X}}^{(i)}}\) and \(\eta_{1}^{\widehat{\mathbf{Y}}^{(i)}},\,\eta_{2}^{\widehat{\mathbf{Y}}^{(i)}}\) are defined analogously, based on the matrices \(\mathbf{Z}^{(i)},\,\mathbf{Z}_{1}^{(i)},\,\mathbf{Z}_{2}^{(i)},\,\widehat{\mathbf{Z}}_{1}^{(i)}\) and \(\widehat{\mathbf{Z}}_{2}^{(i)}\) (replacing \(\mathbf{Z},\,\mathbf{Z}_{1},\,\mathbf{Z}_{2},\,\widehat{\mathbf{Z}}_{1},\,\widehat{\mathbf{Z}}_{2}\) in (19)-(20)). Also, \(\alpha_{j}^{i},\,\beta_{j}^{i},\)\(i=1,2,\ldots,I_{3},\,j=1,2,\ldots,I_{2}\) are the elements of the diagonal middle matrices obtained from the GSVD of the pairs \((\widehat{\mathbf{X}}^{(i)},\widehat{\mathbf{Y}}^{(i)})\).

Proof.: We prove the theorem only for (21); part (22) can be proved similarly. Considering the linearity of the expectation operator and the relation between the Frobenius norm of a tensor and the Frobenius norms of its frontal slices in the Fourier domain, we have \[\mathbb{E}\left(\|\underline{\mathbf{X}}-\underline{\mathbf{W}}_{1}\ast\underline{\mathbf{W}}_{1}^{\dagger}\ast\underline{\mathbf{X}}\|^{2}\right)\leq\] \[\frac{1}{I_{3}}\left(\sum_{i=1}^{I_{3}}\mathbb{E}\,\|\widehat{\mathbf{X}}^{(i)}-\widehat{\mathbf{W}}_{1}^{(i)}\widehat{\mathbf{W}}_{1}^{(i)\,\dagger}\widehat{\mathbf{X}}^{(i)}\|^{2}\right), \tag{24}\] where \(\widehat{\mathbf{X}}^{(i)}=\widehat{\mathbf{X}}(:,:,i)\) and \(\widehat{\mathbf{W}}_{1}^{(i)}=\widehat{\mathbf{W}}_{1}(:,:,i)\).
Now, if we use Theorem 2 to bound each term of (24) and use HΓΆlder's inequality \[\mathbb{E}(\,\|\underline{\mathbf{X}}-\underline{\mathbf{W}}_{1}*\underline{\mathbf{W}}_{1}^{\dagger}*\underline{\mathbf{X}}\|)\leq\left(\mathbb{E}(\,\|\underline{\mathbf{X}}-\underline{\mathbf{W}}_{1}*\underline{\mathbf{W}}_{1}^{\dagger}*\underline{\mathbf{X}}\|^{2})\right)^{1/2},\] the proof is concluded.

## 6 Experimental Results

In this section, we conduct several simulations to show the efficiency of the proposed algorithms and their superiority over the baseline algorithm. We have used Matlab and some functions of the toolbox [https://github.com/canyilu/Tensor-tensor-product-toolbox](https://github.com/canyilu/Tensor-tensor-product-toolbox) to implement the proposed algorithms on a laptop computer with a 2.60 GHz Intel(R) Core(TM) i7-5600U processor and 8 GB of memory. The algorithms are compared in terms of running time and the relative error defined as follows \[\frac{\|\underline{\mathbf{X}}-\underline{\mathbf{U}}*\underline{\mathbf{C}}*\underline{\mathbf{Z}}\|_{F}+\|\underline{\mathbf{Y}}-\underline{\mathbf{V}}*\underline{\mathbf{S}}*\underline{\mathbf{Z}}\|_{F}}{\|\underline{\mathbf{X}}\|_{F}+\|\underline{\mathbf{Y}}\|_{F}}.\]

**Example 1**.: Let us generate random data tensors \(\underline{\mathbf{X}}\) and \(\underline{\mathbf{Y}}\) with zero mean and unit variance of size \(n\times n\times n\) with the tubal rank 50, where \(n=200,300,400,500\).

Figure 1: Example 1. Running time comparison of different algorithms.

Then, the baseline GTSVD (Algorithm 5) and the two proposed randomized GTSVD algorithms (Algorithms 6 and 7) are applied to the mentioned data tensors. We set the oversampling parameter to \(50\) in both Algorithms 6 and 7. The running times of the algorithms are shown in Figure 1 and the corresponding relative errors achieved by them are reported in Table 1. From Figure 1, for \(n=300,400,500\) we achieve \(\times 55\), \(\times 37.9\), and \(\times 11.01\) speed-ups, respectively. So, in all scenarios, we have more than one order of magnitude acceleration. Also, from Table 1, we see that the difference between the relative errors of the algorithms is negligible. So, we can provide satisfactory results in much less time than the baseline Algorithm 5. This shows the superiority of the proposed algorithms compared to the baseline method for handling large-scale data tensors.

Figure 3: Example 1. Relative error comparison of the algorithms.

Figure 2: Example 1. Running time comparison of different algorithms.

## 7 Conclusion and future works

In this paper, we proposed two fast randomized algorithms to compute the generalized t-SVD (GTSVD) of tensors based on the tubal product (t-product). Given two third-order tensors, the random projection technique is first used to compute two small tensor sketches that capture the most important ranges of the given tensors. The GTSVD of the original data tensors is then recovered from the GTSVD of the small tensor sketches, which are easier to analyze. The computer simulations that were conducted confirm the feasibility and applicability of the proposed randomized algorithms. The error analysis of the proposed algorithms with the power iteration still needs to be investigated, and this will be our future research. We also plan to develop randomized algorithms for the computation of the GTSVD that are applicable to streaming data tensors, which arise in real-world applications.
The generalization of the proposed algorithm to higher order tensors is our ongoing research work.
2307.05648
Image-Processing Based Methods to Improve the Robustness of Robotic Gripping
Image processing techniques have huge impact on most fields of robotics and industrial automation. Real time methods are usually employed in complex automation tasks, assisting with decision making or directly guiding robots and machinery, while post-processing is usually used for retrospective assessment of systems and processes. While artificial intelligence based image processing algorithms (usually neural networks) are more common nowadays, classical methods can also be used effectively even in modern applications. This paper focuses on optical flow based image processing, proving its efficiency by presenting optical flow based solutions for modern challenges in different fields of robotics such as robotic surgery and food industry automation. The main subject of the paper is a smart robotic gripper designed for automated robot cells in the meat industry, that is capable of slip detection and secure gripping of soft, slippery tissues with the help of the implemented optical flow based algorithm.
KristΓ³f TakΓ‘cs, RenΓ‘ta NagynΓ© Elek, TamΓ‘s Haidegger
2023-07-11T13:01:55Z
http://arxiv.org/abs/2307.05648v1
# Image-Processing Based Methods to Improve the Robustness of Robotic Gripping

###### Abstract

Image processing techniques have huge impact on most fields of robotics and industrial automation. Real time methods are usually employed in complex automation tasks, assisting with decision making or directly guiding robots and machinery, while post-processing is usually used for retrospective assessment of systems and processes. While artificial intelligence based image processing algorithms (usually neural networks) are more common nowadays, "classical" methods can also be used effectively even in modern applications. This paper focuses on optical flow based image processing, proving its efficiency by presenting optical flow based solutions for modern challenges in different fields of robotics such as robotic surgery and food industry automation. The main subject of the paper is a smart robotic gripper designed for automated robot cells in the meat industry, that is capable of slip detection and secure gripping of soft, slippery tissues with the help of the implemented optical flow based algorithm.

Meat industry automation, Image processing, Optical flow, Slip detection

## I Background

### _Meat industry automation_

Demand for automation in the red meat sector is steadily increasing due to growing consumer demands, which was also enhanced by the global COVID-19 pandemic [1, 2, 3]. Currently, advanced abattoirs are generally semi-automated plants, meaning that different machines - and sometimes robot arms - are working together with human operators to accomplish the different meat processing steps [4]. In semi-automated plants, typically the simple, repetitive and/or force-demanding tasks are handled by machines (e.g., cutting the carcass in half, placing final meat products in boxes), and the more complex steps are carried out by operators (e.g., separating meat from bones, cutting close to the intestines). The difficulty of the full automation of a meat processing plant mainly derives from the natural diversity of animals, the extremely high risk of contamination (either from the intestines or from the skin surface) and the strict and inflexible regulation of the field [5].

The break-through towards completely automated solutions in the food industry started with intelligent robotic tools that were first employed in the agriculture sector [6, 7]. These typically embedded systems are usually capable of sensing their environment using a number of different sensors, identifying and cutting/grasping/examining their target without any human intervention - or at most with remote supervision. The most regular sensors in these devices are different kinds of cameras (RGB-D, infrared, multi-spectral etc.), usually combined with image processing deep neural networks (DNNs); however, classical image processing techniques - such as the Hough transformation or optical flow - can be used efficiently too.

To reduce barriers to automation, the RoBUTCHER concept was born, offering the so-called meat factory cell (MFC) [8, 9]. Inside the MFC the primary slaughter steps are executed by two high-payload industrial robots with custom-developed smart end-of-arm-tooling (EOAT), supported by a motorized Carcass Handling Unit (CHU). The most important fields of research during the project are AI, cognitive systems and sustainability analysis (life cycle assessment) for this new slaughter concept [1].
The first two provide the necessary inputs for the robots to interact with the carcass during cutting, grasping and lifting tasks, employing concepts, methods and strategies from other domains, such as image-guided surgery [10]. Grasping and gripping are the most common tasks in industrial and service robotic applications, where physical interaction between the robot and its environment is required. Current challenges of robotic gripping include undefined target-object shapes and positions, delicate or highly elastic target materials, slippery or soft surfaces, and constantly changing or undefined environments [7]. These listed difficulties regularly occur in robotic surgery and agricultural robotics applications too, thus similar methods might be effective in both fields.

### _Optical flow_

Optical flow is a computer vision concept describing the pattern of apparent motion of objects in a visual scene caused by the relative motion between the observer and the scene [11]. Optical flow is a fundamental algorithm for video analysis, especially in movement detection. It estimates motion between two consecutive video frames by calculating the pixel displacements [12], i.e., it captures the motion in the visual field based on pixel intensities. The fundamental assumption in optical flow is that the intensity of the pixels does not change during the motion, which holds in general but is violated when the lighting changes. This brightness constancy assumption is the basis of optical flow: \[I(x,y,t)=I(x+dx,y+dy,t+dt), \tag{1}\] where x and y are the pixel coordinates, dx and dy are the pixel displacements, t is the time and dt is the elapsed time. From the Taylor expansion of this basic equation, the optical flow constraint equation can be derived: \[I_{x}v_{x}+I_{y}v_{y}+I_{t}=0, \tag{2}\] where \(I_{x}\), \(I_{y}\) and \(I_{t}\) are the derivatives of the image function \(I(x,y,t)\) with respect to x, y and t; the vector \(V=\left(v_{x},v_{y}\right)\) defines the velocity in the x and y directions. This equation is called the optical flow equation. Nevertheless, beyond the brightness constancy assumption, the optical flow equation is still under-determined; thus, the different approaches use further restrictions, e.g., that neighboring pixels move similarly.

There are two main categories of optical flow techniques: dense and sparse methods. Dense optical flow algorithms calculate optical flow for all pixels, while sparse techniques calculate the flow only for selected pixels (special features such as corners or edges). The Lucas-Kanade method is commonly used to calculate optical flow for such special features only. The main idea of this method is based on a local motion constancy assumption, where nearby pixels have the same displacement direction. Dense optical flow is generally more accurate, but naturally it needs more computational capacity [13]. For this work, the chosen technique was a dense optical flow method, the Farneback optical flow. The Farneback method has high accuracy, and it is useful for examining all of the pixels in the image. The Farneback algorithm is a two-frame optical flow calculation technique that uses polynomial expansion, where a polynomial approximates the neighborhood of the image pixels. Quadratic polynomials give the local signal model represented in a local coordinate system [14].
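As a concrete illustration of the Farneback method, the short OpenCV snippet below computes dense optical flow between two consecutive grayscale frames; the video path and the parameter values are typical placeholders rather than settings taken from the works discussed in this paper.

```python
import cv2

# Two consecutive frames from a video source (the path is a placeholder).
cap = cv2.VideoCapture("input_video.mp4")
ok1, frame1 = cap.read()
ok2, frame2 = cap.read()
prev_gray = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
gray = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

# Dense (Farneback) optical flow: one (vx, vy) displacement vector per pixel.
# Positional arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Magnitude/angle representation, convenient for visualization or thresholding.
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean pixel displacement:", float(mag.mean()))
```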
## II Optical flow in modern robotics

### _Robotic surgery applications_

The optical flow method is widely used in different applications where image processing is useful or required. One of its most promising applications is in the field of surgical robotics, where the digital video feed of the endoscope (or stereo-endoscope) is always accessible. Although the video of a surgical procedure can be processed in real-time too, the vast majority of publications use recorded videos or previously published datasets (e.g., the JIGSAWS - JHU-ISI Gesture and Skill Assessment Working Set [15]). Employing robots, machines and modern digital technologies in the surgical rooms has had a huge impact not only on the outcome of the surgeries, but also on surgical skill assessment and the analysis of the surgeons' work [16, 17]. Although artificial intelligence based methods (e.g., Deep Neural Networks) are the most common in automated video processing, classic image processing approaches and optical flow based methods can be used with good results too.

Sarikaya et al. in [18] propose an optical flow based method for surgical gesture recognition, which is usually done by analysing the kinematic data of the surgical robot. The described method relies exclusively on the observable motion in surgical videos, i.e., dense optical flow data, and shows that this can be a robust alternative to kinematic data analysis. The accuracy of their Optical flow ConvNet system on the JIGSAWS dataset was between 74% and 92% for regular surgical gestures. Liu et al. in [19] present a real-time utilization of optical flow calculation on a surgical video feed. In their work the authors prove the effectiveness of an advanced optical flow based algorithm for tracking colonoscopy procedures by comparing real-life videos with videos from virtual and modeled environments. The paper shows the robustness of the technique against fluid and illumination artifacts, blurry images and structural changes too. Furthermore, the merged use of sparse and dense optical flow fields also improves the performance by allowing the computation of the focus of expansion.

Other research shows that real-time optical flow calculation can be a powerful tool in therapy too. Zachiu et al. proposed an optical flow based tracking solution to improve MR-guided beam therapies, described in [20]. Their work focuses on solving one of the biggest challenges of any beam therapy: the tracking of organ movements during the procedure. Zachiu et al. use optical flow to determine and separate movements within the target region and in the vicinity of the target region in real time, e.g., movements originating from the pulsing of different arteries within the target organ or close to it. Their experiments show that improving and optimizing the existing optical flow methods for given tasks can be highly beneficial.

In the case of Minimally Invasive Surgery (MIS) and Robot-Assisted Minimally Invasive Surgery (RAMIS), the surgeon reaches the internal organs through small skin incisions, and the operating area is visualized by an endoscopic camera. The endoscopic camera provides mainly two-dimensional image streams for MIS and three-dimensional streams for RAMIS. Endoscopic images are the only sensory data in the case of MIS and an important source of sensory data in the case of RAMIS, besides kinematic data. Since images are available and easily recordable for both MIS and RAMIS, computer vision plays a huge role in surgical data science and automation.
Optical flow usage in MIS and RAMIS can be found in several studies, such as [21], where semantic surgical tool segmentation in endoscopic images was proposed, which can be an important step towards pose estimation, task automation and skill assessment in MIS operations. Here, an efficient ground truth labelling method was proposed with the help of the optical flow algorithm for the JIGSAWS dataset [22], which is the most studied dataset for surgical skill assessment. The ground truth dataset and source codes are publicly available on Github ([https://github.com/dorapapp96/SurgToolSegJIGSAWS.git](https://github.com/dorapapp96/SurgToolSegJIGSAWS.git)). Objective skill assessment based personal performance feedback is a crucial part of surgical training; however, it is not yet part of clinical practice. In [13], the potential usage of optical flow as an input for RAMIS skill assessment was shown, including the maximum accuracy achievable with this image-based data, by evaluating it with an established skill assessment benchmark and by evaluating its methods independently, which outperformed the state of the art (Fig. 1).

Fig. 1: Surgical tasks in JIGSAWS: Knot-Tying, Needle-Passing and Suturing. **(a-c)** Region of Interest manual selection, **(d-f)** the initial samples computed by the optical flow algorithm. The third and fourth rows show one tracked point's trajectory from each surgical tool (blue = left and orange = right). In **(g-i)** the data is from a novice user, while in **(j-l)** it is from an expert subject.

### _Optical flow based slip detection_

Within the field of the food industry, the automation of meat processing plants also has the potential to benefit from optical flow based methods. The most significant challenges in meat-industry automation - similarly to most fields of the food and agriculture industry - are the handling of the high variance coming from natural biodiversity, and the detection and management of diseased and abnormal animals. Any machinery and robot system designed for meat industry automation must be able to autonomously deal with this outstandingly high variance of mechanical characteristics (size, weight, elasticity, surface properties etc.) of the "target objects" (animal carcasses, parts of animals, pieces of meat, organs etc.).

One solution is to make the devices tolerant to high variability "by design". This means that the mechanical design of the device is capable of fulfilling its tasks without any sensors or "intelligence", regardless of the uncertain and/or unknown characteristics and coordinates of the target object (as long as they stay within reasonable limits). Fig. 2(b) shows an example meat industry gripper that is capable of gripping target objects between 2-6 cm up to a displacement of 4 cm thanks to the encircling motion of the gripping fingers.

The more advanced solution for improving robustness is the use of sensors and built-in intelligence. In this case, the most important feature of the tools is not the hardware design, but the intelligent components and the software behind them. Regarding the place of the decision making, there are two main options in food industry automation [24]. A large group of robots working in bounded environments usually forms a centralized system, where a central compute module makes the high-level decisions, and communicates with and controls all robots and devices. Creating so-called embedded systems is the other solution, where the robots or devices process all the sensory data internally and make all the low- and high-level calculations and decisions by themselves. In the RoBUTCHER project - described in Section I-A - these two methods are mixed. The robot system is fundamentally centralized: a central computer collects most of the sensor data, runs the Virtual Reality module and the AI modules, and controls the robots. On the other hand, the smart end-of-arm tools (gripper and knife) work as embedded systems with their own dedicated communication lines and/or integrated compute modules.
Fig. 2: (a) Shows the CAD model of the smart gripper used in the RoBUTCHER project. (b) Shows its closing motion as an example of "robustness by design": the encircling-centering motion tolerates high variance of target shape, size and displacement [23].

One of the robotic grippers in the RoBUTCHER project is a smart gripper with integrated microcontroller, sensors and a Raspberry Pi Zero; the mechanical design is shown in Fig. 2(a) [23]. Its mechanism provides encircling gripping motion, fail-safe grasping and high clamping force by design; however, the integrated sensors and built-in intelligence are the features that make the gripper unique and capable of accomplishing the complex tasks required for entirely automated meat processing. One of the main smart features of the gripper is the optical flow based slip detection. Detecting slippage is a crucial feature of any automation development where gripping is required. This is especially true in the food industry, since the target objects are usually food products that suffer contamination in case of slipping and cannot be sold anymore. In addition - compared to other fields of industry - slippage is rather frequent in food industry automation because of the slippery, wet, soft surfaces and the undefined, varying target shapes and dimensions.

The hardware side of the slippage detection system in the smart gripper consists of an endoscopic camera (640x480 pixel, 30 fps, 2 cm focus, 70\({}^{\circ}\) view angle, built-in LEDs) and a Raspberry Pi Zero W. The camera is placed at the front of the gripper behind a protective lens, as shown in Fig. 3. Because of the synchronized encircling motion of the fingers, the target object is always centered when a stable grasp is reached; thus, it is tightly pressed against the protective lens in front of the camera. This design ensures that the surface of the target object is exactly in the focal point of the endoscopic camera, providing a clear view of the target. The illumination is provided by the built-in LEDs of the endoscopic camera.

The optical flow software is written in Python, relying on OpenCV, and uses the Farneback optical flow algorithm. Fig. 4 shows typical frames from the endoscopic camera; however, they are already cropped and their resolution is reduced for runtime optimization. The Farneback algorithm produces a 3D optical flow matrix with the calculated direction of movement of the pixels (visualised by green vectors in Fig. 4). The image can be divided into two main regions: the RoI (Region of Interest) is the circle in which the target object can be seen, while the region outside of the circle is the housing of the camera, which never moves; thus, the optical flow of those pixels is - hypothetically - always zero.

Fig. 4: Visualization of the optical flow on the images of the built-in camera with green vectors. The black housing of the camera is static on the video feed (hence the green dots), while the grasped meat product is moving in different directions.
(a) shows real slippage, i.e., target movement in the Y direction, while (b) shows the "rotating" movement of the grasped target, which - in this case - requires no intervention.

Fig. 3: Frontal view of the gripper in an open state. The endoscopic camera is placed inside the black "passive finger"; the synchronized encircling motion of the active fingers guides and presses the target object to the lens in front of the camera. The lens is needed to keep the gripper waterproof and to maintain the 2 cm focal length between the camera and the surface of the target object.

Although the target objects of the gripper are soft and easily deformable pieces of tissue, it is hypothesized that the small piece of surface in the RoI (about 2 cm in diameter) that the camera can see does not suffer significant deformation; thus, the camera effectively sees a rigid body. This assumption means that the points inside the RoI always have the same velocity; therefore, for further analysis the average of the optical flow field is used, representing the actual velocity of the target object. As described in [23], the gripper was designed to be used for various gripping tasks within automated pig slaughtering, which is supported by variable clamping force (i.e., force controlled grasping). In the ideal case, the target object is grasped tightly and remains at relative rest throughout the whole process; thus, the estimated velocity remains zero. However, when the object is slipping, the 'y' component of the estimated velocity increases (see Fig. 4 (a)). In this case, the gripper can either react internally by increasing its own clamping force, or can simply send an error message to the robot cell (and/or to the supervising operator) that the target might have slipped out and the process should be stopped. If movement along the 'x' axis is detected, it only means that the object is rotating within the grasp of the gripper and no intervention is required.

## III Discussion

This paper presented various scenarios where classical image processing techniques - such as optical flow calculation - can be used effectively. Although AI based image processing has become rather popular in recent years, hard computing methods are still viable and sometimes even more effective. Optical flow is a good example of such methods; several different algorithms are implemented in the most commonly used image processing libraries. In this paper, several examples were given from modern fields of robotics where optical flow based algorithms have great potential. In robotic surgery, optical flow is usually used on recorded videos and datasets for surgical skill assessment; however, real-time applications for tracking purposes were also identified. In contrast, the food industry applications of optical flow algorithms are usually real-time, since optical flow is a powerful tool for the fast detection of slippage, which is a frequent problem in automated grasping.
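To make the slip-detection pipeline described above concrete, the following simplified Python/OpenCV sketch averages the Farneback flow inside a circular region of interest and thresholds its vertical component; the camera index, frame handling and threshold value are illustrative assumptions rather than the parameters used on the actual gripper.

```python
import cv2
import numpy as np

SLIP_THRESHOLD = 2.0            # mean vertical flow (px/frame) treated as slippage; illustrative value
cap = cv2.VideoCapture(0)       # endoscopic camera (device index is an assumption)

ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Circular region of interest: the part of the image showing the grasped surface.
h, w = prev_gray.shape
yy, xx = np.ogrid[:h, :w]
roi = (xx - w // 2) ** 2 + (yy - h // 2) ** 2 <= (min(h, w) // 3) ** 2

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense Farneback flow between the previous and the current frame.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    prev_gray = gray

    # Treat the visible patch as approximately rigid: average the flow inside the RoI.
    mean_vx = float(flow[..., 0][roi].mean())
    mean_vy = float(flow[..., 1][roi].mean())

    if abs(mean_vy) > SLIP_THRESHOLD:
        # Vertical motion of the grasped surface: possible slippage.
        print("Slip detected: increase clamping force or raise an error")
    # Motion along the x axis alone corresponds to rotation within the grasp: no action needed.
```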
2305.13653
RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search
Text-based person search aims to retrieve the specified person images given a textual description. The key to tackling such a challenging task is to learn powerful multi-modal representations. Towards this, we propose a Relation and Sensitivity aware representation learning method (RaSa), including two novel tasks: Relation-Aware learning (RA) and Sensitivity-Aware learning (SA). For one thing, existing methods cluster representations of all positive pairs without distinction and overlook the noise problem caused by the weak positive pairs where the text and the paired image have noise correspondences, thus leading to overfitting learning. RA offsets the overfitting risk by introducing a novel positive relation detection task (i.e., learning to distinguish strong and weak positive pairs). For another thing, learning invariant representation under data augmentation (i.e., being insensitive to some transformations) is a general practice for improving representation's robustness in existing methods. Beyond that, we encourage the representation to perceive the sensitive transformation by SA (i.e., learning to detect the replaced words), thus promoting the representation's robustness. Experiments demonstrate that RaSa outperforms existing state-of-the-art methods by 6.94%, 4.45% and 15.35% in terms of Rank@1 on CUHK-PEDES, ICFG-PEDES and RSTPReid datasets, respectively. Code is available at: https://github.com/Flame-Chasers/RaSa.
Yang Bai, Min Cao, Daming Gao, Ziqiang Cao, Chen Chen, Zhenfeng Fan, Liqiang Nie, Min Zhang
2023-05-23T03:53:57Z
http://arxiv.org/abs/2305.13653v1
# RaSa: Relation and Sensitivity Aware Representation Learning ###### Abstract Text-based person search aims to retrieve the specified person images given a textual description. The key to tackling such a challenging task is to learn powerful multi-modal representations. Towards this, we propose a Relation and Sensitivity aware representation learning method (RaSa), including two novel tasks: Relation-Aware learning (RA) and Sensitivity-Aware learning (SA). For one thing, existing methods cluster representations of all positive pairs without distinction and overlook the noise problem caused by the weak positive pairs where the text and the paired image have noise correspondences, thus leading to overfitting learning. RA offsets the overfitting risk by introducing a novel positive relation detection task (_i.e._, learning to distinguish strong and weak positive pairs). For another thing, learning invariant representation under data augmentation (_i.e._, being insensitive to some transformations) is a general practice for improving representation's robustness in existing methods. Beyond that, we encourage the representation to perceive the sensitive transformation by SA (_i.e._, learning to detect the replaced words), thus promoting the representation's robustness. Experiments demonstrate that RaSa outperforms existing state-of-the-art methods by **6.94%**, **4.45%** and **15.35%** in terms of Rank@1 on CUHK-PEDES, ICFG-PEDES and RSTPReid datasets, respectively. Code is available at: [https://github.com/Flame-Chasers/RaSa](https://github.com/Flame-Chasers/RaSa). ## 1 Introduction Text-based person search [11, 16] aims at retrieving the person images in a large-scale person image pool given a query of textual description about that person. This task is related to person re-identification [15, 17] and text-image retrieval [14], which have been very active research topics in recent years. It, however, exhibits unique characteristics and challenges. Compared to person re-identification with image queries, text-based person search with more accessible open-form text queries provides a more user-friendly searching procedure while embracing greater challenges due to the cross-modal search. In addition, compared to general image-text retrieval, text-based person search focuses on cross-modal retrieval specific for the person with more fine-grained details, tending to larger intra-class variance as well as smaller inter-class variance, which toughly bottlenecks the retrieval performance. Targeting learning powerful feature representation and achieving cross-modal alignment for text-based person search, researchers have developed a batch of technologies over the past few years [21, 16]. It has been proved that the model armed with reasonable tasks tends to learn better representation. In this paper, we propose a representation learning method, namely RaSa, with two novel tasks: relation-aware learning and sensitivity-aware learning for text-based person search. Figure 1: Illustration of (a) two types of positive relations for relation-aware learning, where the noise interference in the weak positive pairs is highlighted in red, (b) replaced token detection for sensitivity-aware learning, in which word replacement is used as the sensitive transformation and the replaced words are marked in bold. 
**Relation-aware learning.** In existing methods [14, 15], the _de facto_ optimization objective is to bring image and text representations of the same identity (_i.e._, positive pairs) together and repel representations of different identities (_i.e._, negative pairs) away. However, it tends to encounter the following issue. Normally, a textual description is generated by annotating a particular single image in the text-based person search dataset. The text strongly matches the annotated image without a doubt, whereas it is not always well-aligned to other positive images of the same person at the semantic level due to intra-class variation in the image. As shown in Figure 1 (a), the images and texts depict the same person, leading to a positive relation for each image-text pair. However, there exist two different types of positive relations. _text\({}_{1}\)_(resp. text\({}_{2}\))_ is the exact description of _image\({}_{1}\)_(resp. image\({}_{2}\))_, where they are completely matched and form a strong positive pair. Nevertheless, _image\({}_{1}\)_ and _text\({}_{2}\)_(resp. image\({}_{2}\)_ and _text\({}_{1}\)_) constitute a weak positive pair with the noise interference. For instance, "white t-shirt" and "blue shorts" in _text\({}_{1}\)_ correspond to non-existent objects in _image\({}_{2}\)_ due to the occlusion. Existing methods endow the strong and weak positive pairs with equal weight in learning representations, regardless of the noise problem from the weak pairs, eventually leading to overfitting learning. In order to mitigate the impacts of the noise interference from weak positive pairs, we propose a Relation-Aware learning (RA) task, which is composed of a probabilistic Image-Text Matching (\(p\)-ITM) task and a Positive Relation Detection (PRD) task. \(p\)-ITM is a variant of the commonly-used ITM, aiming to distinguish negative and positive pairs with a probabilistic strong or weak positive inputting, while PRD is designed to explicitly makes a distinction between the strong and weak positive pairs. Therein, \(p\)-ITM emphasizes the consistency between strong and weak positive pairs, whereas PRD highlights their difference and can be regarded as the regularization of \(p\)-ITM. The model armed with RA can not only learn valuable information from weak positive pairs by \(p\)-ITM but also alleviate noise interference from them by PRD, eventually reaching a trade-off. **Sensitivity-aware learning.** Learning invariant representations under a set of manually chosen transformations (also called _insensitive_ transformations in this context) is a general practice for improving the robustness of representation in the existing methods [11, 12]. We recognize it but there is more. Inspired by the recent success of equivariant contrastive learning [15], we explore the _sensitive_ transformation that would hurt performance when applied to learn transformation-invariant representations. Rather than keeping invariance under insensitive transformation, we encourage the learned representations to have the ability to be aware of the sensitive transformation. Towards this end, we propose a Sensitivity-Aware learning (SA) task. We adopt the word replacement as the sensitive transformation and develop a Momentum-based Replaced Token Detection (\(m\)-RTD) pretext task to detect whether a token comes from the original textual description or the replacement, as shown in Figure 1 (b). 
The closer the replaced word is to the original one (_i.e._, more confusing word), the more difficult this detection task is. When the model is trained to well solve such a detection task, it is expected to have the ability to learn better representation. With these in mind, we use Masked Language Modeling (MLM) to perform the word replacement, which utilizes the image and the text contextual tokens to predict the masked tokens. Furthermore, considering that the momentum model, a slow-moving average of the online model, can learn more stable representations than the current online model [12] to generate more confusing words, we employ MLM from the momentum model to carry out the word replacement. Overall, MLM and \(m\)-RTD together form a Sensitivity-Aware learning (SA), which offers powerful surrogate supervision for representation learning. Our contributions can be summarized as follows: * We differentiate between strong and weak positive image-text pairs in learning representation and propose a relation-aware learning task. * We pioneer the idea of learning representation under the sensitive transformation to the text-based person search and develop a sensitivity-aware learning task. * Extensive experiments demonstrate RaSa outperforms existing state-of-the-art methods by \(6.94\)%, \(4.45\)% and \(15.35\)% in terms of Rank@1 metric on CUHK-PEDES, ICFG-PEDES and RSTPReid datasets, respectively. ## 2 Related Work **Text-based Person Search** Li _et al._ [17] first introduce the text-based person search task and publish a challenging dataset CUHK-PEDES. Following this, a series of methods are proposed to solve this task. Part of methods [16, 15] focus on designing a reasonable cross-modal alignment strategy, while others [16, 15] concentrate on learning powerful feature representation. For cross-modal alignment, it begins with global alignment [16] or local correspondences (_e.g._, patch-word or region-phrase correspondences) [17, 18], and evolves into self-adaptively learning semantic alignment across different granularity [11, 15, 16]. Beyond that, some works [19, 16] utilize external technologies (_e.g._, human segmentation, pose estimation or attributes prediction) to assist with the cross-modal alignment. For representation learning, Wu _et al._ [14] propose two color-related tasks based on the observation that color plays a key role in text-based person search. Zeng _et al._ [14] develop three auxiliary reasoning tasks with gender classification, appearance similarity and image-to-text generation. Ding _et al._ [14] firstly notice the noise interference from weak positive pairs and propose to keep the difference between strong and weak positive pairs by manually assigning different margins in the triplet loss. More recently, some works [14, 15, 16] resort to vision-language pretraining models to learn better representations. In this paper, we design two novel tasks: RA and SA. RA detects the type of the positive pair to weaken noise from weak positive pairs, differently from the method [11] with the sophisticated trick. SA focuses on representation learning by detecting sensitive transformation, which is under-explored in the previous methods. **Equivariant Contrastive Learning** Different from contrastive learning [12] that aims to learn transformation-insensitive representations, equivariant contrastive learning [11] is recently proposed by additionally encouraging the learned representations to have the ability to be aware of sensitive transformations. 
Mathematically, the notions of insensitivity and sensitivity can be inductively summarized as: \(f(T(x))=T^{\prime}(f(x))\) where \(T\) denotes a group of transformations of an input instance \(x\), and \(f\) is an encoder to compute the representation of \(x\). When \(T^{\prime}\) is the identity transformation, it can be said that \(f\) is trained to be insensitive to \(T\); otherwise, \(f\) is sensitive to \(T\). Equivariant contrastive learning has shown its successful application in the fields of computer vision (CV) [1] and natural language processing (NLP) [10], which inspires us to explore sensitive transformations for learning high-quality representations in the cross-modal retrieval task. In this paper, we develop a sensitivity-aware learning with MLM-based word replacement as the sensitive transformation to encourage the model to perceive the replaced words, thus obtaining more informative and discriminative representations. ## 3 Method In this section, we take ALBEF [14] as the backbone1 and elaborate on the proposed method RaSa by introducing the modal architecture in Section 3.1 and the optimization objectives involving the proposed RA and SA tasks in Section 3.2. Footnote 1: Experiments on more backbones are shown in Appendix A.4. ### Model Architecture As illustrated in Figure 2, the proposed RaSa consists of two unimodal encoders and a cross-modal encoder. We adopt \(12\)-layer and \(6\)-layer transformer blocks for the image and text encoders, respectively. The cross-modal encoder comprises \(6\)-layer transformer blocks, where a cross-attention module is added after the self-attention module in each block. Considering that the textual description usually covers a part of the information in the corresponding image, we employ a text-guided asymmetric cross-attention module in the cross-modal encoder, _i.e._, using the textual representation as query and the visual one as key and value. Simultaneously, we maintain a momentum version of the online model via Exponential Moving Average (EMA). Specifically, EMA is formulated as \(\hat{\theta}=m\hat{\theta}+(1-m)\theta\), where \(\hat{\theta}\) and \(\theta\) are the parameters of the momentum and online models, respectively, and \(m\in[0,1]\) is a momentum coefficient. The momentum model presents a delayed and more stable version of the online model and is used to guide the online model to learn better representations. Given an image-text pair \((I,T)\), we first feed the image \(I\) into the image encoder to obtain a sequence of visual representations \(\{v_{cls},v_{1},\cdots,v_{M}\}\) with \(v_{cls}\) being the global visual representation and \(v_{i}\)\((i=1,\cdots,M)\) being the patch representation. Similarly, we obtain a sequence of textual representations \(\{t_{cls},t_{1},\cdots,t_{N}\}\) by feeding the text \(T\) into the text encoder, where \(t_{cls}\) is the global textual representation and \(t_{i}\)\((i=1,\cdots,N)\) is the token representation. The visual and textual representations are then fed to the cross-modal encoder to obtain a sequence of multi-modal representations \(\{f_{cls},f_{1},\cdots,f_{N}\}\), where \(f_{cls}\) denotes the joint representation of \(I\) and \(T\), and \(f_{i}\)\((i=1,\cdots,N)\) can be regarded as the joint representation of the image \(I\) and the \(i\)-th token in the text \(T\). Simultaneously, the momentum model is employed to obtain a sequence of momentum representations. Figure 2: Model architecture of RaSa. 
It consists of an image encoder, a text encoder and a cross-modal encoder. An intra- and cross-modal CL task is attached after the unimodal encoders for unimodal representation learning. RA and SA tasks are tied after the cross-modal encoder for multi-modal representation learning. The momentum model (a slow-moving average of the online model) is used to guide the online model to learn better representations. ### Optimization Objectives #### Relation-aware Learning The vanilla, widely-used ITM predicts whether an input image-text pair is positive or negative, and is defined as: \[L_{itm}=\mathbb{E}_{p(I,T)}\mathcal{H}(y^{itm},\phi^{itm}(I,T)), \tag{1}\] where \(\mathcal{H}\) represents a cross-entropy function, \(y^{itm}\) is a \(2\)-dimensional one-hot vector representing the ground-truth label (_i.e._, \([0,1]^{\top}\) for the positive pair, and \([1,0]^{\top}\) for the negative pair), and \(\phi^{itm}(I,T)\) is the predicted matching probability of the pair, which is computed by feeding \(f_{cls}\) into a binary classifier, a fully-connected layer followed by a softmax function. However, it is unreasonable to directly adopt the vanilla ITM in text-based person search. On the one hand, there exists noise interference from weak positive pairs, which would hamper the representation learning. On the other hand, the weak positive pairs contain certain valuable alignment information that can facilitate representation learning. As a result, to reach a balance, we retain a proportion of weak positive pairs in ITM by introducing the probabilistic inputting. Specifically, we input the weak positive pair with a small probability of \(p^{w}\) and the strong positive pair with a probability of \(1-p^{w}\). To distinguish it from the vanilla ITM, we denote the proposed probabilistic ITM as \(p\)-ITM. Furthermore, we continue to alleviate the noise effect of the weak pairs. We propose a Positive Relation Detection (PRD) pretext task to detect the type of the positive pair (_i.e._, strong or weak), which is formulated as: \[L_{prd}=\mathbb{E}_{p(I,T^{p})}\mathcal{H}(y^{prd},\phi^{prd}(I,T^{p})), \tag{2}\] where \((I,T^{p})\) denotes a positive pair, \(y^{prd}\) is the ground-truth label (_i.e._, \([1,0]^{\top}\) for the strong positive pair and \([0,1]^{\top}\) for the weak pair), and \(\phi^{prd}(I,T^{p})\) is the predicted probability of the pair, which is computed by appending a binary classifier to the joint representation \(f_{cls}\) of the pair. Taken together, we define the Relation-Aware learning (RA) task as: \[L_{ra}=L_{p\text{-}itm}+\lambda_{1}L_{prd}, \tag{3}\] where the weight \(\lambda_{1}\) is a hyper-parameter. During optimization, \(p\)-ITM focuses on the consistency between strong and weak positive pairs, while PRD highlights their difference. In essence, PRD plays the role of a regularizing compensation for \(p\)-ITM. As a whole, RA achieves a trade-off between the benefits of the weak pair and the risk of its side effects. #### Sensitivity-aware Learning Learning invariant representations under _insensitive_ transformations of data is a common way to enhance the robustness of the learned representations. We go beyond this and propose to learn representations that are aware of the _sensitive_ transformation. Specifically, we adopt the MLM-based word replacement as the sensitive transformation and propose a Momentum-based Replaced Token Detection (\(m\)-RTD) pretext task to detect (_i.e._, be aware of) the replacement. 
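Before detailing the SA losses, the RA objective of Eqs. (1)-(3) can be summarized with a minimal sketch. The snippet below is illustrative only: `model` stands for the cross-modal encoder returning \(f_{cls}\), and `itm_head` / `prd_head` are the two binary classifiers; the names and shapes are assumptions rather than the actual implementation.

```python
import random

import torch
import torch.nn.functional as F


def ra_loss(model, itm_head, prd_head, image, strong_text, weak_text, neg_text,
            p_w=0.1, lambda1=0.5):
    """Sketch of Relation-Aware learning: p-ITM (Eq. 1 with probabilistic
    inputting) plus PRD (Eq. 2), combined as in Eq. 3."""
    # p-ITM: feed a weak positive pair with a small probability p_w,
    # otherwise feed the strong positive pair.
    pos_text = weak_text if random.random() < p_w else strong_text
    pos_logits = itm_head(model(image, pos_text))   # (B, 2) matching logits
    neg_logits = itm_head(model(image, neg_text))   # (B, 2) for negative pairs
    itm_logits = torch.cat([pos_logits, neg_logits], dim=0)
    itm_labels = torch.cat([torch.ones(pos_logits.size(0)),     # 1 = matched pair
                            torch.zeros(neg_logits.size(0))]).long()
    loss_itm = F.cross_entropy(itm_logits, itm_labels.to(itm_logits.device))

    # PRD: detect whether a positive pair is strong or weak, reusing f_cls.
    strong_logits = prd_head(model(image, strong_text))
    weak_logits = prd_head(model(image, weak_text))
    prd_logits = torch.cat([strong_logits, weak_logits], dim=0)
    prd_labels = torch.cat([torch.zeros(strong_logits.size(0)),  # 0 = strong positive
                            torch.ones(weak_logits.size(0))]).long()
    loss_prd = F.cross_entropy(prd_logits, prd_labels.to(prd_logits.device))

    return loss_itm + lambda1 * loss_prd            # L_ra = L_p-itm + lambda1 * L_prd
```

Note that PRD reuses the same joint representation \(f_{cls}\) as \(p\)-ITM, so it only adds a lightweight binary classifier on top of the existing architecture.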
Given a strong positive pair \((I,T^{s})\), MLM loss is formulated as: \[L_{mlm}=\mathbb{E}_{p(I,T^{msk})}\mathcal{H}(y^{mlm},\phi^{mlm}(I,T^{msk})), \tag{4}\] where \(T^{msk}\) is a masked text in which each token in the input text \(T^{s}\) is randomly masked with a probability of \(p^{m}\), \(y^{mlm}\) is a one-hot vector denoting the ground truth of the masked token and \(\phi^{mlm}(I,T^{msk})\) is the predicted probability for the masked token based on the information of the contextual text \(T^{msk}\) and the paired image \(I\). We use the result of MLM from the momentum model as the word replacement, denoted as \(T^{rep}\). The momentum model is a slow-moving of the online model and can learn more stable representations. Therefore, the momentum model is expected to generate more confusing tokens. As \(m\)-RTD detects such challenging tokens well, the model is motivated to learn more informative representations to distinguish the tiny differences. Remarkably, besides serving as a generator for the word replacement, MLM also plays a role of token-level optimization, promoting fine-grained representation learning. Next, \(m\)-RTD performs a detection of the MLM-based token replacement. Specifically, the pair \((I,T^{rep})\) is inputted to the model to obtain a sequence of multi-modal representations \(\{f_{cls},f_{1},...,f_{N}\}\), and a binary classifier works on \(\{f_{1},...,f_{N}\}\) to predict whether the \(i\)-th token is replaced or not. \(m\)-RTD minimizes a cross-entropy loss: \[L_{m\text{-}rtd}=\mathbb{E}_{p(I,T^{rep})}\mathcal{H}(y^{m\text{-}rtd},\phi^{ m\text{-}rtd}(I,T^{rep})), \tag{5}\] where \(y^{m\text{-}rtd}\) is a one-hot vector denoting the ground truth of the replaced token and \(\phi^{m\text{-}rtd}(I,T^{rep})\) is the predicted replacement probability. We illustrate the pipeline of \(m\)-RTD in Figure 3 for clarity. Overall, Sensitivity-Aware learning (SA) loss is defined as: \[L_{sa}=L_{mlm}+\lambda_{2}L_{m\text{-}rtd}, \tag{6}\] where the weight \(\lambda_{2}\) is a hyper-parameter. In conclusion, RA works on the global representation \(f_{cls}\) and mainly focuses on the correlation between the image and text, which can be regarded as a coarse-grained optimization. As a complement, SA acts on the token representations \(\{f_{1},...,f_{N}\}\) and pays more attention to the interaction between the image and textual tokens, exhibiting a fine-grained optimization. The two complementary tasks effectively facilitate representation learning. Figure 3: Illustration of \(m\)-RTD. It aims to detect whether a token is from the original textual description or the replacement with the aid of the information of the contextual tokens and the paired image. The text with word replacement is obtained by the result of the Masked Language Modeling (MLM) from the momentum model. ### Contrastive Learning The proposed RA and SA are directly applied on the multi-modal representations from the cross-modal encoder. Furthermore, we introduce an intermediate Contrastive Learning task (CL) on the representations from the unimodal encoders, so as to make the subsequent cross-modal fusion easier to perform multi-modal representation learning. Given an image-text pair \((I,T)\), we feed it into the unimodal encoders and obtain the global visual and textual representations \(v_{cls}\) and \(t_{cls}\). Then a linear layer is applied to project them to lower-dimensional representations \(v^{\prime}_{cls}\) and \(t^{\prime}_{cls}\). 
Meanwhile, we obtain the output of momentum unimodal encoders, denoted as \(\hat{v}^{\prime}_{cls}\) and \(\hat{t}^{\prime}_{cls}\). We maintain an image queue \(\hat{Q}_{v}\) and a text queue \(\hat{Q}_{t}\) to store the recent \(R\) projected representations \(\hat{v}^{\prime}_{cls}\) and \(\hat{t}^{\prime}_{cls}\), similarly to MoCo [14]. The introduction of the queues implicitly enlarges the batch size, and a larger batch will provide more negative samples, thereby facilitating representation learning. In CL, the general form of InfoNCE loss is formulated as: \[L_{nce}(x,x_{+},Q)=-\mathbb{E}_{p(x,x_{+})}[\log\frac{\exp(s(x,x_{+})/\tau)}{ \sum\limits_{x_{i}\in Q}\exp(s(x,x_{i})/\tau)}], \tag{7}\] where \(\tau\) is a learnable temperature parameter, \(Q\) denotes a maintained queue, and \(s(x,x_{+})=x^{\mathrm{T}}x_{+}/\|x\|\|x_{+}\|\) measures the cosine similarity between \(x\) and \(x_{+}\). Beyond the widely-used cross-modal image-text contrastive learning (ITC) [13, 1], denoted as: \[L_{itc}=[L_{nce}(v^{\prime}_{cls},\hat{v}^{\prime}_{cls},\hat{Q}_{t})+L_{nce}( t^{\prime}_{cls},\hat{v}^{\prime}_{cls},\hat{Q}_{v})]\ /\ 2, \tag{8}\] we additionally explore the intra-modal contrastive learning (IMC). The representations of the same person are supposed to stay closer than those of different persons within each modality. IMC loss is formulated as: \[L_{imc}=[L_{nce}(v^{\prime}_{cls},\hat{v}^{\prime}_{cls},\hat{Q}_{v})+L_{nce}( t^{\prime}_{cls},\hat{t}^{\prime}_{cls},\hat{Q}_{t})]\ /\ 2. \tag{9}\] Taken together, we define the overall loss for CL as: \[L_{cl}=(L_{itc}+L_{imc})\ /\ 2. \tag{10}\] ### Joint Learning Overall, we formulate the joint optimization objective as: \[L=L_{ra}+L_{sa}+\lambda_{3}L_{cl}, \tag{11}\] where \(\lambda_{3}\) is a hyper-parameter. During inference, given a query text and a large-scale image pool, we use the predicted matching probability from \(p\)-ITM to rank all images. Considering the inefficiency of the cross-modal encoder with quadratic interaction operation, we refer to ALBEF [13] and exclude a large number of irrelevant image candidates prior to the cross-modal encoder, thereby speeding up the inference. Specifically, we first calculate each pair's similarity \(s(t_{cls},v_{cls})\) via the unimodal encoders, and then select the first \(128\) images with the highest similarities to send them to the cross-modal encoder and compute the \(p\)-ITM matching probabilities for ranking. ## 4 Experiments We conduct experiments on three text-based person search datasets: CUHK-PEDES [13], ICFG-PEDES [12] and RSTPReid [21]. _The introduction of each dataset and the implementation details of the proposed method are shown in Appendix A.1 and A.2, respectively._ ### Evaluation Protocol We adopt the widely-used Rank@K (R@K for short, K=\(1,5,10\)) metric to evaluate the performance of the proposed method. Specifically, given a query text, we rank all the test images via the similarity with the text and the search is deemed to be successful if top-K images contain any corresponding identity. R@K is the percentage of successful searches. We also adopt the mean average precision (mAP) as a complementary metric. ### Backbones Most text-based person search methods [13, 20] rely on two feature extractors pre-trained on unaligned images and texts separately, such as ResNet [14] or ViT [16] for the visual extractor, Bi-LSTM [15] or BERT [17] for the textual extractor. 
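As a brief illustration of the two-stage inference described earlier in this section (unimodal similarity ranking followed by \(p\)-ITM re-ranking of the top \(128\) candidates), the following sketch uses assumed encoder and head names; it is schematic rather than the actual retrieval code.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def retrieve(query_text, image_pool, text_encoder, image_encoder,
             cross_encoder, itm_head, k=128):
    """Two-stage retrieval sketch: unimodal similarity shortlist, then p-ITM re-ranking."""
    # Stage 1: rank all images by the cosine similarity of unimodal [CLS] features.
    t = F.normalize(text_encoder(query_text), dim=-1)      # (1, d)
    v = F.normalize(image_encoder(image_pool), dim=-1)     # (P, d)
    sims = (v @ t.t()).squeeze(-1)                         # (P,)
    top_idx = sims.topk(k=min(k, sims.numel())).indices    # keep the top-k candidates

    # Stage 2: re-rank the shortlist with the cross-modal matching probability.
    scores = []
    for i in top_idx.tolist():
        f_cls = cross_encoder(image_pool[i:i + 1], query_text)        # joint representation
        scores.append(itm_head(f_cls).softmax(dim=-1)[:, 1].item())   # P(matched)
    order = torch.tensor(scores).argsort(descending=True)
    return top_idx[order]                                  # best-matching images first
```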
Recently, some works [21, 22] have applied vision-language pretraining (VLP) to text-based person search and obtained impressive results. Following this, we adopt VLP models as the backbone. The proposed RaSa can be plugged into various backbones. To adequately verify the effectiveness, we conduct RaSa on three VLP models: ALBEF [13], TCL [23] and CLIP [18]. We use ALBEF as the backbone by default in the following experiments, which is pre-trained on \(14\)M image-text pairs and adopts ITC and \begin{table} \begin{tabular}{l|l|c c c c} \hline \hline & Method & R@1 & R@5 & R@10 & mAP \\ \hline \multirow{7}{*}{CUHK-PEDES} & GNA-RNN [13] & 19.05 & - & 53.64 & - \\ & Dual Path [21] & 44.40 & 66.26 & 75.07 & - \\ & CMPM/C [21] & 49.37 & 71.69 & 79.27 & - \\ & ViTAA [22] & 55.97 & 75.84 & 83.52 & - \\ & DSSL [21] & 59.98 & 80.41 & 87.56 & - \\ & MGEL [21] & 60.27 & 80.01 & 86.74 & - \\ & ACSA [13] & 63.56 & 81.40 & 87.70 & - \\ & SAF [13] & 64.13 & 82.62 & 88.40 & 58.61 \\ & TIPCB [14] & 64.26 & 83.19 & 89.10 & - \\ & CAIBC [21] & 64.43 & 82.87 & 88.37 & - \\ & \(\mathrm{C_{2}A_{2}}\) [21] & 64.82 & 83.54 & 89.77 & - \\ & LGUR [21] & 65.25 & 83.12 & 89.00 & - \\ \hline \hline \multirow{7}{*}{CUHK-PEDES} & PSLD [21] & 64.08 & 81.73 & 88.19 & 60.08 \\ & IVT [22] & 65.59 & 83.11 & 89.21 & - \\ \cline{1-1} & CFine [21] & 69.57 & 85.93 & 91.15 & - \\ \hline \multirow{7}{*}{CUHK-PEDES} & ALBEF(backbone) [13] & 60.28 & 79.52 & 86.34 & 56.67 \\ \cline{1-1} & **RaSa (Ours)** & **76.51** & **90.29** & **94.25** & **69.38** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with other methods on CUHK-PEDES. VLP denotes vision-language pretraining. For a fair comparison, all reported results come from the methods without re-ranking. ITM tasks for image-text retrieval. _The details and experiments on TCL and CLIP are shown in Appendix A.4._ ### Comparison with State-of-the-art Methods We compare the proposed RaSa with the existing text-based person search methods on CUHK-PEDES, ICFG-PEDES and RSTPReid, as shown in Table 1, 2 and 3, respectively. RaSa achieves the highest performance in terms of all metrics, outperforming existing state-of-the-art methods by a large margin. Specifically, compared with the current best-performing method CFine [22], RaSa gains a significant R@1 improvement of \(6.94\)%, \(4.45\)% and \(15.35\)% on the three datasets, respectively. The comparison clearly demonstrates the effectiveness of RaSa in text-based person search. ### Ablation Study We analyze the effectiveness and contribution of each optimization objective in RaSa by conducting a series of ablation experiments on CUHK-PEDES, as shown in Table 4. **Effectiveness of Optimization Objectives** RaSa consists of three optimization objectives. CL provides an explicit alignment before the cross-modal fusion. RA implements the deep fusion by the cross-modal encoder with an alleviation of noise interference. And SA encourages the learned representations to be sensitive to the MLM-based token replacement. We can see from Table 4, (1) RaSa with a single CL achieves a modest performance of \(61.35\)% and \(59.44\)% in terms of R@1 and mAP, respectively. On account of the modality gap between the image and text and the fine-grained intra-class variation, CL contributes a coarse alignment with a lack of deep interaction across modalities, which is not enough to handle such a challenging retrieval task. 
(2) When adding RA(\(p\)-ITM + PRD), the performance has a remarkable improvement of \(12.85\)% at R@1 and \(8.67\)% at mAP, effectively demonstrating that deep cross-modal fusion with RA is extraordinarily significant to text-based person search. And (3) with the aid of SA(MLM + \(m\)-RTD), RaSa achieves the best performance of \(76.51\)% at R@1 and \(69.38\)% at mAP. SA utilizes the visual information and the contextual token information of the corresponding text to detect whether a token has been replaced or not. In order to handle such a challenging detection task, the learned representations are encouraged to be powerful enough to distinguish the tiny difference between the original token and the replaced one. **Analysis of RA** RA contains \(p\)-ITM and PRD, where the former focuses on the consistency between the strong and weak positive pairs, while the latter highlights their difference, serving as a regularization of \(p\)-ITM. The vanilla ITM learns from all positive pairs without the probabilistic inputting. However, there exists too much noise interference from weak positive pairs. Intuitively, we can discard all weak positives to get rid of the noise. \(s\)-ITM only uses the strong positive pairs and gains a boost of \(2.23\)% at R@1 compared to the vanilla ITM. Nevertheless, such a straightforward way ignores the weak supervision from the weak positives which is also beneficial to representation learning. To reach a trade-off between the benefits of the weak supervision and the risk of side effects, \(p\)-ITM resorts to the probabilistic inputting and retains a small proportion of the weak positives. Compared with the vanilla ITM and \(s\)-ITM, \(p\)-ITM achieves an intermediate performance. Not surprisingly at all, the more noise there exists, the more it affects the retrieval \begin{table} \begin{tabular}{l|l|l l l l} \hline \hline Module & Setting & R@1 & R@5 & R@10 & mAP \\ \hline CL & ITC + IMC & 61.35 & 80.44 & 86.91 & 59.44 \\ \hline \multirow{4}{*}{+RA} & ITM & 71.29 & 86.70 & 91.46 & 67.82 \\ & \(s\)-ITM & 73.52 & 88.71 & 92.98 & 66.74 \\ & \(p\)-ITM & 72.58 & 87.98 & 92.51 & 68.29 \\ & ITM + PRD & 73.03 & 87.75 & 92.45 & 68.45 \\ & \(p\)-ITM + PRD & 74.20 & 89.02 & 92.95 & 68.11 \\ \hline \multirow{4}{*}{+SA} & MLM & 74.81 & 89.85 & 93.66 & 68.32 \\ & MLM + \(f\)-RTD & 75.13 & 89.93 & 93.47 & 69.17 \\ \cline{1-1} & MLM + \(o\)-RTD & 75.99 & 90.21 & 94.09 & 69.35 \\ \cline{1-1} & MLM + \(m\)-RTD & 76.51 & 90.29 & 94.25 & 69.38 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of RaSa with different settings on CUHK-PEDES. TIM learns from all positive pairs without a probabilistic inputting. \(s\)-ITM learns from only strong positive pairs and discards all weak positive pairs. \(p\)-ITM uses a probabilistic inputting of strong and weak positive pairs. \(f\)-RTD adopts DistilBERT [22] as a fixed generator to produce the replaced tokens. \(o\)-RTD uses the online model as the generator, while \(m\)-RTD is based on the momentum model. 
\begin{table} \begin{tabular}{l|l|l l l l} \hline \hline & Method & R@1 & R@5 & R@10 & mAP \\ \hline \multirow{4}{*}{+RA} & Dual Path [23] & 38.99 & 59.44 & 68.41 & - \\ & CMPM/C [23] & 43.51 & 65.44 & 74.26 & - \\ & ViTAA [20] & 50.98 & 68.79 & 75.78 & - \\ & SSAN [20] & 54.23 & 72.63 & 79.53 & - \\ & SAF [19] & 54.86 & 72.13 & 79.13 & 32.76 \\ & TIPCB [21] & 54.96 & 74.72 & 81.89 & - \\ & SRCF [20] & 57.18 & 75.01 & 81.49 & - \\ & LGUR [21] & 59.02 & 75.32 & 81.56 & - \\ \hline \multirow{4}{*}{+RA} & IVT [22] & 56.04 & 73.60 & 80.22 & - \\ & CFine [22] & 60.83 & 76.55 & 82.42 & - \\ \cline{1-1} & ALBEF(backbone) [1] & 34.46 & 52.32 & 60.40 & 19.62 \\ \cline{1-1} & **RaSa (Ours)** & **65.28** & **80.40** & **85.12** & **41.29** \\ \hline \end{tabular} \end{table} Table 2: Comparison with other methods on ICFG-PEDES. \begin{table} \begin{tabular}{l|l|l l l l} \hline \hline & Method & R@1 & R@5 & R@10 & mAP \\ \hline \multirow{4}{*}{+RA} & DSSL [23] & 32.43 & 55.08 & 63.19 & - \\ & SSAN [20] & 43.50 & 67.80 & 77.15 & - \\ & SAF [1] & 44.05 & 67.30 & 76.25 & 36.81 \\ & CAIBC [20] & 47.35 & 69.55 & 79.00 & - \\ & ACSA [19] & 48.40 & 71.85 & 81.45 & - \\ & C\({}_{2}\)A\({}_{2}\)[22] & 51.55 & 76.75 & 85.15 & - \\ \hline \multirow{4}{*}{+SA} & IVT [22] & 46.70 & 70.00 & 78.80 & - \\ & CFine [22] & 50.55 & 72.50 & 81.60 & - \\ \cline{1-1} & ALBEF(backbone) [1] & 50.10 & 73.70 & 82.10 & 41.73 \\ \cline{1-1} & **RaSa (Ours)** & **66.90** & **86.50** & **91.35** & **52.31** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison with other methods on RSTPReid. performance. In order to alleviate the impact of the noise, we further propose PRD to perform an explicit distinction between the strong and weak positives, which serve as a regularization for \(p\)-ITM. Significantly, no matter whether adding PRD to the vanilla ITM or \(p\)-ITM, PRD can obtain consistent performance improvement, which powerfully demonstrates its effectiveness. **Analysis of SA** SA includes MLM and \(m\)-RTD. MLM not only plays the role of generating the text with word replacement but also performs a token-level optimization. \(m\)-RTD detects the replaced tokens by virtue of the visual information and the contextual token information. Based on CL and RA, adding a single MLM without the replacement detection task brings a slight boost of \(0.61\)% at R@1. Furthermore, we introduce the detection task and use the momentum model as the generator to produce the replaced tokens. In order to adequately investigate the effectiveness of the generator, we compare three different variants. (1) Following DiffCSE [20], we use DistilBERT [1] as a fixed generator for the word replacement, which is denoted as \(f\)-RTD. From Table 4, RaSa with \(f\)-RTD gains a modest performance of \(75.13\)% at R@1. We argue that the generated tokens from a fixed generator can be easily detected as the training advances and thus provides a limited effect on learning representation. (2) \(o\)-RTD adopts the online model as the generator. RaSa with \(o\)-RTD achieves a better performance of \(75.99\%\) at R@1. Compared with \(f\)-RTD, \(o\)-RTD resorts to a dynamic generator which is optimized constantly during the whole training process and can produce more confusing tokens with the proceeding of the model's training, effectively increasing the difficulty of replaced tokens detection and facilitating representation learning. And (3) \(m\)-RTD adopts the momentum model as the generator and reaches the best performance of \(76.51\)% at R@1. 
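To make the comparison of generators concrete, the sketch below outlines how the momentum model's MLM can supply replaced tokens for the detection task. It is illustrative only: the interfaces (`momentum_model.mlm`, `rtd_head`, the EMA helper) are assumed names rather than the actual implementation.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def ema_update(momentum_model, online_model, m=0.995):
    # EMA update of the momentum model: theta_hat = m * theta_hat + (1 - m) * theta.
    for p_hat, p in zip(momentum_model.parameters(), online_model.parameters()):
        p_hat.mul_(m).add_(p, alpha=1.0 - m)


def m_rtd_step(online_model, momentum_model, rtd_head, image, token_ids,
               mask_token_id, p_m=0.3):
    """Sketch of m-RTD: the momentum model's MLM proposes replacements,
    and the online model detects which tokens were replaced."""
    # 1) Randomly mask tokens with probability p_m (special tokens and padding
    #    would be excluded in practice).
    mask = torch.rand(token_ids.shape, device=token_ids.device) < p_m
    masked_ids = token_ids.clone()
    masked_ids[mask] = mask_token_id

    # 2) The momentum model fills in the masks and acts as the generator.
    with torch.no_grad():
        mlm_logits = momentum_model.mlm(image, masked_ids)   # (B, N, vocab_size)
        proposals = mlm_logits.argmax(dim=-1)                # confusing replacements
    replaced_ids = torch.where(mask, proposals, token_ids)

    # 3) The online model classifies every token as original vs. replaced.
    token_feats = online_model(image, replaced_ids)          # (B, N, d)
    logits = rtd_head(token_feats)                           # (B, N, 2)
    labels = (replaced_ids != token_ids).long()              # 1 = replaced token
    return F.cross_entropy(logits.flatten(0, 1), labels.flatten())
```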
The momentum model is a slow-moving of the online model and can obtain more stable representations. As the training goes ahead, the momentum model iteratively bootstraps MLM to generate more challenging tokens for detection, which encourages the learned representations to be powerful enough to distinguish the tiny difference and substantially improve results. **Hyper-parameters** In Section 3.2, we use the inputting probability \(p^{w}\) to retain a small proportion of weak positive pairs to alleviate the noise, the masking ratio \(p^{m}\) to randomly mask tokens to perform the replaced token detection, and the loss weights \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) to make a trade-off. We show how these hyper-parameters impact the performance of RaSa in Figure 4. (1) The best result is achieved at \(p^{w}=0.1\). The inputting probability \(p^{w}\) in RA is introduced to seek a balance between the useful information and the noise from weak positives. A larger \(p^{w}\) may introduce too much noise, while a smaller \(p^{w}\) hinders the model from making full use of the useful information. (2) RaSa performs best at \(p^{m}=0.3\). A larger \(p^{m}\) brings more perturbations to the text, making the detection task too difficult to be carried out. In contrast, when \(p^{m}\) goes smaller, SA will contribute less to representation learning. And (3) for the loss weights \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\), they present an overall trend of first increasing and then decreasing. Empirical results show that RaSa performs best when they are set as \(0.5\). ### Extended Experiments and Visualization To go a step further and validate the effectiveness of RaSa, we perform extended experiments on two coarse-grained image-text retrieval datasets (Flickr30K [23] and COCO [14]), as well as two fine-grained datasets (CUB [17] and Flowers [17]). _The experimental results are shown in Appendix A.3_. Besides, we conduct a series of domain generalization experiments following LGUR [2] in Appendix A.3 to verify the generalization ability of RaSa. These results clearly demonstrate the effectiveness and the generalization ability of RaSa. For a qualitative analysis, we also present the retrieval visualization in Appendix A.5, vividly showing the excellent retrieval ability of RaSa. ## 5 Conclusion In this paper, we propose a Relation and Sensitivity aware representation learning method (RaSa) for text-based person search, which contains two novel tasks, RA and SA, to learn powerful multi-modal representations. Given that the noise from the weak positive pairs tends to result in overfitting learning, the proposed RA utilizes an explicit detection between strong and weak positive pairs to highlight the difference, serving as a regularization of \(p\)-ITM that focuses on their consistency. Beyond learning transformation-insensitive representations, SA encourages the sensitivity to MLM-based token replacement. Extensive experiments on multiple benchmarks demonstrate the effectiveness of RaSa. Figure 4: The impact of the hyper-parameters at R@1 on CUHK-PEDES. \(p^{w}\) denotes the probability of inputting weak positive pairs in RA. \(p^{m}\) means the masking ratio of the tokens in a text in SA. \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) are the loss weights. ## Acknowledgments This work is supported by the National Science Foundation of China under Grant NSFC 62002252, and is also partially supported by the National Science Foundation of China under Grant NSFC 62106165.
2310.17683
Sliceformer: Make Multi-head Attention as Simple as Sorting in Discriminative Tasks
As one of the most popular neural network modules, Transformer plays a central role in many fundamental deep learning models, e.g., the ViT in computer vision and the BERT and GPT in natural language processing. The effectiveness of the Transformer is often attributed to its multi-head attention (MHA) mechanism. In this study, we discuss the limitations of MHA, including the high computational complexity due to its ``query-key-value'' architecture and the numerical issue caused by its softmax operation. Considering the above problems and the recent development tendency of the attention layer, we propose an effective and efficient surrogate of the Transformer, called Sliceformer. Our Sliceformer replaces the classic MHA mechanism with an extremely simple ``slicing-sorting'' operation, i.e., projecting inputs linearly to a latent space and sorting them along different feature dimensions (or equivalently, called channels). For each feature dimension, the sorting operation implicitly generates an implicit attention map with sparse, full-rank, and doubly-stochastic structures. We consider different implementations of the slicing-sorting operation and analyze their impacts on the Sliceformer. We test the Sliceformer in the Long-Range Arena benchmark, image classification, text classification, and molecular property prediction, demonstrating its advantage in computational complexity and universal effectiveness in discriminative tasks. Our Sliceformer achieves comparable or better performance with lower memory cost and faster speed than the Transformer and its variants. Moreover, the experimental results reveal that applying our Sliceformer can empirically suppress the risk of mode collapse when representing data. The code is available at \url{https://github.com/SDS-Lab/sliceformer}.
Shen Yuan, Hongteng Xu
2023-10-26T14:43:07Z
http://arxiv.org/abs/2310.17683v1
# Sliceformer: Make Multi-head Attention as Simple as Sorting in Discriminative Tasks ###### Abstract As one of the most popular neural network modules, Transformer plays a central role in many fundamental deep learning models, e.g., the VIT in computer vision and the BERT and GPT in natural language processing. The effectiveness of the Transformer is often attributed to its multi-head attention (MHA) mechanism. In this study, we discuss the limitations of MHA, including the high computational complexity due to its "query-key-value" architecture and the numerical issue caused by its softmax operation. Considering the above problems and the recent development tendency of the attention layer, we propose an effective and efficient surrogate of the Transformer, called Sliceformer. Our Sliceformer replaces the classic MHA mechanism with an extremely simple "slicing-sorting" operation, i.e., projecting inputs linearly to a latent space and sorting them along different feature dimensions (or equivalently, called channels). For each feature dimension, the sorting operation implicitly generates an implicit attention map with sparse, full-rank, and doubly-stochastic structures. We consider different implementations of the slicing-sorting operation and analyze their impacts on the Sliceformer. We test the Sliceformer in the Long-Range Arena benchmark, image classification, text classification, and molecular property prediction, demonstrating its advantage in computational complexity and universal effectiveness in discriminative tasks. Our Sliceformer achieves comparable or better performance with lower memory cost and faster speed than the Transformer and its variants. Moreover, the experimental results reveal that applying our Sliceformer can empirically suppress the risk of mode collapse when representing data. The code is available at [https://github.com/SDS-Lab/sliceformer](https://github.com/SDS-Lab/sliceformer). Transformer, multi-head attention, sorting, sequential modeling, discriminative learning. ## 1 Introduction Transformer [1] has been dominant in deep learning research for recent years. It works as a backbone module in many fundamental models, achieving outstanding performance in various application scenarios. Currently, the most successful language models like BERT [2] and GPT [3] are built based on the Transformer or its variants [4, 5], which outperforms classic recurrent neural network (RNN) architectures on both effectiveness and efficiency. In the field of computer vision, the Vision Transformers (ViTs) [6, 7, 8] have achieved better performance in many image recognition and understanding tasks compared to convolutional neural networks (CNNs). Recently, the Transformer-based models have been designed for the structured data in different applications, including the Informer [9] for time series broadcasting, the Graphormer [10] for molecular representation, the Set-Transformer [11] and Point-Transformer [12] for point cloud modeling, and so on. More and more cases show the tendency that the Transformer is becoming an indispensable choice when developing deep learning models. Note that some work makes attempts to replace the Transformer with some other architectures, including the MLP Mixer for vision tasks [13], the RNN-based competitor of the Transformer [14], the Structured State Space Sequential (S4) Model [15] and its simplified variant S5 [16] for modeling extremely-long sequences, and so on. 
Still, they mainly focus on reusing and improving existing models (e.g., MLP, RNN, and State Space model) in specific tasks (e.g., image classification and sequential prediction) rather than designing a new module applicable for general purposes. As a result, although these models can outperform the Transformer in one or two applications, none are as universally useful as the Transformer. Although without strict theoretical support, the effectiveness of the Transformer is often attributed to the multi-head attention (MHA) mechanism [1] behind it. This empirical but dominant opinion impacts the design and modification of the Transformer significantly, which, in our opinion, might have restricted the development of new model architectures to some degree. As shown in Table I, many variants of Transformer have been proposed to \(i)\) improve the efficiency of MHA (e.g., designing sparse or low-rank attention maps [4, 18, 21]), \(ii)\) enhance the interpretability of MHA (e.g., revisiting attention maps through the lens of kernel theory [19, 23] and optimal transport [22, 24]), or \(iii)\) impose more side information on attention maps [25, 26, 10]. However, these Transformer-driven models still rely on the classic "query-key-value" (abbreviately, QKV) architecture of MHA. Little attention is paid to studying the necessity of the architecture or, more ambitiously, replacing it with a new surrogate for general purposes. In this study, we focus on discriminative learning tasks and challenge the architecture of MHA, proposing an extremely simple "slicing-sorting" operation and developing a surrogate of the Transformer called Sliceformer. In particular, our work is motivated by the MHA's drawbacks and the attention map's possibly desired structures. Firstly, we attribute the MHA's numerical issues to the softmax operation and its high complexity to its QKV architecture. Secondly, the development tendency of the MHA and its variants implies that we shall pursue as many sparse, full-rank, and doubly stochastic attention maps as possible for projected samples. Based on the analysis above, we propose the "slicing-sorting" operation, which projects samples linearly to a latent space and sorts them along different feature dimensions (a.k.a. channels). Replacing the MHA mechanism of the Transformer with the slicing-sorting operation leads to the proposed Sliceformer. We analyze the connections and differences between the proposed slicing-sorting operation and the MHA mechanism and discuss its rationality in depth. Essentially, the "slicing-sorting" operation generates channel-wise attention maps implicitly as permutation matrices for the projected samples. Therefore, the attention maps are sparse, full-rank, and doubly stochastic matrices, satisfying the desired structures mentioned above. As shown in Table I, different from the classic QKV architecture, our slicing-sorting operation only preserves the linear map from the input \(\mathbf{X}\) to the value matrix \(\mathbf{V}\). We do not need the "multi-head" structure because concatenating different linear maps is equivalent to directly increasing the columns of \(\mathbf{W}_{V}\). As a result, our Sliceformer has fewer parameters and lower computational complexity than the Transformer and its variants. In addition, the sorting step of the Sliceformer can be implemented in ascending or descending order. 
To further enhance the diversity of the learned implicit attention maps, we can apply the ascending and descending sorting operations in an interleaving manner for each layer of the Sliceformer and change the frequency of the interleaves across different layers. We test our Sliceformer in the well-known Long-Range Arena (LRA) benchmark, demonstrating its advantages in extremely long sequential modeling. In particular, as shown in Fig. 1, our Sliceformer achieves superior performance with less memory cost and runtime than the Transformer and its variants. Ablation studies demonstrate the rationality of our model setting. Furthermore, through other discriminative learning tasks, including image classification, text classification, and molecular property prediction, we further demonstrate the universal applicability of our Sliceformer. ## 2 Related Work ### _Transformer and Its Applications_ Transformer [1] is a powerful sequential model friendly to parallel computing. Since it was proposed, Transformer has become the critical module of many large language models, e.g., BERT [2], Transformer-XL [5], and GPT [3]. Besides texts, Transformer is also applied to other sequences, e.g., the Music Transformer [27] Fig. 1: The comparison for various Transformers and our Sliceformer on the LRA benchmark. The length of the sequence is 3K. The x-axis corresponds to the number of training steps per second. The y-axis corresponds to the average score (%) on the LRA benchmark. The peak memory usage of each model is represented as the area of the corresponding circle. For a better comparison, the values (GB) of the top-2 models are shown. \begin{table} \begin{tabular}{l|c l l} \hline \hline Model & Attention(\(\mathbf{V};\mathbf{Q},\mathbf{K}\)) & Complexity & Attention Structure \\ \hline Transformer [1] & Softmax \(\left(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{D}}\right)\mathbf{V}\) & \(\mathcal{O}(DN^{2})\) & Dense + Row-wisely normalized \\ SparseTrans [4] & Local2D-Softmax \(\left(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{D}}\right)\mathbf{V}\) & \(\mathcal{O}(DN^{1.5})\) & Sparse + Row-wisely normalized \\ Longformer [17] & Local1D-Softmax \(\left(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{D}}\right)\mathbf{V}\) & \(\mathcal{O}(DNL)\) & Sparse + Row-wisely normalized \\ Reformer [18] & LSH-Softmax \(\left(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{D}}\right)\mathbf{V}\) & \(\mathcal{O}(DN\log N)\) & Sparse + Row-wisely normalized \\ CosFormer [19] & \(\left(\mathbf{Q}_{\text{cos}}\mathbf{K}_{\text{cos}}^{\top}+\mathbf{Q}_{\text{sin}}\mathbf{K} _{\text{sin}}^{\top}\mathbf{V}\right)\mathbf{V}\) & \(\mathcal{O}(\min\{DE_{QK},NE_{Q}\})\) & Sparse \\ Performer [20] & \(\phi_{\pi}(\mathbf{Q})\phi_{\pi}(\mathbf{K})^{\top}(\mathbf{K})\) & \(\mathcal{O}(DNr)\) & Low-rank \\ Linformer [21] & Softmax \(\left(\frac{\mathbf{Q}\psi_{\pi}(\mathbf{K})^{\top}}{\sqrt{D}}\right)\psi_{\pi}(\mathbf{V})\) & \(\mathcal{O}(DNr)\) & Low-rank + Row-wisely normalized \\ Sinkformer [22] & Sinkhorn\({}_{K}\left(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{D}}\right)\mathbf{V}\) & \(\mathcal{O}(KDN^{2})\) & Dense + Doubly stochastic \\ \hline **Sliceformer** & Sort\({}_{\text{cal}}(\mathbf{V})\) & \(\mathcal{O}(DN\log N)\) & Full-rank + Sparse + Doubly stochastic \\ \hline \hline \end{tabular} * \({}^{1}\) β€œLocalID” considers \(L\) local data in a sequence. β€œLocal2D” considers the row-wise and column-wise local data for a sequence zigzagging in the 2D space. β€œLSH” denotes locality-sensitive Hashing. 
* \(\phi_{\pi}:\mathbb{R}^{D}\to\mathbb{R}^{r}\), and \(\phi_{\pi}(\mathbf{Q}),\phi_{\pi}(\mathbf{K})\in\mathbb{R}^{N\times r}\); \(\psi_{r}:\mathbb{R}^{N}\mapsto\mathbb{R}^{r}\), and \(\psi_{\pi}(\mathbf{K}),\psi_{\pi}(\mathbf{V})\in\mathbb{R}^{r\times D}\). * \(\mathbf{K}_{\text{cos}}=\text{diag}(\{\cos\frac{\pi\pi}{M}\}_{i=1}^{N})\text{ReLU}( \mathbf{K})\), \(\mathbf{K}_{\text{sin}}=\text{diag}(\{\sin\frac{\pi\pi}{M}\}_{i=1}^{N})\text{ReLU}( \mathbf{K})\). So are \(\mathbf{Q}_{\text{cos}}\) and \(\mathbf{Q}_{\text{sin}}\). \(E_{QK}\) is the number of nonzero elements in \(\mathbf{Q}_{\text{cos}}\mathbf{K}_{\text{cos}}\). \(E_{Q}\) is the number of nonzero elements in \(\mathbf{Q}_{\text{cos}}\). * \({}^{4}\) β€œSinkhorn\({}_{K}\)” means applying \(K\)-step Sinthon iterations. * \({}^{5}\) Note that, our Sliceformer does not need the β€œmulti-head” architecture because of the simplicity of sorting. \end{table} TABLE I: A comparison for representative Transformers and our Sliceformer on their attention mechanisms. We show one attention head for each transformer, in the input \(\mathbf{X}\in\mathbb{R}^{N\times d}\), the value \(\mathbf{V}=\mathbf{X}\mathbf{W}_{V}\in\mathbb{R}^{N\times D}\), the query \(\mathbf{Q}=\mathbf{X}\mathbf{W}_{Q}\in\mathbb{R}^{N\times D}\), and the key \(\mathbf{K}=\mathbf{X}\mathbf{W}_{K}\in\mathbb{R}^{N\times D}\). for music modeling, the Informer [9] for time series broadcasting and the Transformer Hawkes process [28] for event sequence prediction. For non-sequential data like images, the Vision Transformer (ViT) [6] and its variants [7, 29] take the patches of images as a sequence and extract image representations accordingly, which outperform convolutional neural networks (CNNs) in image classification. Nowadays, Transformer is being introduced for structured data modeling, e.g., the Graphormer [10] for molecules, the Set-Transformer [11] for point clouds, and the Mesh Transformer [30] for 3D meshes. The more applications the Transformer has, the more significant it is to study its architecture, especially its MHA mechanism. ### _The Variants of Transformer_ It has been known that the classic MHA mechanism suffers from high computational complexity, poor scalability, and numerical instability for long sequences. As shown in Table I, many efforts have been made to overcome these issues. The SparseTrans in [4] and the Longformer in [17] compute local attention maps based on the sub-sequences extracted by sliding windows, which leads to sparse global attention maps. Some other models sparsify the key and query matrices directly by the locality-sensitive hashing (LSH) [18] or the ReLU operation [19]. Besides pursuing sparse attention maps, another technical route is constructing low-rank attention maps. The Performer in [20] reduces the feature dimension (the column number) of the query and key matrices, while the Linformer in [21] reduces the sample dimension (the row number) of the key and value matrices. In addition to simplifying the computation of the attention maps, some work provides new understandings of the attention mechanism. The work in [23] treats the attention map as a normalized linear kernel and revisits the vanilla Transformer through different kernels. The Performer [20] and the CosFormer [19] introduce additional mappings for the query and key matrices and consider their linear kernels in the latent spaces. Recently, the work [22] reports an interesting phenomenon that the attention map tends to be doubly stochastic during training. 
Accordingly, it implements the attention map as an optimal transport through the Sinkhorn-Knopp algorithm [31]. Note that although providing these new understandings, these Transformer variants fail to design new model architectures that overcome the MHA's issues. In recent years, there have been several attempts to replace this mechanism with alternative architectures, such as the works in [15, 16]. They have endeavored to scale to long sequence inputs from the perspective of State Space Models, resulting in a significant improvement in capturing long-range dependencies. However, these models can only serve as substitutes for Transformer in the sequence modeling tasks and lack the generality of Transformer. ## 3 Proposed Sliceformer ### _Motivation and Design Principle_ Typically, given an input \(\mathbf{X}\in\mathbb{R}^{N\times d}\), where \(N\) indicates the length of a sequence or the size of a sample set and \(d\) is the input feature dimension, an attention head first obtains the value, query, and key matrices by linear maps, i.e., \(\mathbf{V}=\mathbf{X}\mathbf{W}_{V}\in\mathbb{R}^{N\times D}\), \(\mathbf{Q}=\mathbf{X}\mathbf{W}_{Q}\in\mathbb{R}^{N\times D}\), and \(\mathbf{K}=\mathbf{X}\mathbf{W}_{K}\in\mathbb{R}^{N\times D}\), and then projects \(\mathbf{V}\) as follows: \[\text{Att}(\mathbf{V};\mathbf{Q},\mathbf{K}):=\mathbf{P}(\mathbf{Q},\mathbf{K})\mathbf{V}. \tag{1}\] Here, we take \(\mathbf{V}\) as the input of the head, and \(\mathbf{P}(\mathbf{Q},\mathbf{K})\in\mathbb{R}^{N\times N}\) is the attention map parametrized by \(\mathbf{Q}\) and \(\mathbf{K}\). The multi-head attention layer applies a group of linear maps, i.e., \(\theta=\{\mathbf{W}_{V,m},\mathbf{W}_{Q,m},\mathbf{W}_{K,m}\in\mathbb{R}^{d\times D}\}_{ m=1}^{M}\), to construct \(M\) attention heads and concatenates their outputs, i.e., \[\text{MHA}_{\theta}(\mathbf{X}):=\text{Concat}_{\text{col}}(\{\text{Att}(\mathbf{V}_{ m};\mathbf{Q}_{m},\mathbf{K}_{m})\}_{m=1}^{M}), \tag{2}\] where \(\text{MHA}_{\theta}(\mathbf{X})\in\mathbb{R}^{N\times MD}\), \(\text{Concat}_{\text{col}}\) means the column-wise concatenation of the input matrices, and for \(m=1,...,M\), we have \(\mathbf{V}_{m}=\mathbf{X}\mathbf{W}_{V,m}\), \(\mathbf{Q}_{m}=\mathbf{X}\mathbf{W}_{Q,m}\), and \(\mathbf{K}_{m}=\mathbf{X}\mathbf{W}_{K,m}\). The vanilla Transformer implements the attention map based on the softmax operation, i.e., \[\mathbf{P}(\mathbf{Q},\mathbf{K})=\text{Softmax}\Big{(}\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{D} }\Big{)}, \tag{3}\] where \(\text{Softmax}(\cdot)\) is applied to each row of the matrix \(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{D}}\). Following this strategy, most existing variants of the Transformer implement their attention maps based on the softmax operation as well, as shown in Table I. It has been known that the attention map in (3) suffers from the following two drawbacks: * **Numerical Issues of Softmax.** The softmax operation makes the attention maps dense and row-wisely normalized, which suffers from numerical issues when dealing with long sequences. In particular, the output of the softmax operation tends to be over-smoothed with the increase in data size. Given an vector \(\mathbf{x}\in\mathbb{R}^{N}\), we set \(\mathbf{y}=[y_{n}]=\text{Softmax}(\mathbf{x})\), where \(y_{n}\) is the \(n\)-th element of \(\mathbf{y}\). 
The standard deviation of the elements, i.e., \(\frac{1}{N-1}\sum_{n=1}^{N}(y_{n}-\frac{1}{N}\sum_{n^{\prime}=1}^{N}y_{n^{ \prime}})\), reduces rapidly when \(N\) increases, as shown in Fig. 2. In other words, the elements in \(\mathbf{y}\) become indistinguishable. * **High Complexity of QKV Architecture.** As shown in (3), the computation of an attention map involves a matrix multiplication, whose time complexity is \(\mathcal{O}(DN^{2})\). Moreover, because of using the softmax operation, \(\mathbf{P}(\mathbf{Q},\mathbf{K})\) is always a dense matrix no matter whether \(\mathbf{Q}\mathbf{K}^{\top}\) is sparse or not. As a result, the space complexity of the attention map is \(\mathcal{O}(N^{2})\). Both the time and space complexity are quadratic to the length of the sequence. Because of the two drawbacks, without any additional data preprocessing, like the LSH in [18] and the subsequence sampling in [4, 17], the attention maps used in the current MHA mechanism are always over-smoothed and dense when modeling long sequences, which does harm to model performance and computational efficiency severely. Fig. 2: Setting the sequence length \(N\in\{10,...,10^{6}\}\), we run the code \(100\) trials with Pytorch 2.0.0 and Python 3.9. With the increase of \(N\), the softmax operation suffers from the over-smoothness issue. To overcome the drawbacks, some models apply sparse attention maps, e.g., CosFormer [19] and Longformer [17], and achieve competitive performance and higher efficiency than the vanilla Transformer, as shown in Fig. 1 and the results reported in [15, 17, 19, 26]. Besides introducing sparse structures, at the same time, some attempts are made to impose low-rank structures on attention maps, e.g., Local Attention [32] and Linear Transformer [14]. Although these models can also reduce the computational complexity, they seem to harm the performance -- the gaps between the vanilla Transformer and the models using low-rank attention maps are significant on the LRA benchmark, as shown in Fig. 1. In our opinion, a potential reason for this phenomenon is that when \(\text{rank}(\mathbf{P})\ll N\), the rank of the output embeddings \(\text{MHA}_{\theta}(\mathbf{X})\) is likely to be smaller than \(N\). The low-rank output suffers from a high risk of mode collapse, whose representation power is limited. In addition, the recent work in [22] shows that in various discriminative learning tasks, the attention maps tend to be doubly stochastic (i.e., \(\mathbf{P1}_{N}=\mathbf{1}_{N}\) and \(\mathbf{P}^{\top}\mathbf{1}_{N}=\mathbf{1}_{N}\)) during training.1 Accordingly, a new variant of Transformer, called Sinkformer [22], is proposed, which implements the attention maps based on the Sinkhorn-Knopp algorithm [31] and makes them doubly stochastic strictly. The Sinkformer performs better than the vanilla Transformer in many discriminative learning tasks, e.g., image and text classification. Footnote 1: See Fig. 2 in [22] for more details. **In this study, we aim to replace the attention map in (3) with a new operation, and accordingly, propose a surrogate of the MHA in (2) with better (or at least comparable) model performance and higher computational efficiency.** In particular, the drawbacks of the current attention map and its recent advances provide us with valuable insights into the model design. On the one hand, to reduce the computational complexity and overcome the over-smoothness issue, it is necessary for us to apply a sparse attention map. 
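The over-smoothness issue illustrated in Fig. 2 can be reproduced with a short script. The following sketch is our own illustration and assumes standard normal inputs, which may differ from the exact setting used for the figure; it prints how the standard deviation of the softmax output shrinks as \(N\) grows.

```python
import torch

torch.manual_seed(0)
for N in [10, 100, 1_000, 10_000, 100_000, 1_000_000]:
    stds = []
    for _ in range(100):                                  # 100 trials per length
        y = torch.softmax(torch.randn(N), dim=0)
        stds.append(y.std().item())
    print(f"N = {N:>7d}   mean std of softmax output: {sum(stds) / len(stds):.3e}")
```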
On the other hand, applying a full-rank and doubly stochastic attention map helps to preserve or improve model performance. Therefore, we would like to design an attention map satisfying the above structural constraints (i.e., **sparse, full-rank, and doubly stochastic**). In the following subsection, we will show that a simple and novel slicing-sorting operation can implicitly achieve such a structured attention map. ### _Slicing-Sorting for Implicit Structured Attention_ As shown in Table I, the key contribution of our Sliccformer is implementing a new attention layer based on an extremely simple **slicing-sorting** operation, which projects the input linearly to a latent space and sorts each feature, i.e., \[\text{SliceSort}(\mathbf{X}):=\text{Sort}_{\text{col}}(\underbrace{\mathbf{X}\mathbf{W }_{V}}_{\mathbf{V}=[\mathbf{v}_{i}]})=\text{Concat}_{\text{col}}(\{\mathbf{P}_{i}\mathbf{v}_{ i}\}_{i=1}^{MD}), \tag{4}\] where \(\mathbf{W}_{V}\in\mathbb{R}^{d\times MD}\) is the projection matrix,2 and \(\mathbf{V}=\mathbf{X}\mathbf{W}_{V}\). Here, we call each column of \(\mathbf{V}\) a "slice", denoted as \(\mathbf{v}_{i}\) for \(i=1,...,MD\). Each slice corresponds to the projection result of \(N\)\(d\)-dimensional samples in a 1D space. As shown in (4), sorting a slice \(\mathbf{v}_{i}\) corresponds to the multiplication between a permutation matrix \(\mathbf{P}_{i}\) and the slice, and accordingly, our slicing-sorting operation concatenates all sorted slices as the output, i.e., \(\text{SliceSort}(\mathbf{X})\in\mathbb{R}^{N\times MD}\). Footnote 2: Here, we set the number of columns in \(\mathbf{W}_{V}\) to be \(MD\), such that the output has the same shape with the output of MHA. Compared to the current MHA architecture, the slicing-sorting operation has several advantages. * **Implicit Structured Attention Maps.** For each slice \(\mathbf{v}_{i}\), the sorting step implements its attention map implicitly as the permutation matrix \(\mathbf{P}_{i}\). The permutation matrix naturally satisfies the three desired structural constraints -- it is sparse, full-rank, and doubly stochastic, leading to a competitive alternative to the traditional attention map. Instead of applying the same attention map to a group of \(\mathbf{v}_{i}\)'s, we compute a specific permutation matrix for each \(\mathbf{v}_{i}\). In other words, the number of attention heads in our slicing-sorting operation can be much larger than in the current MHA architecture. * **Low Computational Complexity.** Given \(\mathbf{V}\in\mathbb{R}^{N\times MD}\), we sort the \(MD\) columns, whose time complexity is \(\mathcal{O}(MDN\log N)\). On the contrary, the time complexity of the other attention layers (i.e., the complexity per head in Table I times the number of heads \(M\)) is at most comparable to ours. With the increase in input sequence length, the time gap between training and inference for the slicing-sorting and other attention layers will become more significant. Moreover, instead of generating explicit attention maps with size \(\mathcal{O}(N^{2})\), we implement permutation matrices implicitly via sorting, whose space complexity is \(\mathcal{O}(\log N)\) in general and \(\mathcal{O}(N)\) in the worst case. Because of abandoning the QKV architecture, our slicing-sorting has a huge advantage on space complexity compared to the other attention layers. 
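The slicing-sorting operation of Eq. (4) can be summarized in a few lines of PyTorch: a single linear projection followed by an independent sort of every column. The sketch below is schematic (module and argument names are ours rather than those of the released code):

```python
import torch
import torch.nn as nn


class SliceSort(nn.Module):
    """Slicing-sorting surrogate of an attention layer: project, then sort each column."""

    def __init__(self, d_in, d_out, descending=False):
        super().__init__()
        self.proj = nn.Linear(d_in, d_out, bias=False)   # plays the role of W_V
        self.descending = descending

    def forward(self, x):
        # x: (N, d_in), a sequence of N samples (batch dimension omitted for clarity).
        v = self.proj(x)                                 # V = X W_V, shape (N, d_out)
        # Sorting every column independently is equivalent to applying a
        # column-specific permutation matrix P_i, which is sparse, full-rank,
        # and doubly stochastic.
        v_sorted, _ = torch.sort(v, dim=0, descending=self.descending)
        return v_sorted


x = torch.randn(16, 8)                 # N = 16 samples with d = 8 features
layer = SliceSort(d_in=8, d_out=32)    # d_out plays the role of MD
print(layer(x).shape)                  # torch.Size([16, 32])
```

Replacing the MHA layer of a standard Transformer block with such a module yields the Sliceformer discussed next.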
### _Implementations of Sliccformer_ Replacing the MHA layer with our slicing-sorting operation, we obtain the proposed Sliccformer model, which has fewer parameters and higher computational efficiency than the original Transformer. The sorting step of the slicing-sorting operation plays a central role in the Sliccformer, which determines the attention map applied to each channel. Typically, we can implement the sorting step in ascending order (or equivalently in descending order) and apply it to each layer of Sliccformer. To further investigate the rationality of our model design, we can apply the following variants of the slicing-sorting operation, leading to different implementations of the Sliccformer. In particular, given \(\mathbf{V}\in\mathbb{R}^{N\times MD}\), we consider the following two settings for the sorting step. * **Max-Exchange.** For each \(\mathbf{v}_{i}\), we find its maximum element and exchange it with the first element, whose computational complexity is \(\mathcal{O}(N)\) in time and \(\mathcal{O}(1)\) in space. This setting leads to a partial permutation matrix for each \(\mathbf{v}_{i}\), which only has two non-diagonal elements. As a result, this variant further simplifies the slicing-sorting operation, constructing the attention maps satisfying the structural constraints and with lower computational complexity. * **Order-Interleave.** We can sort the columns of \(\mathbf{V}\) in different orders and interleave the orders with different frequencies in different layers. Given a Sliccformer with \(L\) layers, let \(\mathbf{V}\) be the value matrix in the \(n\)-th layer. For its \(i\)-th column, denoted as \(\mathbf{v}_{i}^{(n)}\), we have \[\text{Sort}(\mathbf{v}_{i}^{(n)})=\begin{cases}\text{ Ascending}(\mathbf{v}_{i}^{(n)}),&\psi_{n}(i)\geq 0,\\ \text{Desending}(\mathbf{v}_{i}^{(n)}),&\psi_{n}(i)<0,\end{cases}\] (5) where \(\psi_{n}(i)\) is defined as \[\psi_{n}(i)=\sin\left(2^{L-n}\pi\frac{i}{MD}\right),\ i=1,...,MD.\] (6) By applying different sorting orders, we can increase the diversity of the attention maps across different layers. In the following experiments, we will show that even if applying the max-exchange strategy, our Sliceformer can achieve competitive performance with low complexity. The more complicated slicing-sorting operations, i.e., those sorting \(\mathbf{V}\)'s in consistent ascending order or interleaved ascending and descending orders, lead to the Sliceformer models outperforming state-of-the-art Transformer-based models. ## 4 Experiments We demonstrate the effectiveness and efficiency of our Sliceformer in discriminative tasks through comprehensive comparative and analytic experiments. In particular, we first compare our Sliceformer to Transformer and its representative variants on the well-known Long Range Arena (LRA) benchmark [32] and empirically verify the rationality of our slicing-sorting operation in long sequence classification tasks. Then, we implement ViT [6] by our Sliceformer and test its performance and permutation-invariance in image classification tasks. Finally, we explore the applications of our Sliceformer in other domains, including text classification and graph classification, demonstrating the universal applicability of our slicing-sorting attention layer. In addition, we further analyze the singular spectrum achieved by our slicing-sorting operation and show that our Sliceformer has a lower risk of mode collapse empirically, which provides a potential explanation for its encouraging performance. 
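Returning to the Order-Interleave setting defined in Eqs. (5) and (6), the following is a small illustrative sketch of that sorting schedule; the function and variable names are ours, not those of the released implementation.

```python
import math

import torch


def order_interleave_sort(v, layer_idx, num_layers):
    """Sort each column of v in ascending or descending order according to the
    sign of psi_n(i) = sin(2^(L - n) * pi * i / (MD))."""
    n_rows, n_cols = v.shape                     # n_cols plays the role of MD
    out = torch.empty_like(v)
    for i in range(1, n_cols + 1):               # i = 1, ..., MD
        psi = math.sin(2 ** (num_layers - layer_idx) * math.pi * i / n_cols)
        col_sorted, _ = torch.sort(v[:, i - 1], descending=(psi < 0))
        out[:, i - 1] = col_sorted
    return out


v = torch.randn(32, 64)                          # value matrix of one layer
out = order_interleave_sort(v, layer_idx=1, num_layers=4)
# The interleaving frequency of ascending/descending columns depends on the layer index.
```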
For convenience, we denote the Sliceformers applying the "Ascending Order", "Order-Interleave", and "Max-Exchange" strategies as \(\text{Sliceformer}_{\text{ascond}}\), \(\text{Sliceformer}_{\text{interleave}}\), and \(\text{Sliceformer}_{\text{max}}\), respectively. ### _Long Range Arena Benchmark_ Long Range Arena (LRA) is a benchmark designed to evaluate models for long sequence scenarios [32], which consists of six discriminative learning tasks, including ListOps [33], byte-level text classification [34], byte-level document retrieval [35], and three sequentialized image classification tasks, i.e., CIFAR-10 [36], Pathfinder [37],3 and Pathfinder-X (a longer and more challenging version of Pathfinder). Each image is formulated as a long sequence of pixels in the three image classification tasks. We test our Sliceformer on the LRA benchmark and compare it to state-of-the-art Transformer-based models on both prediction accuracy and computational efficiency. In each task, our Sliceformer is trained to represent each input sequence through the embedding in the "CLS" position. For a fair comparison, we implement all the models based on JAX [38] and strictly follow the benchmark's default data processing and experimental design. In each task, our Sliceformer is the same as the other models on the number of layers and the dimension of hidden variables. Hence, its model parameters are fewer because of abandoning the QKV architecture. All the models are trained on four NVIDIA 3090 GPUs. Footnote 3: Pathfinder is an image classification task: given a set of gray-level images, each of which plots two points and several curves, the model aims to recognize whether there exists a path connecting the points in each image. As shown in Table II, when applying the Ascending Order or the Order-Interleave strategy, our Sliceformer performs the best in three of the six tasks and achieves the highest average accuracy. Especially in the challenging image classification tasks on CIFAR-10 and Pathfinder, our Sliceformer outperforms the other models significantly, improving the classification accuracy by at least four percentage points. Even if applying the Max-Exchange strategy, the overall performance of Sliceformer can be comparable to the state-of-the-art models, e.g., BigBird [40] and Cosformer [19]. Table III further compares the models' training speed, inference speed, and peak memory usage when dealing with sequences ranging from 1K to 4K. Note that, according to Table II, both \(\text{Sliceformer}_{\text{ascond}}\) and \(\text{Sliceformer}_{\text{interleave}}\) outperform other models. They apply the sorting step and thus have the same complexity. Therefore, we mainly focus on the efficiency analysis of \(\text{Sliceformer}_{\text{interleave}}\) in Table III. We can find that the most efficient model among the baselines is Linformer [21], but its average accuracy on LRA is merely 51.36%. Our \(\text{Sliceformer}_{\text{ascond}}\) is more efficient than the other Transformer-based models, executing more training and inference steps per second and occupying less memory. Its advantage in computational efficiency becomes even more significant with the increase of the sequence length. The results in Tables II and III have also been illustrated in Figure 1. In summary, our \(\text{Sliceformer}_{\text{ascond}}\) and \(\text{Sliceformer}_{\text{interleave}}\) achieve a trade-off between model performance and computational efficiency. 
When classifying long sequences, they obtain comparable or superior accuracy with significant improvements in runtime compared to the Transformer and its variants.

\begin{table} \begin{tabular}{l|c c c c c c c} \hline \hline Model & ListOps & Text & Retrieval & Image & Path & Path-X & Avg. \\ \hline Transformer [1] & 36.37 & 64.27 & 57.46 & 42.44 & 71.40 & FAIL & 54.39 \\ \hline Local Att. [32] & 15.82 & 52.98 & 53.39 & 41.46 & 66.63 & FAIL & 46.06 \\ Linear Trans. [14] & 16.13 & **65.90** & 53.09 & 42.34 & 75.30 & FAIL & 50.55 \\ Reformer [18] & 37.27 & 56.10 & 53.40 & 38.07 & 68.50 & FAIL & 50.67 \\ Sinkformer [22] & 30.70 & 64.03 & 55.45 & 41.08 & 64.65 & FAIL & 51.18 \\ SparseTrans [4] & 17.07 & 63.58 & 59.95 & 44.24 & 71.71 & FAIL & 51.24 \\ SinkhornTrans [24] & 33.67 & 61.20 & 53.83 & 41.23 & 67.45 & FAIL & 51.29 \\ Linformer [21] & 35.70 & 53.94 & 52.27 & 38.56 & 76.34 & FAIL & 51.36 \\ Performer [20] & 18.01 & 65.40 & 53.82 & 42.77 & 77.05 & FAIL & 51.41 \\ Synthesizer [39] & 36.99 & 61.68 & 54.67 & 41.61 & 69.45 & FAIL & 52.88 \\ Longformer [17] & 35.63 & 62.85 & 56.89 & 42.22 & 69.71 & FAIL & 53.46 \\ BigBird [40] & 36.05 & 64.02 & 59.29 & 40.83 & 74.87 & FAIL & 55.01 \\ Cosformer [19] & **37.90** & 63.41 & 61.36 & 43.17 & 70.33 & FAIL & 55.23 \\ \hline \(\text{Sliceformer}_{\text{max}}\) & 37.00 & 62.90 & 59.00 & 40.48 & 75.42 & FAIL & 54.96 \\ \(\text{Sliceformer}_{\text{ascend}}\) & 37.30 & 64.25 & 61.97 & 45.88 & 81.98 & FAIL & 58.28 \\ \(\text{Sliceformer}_{\text{interleave}}\) & 37.65 & 64.60 & **62.23** & **48.02** & **82.04** & FAIL & **58.91** \\ \hline \hline \end{tabular} \end{table} TABLE II: The comparison for various models on the LRA benchmark. For each model, we record its classification accuracy (%) in each task and the average performance. "FAIL" means the training process fails to converge. In each column, we bold the best result and underline the second best one.

### _Testing on Other Discriminative Tasks_

Besides modeling long sequences, we test our Sliceformer models in other applications, demonstrating their universality. In particular, focusing on \(\text{Sliceformer}_{\text{ascend}}\) and \(\text{Sliceformer}_{\text{interleave}}\), we apply them to text classification, image classification, and molecular property prediction tasks.

#### 4.2.1 Image Classification

Like ViT [6], we can treat images as patch sequences and apply our Sliceformer to achieve image classification. In particular, by replacing the MHA layers of ViT with our slicing-sorting operation, we obtain the "Vision Sliceformer" accordingly. We test the Sliceformer on five image datasets, including Dogs vs. Cats 4, MNIST, CIFAR-10, CIFAR-100 [36], and Tiny-ImageNet [41]. For each dataset, we compare \(\text{Sliceformer}_{\text{ascend}}\) and \(\text{Sliceformer}_{\text{interleave}}\) to ViT on their model size and classification accuracy. For these three models, we set the number of layers and the dimension of each layer to be the same and train them from scratch. Table IV compares the classification accuracy achieved by the three models, and Fig. 3 further illustrates the convergence of the model performance with the increase of training epochs on CIFAR-10 and CIFAR-100, respectively. The results show that our Sliceformers outperform ViT consistently on both model size and classification accuracy. Significantly, the \(\text{Sliceformer}_{\text{interleave}}\) achieves the best performance on four of the five datasets with fewer parameters.
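To make the drop-in replacement explicit, the sketch below (our own NumPy simplification; it omits the patch embedding, multi-head reshaping, learnable layer-norm parameters, and the class token of the actual ViT and Sliceformer implementations, and the function names are ours) shows one encoder block in which the MHA sub-layer is replaced by the ascending slicing-sorting operation.

```python
import numpy as np

def layer_norm(X, eps=1e-5):
    mu = X.mean(axis=-1, keepdims=True)
    sd = X.std(axis=-1, keepdims=True)
    return (X - mu) / (sd + eps)

def sliceformer_block(X, W_v, W_1, W_2):
    """One encoder block with the MHA sub-layer replaced by slicing-sorting.

    X        : patch embeddings, shape (N, MD)
    W_v      : value projection, shape (MD, MD) -- no query/key projections
    W_1, W_2 : feed-forward weights, shapes (MD, H) and (H, MD)
    """
    V = X @ W_v                          # value projection only
    A = np.sort(V, axis=0)               # slicing-sorting (ascending variant)
    H = layer_norm(X + A)                # residual connection + normalisation
    F = np.maximum(0.0, H @ W_1) @ W_2   # position-wise feed-forward (ReLU)
    return layer_norm(H + F)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 128))           # 64 patches, 128 channels
out = sliceformer_block(X,
                        rng.normal(size=(128, 128)) * 0.1,
                        rng.normal(size=(128, 256)) * 0.1,
                        rng.normal(size=(256, 128)) * 0.1)
print(out.shape)                         # -> (64, 128)
```

Dropping the query and key projections is exactly why the Sliceformer rows in Table IV report smaller model sizes than ViT at the same depth and width.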
Footnote 4: [https://www.kaggle.com/c/dogs-vs-cats/data](https://www.kaggle.com/c/dogs-vs-cats/data)

#### 4.2.2 Text Classification

To further evaluate the language modeling capability of Sliceformer, we test it on the IMDB dataset [42] and compare it to the Transformer. As shown in Table V, Sliceformer consistently outperforms the Transformer on the IMDB dataset, and at the same time, its model size is smaller than that of the Transformer. This result further demonstrates the superiority of the Sliceformer to the Transformer.

#### 4.2.3 Molecular Property Prediction

Our Sliceformer is also applicable to graph-structured data like molecules. In this study, we introduce the slicing-sorting operation to Graphormer [10] and test its impact on molecular property prediction. In particular, the attention head of Graphormer applies a modified QKV architecture, which is formulated as \(\text{Softmax}(\mathbf{S}+\mathbf{E}+\frac{1}{\sqrt{D}}\mathbf{Q}\mathbf{K}^{\top})\mathbf{V}\). Here, \(\mathbf{S}\) and \(\mathbf{E}\) are learnable embedding matrices encoding spatial positions and edge information, respectively. Applying the slicing-sorting operation, we design a simplified attention layer as \(\text{Sort}_{\text{col}}(\text{Softmax}(\mathbf{S}+\mathbf{E})\mathbf{V})\), where the query and key matrices are ignored. Accordingly, the number of trainable parameters is reduced by around 30%. Replacing the attention layer of Graphormer with this layer leads to a Sliceformer for molecular data. We use the PCQM4M-LSC dataset [43] for training and testing the models. The experimental results in Table VI show that applying the slicing-sorting operation leads to a simplified model with a smaller size, whose performance is comparable to Graphormer.

\begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c} \hline \hline Data & \multicolumn{2}{c|}{Dogs vs. Cats} & \multicolumn{2}{c|}{MNIST} & \multicolumn{2}{c|}{CIFAR-10} & \multicolumn{2}{c|}{CIFAR-100} & \multicolumn{2}{c}{Tiny-ImageNet} \\ Metric & Model Size & Top-1 Acc & Model Size & Top-1 Acc & Model Size & Top-1 Acc & Model Size & Top-1 Acc & Model Size & Top-1 Acc \\ \hline ViT & 1.90 & 79.03 & 9.60 & 98.78 & 9.60 & 80.98 & 9.65 & 53.99 & 22.05 & **52.74** \\ \(\text{Sliceformer}_{\text{ascend}}\) & **1.11** & 79.71 & **6.50** & 98.81 & **6.46** & 82.16 & **6.50** & 54.24 & **18.50** & 51.77 \\ \(\text{Sliceformer}_{\text{interleave}}\) & **1.11** & **79.87** & **6.50** & **99.00** & **6.46** & **83.54** & **6.50** & **54.70** & **18.50** & 52.40 \\ \hline \hline \end{tabular} \end{table} TABLE IV: The comparison for our Sliceformer and ViT on the number of parameters (\(\times 10^{6}\)) and classification accuracy (%). In each task, we bold the best result and underline the second best one.

\begin{table} \begin{tabular}{l|c c c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{Training speed (steps per second)} & \multicolumn{4}{c|}{Inference speed (steps per second)} & \multicolumn{4}{c}{Peak Memory Usage (GB)} \\ & 1K & 2K & 3K & 4K & 1K & 2K & 3K & 4K & 1K & 2K & 3K & 4K \\ \hline Transformer [1] & 27.49 & 9.45 & 4.73 & OOM & 41.54 & 32.90 & 16.09 & OOM & 11.64 & 32.45 & 65.67 & OOM \\ \hline Local Attention [32] & 31.41 & 25.47 & 18.06 & 13.80 & 41.24 & 41.34 & 42.35 & 42.27 & 6.23 & 9.24 & 12.26 & 15.27 \\ Linear Trans.
[14] & 31.35 & 25.79 & 17.07 & 12.32 & 41.98 & 42.51 & 42.39 & 42.12 & 6.50 & 9.84 & 13.18 & 16.52 \\ Reformer [18] & 31.55 & 22.00 & 13.42 & 8.84 & 42.80 & 42.25 & 41.65 & 34.25 & 6.91 & 12.22 & 19.54 & 28.88 \\ \(\text{Sinkformer}\)[22] & 18.72 & 5.82 & 2.86 & OOM & 41.74 & 20.49 & 9.12 & OOM & 14.13 & 41.58 & 82.53 & OOM \\ SparseTrans [4] & 27.39 & 9.47 & 4.72 & OOM & 41.16 & 40.45 & 41.37 & 42.12 & 11.64 & 32.45 & 65.71 & OOM \\ \(\text{SinkhornTrans}\)[24] & 29.88 & 22.00 & 15.66 & 11.70 & 41.16 & 40.45 & 41.37 & 42.12 & 6.70 & 10.23 & 13.78 & 17.32 \\ \(\text{Linformer}\)[21] & 28.47 & 28.08 & 19.08 & 14.55 & 41.72 & 42.42 & 42.23 & 42.12 & 5.95 & 8.82 & 11.65 & 14.47 \\ \(\text{Performer}\)[20] & 28.84 & 26.67 & 18.97 & 14.10 & 41.50 & 42.31 & 41.87 & 41.69 & 6.25 & 9.35 & 12.45 & 15.54 \\ \(\text{Synthesizer}\)[39] & 19.57 & 6.27 & 3.01 & OOM & 41.84 & 27.44 & 13.21 & OOM & 12.77 & 37.53 & 75.23 & OOM \\ \(\text{Longformer}\)[17] & 17.56 & 5.67 & 2.74 & OOM & 40.91 & 32.52 & 15.79 & OOM & 13.00 & 37.07 & 75.46 & OOM \\ BigBird [40] & 27.27 & 14.34 & 9.76 & 7.21 & 40.84 & 41.50 & 34.85 & 26.02 & 9.59 & 16.53 & 23.16 & 30.07 \\ \(\text{Cosformer}\)[19] & 28.42 & 26.13 & 17.56 & 12.58 & 41.62 & 40.78 & 40.56 & 40.50 & 6.36 & 9.68 & 13.36 & 16.95 \\ \hline \hline \(\text{Sliceformer}_{\text{ascend}}\) & **32.79** & **32.26** & **21.79** & **16.25** & **43.73** & **43.72** & **43.29** & **42.89** & **5.61** & **8.12** & **10.64** & **13.14** \\ \hline \hline \end{tabular} \end{table} TABLE III: The comparison for various models on their computational efficiency. "OOM" means the training process suffers from the out-of-memory issue. In each column, we bold the best result and underline the second best one.

\begin{table} \begin{tabular}{l|c c} \hline \hline Data & \multicolumn{2}{c}{IMDB} \\ Metric & Model Size & Top-1 Acc. \\ \hline Transformer & 8.84 & 83.05 \\ \(\text{Sliceformer}_{\text{ascend}}\) & **8.05** & **84.91** \\ \(\text{Sliceformer}_{\text{interleave}}\) & **8.05** & 84.55 \\ \hline \hline \end{tabular} \end{table} TABLE V: The comparison of the number of parameters (\(\times 10^{6}\)) and text classification accuracy (%).

Fig. 3: The comparison for our Sliceformers and ViT on their training convergence.

### _Empirical Evidence of Model Rationality_

After training a ViT and our Sliceformer on CIFAR-10, we visualize the output matrices of their attention layers (i.e., \(\text{MHA}_{\theta}(\mathbf{X})\) and \(\text{SliceSort}(\mathbf{X})\)) and analyze their singular spectrum in Fig. 4. For ViT, the output matrices of its attention layers seem to have low-rank structures, verified by the singular spectrum shown in Fig. 4(d) -- the singular values of the output matrices decay rapidly. On the contrary, the output matrices obtained by our Sliceformers have sorted columns, as shown in Figs. 4(b) and 4(c). Moreover, according to the singular spectrum shown in Fig. 4(d), the singular values of the Sliceformers' output matrices decay much more slowly than those of ViT. This phenomenon provides an empirical explanation for the rationality of Sliceformer. In particular, the slow-decay spectrum indicates that our Sliceformer suppresses the risk of mode collapse when representing data.

## 5 Discussion and Conclusion

We have proposed a new data representation model called Sliceformer.
By replacing the traditional MHA mechanism with a simple slicing-sorting operation, our Sliceformer overcomes the numerical drawbacks of current Transformers and achieves encouraging performance in various discriminative tasks. Our work provides a new perspective on the design of the attention layer, i.e., implementing attention maps through simple algorithmic operations has the potential to achieve low computational complexity and good numerical performance. **Current Limitations and Future Work.** The main drawback of Sliceformer is its limited model capacity. In particular, although satisfying the structural constraints, the implicit attention map in our model is merely a permutation matrix. Thus, its representation power is not as good as the softmax-based attention in (3). As a result, when testing on the full-sized ImageNet classification task, the top-1 accuracy of Sliceformer is merely 64.77%. To solve this problem, in the future, we would like to develop a differentiable and learnable slicing-sorting operation, enhancing its model capacity by introducing more parameters. Additionally, the performance of Sliceformer in generative learning tasks has not been investigated yet. We plan to further improve the model architecture so that it becomes applicable to generative learning tasks. On the application side, we plan to apply Sliceformer to represent structured data like point clouds and meshes.
2301.10875
Tutorial on the Executable ASM Specification of the AB Protocol and Comparison with TLA$^+$
The main aim of this report is to provide an introductory tutorial on the Abstract State Machines (ASM) specification method for software engineering to an audience already familiar with the Temporal Logic of Actions (TLA$^+$) method. The report asks to what extent the ASM and TLA$^+$ methods are complementary in checking specifications against stated requirements and proposes some answers. A second aim is to provide a comparison between different executable frameworks that have been developed for the same specification languages. Thus, the ASM discussion is complemented by executable Corinthian ASM (CASM) and CoreASM models. Similarly, the two TLA$^+$ specifications presented, which rely on the TLC and Apalache model checkers, respectively, are complemented by a Quint specification, a new language developed by Informal Systems to serve as a user-friendly syntax layer for TLA$^+$. For the basis of comparison we use the specification of the Alternating Bit (AB) protocol because it is a simple and well-understood protocol already extensively analysed in the literature. While the models reported here and developed with the two methods are semantically equivalent, ASMs and Quint are better suited for top-down specification from abstract requirements by iterative refinement. TLA$^+$ seems to be more easily used bottom-up, to build abstractions on top of verified components in spite of the fact that it, too, emphasizes iterative refinement. In the final section, the report begins to scope out the possibility of a homomorphism between the specification of the AB protocol and its finite-state machine (FSM) through state space visualizations, motivated by a search for a formal decomposition method.
Paolo Dini, Manuel Bravo, Philipp Paulweber, Alexander Raschke, Gabriela Moreira
2023-01-25T23:58:38Z
http://arxiv.org/abs/2301.10875v2
# Tutorial on the Executable ASM Specification of the AB Protocol and Comparison with TLA\({}^{+}\) ###### Abstract The main aim of this report is to provide an introductory tutorial on the Abstract State Machines (ASM) specification method for software engineering to an audience already familiar with the Temporal Logic of Actions (TLA\({}^{+}\)) method. The report asks to what extent the ASM and TLA\({}^{+}\) methods are complementary in checking specifications against stated requirements and proposes some answers. A second aim is to provide a comparison between different executable frameworks that have been developed for the same specification languages. Thus, the ASM discussion is complemented by executable Corinthian ASM (CASM) and CoreASM models. Similarly, the two TLA\({}^{+}\) specifications presented, which rely on the TLC and Apalache model checkers, respectively, are complemented by a Quint specification, a new language developed by Informal Systems to serve as a user-friendly syntax layer for TLA\({}^{+}\). For the basis of comparison we use the specification of the Alternating Bit (AB) protocol because it is a simple and well-understood protocol already extensively analysed in the literature. While the models reported here and developed with the two methods are semantically equivalent, ASMs and Quint are better suited for top-down specification from abstract requirements by iterative refinement. TLA\({}^{+}\) seems to be more easily used bottom-up, to build abstractions on top of verified components in spite of the fact that it, too, emphasizes iterative refinement. In the final section, the report begins to scope out the possibility of a homomorphism between the specification of the AB protocol and its finite-state machine (FSM) through state space visualizations, motivated by a search for a formal decomposition method. ###### Contents * 1 Introduction * 2 ASMs * 2.1 Conceptual Overview * 2.2 Definitions and Basic Concepts * 2.3 Classification of Functions and Locations * 3 Protocol Comparison * 3.1 Lynch's 2-Bit Protocol * 3.2 The AB 1-Bit Protocol * 3.3 Initial Conditions * 3.3.1 Lynch Protocol * 3.3.2 AB Protocol * 4 ASM Specification of the AB Protocol * 4.1 High-Level Requirements * 4.2 ASM Ground Model * 4.2.1 Ground Model Mapped from Requirements * 4.2.2 Ground Model in Compact Form * 5 CASM Model of the AB Protocol * 5.1 Introduction to the CASM Language * 5.2 Paolo's Executable CASM Model * 5.3 Philipp's CASM Refinement * 6 CoreASM Model of the AB Protocol * 6.1 Introduction to CoreASM * 6.2 Alexander's CoreASM specification of AB Protocol * 7 TLA\({}^{+}\) Model of the AB Protocol * 7.1 Paolo's Spec for TLC * 7.2 Manuel's Spec for Apalache * 7.2.1 ABP3_typedefs: Type aliases * 7.2.2 ABP3.tla: The model * 7.2.3 MC_ABP3.tla: Instantiating the model * 7.2.4 Output trace * 8 Quint Specification of AB Protocol * 8.1 Introduction to Quint * 8.2 Gabriela's Spec of the AB Protocol * 9 Comparison between the ASM and TLA\({}^{+}\) Methodologies * 9.1 ASMs and TLA\({}^{+}\) * 9.2 State Space Visualization * 10 Conclusions and Future Work Introduction The original purpose of this document was to serve as a tutorial for the Abstract State Machines (ASM) specification and modelling method for software engineering [8, 5], and for how such ASM specifications can be turned into executable models using the Corinthian Abstract State Machine1 (CASM) language and framework [24]. 
The scope then grew into a report that presents a comparison of the ASM and \(\text{TLA}^{+}\) specification perspectives and of their tooling and executable frameworks. The tutorial assumes that the reader is already familiar with \(\text{TLA}^{+}\). Footnote 1: [https://casm-lang.org](https://casm-lang.org) The focus of the report and the basis for the comparisons is the specification of the half-duplex2 Alternating Bit (AB) protocol, first published by Bartlett et al. in 1969 [2] as an improvement on a protocol proposed by Lynch in 1968 [22]. Although the ASM specification of the AB protocol is already available in Section 6.3 of the main reference text on ASMs [8], the tutorial part of the report provides a stand-alone introduction to the basic ASM and CASM concepts and practices that will hopefully make the learning ramp easier for newcomers. A chapter on a different executable specification framework, CoreASM,3 is also included. Footnote 2: The three basic types of communication protocols are: (1) simplex, in which messages are sent in only one direction and an ack or alternation bit is sent in the other; (2) half-duplex, in which the two terminals take turns at sending messages in each direction, with the ack bit for each message travelling in the opposite direction; and (3) full-duplex, in which both terminals send messages in both directions simultaneously and independently. Footnote 3: [https://slideplayer.com/slide/17819082/](https://slideplayer.com/slide/17819082/) After a high-level introduction to ASMs in Chapter 2, Chapter 3 presents and analyses the Lynch and AB protocols in detail. Chapter 4 derives the ASM specification from a list of requirements, Chapter 5 presents a basic and a refined version of a CASM model of the ASM rules, and Chapter 6 presents the CoreASM model. Chapter 7 introduces the \(\text{TLA}^{+}\) specification in two versions, a simpler one that emulates the single-thread execution of the CASM code and a more sophisticated one meant for more general behaviour and for the Apalache model checker.4 The audience is assumed to be already familiar with \(\text{TLA}^{+}\), whose basics can be learned through Leslie Lamport's video course.5 Chapter 8 casts the \(\text{TLA}^{+}\) model in the newly developed Quint language6 for expressing \(\text{TLA}^{+}\) specifications in a more user-friendly way, and explains how it improves the engineer's UX while retaining the full power of \(\text{TLA}^{+}\). Chapter 9 presents a discussion of the similarities and differences between the ASM and \(\text{TLA}^{+}\) perspectives, at both the theoretical level and at the level of the tools, and Chapter 10 offers some conclusions and hints on possible directions for future work. Footnote 4: [https://apalache.informal.systems/](https://apalache.informal.systems/) Footnote 5: [http://lamport.azurewebsites.net/tla/tla.html?from=https://research.microsoft.com/users/lamport/tla/la.html?=psth](http://lamport.azurewebsites.net/tla/tla.html?from=https://research.microsoft.com/users/lamport/tla/la.html?=psth) Footnote 6: [https://github.com/informalsystems/quint](https://github.com/informalsystems/quint) Regarding scope, this report addresses only a very abstract version of the AB protocol.
We originally thought of developing also a first refinement of the specification in order to show how the ASM method handles more concrete implementation details, but lack of time has pushed us instead towards a more extensive "horizontal" comparison between different specification languages, methodologies, and tools. The motivation for this methodological exploration arises from implementation engineers' reluctance to develop specifications for their software applications - especially in \(\text{TLA}^{+}\) - before they start coding, a problem that is universally recognized. The diagnosis is that specification languages and formal methods tend to be too mathematical, requiring of the engineers and developers a very different kind of thinking from what they rely on when coding. The ASM methodology and the Quint language were developed with a full awareness of this challenge and as a way to address it. The report, therefore, aims to compare the different specification perspectives and methodologies in order to understand their complementarities and the opportunities for integration that could offer more user-friendly tooling to developers while retaining the full generality and rigour of ASM and \(\text{TLA}^{+}\) specifications. ## 2 ASMs ### Conceptual Overview The ASM formal specification and modelling method is used for the design of complex, reactive, concurrent, distributed, non-deterministic, multi-agent software systems based on rules for how a given system transitions between different states in response to external stimuli or an internal clock. States are composed of sets of elements together with the (dynamic) functions that operate on them. The elements, the functions, and the rules are all expressed with terminology that reflects the domain in which the application will run. The precise mathematical definition of state transition rules makes ASM models executable, given suitable tooling such as the CASM language. Therefore, ASM models can be thought of as executable pseudo-code that is understandable to the customer or domain expert. The ASM methodology starts with the definition of a ground model based on the high-level requirements and proceeds by iterative refinement for each implementation decision (vertical refinement) or as new requirements are added (horizontal refinement). At each refinement step the corresponding CASM model can be run to check whether it still satisfies the requirements. The iterative refinement process terminates when the specification has reached a level of detail sufficient for the implementation in the desired target language. Conceptually, ASMs can be thought of as generalised finite-state machines. Mathematically, they are composed of sets of states and of (dynamic) functions that operate on those states, i.e. they are algebras. They were in fact first introduced by Yuri Gurevich as _evolving algebras7_[16]. In the most general and abstract terms, ASMs can be thought of as a method to develop a customised programming language for a specific problem. However, since they require the close collaboration of at least four roles/people (software implementer, ASM expert, customer, testing expert), the method also requires the production of a body of detailed documentation that everyone understands and can refer back to in case of problems, change requests, or new version releases. 
Therefore, ASMs are as important a shared documentation method and central repository of application knowledge as they are a rigorous specification, mathematical verification, and validation (through simulation) framework. Footnote 7: [https://www.researchgate.net/profile/Yuri-Gurevich/publication/221329427_Evolving_Algebras_and_Linear_Time_Literarchy/links/0fcfd5100a3f36d80d00000/Evolving-Algebras-and-Linear-Time-Hierarchy.pdf#page=46](https://www.researchgate.net/profile/Yuri-Gurevich/publication/221329427_Evolving_Algebras_and_Linear_Time_Literarchy/links/0fcfd5100a3f36d80d00000/Evolving-Algebras-and-Linear-Time-Hierarchy.pdf#page=46)

### Definitions and Basic Concepts

ASMs were initially developed as single-agent state machines [8]. They were then generalised to multi-agent synchronous or asynchronous ASMs [8]. More recently, the concept of communicating ASMs was introduced to give more flexibility to the specification of complex distributed systems [6, 7]. Here we start by defining and understanding a single-agent state machine, known as a 'Basic ASM'. This section is a very short summary of parts of Chapter 2 in [8], but this brief summary cannot replace the original book, which the reader is strongly encouraged to consult since it provides a comprehensive discussion of all the theoretical and many practical aspects of the ASM and related concepts. Basic ASMs are composed of finite sets of transition rules of the form **if**_Condition_ **then**_Updates_, where the updates transform abstract ASM states. _Abstract ASM states_ are mathematical structures composed of data as elements of sets which are equipped with partial functions and predicates. Predicates are Boolean functions, while constants are treated as 0-ary (static) functions. Partial functions are turned into total functions by adding \(f(x)=\textit{undef}\) for values of the domain where \(f\) is not defined. Following the usual ASM convention where a capitalised variable name indicates a set, _Updates_ is a finite set of assignments of the form \[f(t_{1},\cdots,t_{n}):=t.\] The values of the functions \(f\) in this set change to the values \(t\) when these assignments are executed in parallel at the arguments indicated. More precisely, when entering a new state, first all the parameters \(t_{i}\) and the term \(t\) are evaluated to their values \(v_{i}\) and \(v\), then the value of \(f(v_{1},\cdots,v_{n})\) is changed to (or defined as, if it was _undef_) \(v\), which becomes the value of \(f(v_{1},\cdots,v_{n})\) in the new state. A function name \(f\) and the ordered sequence of its arguments \((v_{1},\cdots,v_{n})\) formed by a list of parameters is called a _location_. 'Location-value pairs \((\mathit{loc},v)\) are called _updates_ and represent the basic units of state change' ([8]: 29). If functions are interpreted as 'function tables', a location-value pair is a row of the table with the left column holding, for each row, the value of the function and the remaining columns holding the values of the arguments upon which the function depends. A _static_ function corresponds to a table that is never changed, whereas _dynamic_ functions correspond to tables whose left or value columns are updated as described above. An ASM computation step in a given state consists in executing simultaneously all updates of all transition rules whose guards are true in that state. A _condition_ or _guard_ is an arbitrary predicate logic formula without _free variables8_ that evaluates to _true_ or _false_.
The result of their execution, if it is consistent, yields the next state. A set of updates is _consistent_ if it contains no pair of updates with the same location, i.e. no two location-value pairs \((\mathit{loc},v)\), \((\mathit{loc},v^{\prime})\) with \(v\neq v^{\prime}\). Footnote 8: [https://en.wikipedia.org/wiki/Free_variables_and_bound_variables](https://en.wikipedia.org/wiki/Free_variables_and_bound_variables) When analysing runs \(\mathit{S_{0}},\mathit{S_{1}},\mathit{S_{2}},\cdots\) of an ASM, \(\mathit{S_{n}}\) is the \(n^{\text{th}}\) state. If \(n<\mathit{m}\), we say that \(\mathit{S_{n}}\) is before \(\mathit{S_{m}}\), written \(\mathit{S_{n}}<\mathit{S_{m}}\). Simultaneous execution of updates enables the local description of a global state change which, in turn, implies that the next state differs from the previous state only at locations appearing in the update set. The advantage is that, unlike the case of TLA+, the frame problem [4] is avoided, i.e. only what changes needs to be specified; what is not mentioned does not change by definition. The simultaneous execution of a rule \(R\) for all the values of a free variable \(x\) satisfying a given condition \(\phi\) is expressed as follows:

**forall \(x\) with \(\phi\) do** \(R\)

A choice operation to describe non-deterministic behaviour [4] is expressed as

**choose \(x\) with \(\phi\) do** \(R\)

Other common constructs such as \(\in\) to indicate belonging to a set or **if**...**then** are used freely as needed to express various kinds of conditions. Constraints on an ASM's runs can be imposed to restrict the class of models satisfying a given specification. The constraint mechanism allows the designer to combine in the specification declarative and axiomatic features with operational ones without incurring the cost of the frame problem mentioned above. The abstract nature of ASMs makes it possible to relate the state evolution of a given 'abstract' machine to the state evolution of a more 'refined' machine with a more detailed state set in terms of a notion of equivalence of corresponding run segments of the two ASMs under precisely stated boundary conditions:

> The focus is not on generic notions of refinements which can be proved to work in every context and to provide only effects which can never be detected by any user of the new program. Instead the concern is to support a disciplined use of refinements which correctly reflect and explicitly document an intended design decision, adding more details to a more abstract design description, e.g. for making an abstract program executable, for improving a program by additional features or by restricting it through precise boundary conditions which exclude certain undesired behaviors. ([8]: 22)

In summary, an ASM \(M\) is defined by its _signature_, i.e. the set of declarations of functions and rules, the set of its initial states, and the unique variable-free _main rule_ which is often identified with the machine \(M\). However, in more recent languages like CASM, the main rule is invisible and has been reduced to the entry point that launches the machine, thereby leaving the programmer more freedom to call the control centre of the ASM rule execution something other than 'main'. Such an execution control ASM is declared with the init command. The function and rule declarations include the constraints on signature and runs in order to determine the set of possible states of the machine.
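To make the update-set semantics just described concrete, the following small Python sketch (our own illustration, not part of the ASM, CASM, or CoreASM tooling; the dictionary-based encoding of locations and the function names are assumptions made for the example) shows how a step collects the updates of all rules whose guards hold, rejects inconsistent update sets, and fires all remaining updates simultaneously.

```python
def asm_step(state, rules):
    """One ASM computation step over a state given as a dict that maps
    locations (function name, argument tuple) to values."""
    # 1. Collect the updates of every rule whose guard holds in the current state.
    updates = set()
    for rule in rules:
        updates |= rule(state)

    # 2. Consistency: no two updates may assign different values to the same location.
    fired = {}
    for loc, val in updates:
        if loc in fired and fired[loc] != val:
            raise ValueError(f"inconsistent update set at location {loc}")
        fired[loc] = val

    # 3. Fire all updates simultaneously; unmentioned locations keep their values,
    #    so nothing else has to be specified (no frame problem).
    next_state = dict(state)
    next_state.update(fired)
    return next_state

# A rule corresponding to:  if mode = running then counter := counter + 1
def tick(state):
    if state[("mode", ())] == "running":
        return {(("counter", ()), state[("counter", ())] + 1)}
    return set()

s0 = {("mode", ()): "running", ("counter", ()): 0}
s1 = asm_step(s0, [tick])   # counter becomes 1, mode stays "running"
print(s1)
```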
We now explain how functions are classified.

### Classification of Functions and Locations

The main distinction for a given ASM \(M\) is between its static functions, whose values never change, i.e. do not depend on the states of \(M\), and its dynamic functions, whose values may change due to updates by \(M\) or by the environment, i.e. may depend on the states of \(M\). As shown in Figure 1, ASM functions can alternatively be classified as 'basic' or 'derived'. Derived functions are not directly updatable by the ASM or the environment. However, they are often expressed in terms of other functions that belong to the ASM signature and may be dynamic. The role of a derived function \(f\) is that in different states it can produce a different value \(f(x)\) for the same argument \(x\). For this reason, in the ASM literature derived functions are usually regarded as dynamic. The same classification applies to locations or updates. With this minimalist set of concepts and definitions we will develop an ASM model of the AB protocol. Chapter 3 presents the protocol, while Chapter 4 presents the ASM model. Chapter 5 presents the corresponding CASM model, Chapter 7 the TLA\({}^{+}\) model of AB, and the final chapter a comparison between the ASM and TLA\({}^{+}\) models.

Figure 1: **Classification of ASM functions and locations**

## 3 Protocol Comparison

The half-duplex Alternating Bit (AB) protocol forms the core of the Kermit9 file transfer protocol, and was itself an improvement by Bartlett et al. [2] on a half-duplex protocol developed by Lynch [22]. In both cases, the protocol assumes that transmission errors can always be detected. Footnote 9: [http://www.columbia.edu/kermit/kermit.html](http://www.columbia.edu/kermit/kermit.html)

### Lynch's 2-Bit Protocol

Lynch uses an alternation bit to indicate when the next file should be accepted by the receiver. Each terminal stores a local version of its alternation bit, which is compared to the alternation bit sent by the other terminal as an attachment to the file: if they are equal, the incoming file is rejected even if there are no errors; if they are different, it is accepted. The verify bit is a second attachment and indicates whether the previous file transfer (in the _same_ direction) was successful or not. If \(\mathit{VFY}=1\), the next message is loaded and sent; if \(\mathit{VFY}=0\), the previous message is re-sent. Fig. 2 shows identical flowcharts for each terminal, indicating that the protocol is symmetrical. Red and blue colours are used to help distinguish between the two separate data flows in the two opposite directions. Fig. 3 shows the automata for the two terminals, 'A' and 'B', corresponding to the Lynch protocol of Fig. 2. The automata's starting states are different because we assume that the first message will be sent from B and will be received by A. The automata are otherwise identical since the protocol is symmetrical. The same red and blue colours are used to highlight the two directions of data flow. Transitions between states are labelled by bubbles that contain a guard in square brackets. If the guard evaluates to True, the rest of the text in the bubble is executed. Where there is no guard, it is assumed that some other independent trigger causes the transition, such as a timeout or user input. Fig. 4 shows a breakdown of the key variables and actions for each terminal, driven by a sequence of message transmission attempts and constrained by a sequence of errors in each direction.
These sequences of messages and errors are identical to those used by Lynch in his presentation of the protocol, in Fig. 2 of [22], but we provide more details for each step in the transmission in order to make it easier to follow the state changes in the automata and to relate the automata to the flowchart.

Figure 2: **Flowchart of Lynch's reliable 2-bit half-duplex protocol**

Entries involving a change indicate the current value on the left and the next-state value on the right. There are in all 22 attempts at message transmission in both directions. The left side of the table shows what happens at Terminal A when it receives a message, which may have picked up errors in transmission. The right side of the table does the same for Terminal B. The circuit diagram-like arrows above the table show the dependency of some of the variables at one terminal on the variables at the other. Parentheses indicate the values that were sent but that may have arrived corrupted at the receiver. Messages in each direction are numbered, such that, given the pattern of errors shown, we can see that B manages to send six files to A successfully, whereas A can only manage four. Each local _ALT_ bit is updated to equal the _ALTR_ bit just received if and only if the message arrived without error AND it had not already been accepted (stored locally). When the previous message (in the _same_ direction) was successful, the _ALTT_ bit is updated to its inverse. When messages in both directions do not incur any errors, the sequence of states in Fig. 3 is 1-3-4-5, meaning that the arriving file is stored and the next file in the opposite direction is prepared for sending. However, since the two data flows are decoupled and independent, it is possible for an A-to-B message to arrive successfully and be stored even if there was an error in the B-to-A direction, such that a new file in the latter direction is not loaded and the previous file is resent (1-3-4 trace). Equally, it is also possible for an A-to-B file that has already been stored to arrive without error, such that in this case only a new file for the B-to-A transmission is readied (1-3-5 state trace).

Figure 3: **Automata of the Lynch protocol**

Figure 4: **Sequence of messages and errors (based on Fig. 2 in [22])**

### The AB 1-Bit Protocol

AB combines both alternation and verification functions in a single bit. The consequence is that whereas in the Lynch case the protocol is symmetrical, AB is not. Figure 7 shows the flowchart for both terminals, where the asymmetry is highlighted by the opposite handling of the branch point where the alternation bit just received (_ALTR_) is compared to the bit to be sent in the other direction (_ALTT_). More precisely, where Lynch uses VFY = 0 or 1 to indicate that the previous message was unsuccessful or successful, respectively, Bartlett et al. use a _change_ in the control bit to indicate success in the previous transfer and no change to indicate failure. However, this rule is reversed for the other terminal. As shown in Fig.
7, Terminal B follows this rule whereas Terminal A follows the opposite. Fig. 8 shows the corresponding automata. These are smaller than the automata devised by Bartlett et al. but behave the same way. Fig. 9 shows the same sequence of message transfer attempts and errors as Fig. 4. While the B-to-A transmission matches the number of files sent with the Lynch protocol, the A-to-B transmission achieves two additional transfers, suggesting that the AB protocol may be more efficient.

Figure 7: **Flowchart of reliable AB 1-bit protocol**

Figure 8: **Automata of AB 1-bit protocol**

Figs. 10 and 11 show the detailed automata diagrams for the same sequence. In this case the _ALTR_ and _ALTT_ bits are drawn within each automaton to make it easier to verify that the correct sequence of files is sent in the presence of the given errors. Following the convention used by Bartlett et al., underscores on transition labels indicate the sending transition and absence of underscoring indicates the receiving transition. As previously, the subscript indicates _ALTT_, creating some redundancy since the same information is also provided by the value of the bit written on the right within each sending automaton.

### Initial Conditions

We can take the first transmission to be B-to-A without loss of generality. In the asymmetric AB case we indicate what needs to change if A were sending the first transmission. We use the notation \(ALTT_{B}(0)\) to indicate the value of \(ALTT\) of the B terminal before the first transmission. We also assume no errors occur in the first few transmissions.

#### 3.3.1 Lynch Protocol

For the Lynch protocol, although we could set \(\mathit{VFYT}(0)=0\) at the beginning for the starting terminal, there is no loss in generality in pretending that the "previous" transmission was successful. So we can set \(\mathit{VFYT}_{B}(0)=1\). This leaves \((ALT,ALTT)\) as the only variables, which can be set independently for each terminal. \(ALTR\) and \(\mathit{VFYR}\) are not relevant since their values are overwritten by whatever the other terminal sends them. As shown in Table 1, therefore, there are 16 possible distinct initial conditions (ICs). Rather than developing a formal proof of which combinations lead to reliable transmission and which do not, here we only lay the groundwork for such a proof and merely suggest the likely trend. The proof can be revisited at a later date if it turns out that it would be helpful to obtain it. Distinguishing between ICs that lead to reliable behaviour and those that do not might be difficult in general, that is, when errors occur in the first transmissions. If instead we focus on the sequence of Fig. 4, it is sufficient to show that the first two files are delivered correctly, since after that the protocol is already known to be reliable and we have already shown all the state transitions for this sequence. The combination shown in red in Table 1 corresponds to Figs. 4 and 5. The other 9 of the first 10 are shown in Figs. 12-14. The large red cross indicates that the wrong file is being sent at that stage. Absence of a red cross in the presence of an indication of which file is dropped means that the wrong file is sent at the _next_ stage (not shown). "OK" means success.

Figure 11: **Sequence of automata diagrams corresponding to Fig. 9, Lines 13-22**

To help with the verification, Table 2 shows the traces of alternation bit values at the two terminals for the first 3 steps of the sequence of Fig. 4 and for the first 10 ICs in Table 1.
We stop at the first 10 cases because they are enough for the trend to be recognised, which will be cast as an ASM rule in the next chapter as the following two conditions which must be satisfied simultaneously: \[ALT_{A}(0)\neq ALTT_{B}(0)\qquad\qquad\mathit{ALTT}_{A}(0)=\mathit{ALT}_{B}(0). \tag{1}\] There are only 4 initial conditions that satisfy these conditions: 2, 8, 9, and 15, where the latter three are shown in blue font in the table. We have not shown the 15\({}^{th}\) IC explicitly and leave it as an exercise for the reader to verify.

\begin{table} \begin{tabular}{c c c c c} IC & \(ALT_{A}(0)\) & \(ALTT_{A}(0)\) & \(ALT_{B}(0)\) & \(ALTT_{B}(0)\) \\ \hline 1 & 0 & 0 & 0 & 0 \\ **2** & **0** & **0** & **0** & **1** \\ 3 & 0 & 0 & 1 & 0 \\ 4 & 0 & 0 & 1 & 1 \\ 5 & 0 & 1 & 0 & 0 \\ 6 & 0 & 1 & 0 & 1 \\ 7 & 0 & 1 & 1 & 0 \\ **8** & **0** & **1** & **1** & **1** \\ **9** & **1** & **0** & **0** & **0** \\ 10 & 1 & 0 & 0 & 1 \\ 11 & 1 & 0 & 1 & 0 \\ 12 & 1 & 0 & 1 & 1 \\ 13 & 1 & 1 & 0 & 0 \\ 14 & 1 & 1 & 0 & 1 \\ **15** & **1** & **1** & **1** & **0** \\ 16 & 1 & 1 & 1 & 1 \\ \end{tabular} \end{table} Table 1: **Possible combinations of initial conditions for the Lynch protocol**

\begin{table} \begin{tabular}{c c c c c c} IC & \(ALT_{A}\) & \(ALT_{A}\) & \(ALT_{A}\) & \(ALT_{B}\) & \(ALT_{B}\) & \(ALT_{B}\) \\ \hline 1 & 0 & 0 & 0\(\rightarrow\)1 & 0 & - & 0 \\ 1 & 0 & 0 & 1 & 0\(\rightarrow\)1 & 0 & 0\(\rightarrow\)1 \\ & 0\(\rightarrow\)1 & 1 & 1\(\rightarrow\)0 & 1 & 1 & 1 \\ & 0\(\rightarrow\)1 & 0 & 0\(\rightarrow\)1 & 1 & 1 & 0 \\ & 0\(\rightarrow\)1 & 1 & 1\(\rightarrow\)0 & 1 & 1 & 1 \\ & 0\(\rightarrow\)1 & 0 & 1\(\rightarrow\)0 & 1 & 1 & 0 \\ & 0\(\rightarrow\)1 & 0 & 1\(\rightarrow\)0 & 0 & - & 0 \\ 5 & 0 & 0 & 0 & 0 & 0 & 0\(\rightarrow\)1 \\ & 0\(\rightarrow\)1 & 1 & 0\(\rightarrow\)1 & 0 & 0 & 1 \\ & 0\(\rightarrow\)1 & 1 & 1\(\rightarrow\)0 & 0 & - & 1 \\ 6 & 1 & 1 & 0 & 0 & 0 & 1\(\rightarrow\)0 \\ & 1\(\rightarrow\)0 & 0 & 0\(\rightarrow\)1 & 0 & 0 & 0 \\ & 0 & 0 & 1\(\rightarrow\)0 & 1 & - & 0 \\ 7 & 0 & 0 & 0 & 1\(\rightarrow\)0 & 0 & 0\(\rightarrow\)1 \\ & 0\(\rightarrow\)1 & 1 & 0\(\rightarrow\)1 & 0 & 0 & 1 \\ & 0\(\rightarrow\)1 & 1 & 1\(\rightarrow\)0 & 1 & - & 1 \\ 8 & 1 & 1 & 0 & 1\(\rightarrow\)0 & 0 & 1\(\rightarrow\)0 \\ & 1\(\rightarrow\)0 & 0 & 0\(\rightarrow\)1 & 0 & 0 & 0 \\ & 1\(\rightarrow\)0 & 0 & 0\(\rightarrow\)1 & 0 & - & 0 \\ 9 & 0 & 0 & 1 & 0\(\rightarrow\)1 & 1 & 0\(\rightarrow\)1 \\ & 0\(\rightarrow\)1 & 1 & 1\(\rightarrow\)0 & 1 & 1 & 1 \\ & 1 & 1 & 0\(\rightarrow\)1 & 0 & - & 1 \\ 10 & 1 & 1 & 1 & 0\(\rightarrow\)1 & 1 & 1\(\rightarrow\)0 \\ & 1\(\rightarrow\)0 & 0 & 1\(\rightarrow\)0 & 1 & 1 & 0 \\ \end{tabular} \end{table} Table 2: **Record of alternation bit traces for the two terminals and the first 10 cases of Table 1**

Figure 12: **Graphical analysis of ICs 1, 3, and 4 from Table 1 for the Lynch protocol**

Figure 13: **Graphical analysis of ICs 5, 6, and 7 from Table 1 for the Lynch protocol**

#### 3.3.2 AB Protocol

For the AB protocol, since _ALTR_ is overwritten by whatever _ALTT_ from the other terminal is sending, we don't need to worry about it. So there are only four cases of interest for the four possible combinations of \((\mathit{ALTT}_{A}(0),\mathit{ALTT}_{B}(0))\). The first one, (1, 1), has already been addressed in the figures above. The remaining three possibilities are shown in Fig. 15, from which we deduce that for this protocol to work reliably the initial conditions when B starts are either (1, 1) or (0, 0).
On the other hand, it can easily be verified by inspection that if A starts the initial conditions should be either (0, 1) or (1, 0). Since it may be difficult to synchronise two remote terminals, an easy fix that allows the use of any combination is to add a dummy file at the beginning of the transmission, so that if it is dropped nothing is lost. The figure assumes that no errors occur in the first few steps. We should also examine the case where one or more errors occur.

Figure 14: **Graphical analysis of ICs 8, 9, and 10 from Table 1 for the Lynch protocol**

Figure 15: **Graphical analysis of remaining initial conditions for the AB protocol**

Let's assume as before that B starts, and that an error occurs. What follows is a series of "steps" to help with the logical flow of events, even if in some of the steps no actual event takes place:

1. B sends msg (BA1, 0) to A, where the 0 is the initial value of \(\mathit{ALTT}_{B}\).
2. BA1 is corrupted en route.
3. A detects the error and goes into its error state.
4. A's initial message has not been initialized, but its \(\mathit{ALTT}\) bit has: \(\mathit{ALTT}_{A}=0\).
5. A sends (_undef_, 0) back to B. _undef_ here can be anything: 000, or random garbage.
6. Let's assume that there are no errors and B receives the message. The assumption is that both terminals can tell if an error occurred, so B knows that an error has not occurred. However, its acceptance condition is that \(\mathit{ALTR}_{B}\neq\mathit{ALTT}_{B}\). In this case they are both 0 so B does not accept the garbage message.
7. B resends (BA1, 0), i.e. without updating the payload or \(\mathit{ALTT}_{B}\).
8. Assuming no error, A receives BA1 and accepts it because its accepting condition is \(\mathit{ALTR}_{A}=\mathit{ALTT}_{A}\) and they are both 0.
9. A fetches its first message AB1 and flips its ALTT bit, so it sends the message (AB1, 1).
10. Assuming no error, B receives AB1 and accepts it since \(1\neq 0\).
11. Etc.

If there is no error at Step 2, then A overwrites the garbage with AB1 at Step 3 and flips \(\mathit{ALTT}_{A}\), so "go to" Step 9. If there is an error at Step 6, the following will happen:

1. B detects a transmission error and goes into its error state.
2. B resends (BA1, 0).
3. (The rest is the same)

## 4 ASM Specification of the AB Protocol

### High-Level Requirements

The Alternating Bit (AB) protocol was formally specified and verified by James Huggins [17] using evolving algebras, i.e. what later became ASMs. Here we follow the methodology guidelines and start from the requirements, from which the functions and rules are built up step-by-step. In a real implementation there is a notion of timeout that is not present in the automata described by Lynch or Bartlett et al. and that can be specified at the next refinement level. We now give textual descriptions of what each terminal must do. These will become ASM rules in the next section.

1. For each terminal, the _ALTT_ bit can be thought of as the number of the file being sent mod 2.
2. Each terminal needs to initialise _ALTT_. As shown in the previous chapter, we can use \(\mathit{ALTT}_{A}(0):=1\) and \(\mathit{ALTT}_{B}(0):=1\).
3. _ALTR_ of the receiving terminal is always overwritten by the _ALTT_ sent by the sending terminal, so its initial value could be _undef_.
4. During normal operation and consistently with ASM practice, when _ALTR_ and _ALTT_ are not overwritten their local values at each terminal remain unchanged.
5. For each terminal, when a message is received without error and accepted, the value of _ALTT_ is inverted: \(\mathit{ALTT}:=\neg\mathit{ALTT}\). Equivalently, the file number is incremented and \(\mathit{ALTT}:=\mathit{N}_{\mathit{file}}\mod 2\). In this case, the next file is readied and sent to the other terminal.
6. For each terminal, when a message is received without error but it is not accepted, the value of _ALTT_ is left unchanged. In this case, the previous file is resent to the other terminal.
7. The conditions for accepting the files are different and depend on the initial conditions. For the initial conditions used here, the condition for acceptance by Terminal B is \(\mathit{ALTR}_{B}\neq\mathit{ALTT}_{B}\), whereas for Terminal A the condition is \(\mathit{ALTR}_{A}=\mathit{ALTT}_{A}\).
8. The protocol assumes that an error detection system is in place, such as a checksum, that allows the receiving terminal to detect reliably the presence of errors generated during transmission.
9. If an error is detected, the receiver resends the current file: it does not store what just arrived and it does not prepare the next file in the other direction. If no error is detected, see Req. 5.
10. The two terminals take turns at sending messages, where each message is composed of one file and the _ALTT_ control bit.

### ASM Ground Model

#### 4.2.1 Ground Model Mapped from Requirements

**Req. 1**. It turns out that, since in the AB protocol the two terminals are not independent, this rule is not easy to implement. It is much easier to update _ALTT_ to its complement every time a file is successfully received and the control bit test has been passed. _Terminal_ is a set, while _fileNumber_ and _ALTT_ are dynamic functions of a single variable: \[\begin{array}{l}\mathit{Terminal}=\{A,B\}\\ \mathit{fileNumber}\colon\mathit{Terminal}\rightarrow\mathbb{N}\\ \mathit{//ALTT}(\mathit{Terminal}\rightarrow\mathbb{B})=\mathit{fileNumber}(\mathit{terminal})\ \mathit{mod}\ 2\\ \mathit{ALTT}\colon\mathit{Terminal}\rightarrow\mathbb{B}\end{array}\]

**Req. 2**. With the preferred way to handle _ALTT_ updates just described, _ALTT_ is no longer a derived function and has to be initialized explicitly. In ASM rule specification, parallel execution is encountered more often than sequential execution. Therefore, the default is parallel execution and, as shown in the following rule, it does not require any special markers. However, where they are deemed necessary for added clarity, single curly brackets are used to indicate a parallel code block. \begin{tabular}{l} Initialize = \\ _fileNumber_(_A_) := 1 \\ _fileNumber_(_B_) := 1 \\ _ALTT_(_A_) := _true_ \\ _ALTT_(_B_) := _true_ \\ _counter_ := 1 \\ _initialized_ := _true_ \\ \end{tabular}

**Req. 3**. Although setting \(\mathit{ALTR}(0)=\mathit{undef}\) for both terminals is in principle correct, in the CASM code to be discussed below we set its initial value explicitly. \(\mathit{ALTR}\) is a dynamic function of one variable, ReceiveBit is an ASM rule, and \(\mathit{otherTerminal}\) is a derived (dynamic) function of one variable: \begin{tabular}{l} \(\mathit{ALTR}\) : \(\mathit{Terminal}\rightarrow\mathbb{B}\) \\ \(\textsc{ReceiveBit}(\mathit{terminal})=\) \\ \(\mathit{ALTR}(\mathit{terminal}):=\mathit{ALTT}(\mathit{otherTerminal}(\mathit{terminal}))\) \\ \(\mathit{otherTerminal}(\mathit{terminal})\rightarrow\mathit{Terminal}=\) \\ \(\mathbf{if}\ \mathit{terminal}=\mathit{A}\ \mathbf{then}\ \mathit{B}\ \mathbf{else}\ \mathit{A}\) \\ \end{tabular}

**Req. 4**
is always satisfied by default by an ASM model.

**Reqs. 5-7**. The next rule requires sequential execution of some of its rules and statements. Following CASM syntax, this is indicated by the notation \(\{|\quad P\ \mathbf{seq}\ Q\ \mathbf{seq}\ R\cdots\quad|\}\). Although in ASM syntax indentation is sufficient to indicate code blocks, the \(\mathbf{if}\) statement below is an example of code that benefits from delimiters for added clarity. This makes it less likely that the CASM code, for which indentation is _not_ sufficient and which requires such delimiters, will be implemented incorrectly. \begin{tabular}{l l} ReceiveSuccess(\(\mathit{terminal}\)) = \(\{|\) \\ ReceiveBit(\(\mathit{terminal}\)) & //Update the \(\mathit{ALTR}\) bit \\ \(\mathbf{let}\ \mathit{condition}=\) \\ \(\mathbf{if}\ \mathit{terminal}=\mathit{A}\ \mathbf{then}\) \\ \(\mathit{ALTR}(\mathit{terminal})=\mathit{ALTT}(\mathit{terminal})\) \\ \(\mathbf{else}\) \\ \(\mathit{ALTR}(\mathit{terminal})=\mathbf{not}\ \mathit{ALTT}(\mathit{terminal})\) \\ \(\mathbf{in}\) \\ \(\mathbf{if}\ \mathit{condition}\ \mathbf{then}\ \{\) \\ //Load next file to be sent: \\ \(\mathit{fileNumber}(\mathit{terminal}):=\mathit{fileNumber}(\mathit{terminal})+1\) \\ //Update the control bit: \\ \(\mathit{ALTT}(\mathit{terminal}):=\mathbf{not}\ \mathit{ALTT}(\mathit{terminal})\) \\ //\(\mathit{fileNumber}(\mathit{otherTerminal}(\mathit{terminal}))\) would be stored here \\ \(\}\) \\ \(|\}\) \\ \end{tabular} Remark: Although it is possible to do without a sequential rule here (e.g., by replacing each occurrence of \(\mathit{ALTR}(\mathit{terminal})\) with its definition \(\mathit{ALTT}(\mathit{otherTerminal}(\mathit{terminal}))\)), we decided to model it like this in order to emphasize that a bit is _received_ before it is processed.

**Req. 8**. In the ground model we are not modelling random error occurrence. Rather, since we wish to validate the ASM model with the CASM executable model we assume the error occurrence shown in Fig. 9. This is specified with the following static functions, where round parentheses denote the usual mathematical meaning of a fixed-order tuple: \(\begin{array}{l}\mathit{errTraceA}:\mathbb{Z}\rightarrow\mathbb{B}=(\mathit{false},\mathit{false},\mathit{true},\mathit{false},\mathit{false},\mathit{false},\mathit{true},\mathit{false},\mathit{true},\mathit{false},\mathit{false})\\ \mathit{errTraceB}:\mathbb{Z}\rightarrow\mathbb{B}=(\mathit{false},\mathit{false},\mathit{true},\mathit{true},\mathit{false},\mathit{true},\mathit{false},\mathit{true},\mathit{false},\mathit{false})\end{array}\)

**Req. 9**.
The SendMsg rule is just a stub since at the current level of abstraction all the work is done by the ReceiveSuccess rule, which is invoked by ReceiveMsg: \(\begin{array}{l}\textsc{ReceiveMsg}(\mathit{terminal},\mathit{error})=\\ \quad\mathbf{if\ not}\ \mathit{error}\ \mathbf{then}\ \textsc{ReceiveSuccess}(\mathit{terminal})\\ \textsc{SendMsg}=\mathit{skip}\end{array}\)

#### 4.2.2 Ground Model in Compact Form

Derived function(s):

\(\mathit{otherTerminal}(\mathit{terminal})\rightarrow\mathit{Terminal}=\) **if** \(\mathit{terminal}=\mathit{A}\) **then** \(\mathit{B}\) **else** \(\mathit{A}\)

Rules:

\(\textsc{Initialize}=\)
\(\mathit{fileNumber}(\mathit{A}):=1\)
\(\mathit{fileNumber}(\mathit{B}):=1\)
\(\mathit{ALTT}(\mathit{A}):=\mathit{true}\)
\(\mathit{ALTT}(\mathit{B}):=\mathit{true}\)
\(\mathit{counter}:=1\)
\(\mathit{initialized}:=\mathit{true}\)

\(\textsc{ReceiveBit}(\mathit{terminal})=\)
\(\mathit{ALTR}(\mathit{terminal}):=\mathit{ALTT}(\mathit{otherTerminal}(\mathit{terminal}))\)

\(\textsc{ReceiveSuccess}(\mathit{terminal})=\{|\)
\(\textsc{ReceiveBit}(\mathit{terminal})\) //Update the \(\mathit{ALTR}\) bit
**let** \(\mathit{condition}=\)
**if** \(\mathit{terminal}=\mathit{A}\) **then**
\(\mathit{ALTR}(\mathit{terminal})=\mathit{ALTT}(\mathit{terminal})\)
**else**
\(\mathit{ALTR}(\mathit{terminal})=\) **not** \(\mathit{ALTT}(\mathit{terminal})\)
**in**
**if** \(\mathit{condition}\) **then** \(\{\)
//Load next file to be sent:
\(\mathit{fileNumber}(\mathit{terminal}):=\mathit{fileNumber}(\mathit{terminal})+1\)
//Update the control bit:
\(\mathit{ALTT}(\mathit{terminal}):=\) **not** \(\mathit{ALTT}(\mathit{terminal})\)
//\(\mathit{fileNumber}(\mathit{otherTerminal}(\mathit{terminal}))\) would be stored here
\(\}\)
\(|\}\) ReceiveMsg\((\mathit{terminal},\mathit{error})=\) **if**\(\mathit{not}\)\(\mathit{error}\)**then**ReceiveSuccess\((\mathit{terminal})\) SendMsg = _skip_ Run = **if**\(\mathit{initialized}\)\(\neq\mathit{true}\)**then** Initialize **else** \(\mathit{counter}:=\mathit{counter}+1\) **if**\(\mathit{counter}<12\)**then**\(\{|\) SendMsg ReceiveMsg\((\mathit{A},\mathit{errTraceA}(\mathit{counter}))\) SendMsg ReceiveMsg\((\mathit{B},\mathit{errTraceB}(\mathit{counter}))\) \(|\}\) **else** _stop_ CASM Model of the AB Protocol ### Introduction to the CASM Language The Corinthian Abstract State Machine (CASM) language, along with its tooling and framework, represents a concrete ASM implementation of the ASM theory defined by Borger and Stark [8] whose purpose is to simulate (execute) ASM specifications. CASM features a statically strong, inferred, and typed language to aid the specifier in defining only the necessary types for definition elements. The intermediate types are completely inferred and statically checked by appropriate compiler techniques [24]. The language implementation10 consists currently of three tools - a numeric and symbolic interpreter casmi, a source code format beautiful casmf, and a Language Server Protocol11 (LSP) daemon casmd for LSP client editor integration. Footnote 10: [https://casm-lang.org/download](https://casm-lang.org/download) Historically speaking, the first version of CASM was created during a research project at the Vienna University of Technology (TU Wien) in order to formally describe and simulate computer architectures using ASMs [20]. The research effort started out using CoreASM [14] but, due to a strong demand on simulation (execution) performance, the Java-based interpreter implementation of CoreASM could not satisfy the desired goals. Therefore, a specific subset of language features of CoreASM was initially used - all Basic ASM rules - with some minor syntax adaptations. At the time, the project featured a C\({}^{++}\)-based parser, static code analysis, interpreter, and compiler prototype implementation [21][23]. Sadly this project and its outcome were covered by an NDA. Therefore, since 2014 a completely new CASM implementation written from scratch was created as an open-source project12 by Paulweber et al. [30][26][25]. Footnote 11: [https://langserver.org](https://langserver.org) Footnote 12: [https://github.com/casm-lang](https://github.com/casm-lang) In addition to researching core aspects for the (improved) execution of ASM models, Paulweber et al. [29][28] started another investigation to find empirical evidence of how the understandability and usability of ASM languages can be improved using object-oriented language abstractions. The result of this research led to the integration of a trait-based syntax extension [27]. The trait-based integration provides the ability to specify even CASM language and run-time features within CASM itself, and makes it possible to move progressively more and ultimately all parts of the language definition and compiler behaviour away from the C\({}^{++}\)-based implementation and to a CASM-based specification [27]. For example, Fig. 16 demonstrates how the default behaviour for the type Color is defined in a way that makes it possible to derive the colour opposite to the current one. Furthermore, we can see that a trait Amount is defined and the implementation of that trait for Color specifies the color-to-amount mapping. ### Paolo's Executable CASM Model Fig. 
17 shows the executable CASM model as a screenshot of the browser-based CASM editor. This model was put together mainly by Paolo but with close guidance from Philipp. The level of abstraction of this model is very high. Thus, rather than sending actual files a counter for file transmission is incremented when there is no error in the transmission. The errors, in turn, follow the same pattern of the Lynch sequence in Figs. 4 and 9 in order to be able to check if the model replicates the same behaviour, which in this case is given by file numbers in both directions. Figure 16: **Trait-Based CASM Specification with Example ASM Run Output** The output is shown in Fig. 18. This figure is not as easy to read as Fig. 9 but it contains the same information, thereby validating the model. Line numbers were introduced to make it easier to compare to the output in Fig. 9. ### Philipp's CASM Refinement Based on the ground model specification shown in Fig. 17, a lot of refinement steps can be performed. The first concern is to remove the hidden computation steps implied by the use of the sequential execution semantics block inside the run rule. The computation is "hidden" to the ASM agent, meaning that the intermediate states are not part of the global state set of the ASM. This has negative verification and computational efficiency implications and, therefore, it is best to avoid it. Thus, Fig. 19 depicts the run rule's refinement that removes the sequential execution semantics block, which executes the receive and send message rules, and uses a Phases abstraction to represent each phase of the protocol computation within a dedicated ASM step. Furthermore, to showcase the trait implementation of CASM, a default behaviour for the enumeration Phases was defined to retrieve the next phase value given a current phase value. This behaviour encapsulation allows us to decouple the specification of the action performed during a phase and the update of the phase to the next phase by using just a single update rule. Figure 19: **Refinement of Fig. 17** Figure 18: **Output of the executable CASM specification of Fig. 17** CoreASM Model of the AB Protocol ### Introduction to CoreASM As discussed in Chapter 2, Abstract State Machines (ASMs)[3] are algebraic structures with rules that manipulate them, without a precise language definition. In place of a language, mathematical notation is used in a flexible way and new abbreviations or constructs are introduced in an ad-hoc manner, with the goal to improve the readability and understandability of formal specifications. The drawback of this flexibility is the difficulty in developing an execution engine that allows simulation of even not-yet-completed abstract specifications. CoreASM has addressed this challenge with the objective to preserve the specification character in the executable language and avoid slipping into a programming language style. To achieve this goal, CoreASM was designed based on a rigorous plugin architecture and a less strict handling of types. It originated around 2003 as a PhD project by Roozbeh Farahbod at Simon Fraser University in Vancouver, Canada [12]. Besides a very small core (hence the name), each language construct is provided by plugins that must be declared at the beginning of a specification. Each plugin consists of a parsing component and an execution component. 
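The division of labour described in the next paragraph can be pictured, very roughly, with the following toy sketch in Python (purely illustrative; CoreASM itself is a Java implementation, and all class and function names here are ours): each plugin owns the keywords it can parse and the execution of the AST nodes it produces, and a tiny bootstrap combines whatever plugins a specification declares.

```python
class SignaturePlugin:
    keywords = {"function"}
    def parse(self, line):                    # parsing component
        _, name, _, value = line.split()      # e.g. "function fileNumber = 1"
        return ("init", name, int(value))
    def execute(self, node, state):           # execution component
        _, name, value = node
        state[name] = value

class UpdatePlugin:
    keywords = {"add"}
    def parse(self, line):                    # e.g. "add fileNumber by 3"
        _, name, _, amount = line.split()
        return ("add", name, int(amount))
    def execute(self, node, state):
        _, name, amount = node
        state[name] += amount

def run(spec_lines, plugins):
    by_keyword = {kw: p for p in plugins for kw in p.keywords}   # "bootstrap" step
    ast = [(by_keyword[line.split()[0]],                         # combined parser
            by_keyword[line.split()[0]].parse(line)) for line in spec_lines]
    state = {}
    for plugin, node in ast:         # interpreter walks the AST and calls the
        plugin.execute(node, state)  # plugin-specific execution function
    return state

spec = ["function fileNumber = 1", "add fileNumber by 3"]
print(run(spec, [SignaturePlugin(), UpdatePlugin()]))   # {'fileNumber': 4}
```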
A primitive bootstrap parser loads all of the named plugins and creates the parser of the currently "plugged-in" language by combining the partial parsers of the loaded plugins. Each partial parser adds subtrees to the general abstract syntax tree (AST) of the specification. An abstract interpreter component in CoreASM then traverses this AST, and plugin-specific execution functions are called for each node. On the one hand, the plugin architecture obviously reduces execution speed, which was one of the main reasons for the development of CASM. On the other hand, it allows a relatively easy extension of the executable specification language itself and thus more domain-specific specifications. The plugin architecture also enables the developer to provide language constructs that interact with Java classes or even other Java applications (see [15, 1]).

Another interesting (and very helpful) aspect of CoreASM is that it is _itself_ specified precisely in ASMs. This is a good example of how formal specifications can serve as an abstract yet precise documentation of the architecture and the semantics of an application, greatly improving maintenance [12, 14].

CoreASM is implemented in Java and published on GitHub13 under the Academic Free License 3.0.14 In addition to the core parsing and execution engine, an Eclipse plugin is provided that includes a language-sensitive editor, a debugger, and various other integrations. The fact that the language of a given specification is compiled based on the loaded plugins complicates the use of existing language modeling tools like Xtext.15

Footnote 13: https://github.com/coreasm

Footnote 14: https://opensource.org/licenses/AFL-3.0

Footnote 15: https://www.eclipse.org/Itert/

Besides several improvements to the original source code, the Institute of Software Engineering and Programming Languages at Ulm University has developed the debugger component16 and introduced a plugin that allows aspect-oriented specifications [10].

Footnote 16: https://github.com/CoreASM/coreasm.core/tree/master/org.coreasm.eclipse/rsc/doc

In addition to the formal specification in the PhD thesis [12], a user manual is provided that shows how to use the language constructs.17 CoreASM has been used in several projects to validate specifications, including more complex ones such as [13, 31].

Footnote 17: https://github.com/CoreASM/coreasm.core/tree/master/org.coreasm.engine/rsc/doc/user_manual

### Alexander's CoreASM specification of the AB Protocol

As mentioned in Sect. 5.1, CASM is based on CoreASM and therefore both languages are very similar. Consequently, only a few adaptations were required to convert the CASM specification into an executable CoreASM specification that produces the same results. In detail these are (see Figs. 20 and 21):

* Definition of the used plugins at the beginning of the specification (line 4).
* Removal of many type annotations (functions must be declared with types in order to know the arity) and replacement of some operators/keywords (e.g., '=' after a function definition in CASM is equivalent to the keyword initially in CoreASM).
```
//Req 1 (cont'd):
derived outputLnNumber(terminal) = 2 * counter - 1 + if terminal = B then 1 else 0
derived outputLine(terminal) =
    outputLnNumber(terminal) + " Terminal " + otherTerminal(terminal) + " is sending "
    + otherTerminal(terminal) + terminal + fileNumber(otherTerminal(terminal))
    + ", error(" + counter + ") = " + errTraceA(counter)
    + ", ALTR(" + terminal + ") = " + ALTR(terminal)
    + ", ALTT(" + terminal + ") = " + ALTT(terminal)

//Req 2:
rule initializeALTT = {
    fileNumber(A) := 0
    fileNumber(B) := 1
    ALTT(A) := true
    ALTT(B) := true
}

//Req 3:
function ALTR : Terminal -> BOOLEAN initially {A -> false, B -> false}
rule receiveBit(terminal) = {
    ALTR(terminal) := ALTT(otherTerminal(terminal))
}

//Reqs 5 and 6 (several actions):
rule receiveSuccess(terminal) = seq
    receiveBit(terminal)
    print(outputLine(terminal))
    let condition = if terminal = A then
            ALTR(terminal) = ALTT(terminal)
        else
            ALTR(terminal) = not ALTT(terminal)
    in {
        if condition then {
            fileNumber(terminal) := fileNumber(terminal) + 1
            ALTT(terminal) := not ALTT(terminal)
        }
    }
endseq

//Req 10:
rule receiveMsg(terminal, error) = {
    if error = false then
        receiveSuccess(terminal)
    else
        print(outputLine(terminal))
}

rule sendMsg(terminal, fileNumber, controlBit) = {
    skip
}
```

Figure 21: **Executable CoreASM specification**

1 Terminal B is sending BA1, error(1) = false, ALTR(A) = true, ALTT(A) = true
2 Terminal A is sending AB1, error(1) = false, ALTR(B) = false, ALTT(B) = true
3 Terminal B is sending BA2, error(2) = false, ALTR(A) = false, ALTT(A) = false
4 Terminal A is sending AB2, error(2) = false, ALTR(B) = true, ALTT(B) = false
5 Terminal B is sending BA3, error(3) = true, ALTR(A) = false, ALTT(A) = true
6 Terminal A is sending AB2, error(3) = true, ALTR(B) = true, ALTT(B) = true
7 Terminal B is sending BA3, error(4) = false, ALTR(A) = true, ALTT(A) = true
8 Terminal A is sending AB3, error(4) = false, ALTR(B) = true, ALTT(B) = true
9 Terminal B is sending BA3, error(5) = false, ALTR(A) = true, ALTT(A) = false
10 Terminal A is sending AB3, error(5) = false, ALTR(B) = false, ALTT(B) = true
11 Terminal B is sending BA4, error(6) = false, ALTR(A) = false, ALTT(A) = false
12 Terminal A is sending AB4, error(6) = false, ALTR(B) = false, ALTT(B) = false
13 Terminal B is sending BA4, error(7) = true, ALTR(A) = false, ALTT(A) = true
14 Terminal A is sending AB4, error(7) = true, ALTR(B) = false, ALTT(B) = false
15 Terminal B is sending BA4, error(8) = false, ALTR(A) = false, ALTT(A) = true
16 Terminal A is sending AB4, error(8) = false, ALTR(B) = true, ALTT(B) = false
17 Terminal B is sending BA5, error(9) = true, ALTR(A) = false, ALTT(A) = true
18 Terminal A is sending AB4, error(9) = true, ALTR(B) = true, ALTT(B) = true
19 Terminal B is sending BA5, error(10) = false, ALTR(A) = true, ALTT(A) = true
20 Terminal A is sending AB5, error(10) = false, ALTR(B) = false, ALTT(B) = true
21 Terminal B is sending BA6, error(11) = false, ALTR(A) = false, ALTT(A) = false
22 Terminal A is sending AB6, error(11) = false, ALTR(B) = true, ALTT(B) = false

Figure 22: **Output of the executable CoreASM specification of Figs. 20 and 21, which is identical to the output of CASM in Fig. 18**

TLA\({}^{+}\) Model of the AB Protocol

There are several ways to write a TLA\({}^{+}\) spec for a given application. In this chapter we discuss a simple-minded version, developed by Paolo, and a more sophisticated version, developed by Manuel.
### Paolo's Spec for TLC

In this first spec the aim was to make the comparison with the ASM and CASM versions as easy as possible. Thus, the spec could be regarded as an "emulation" of the single-thread execution of the CASM code. The result is shown in Module ABPaolo2, below, whose constants are initialized as follows:

Term ≜ {1, 2}

errTrace ≜ ⟨⟨FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, TRUE, FALSE, TRUE, FALSE, TRUE⟩,
            ⟨FALSE, FALSE, FALSE, TRUE, TRUE, FALSE, TRUE, FALSE, TRUE, FALSE, FALSE⟩⟩

msgs ≜ ⟨⟨"AB1", "AB2", "AB3", "AB4", "AB5", "AB6"⟩,
        ⟨"BA1", "BA2", "BA3", "BA4", "BA5", "BA6"⟩⟩

Similarly to the CASM model, this spec can be validated by treating errTrace(terminal) as an input for each terminal, and checking whether the behaviour of each terminal matches the behaviour that was derived manually in Figs. 10 and 11 and verified as CASM output in Fig. 18. The output provided by TLC is shown in Fig. 23 and confirms that this spec reproduces the desired behaviour.

In Chapter 9 we will compare the roles and assess the usefulness of the different specification methodologies for, and perspectives on, the software engineering process that we have examined in this report. For now we can say that, although this specification can be considered successful, it is not clear how it is actually "specifying" imperative code to be implemented. Rather, developing this spec felt more like an implementation effort. Perhaps, as Lamport says in his video course, the point is to think about the implementation abstractly, and perhaps this is the greatest value of the exercise. This is the same claim made by ASMs. More analysis and discussion follows in Chapter 9.
\begin{tabular}{|c l|} \hline \multicolumn{2}{|c|}{extends} & _Integers_, _Sequences_ \\ constant & _msgs_, _errTrace_, _Term_ \\ variables & _pendMsg_, _rcvMsg_, _altt_, _msgCnt_, _errCnt_, _swapTerm_, _step_ \\ _TypeOK_ & \(\stackrel{{\Delta}}{{=}}\) \\ \(\wedge\) & _pendMsg_\(\in[\)_Term_\(\rightarrow\)string_\(\times\) {0, 1}_\(]\) \\ \(\wedge\) & _rcvMsg_\(\in[\)_Term_\(\rightarrow\)string_\(]\) \\ \(\wedge\) & _altt_\(\in[\)_Term_\(\rightarrow\){0, 1}_\(]\) \\ \(\wedge\) & _msgCnt_\(\in[\)_Term_\(\rightarrow\)_Int_\(]\) \\ \(\wedge\) & _errCnt_\(\in[\)_Term_\(\rightarrow\)_Int_\(]\) \\ \(\wedge\) & _swapTerm_\(\in\){0, 1}_\(] \\ \(\wedge\) & _step_\(\in\)_Int_ \\ _Init_ & \(\stackrel{{\Delta}}{{=}}\) \\ \(\wedge\)_pendMsg_ & \(=[\)_term_\(\in\)_Term_\(\mapsto\)**if**_term_\(=\) 1 **then**_ \(\langle\)"AB1", 1\(\rangle\)**else** \(\langle\)"BA1", 1\(\rangle\)] \\ \(\wedge\)_rcvMsg_ & \(=[\)_term_\(\in\)_Term_\(\mapsto\)**if**_term_\(=\) 1 **then** **"** **else** **""** ] \\ \(\wedge\)_altt_ & \(=[\)_term_\(\in\)_Term_\(\mapsto\)**if**_term_\(=\) 1 **then** 1 **else** 1\(]\) \\ \(\wedge\)_msgCnt_ & \(=[\)_term_\(\in\)_Term_\(\mapsto\)**if**_term_\(=\) 1 **then** 1 **else** 2\(]\) \\ \(\wedge\)_errCnt_ & \(=[\)_term_\(\in\)_Term_\(\mapsto\)**if**_term_\(=\) 1 **then** 1 **else** 1\(]\) \\ \(\wedge\)_swapTerm_ & \(=0\) \\ \(\wedge\)_step_ & \(=0\) \\ _vars_ & \(\stackrel{{\Delta}}{{=}}\) _(pendMsg_, _rcvMsg_, _altt_, _msgCnt_, _errCnt_, _swapTerm_, _step_)_ \\ _flipBit_(_bit_) & \(\stackrel{{\Delta}}{{=}}\) **if**_bit_\(=\)_0_**then** 1 **else** 0 \\ _cntCounter_(_n_, _max_) & \(\stackrel{{\Delta}}{{=}}\) **if**_n_\(<\)_max_**then**\(n+1\)**else**_max_ \\ _alternationTest_(_term_) & \(\stackrel{{\Delta}}{{=}}\) **if**_term_\(=\)_1_**then**_pendMsg_[1][2] \(=\)_pendMsg_[2][2] \\ _else_pendMsg_[1][2] & \(\neq\)_pendMsg_[2][2] \\ _otherTerm_(_term_) & \(\stackrel{{\Delta}}{{=}}\) **if**_term_\(=\)_1_**then** 2 **else** 1 \\ _ReceiveMsg_(_term_) & \(\stackrel{{\Delta}}{{=}}\) \\ \(\wedge\)**if**_errTrace_[_term_][_errCnt_[_term_]] **then** \\ \(\underline{\phantom{\wedge}}\)Transmission error detected in incoming msg, so only error and global counters are incremented: \\ \(\wedge\)_errCnt_\({}^{\prime}=[\)_errCnt_ except_![_term_] = _cntCounter_(_errCnt_[_term_], 11\(]\)] \\ \(\wedge\)_step_\({}^{\prime}=\)_cntCounter_(_step_, 22\()\) \\ \(\wedge\)_unchanged_\(\langle\)_pendMsg_, _rcvMsg_, _altt_, _msgCnt_\(\rangle\) \\ \(\underline{\phantom{\wedge}}\)**else** \\ **if**_alternationTest_(_term_) **then** \\ No error in incoming msg, so it is stored and outgoing msg is prepared: \\ \(\wedge\)_rcvMsg_\({}^{\prime}=[\)_rcvMsg_ except_![_term_] = _Append_(_rcvMsg_[_term_], _pendMsg_[_otherTerm_(_term_)][1])\) \\ \(\wedge\)_altt_\({}^{\prime}=[\)_altt_ except_![_term_] = _flipBit_(_altt_[_term_])] \\ \(\wedge\)_pendMsg_\({}^{\prime}=[\)_pendMsg_ except_![_term_] = \(\langle\)_msgs_[_term_][_msgCnt_[_term_]], _flipBit_(_altt_[_term_]))\(\rangle\) \\ \(\wedge\)_msgCnt_\({}^{\prime}=[\)_msgCnt_ except_![_term_] = _cntCounter_(_msgCnt_[_term_], 6\()\)] \\ \(\wedge\)_errCnt_\({}^{\prime}=[\)_errCnt_ except_![_term_] = _cntCounter_(_errCnt_[_term_], 11\()\)] \\ \(\wedge\)_step_\({}^{\prime}=\)_cntCounter_(_step_, 22\()\) \\ **else** No error, but incoming msg has already been stored and next outgoing msg prepared: \\ \(\wedge\)_errCnt_\({}^{\prime}=[\)_errCnt_ except_![_term_] = _cntCounter_(_errCnt_[_term_], 11\()\)] \\ \(\wedge\)_step_\({}^{\prime}=\)_cntCounter_(_step_, 22\()\) \\ 
\(\wedge\)_unchanged_\(\langle\)_pendMsg_, _rcvMsg_, _altt_, _msgCnt_\(\rangle\) \\ \(\wedge\)_swapTerm_\({}^{\prime}=\)_flipBit_(_swapTerm_) \\ _Next_ & \(\stackrel{{\Delta}}{{=}}\) **if**_swapTerm_\(=\)_0_**then**_ReceiveMsg_(1) **else**_ReceiveMsg_(2) \\ _Spec_ & \(\stackrel{{\Delta}}{{=}}\) _Init_\(\wedge\)\(\Box[\)_Next_\(]_{\text{vars}}\) \\ \hline \multicolumn{2}{|c|}{\(\underline{\phantom{\wedge}}\)Modification History} \\ \hline \multicolumn{2}{|c|}{Last modified Sun Doc 04 22:01:49 GMT 2022 by paolo} \\ \multicolumn{2}{|c|}{Created Tue Nov 22 11:19:40 GMT 2022 by paolo} \\ \end{tabular} \end{tabular} Figure 23: **TLC state trace for simple TLA\({}^{+}\) spec** ### Manuel's Spec for Apalache Manuel wrote a spec for the Apalache symbolic model checker.18 Apalache is slightly different from TLC but the language is TLA\({}^{+}\) in both cases. Apalache allows (sometimes requires) annotating the types of variables, constants, and functions. Thus, the modules include type annotations. Also, Apalache uses these annotations to run a type checker, which helps debug TLA\({}^{+}\) specifications. Footnote 18: [https://apalache.informal.systems/](https://apalache.informal.systems/) The spec involves three modules: ABP3_typedefs, ABP3, and MC_ABP3: * The ABP3_typedefs module defines custom types aliases used by the other modules and Apalache. * The ABP3 module specifies the AB protocol. * The MC_ABP3 module instantiates the ABP3 specification by giving concrete values to the constants MsgsA and MsgsB. It also includes an invariant (consistentPrefix) that checks the correctness of the protocol and two trace invariant generateTrace and generateCompleteTrace22Steps used for debugging and to generate traces. #### 7.2.1 ABP3_typedefs: Type aliases We define two custom type aliases: MSG and STATE. The type MSG defines a message sent between terminals. It contains three fields: the receiver terminal, the message payload, and the alternate bit of the sender terminal. The type STATE defined the state of the state machine. ``` @typeAlias:MSG=[ receiver:Str, msg:Str, altr:Int]; @typeAlias:STATE=[ storedMsgs:Str->Seq(Str), alt:Str->Int, msg:Str->Str, pendingMsg:MSG, counterMsgs:Str->Int]; ``` #### 7.2.2 ABP3.tla: The model Our model defines 2 constants and 5 variables: ``` CONSTANTS {\*@type:Seq(Str); MsgsA, \*@type:Seq(Str); MsgsB VARIABLES {\*@type:Str->Seq(Str); storedMegs, \*@type:Str->Int; alt, \*@type:Str->Str; msgt, \*@type:MSG; pendingMsg, \*@type:Str->Int; counterMags The specification assumes that terminal B sends the first message and that the alternation bit is initially set to 1 at both terminals. Furthermore, the specification assumes that terminal A has not fetched any message initially. This is important to guarantee that no message sent by terminal A is dropped. Variables are initialized as follows to capture these assumptions: Init == /\storedMsgg = [ terminal \in Terminals |-> <<>> ] \* initializealternate bits /\alt = [ terminal \in Terminals |->1 ] /\msgst = [ terminal \in Terminals |->IF terminal = "terminalA" THEN "garbage" ELSE MsgsA[1] ] /\pendingMsg = [ receiver |-> "terminalA", msgr |->MsgsB[1], altr |->1 ] /\counterSend = [ terminal \in Terminals |->IF terminal = "terminalA" THEN 1 ELSE 2 ] /\counterMags = [ terminal \in Terminals |->IF terminal = "terminalA" THEN 1 ELSE 2 ] The model includes the following auxiliary operators: * Terminals: it returns the set of terminals. * OtherTerminal: it returns the other terminal. 
* GetMessages: it fetches a message payload from the corresponding message array by index. * AcceptMsg: given a terminal and a message, it returns TRUE is the message should be accepted or FALSE otherwise. * AlternateBit: alternates a bit. Terminals == {"terminalA", "terminalB"} OtherTerminal (terminal) == IF terminal = "terminalA" THEN "terminalB" ELSE "terminalA" GetMessage(terminal, sequence) == IF terminal = "terminalA" THEN MsgsA[sequence] ELSE MsgsB[sequence] \*@type:(Str,MSG) =>Bool; AcceptMsg(receiver, msg, error) == /\"error /\"reciver = "terminalA" /\ msg.altr = alt[receiver] /\"reciver = "terminalB" /\ msg.altr /= alt[receiver] AlternateBit(bit) == IF bit = 0 THEN 1 ELSE 0 Finally, the main functionality is defined by operators ReceiveMsg and Next. The Next operator picks the pending message from the pendingMsg variable and calls the ReceiveMsg operator. Next == IF \A terminal \in Terminals : Len(storedMsgs[terminal]) = Len(MsgsA) THEN UNCHANGED <storedMsgg, pendingMsg, alt, counterMsgs, msgt>> ELSE \E receiver \in Terminals: \E error \in BOOLEAN: pendingMsg.receiver = receiver /\"ReceiveMsg(receiver, pendingMsg, error) The ReceiveMsg operator first checks if the message should be accepted. If not, then the terminal resends its last message. If the message should be accepted according to AcceptMsg, then the terminal stores the message payload, alternates the bit, fetches a new message payload, and sends the new message message. ``` \*@type:(Str,MSG,Bool)=>Bool; ReceiveMsg(receiver,msg,error)== IFAcceptMsg(receiver,msg,error)THEN \*messageaccepted LETalt ==AlternateBit(alt[receiver])IN LETnextMsg == GetMessage(receiver,counterMsgs[receiver])IN LETsendMsg == [receiver |->OtherTerminal(receiver), msgr |->nextMsg, altr |->altr]IN \*storemessage /\ storedMsgs' = [storedMsgs EXCEPT![receiver] = Append(@,msg.msgr) ] \*alternate bit /\ alt' = [ alt EXCEPT![receiver] = altt ] \* fetchnext message /\ msgr' = [ msgr EXCEPT![receiver] = nextMsg] \* sendmessage and clean processed /\ pendingMsg' = sendMsg \* update counters /\ counterMags' = [ counterMsgs EXCEPT![receiver] = @ + 1 ] ELSE \* messagenotaccepted; resendinglast message LETsendMsg == [ receiver |->OtherTerminal(receiver), msgr |->msgr[receiver], altr |->alt[receiver]]IN /\ pendingMsg' = sendMsg /\ UNCHANGED <<storedMsgs, alt, msgt, counterMsgs>> #### 7.2.3 Mc_ABP3.tla: Instantiating the model We instantiate the constants as follows: MsgA == <<"AB1", "AB2", "AB3", "AB4", "AB5", "AB6">> MagsB == <<"BA1", "BA2", "BA3", "BA4", "BA5", "BA6">> The correctness of the protocol is verified via the consistentPrefix invariant. This invariant checks that the sequence of messages received by a terminal is always a prefix of the sequence that the counterparty terminal is supposed to send, e.g., it checks that the sequence storedMsgs of terminal A is a prefix of the sequence defined by the constant MsgsB. The invariant implies that a terminal stores messages in a consistent order (as scheduled) dealing successfully with errors and duplicates, without dropping messages. The consistentPrefix invariant uses the isPrefix operator internally. This operator is implemented using ApaFoldSet, a built-in fold operator of Apalache. Fold operators are common in functional programming and refer to the iterative application of a binary operator over a collection: F and the set of integers from 1 to the length of storedMsgs[terminal] in our case. 
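For readers who prefer a conventional programming notation, the fold idiom can be sketched in Python, with functools.reduce playing the role of ApaFoldSet, the lambda playing the role of F, and an index range playing the role of the folded set. This is illustrative only; the sample data and all names below are ours, not part of the Apalache model.

```python
from functools import reduce

msgs = {"terminalA": ["AB1", "AB2", "AB3"],   # what each terminal is scheduled to send
        "terminalB": ["BA1", "BA2", "BA3"]}

def other(t):
    return "terminalB" if t == "terminalA" else "terminalA"

def is_prefix(stored, terminal):
    # True iff stored[terminal] is a prefix of the counterparty's schedule.
    # Assumes, as in the model, that a terminal never stores more messages
    # than the counterparty is scheduled to send.
    expected = msgs[other(terminal)]
    return reduce(lambda ok, i: ok and stored[terminal][i] == expected[i],
                  range(len(stored[terminal])), True)

stored = {"terminalA": ["BA1", "BA2"], "terminalB": ["AB1"]}
print(all(is_prefix(stored, t) for t in stored))   # consistentPrefix analogue -> True
```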
ApaFoldSet is used in this case to iterate over all entries of storedMsgs in index order and compare each entry to the entry in the counterparty's message array. The fold function returns TRUE if all comparisons match and FALSE otherwise. isPrefix(terminal) == LET F(result, index) == IF result = FALSE THEN FALSE ELSE storedHags[terminal][index] = GetMessage(OtherTerminal(terminal), index) IN ApaFoldSet(F, TRUE, { i \in 1..20: i <= Len(storedMsgs[terminal]) }) consistentPrefix == \A terminal \in Terminals: isPrefix(terminal) ``` We have checked the correctness of the model by checking consistentPrefix with Apalache on executions of up to 22 steps. This guarantees that for executions of 22 or less steps, the invariant consistentPrefix is never violated, given any sequence of errors. MC_ABP3.tla also includes two trace invariants: generateTrace and generateCompleteTrace22Steps. The trace invariant generateTrace simply generates a trace of 12 steps. We mostly use it for debugging. The trace invariant generateCompleteTrace22Steps is more interesting. It generates a trace of 22 steps in which both terminals successfully exchange all their messages but only at the last step. ``` \*@type:Seq(STATE)=>Bool; generateTrace(trace)== LETExample== /Len(trace)=12 IN "Example ``` \*@type:Seq(STATE)=>Bool; generateCompleteTrace22Steps(trace)== LETExample== /\Len(trace)=23 /\Eterminal\in Terminals: trace[22].storedMags[terminal]/=IFterminal="terminalA" THENMagsB ELSEMsgA /\A terminal\in Terminals: trace[23].storedMags[terminal]=IFterminal="terminalA" THENMagsB ELSEMsgA IN "Example #### 7.2.4 Output trace We have generated an output trace using the generateCompleteTrace22Steps trace invariant. To find this execution for this particular model, Apalache explores increasingly longer executions for all possible permutations of error sequences until an execution satisfying the conditions in generateCompleteTrace22Steps is found. 
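To build intuition for what the bounded check of consistentPrefix buys us, the same property can also be brute-forced in Python. This is emphatically not how Apalache works internally (it is a symbolic, SMT-based checker); the sketch below merely enumerates every error sequence up to a small bound, replays a simplified mirror of the ReceiveMsg next-state logic, and asserts the prefix property after every step. All names in the sketch are ours.

```python
from itertools import product

MSGS = {"A": ["AB1", "AB2", "AB3", "AB4", "AB5", "AB6"],
        "B": ["BA1", "BA2", "BA3", "BA4", "BA5", "BA6"]}

def other(t):
    return "B" if t == "A" else "A"

def replay(errors):
    alt = {"A": 1, "B": 1}
    next_idx = {"A": 0, "B": 1}              # next message each terminal will fetch
    last_sent = {"A": None, "B": "BA1"}      # None plays the role of "garbage"
    stored = {"A": [], "B": []}
    pending = {"to": "A", "payload": "BA1", "bit": 1}   # terminal B sends first
    for err in errors:
        r = pending["to"]
        accept = (not err) and (pending["bit"] == alt[r] if r == "A"
                                else pending["bit"] != alt[r])
        if accept:
            stored[r].append(pending["payload"])
            alt[r] = 1 - alt[r]
            payload = MSGS[r][next_idx[r]]
            next_idx[r] += 1
            last_sent[r] = payload
            pending = {"to": other(r), "payload": payload, "bit": alt[r]}
        else:                                 # error or stale bit: resend last message
            pending = {"to": other(r), "payload": last_sent[r], "bit": alt[r]}
        for t in ("A", "B"):                  # consistentPrefix analogue
            assert stored[t] == MSGS[other(t)][:len(stored[t])]

for errs in product([False, True], repeat=8):  # all 256 error sequences of length 8
    replay(errs)
print("prefix invariant holds on every run of length 8")
```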
``` apalache-mccheck--inv=generateCompleteTrace22Steps--length=22MC_ABP3.tla ``` The trace: ``` 1:<Initialpredicate> /\alt=SetsFun({<<"terminalA", 1>>, <<"terminalB", 1>> }) /counterMags=SetsFun({<<"terminalA", 1>>, <<"terminalB", 2>> }) /\msg=SetsFun({<<"terminalA", " garbage">>, <<"terminalB", "BA1">}) /\pendingHags=\altr[->1,msg|->"BA1",receiver|->"terminalA"] /\storedMags=SetsFun({<<"terminalA", <<>>>>, <<"terminalB", <<>>>> }) 2: <Next> /\ alt = SetAsFun({ <<"terminalA", 0>>, <<"terminalB", 1>> }) /\ counterMags = SetAsFun({ <<"terminalA", 4>>, <<"terminalB", 4>> }) /\ msg = SetAsFun({ <<"terminalA", "AB2">>, <<"terminalB", "BA3">> }) /\ pendingMsg = [altr |-> 0, msgr |-> "AB3", receiver |-> "terminalB"] /\ storedMags = SetAsFun({ <<"terminalA", "CC"BA1", "BA2", "BA3">>, <<"terminalB", "CC"BA1", "AB2">> }) 7: <Next> /\ alt = SetAsFun({ <<"terminalA", 0>>, <<"terminalB", 0>> }) /\ counterMags = SetAsFun({ <<"terminalA", 4>>, <<"terminalB", 5>> }) /\ msg = SetAsFun({ <<"terminalA", "AB4">>, <<"terminalB", "BA4">> }) /\ pendingMsg = [altr |-> 0, msgr |-> "BA4", receiver |-> "terminalA"] /\ storedMsgs = SetAsFun({ <<"terminalA", "CC"BA1", "BA2", "BA3">>, <<"terminalB", "CC"AB1", "AB2", "AB3">> }) 8: <Next> /\ alt = SetAsFun({ <<"terminalA", 1>>, <<"terminalB", 0>> }) /\ counterMagss = SetAsFun({ <<"terminalA", 5>>, <<"terminalB", 5>> }) /\ msg = SetAsFun({ <<"terminalA", "AB4">>, <<"terminalB", "BA4">> }) /\ pendingMsg = [altr |-> 1, msgr |-> "AB4", receiver |-> "terminalB"] /\ storedMsgs = SetAsFun({ <<"terminalA", "CC"BA1", "BA2">>>>, <<"terminalB", "CC"AB1", "AB2">>>> }) 9: <Next> /\ alt = SetAsFun({ <<"terminalA", 1>>, <<"terminalB", 0>> }) / counterMsgs = SetAsFun({ <<"terminalA", >>>>, <<"terminalB", >>>> }) / msg = SetAsFun({ <<"terminalA", "AB4">>, <<"terminalB", "BA4">> }) / pendingMgg = [altr |-> 0, msg |-> "BA4", receiver |-> "terminalA"] / storedMsgs = SetAsFun( <<"terminalA", <<"BA1", "BA2", "BA3", "BA4">>>, <<"terminalB", <<"AB1", "AB2", "AB3", "AB4">>> }) 10: <Next> / alt = SetAsFun({ <<"terminalA", >>>>, <<"terminalB", 0>> }) / counterMsgs = SetAsFun({ <<"terminalA", 5>>, <<"terminalB", 5>> }) / msgt = SetAsFun({ <<"terminalA", "AB4">>, <<"terminalB", "BA4">> }) / pendingMgg = [altr |-> 1, msg |-> "AB4", receiver |-> "terminalB"] / storedMsgs = SetAsFun({ <<"terminalA", <<"BA1", "BA2", "BA3", "BA4">>>, <<"terminalB", "AB2", "AB3">>> }) 11: <Next> / alt = SetAsFun({ <<"terminalA", 1>>, <<"terminalB", 0>> }) / counterMsgs = SetAsFun({ <<"terminalA", 5>>, <<"terminalB", 5>> }) / msgt = SetAsFun({ <<"terminalA", "AB4">>, <<"terminalB", "BA4">> }) / pendingMgg = [altr |-> 0, msg |-> "BA4", receiver |-> "terminalA"] / storedMsgs = SetAsFun({ <<"terminalA", <<"BA1", "BA2", "BA3", "BA4">>>, <<"terminalB", "<"AB1", "AB2", "AB3">>> }) 12: <Next> / alt = SetAsFun({ <<"terminalA", 1>>, <<"terminalB", 0>> }) / counterMsgs = SetAsFun({ <<"terminalA", 5>>, <<"terminalB", 5>> }) / msgt = SetAsFun({ <<"terminalA", "AB4">>, <<"terminalB", "BA4">> }) / pendingMgg = [altr |-> 1, msg |-> "AB4", receiver |-> "terminalB"] / storedMsgs = SetAsFun({ <<"terminalA", "<<"BA1", "BA2", "BA3", "BA4">>>, <<"terminalB", "<"AB1", "AB2", "AB3">>> }) 13: <Next> / alt = SetAsFun({ <<"terminalA", 1>>, <<"terminalB", 1>> }) / counterMsgs = SetAsFun({ <<"terminalA", 5>>, <<"terminalB", 6>> }) / msgt = SetAsFun({ <<"terminalA", "AB4">>, <<"terminalB", "BA5">> }) / pendingMgg = [altr |-> 1, msg |-> "BAS", receiver |-> "terminalA"] / storedMsgs = SetAsFun({ <<"terminalA", "<<"BA1", "BA2", "BA3", "BA4">>>, 
<<"terminalB", "<"AB1", "AB2", "AB3", "AB4">>> }) 14: <Next> / alt = SetAsFun({ <<"terminalA", 0>>, <<"terminalB", 1>> }) / counterMsgs = SetAsFun({ <<"terminalA", 6>>, <<"terminalB", 6>> }) / msgt = SetAsFun({ <<"terminalA", "AB5">>, <<"terminalB", "BA5">> }) / pendingMgg = [altr |-> 0, msg |-> "ABS", receiver |-> "terminalB"] / storedMsgs = SetAsFun({ <<"terminalA", "<<"BA1", "BA2", "BA3", "BA4", "BA5">>>, <<"terminalB", "<"AB1", "AB2", "AB3", "AB4">>> }) 15: <Next> / alt = SetAsFun({ <<"terminalA", 0>>, <<"terminalB", 1>> }) / counterMsgs = SetAsFun({ <<"terminalA", 6>>, <<"terminalB", 6>> }) / msgt = SetAsFun({ <<"terminalA", "AB5">>, <<"terminalB", "BA5">> }) / pendingMgg = [altr |-> 1, msg |-> "BAS", receiver |-> "terminalA"] / storedMsgs = SetAsFun({ <<"terminalA", "<<"BA1", "BA2", "BA3", "BA4", "BA5">>>, <<"terminalB", "<"AB1", "AB2", "AB3", "AB4">>> }) 16: <Next> / alt = SetAsFun({ <<"terminalA", 0>>, <<"terminalB", 1>> }) / counterMsgs = SetAsFun({ <<"terminalA", 6>>, <<"terminalB", 6>> }) / msgt = SetAsFun({ <<"terminalA", "AB5">>, <<"terminalB", "BA5">> }) / pendingMgg = [altr |-> 0, msg |-> "AB5", receiver |-> "terminalB"] / storedMsgs = SetAsFun({ <<"terminalA", "<<"BA1", "BA2", "BA3", "BA4", "BA5">>>, <<"terminalB", "<"AB1", "AB2", "AB3", "AB4">>> }) 17: <Next> / alt = SetAsFun({ <<"terminalA", 0>>, <<"terminalB", 1>> >}) / counterMsgs = SetAsFun({ <<"terminalA", 6>>, <<"terminalB", 6>> }) / msg = SetAsFun({ <<"terminalA", "AB5>>, <<"terminalB", "BAS">> }) / pendingMsg = [altr |-> 1, msg |-> "BA5", receiver |-> "terminalA"] / storedMsgs = SetAsFun({ <<"terminalA", "<"BA1", "BA2", "BA3", "BA4", "BAS">>>, <<"terminalB", <<"AB1", "AB2", "AB3", "AB4">>> }) 18: <Next> / alt = SetAsFun({ <<"terminalA", 0>>, <<"terminalB", 1>> }) / counterMsgs = SetAsFun({ <<"terminalA", 6>>, <<"terminalB", 6>> }) / msg = SetAsFun({ <<"terminalA", "AB5>>, <<"terminalB", "BAS">> }) / pendingMsg = [altr |-> 0, msg |-> "AB5", receiver |-> "terminalB"] / storedMsgs = SetAsFun({ <<"terminalA", "<"BA1", "BA2", "BA3", "BA4", "BA5">>>, <<"terminalB", <<"AB1", "AB2", "AB3", "AB4">>> }) 19: <Next> / alt = SetAsFun({ <<"terminalA", 0>>, <<"terminalB", 1>> }) / counterMsgs = SetAsFun({ <<"terminalA", 6>>, <<"terminalB", 6>> }) / msg = SetAsFun({ <<"terminalA", "AB5">>, <<"terminalB", "BA5">> }) / pendingMsg = [altr |-> 1, msg |-> "BA5", receiver |-> "terminalA"] / storedMsgs = SetAsFun({ <<"terminalA", <<"BA1", "BA2", "BA3", "BA4", "BA5">>>, <<"terminalB", <<"AB1", "AB2", "AB3", "AB4">>> }) 20: <Next> / alt = SetAsFun({ <<"terminalA", 0>>, <<"terminalB", 1>> }) / counterMsgs = SetAsFun({ <<"terminalA", 6>>, <<"terminalB", 6>> }) / msg = SetAsFun({ <<"terminalA", "AB5">>, <<"terminalB", "BA5">> }) / pendingMsg = [altr |-> 0, msg |-> "AB5", receiver |-> "terminalB"] / storedMsgs = SetAsFun({ <<"terminalA", "<"BA1", "BA2", "BA3", "BA4", "BA5">>>, <<"terminalB", <<"AB1", "AB2", "AB3", "AB4">>> }) 21: <Next> / alt = SetAsFun({ <<"terminalA", 0>>, <<"terminalB", 0>> }) / counterMsgs = SetAsFun({ <<"terminalA", 6>>, <<"terminalB", 7>> }) / msg = SetAsFun({ <<"terminalA", "AB5">>, <<"terminalB", "BA6">> }) / pendingMsg = [altr |-> 0, msg |-> "BA6", receiver |-> "terminalA"] / storedMsgs = SetAsFun({ <<"terminalA", "<"BA1", "BA2", "BA3", "BA4", "BA5">>>, <<"terminalB", "<"AB1", "AB2", "AB3", "AB4", "AB5">>> }) 22: <Next> / alt = SetAsFun({ <<"terminalA", 1>>, <"terminalB", 0>> }) / counterMsgs = SetAsFun({ <<"terminalA", 6>>, <"terminalB", 7>> }) / msg = SetAsFun({ <<"terminalA", "AB5">>, <<"terminalB", "BA6">> }) / 
pendingMsg = [altr |-> 1, msg |-> "AB6", receiver |-> "terminalB"] / storedMsgs = SetAsFun({ <<"terminalA", "<"BA1", "BA2", "BA3", "BA4", "BA5", "BA6">>>, <<"terminalB", "<"AB1", "AB2", "AB3", "AB4", "AB5">>> }) 23: <Next> / alt = SetAsFun({ <<"terminalA", 1>>, <<"terminalB", 1>> }) / counterMsgs = SetAsFun({ <<"terminalA", 7>>, <<"terminalB", 8>> }) / msg = SetAsFun({ <<"terminalA", "AB6">>, <<"terminalB", "BA6">> }) / pendingMsg = [altr |-> 1, msg |-> "AB6", receiver |-> "terminalA"] / storedMsgs = SetAsFun({ <<"terminalA", "<"BA1", "BA2", "BA3", "BA4", "BA5", "BA6">>>, <<"terminalB", "<"AB1", "AB2", "AB3", "AB4", "AB5", "AB6">>> }) ## 8 Quint Specification of AB Protocol ### Introduction to Quint Quint is a specification language over the same underlying logic of TLA\({}^{+}\). Quint has syntax and tooling that aim to resemble programming languages and their environments in many ways. By restricting the syntax in some aspects, such as avoiding operator overloading, Quint can be parsed and statically analyzed with significantly less effort. As the specification in Section 8.2 shows, Quint's syntax has constructs related to static analysis. The most evident example is typing information. Quint also has different qualifiers for its operators, with which specification writers can state their expectations on how an operator can interact with the state. Quint is still under construction, and it is not fully integrated with a model checker as of this writing. It offers a REPL (Read-Eval-Print Loop) that is able to perform random simulation and obtain traces of execution. The REPL is a useful tool to enable initial inspection and debugging of specifications, and it makes sense to use it before running a model checker because of its fast feedback for errors and accessible interface. However, since in order to verify properties Quint needs a model checker, it is currently being integrated into Apalache [18]. ### Gabriela's Spec of the AB Protocol This Section describes a Quint specification for the AB Protocol written by Gabriela. This specification follows the same level of abstraction as the CoreASM specification in Section 6.2, and it is presented in a broken-up manner to include explanations. The only custom type defined is a record for representing a message being transmitted: typeMSG = { receiver: str, msgr: str, altr: int, error: bool } The state is composed of four main variables for the protocol, and two auxiliary variables (counter and output) that keep track of extra information required for testing executions: // The state variables var storedMsgs: str -> List[int] var altt: str -> bool var altr: str -> bool var fileNumber: str -> int // Auxiliary state variables for testing var counter: int var output: List[{ terminal: str, sent: int, error: bool, altr: bool, altt: bool }] This spec has a single (pure) operator, that is, an operator that does not interact with the state at all: pure def otherTerminal(terminal) = if (terminal == "A") "B" else "A" The initial condition is defined by an action called Init, assigning a value for each state variable. 
action Init = all { storedMsgs'=Map("A"->[],"B"->[]), altt'=Map("A"->true,"B"->true), altr'=Map("A"->false,"B"->false), fileNumber"=Map("A"->0,"B"->1), counter'=0, output'=[], } The condition of acceptance depends on the altt state variable, and is therefore defined by an state-level operator, which requires the def modifier: def conditionOfAcceptance(terminal: str, newAltr: bool): bool = or { and { terminal == "A", newAltr == altt.get(terminal) }, and { terminal == "B", newAltr!= altt.get(terminal) }, } Message passing also has to be defined through the state. One option is to define a pendingMsg state variable with the latest message's payload, as it was done in the TLA\({}^{+}\) in Section 7.2. This spec, similarly to the CoreASM spec, does not specify that level of detail on message passing. Instead, the information that would be transmitted in a message is read directly from the other terminal's state variables. The operators responsible for this have their names prefixed with receive with the intention of making it explicit that they relate to message passing. // These operators simulate message reception. They read the ALTT and the // message (file number) from the state belonging to the other terminal. def receiveBit(terminal) = altt.get(otherTerminal(terminal)) def receiveFileNumber(terminal) = fileNumber.get(otherTerminal(terminal)) The action for accepting a message is defined as ReceiveSuccess and updates all core state variables. It receives the messages using the receiveBit and receiveFileNumber operators, then uses the new received altr value to determine if the condition of acceptance is satisfied. If it is, fileNumber and altt for the receiving terminal are updated, and the message is stored in storedMsgs. action ReceiveSuccess(terminal: str): bool = val newAltr = receiveBit(terminal) val newFileNumber = receiveFileNumber(terminal) all { altr'=altr.set(terminal, newAltr), output'=output.append({ terminal: otherTerminal(terminal), sent: newFileNumber, error: false, altr: newAltr, altt: altt.get(terminal) }), if (conditionOfAcceptance(terminal, newAltr)) all { fileNumber'=fileNumber.setBy(terminal, (n)=>n+1), altt'=altr.setBy(terminal, (b)=>not(b)), storedMsgs'= storedMsgs.setBy(terminal, (msgs)=>msgs.append(newFileNumber)), } else all { fileNumber'=fileNumber, altt'=altt, storedMsgs'=storedMsgs, } } The action for re-sending a message is defined as SendMsg and, as in the CoreASM spec, no variables are updated. ///Sendingmessagesdoesn'tchangethestateofthesystem. actionSendMsg(terminal,error)=all{ storedMsgs'=storedMsgs, altt'=altt, altr'=altr, fileNumber'=fileNumber, } The action ReceiveMsg defines which action should be taken according to an error flag. actionReceiveMsg(terminal:str,error:bool):bool= if(error)all{ //Resendthelastmessage SendMsg(terminal,error), output'=output.append( terminal:otherTerminal(terminal), sent:fileNumber.get(otherTerminal(terminal)), error:error, altr:altr.get(terminal), altt:altr.get(terminal), } } else ReceiveSuccess(terminal) In order to simulate this protocol, a run is defined. The concept of a run is not present in TLA\({}^{+}\), and was introduced in Quint for cases similar to this one, where the goal is to guide a simulation of the protocol according to some parameters and check the output. Here, the parameters are the order of actions (between sending and receiving on each terminal) and the sequence of errors, that is, for each step, whether an error occurs in that step. 
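The way the run is put together relies on the combinators `then` and `repeated`. Our reading of their semantics, inferred from how they are used below and therefore an assumption on our part, is that `a.then(b)` runs a and then b on the resulting state, and `a.repeated(n)` runs a n times. The following Python sketch captures that reading; the toy `send`/`receive` actions only bump counters and are not the protocol actions themselves.

```python
def then(a, b):
    return lambda state: b(a(state))

def repeated(a, n):
    def act(state):
        for _ in range(n):
            state = a(state)
        return state
    return act

send    = lambda s: {**s, "sent": s["sent"] + 1}
receive = lambda s: {**s, "received": s["received"] + 1}

one_round = then(then(send, receive), then(send, receive))  # send/receive B->A, then A->B
run = repeated(one_round, 11)                               # eleven rounds, as in the run below

print(run({"sent": 0, "received": 0}))   # {'sent': 22, 'received': 22}
```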
The sequence of actions is defined as the same sequence in run of the CoreASM specification, and the map of errors is equivalent to the error trace functions defined in that specification as well. The value of expectedOutput is omitted here to save space, but the actual output and the assertion result are given at the end of this section. purevalerrors:str->List[bool]=Map( "A"->[false,false,true,false,false,false,true,false,true,false,false], "B"->[false,false,true,true,false,true,false,true,false,false] ) Init.then((all{ SendMsg("B",errors.get("B")[counter]), output'=output, counter'=counter, }).then(all{ ReceiveMsg("A",errors.get("A")[counter]), counter'=counter, }).then(all{ SendMsg("A",errors.get("A")[counter]), output'=output, counter'=counter, }).then(all{ ReceiveMsg("B",errors.get("B")[counter]), counter'=counter + 1, }).repeated(11)).then(all{ assert(output==expectedOutput), output'=output, counter'=counter, altt'=altt, altr'=altr, storedMsgs'=storedMsgs, fileNumber'=fileNumber, }) ``` To obtain a trace in the Quint REPL, the previous code blocks need to be wrapped inside a module that needs to be loaded and imported. By wrapping the code blocks inside a module called ABP with'module ABP... ', it can be loaded in the REPL by running 'quint -r src/quint/ABP.qnt::ABP' in the shell. Then, the run 'test' can be invoked to run the simulation and make the assertion. That will raise an error if the assertion fails. The output can be inspected by evaluating the state variable 'output' after invoking 'test'. This is the obtained result: ``` QuintREPLv0.0.3 Type".exit"toexit,or".help"formoreinformation true >>>test true >>>output { terminal:"B",sent:1,error:false,altr:true,altt:true}, { terminal:"A",sent:1,error:false,altr:false,altt:true}, { terminal:"B",sent:2,error:false,altr:false,altt:false}, { terminal:"A",sent:2,error:false,altr:true,altt:false}, { terminal:"B",sent:3,error:true,altr:false,altt:true}, { terminal:"A",sent:2,error:true,altr:true,altt:true}, { terminal:"B",sent:3,error:false,altr:true,altt:true}, { terminal:"A",sent:3,error:true,altr:true,altt:true}, { terminal:"B",sent:3,error:false,altr:true,altt:false}, { terminal:"A",sent:3,error:false,altr:false,altt:true}, { terminal:"B",sent:4,error:false,altr:false,altt:false}, { terminal:"B",sent:4,error:true,altr:false,altt:false}, { terminal:"B",sent:4,error:false,altr:false,altt:true}, { terminal:"A",sent:4,error:false,altr:true,altt:false}, { terminal:"B",sent:5,error:true,altr:false,altt:true}, { terminal:"A",sent:4,error:true,altr:true,altt:true}, { terminal:"B",sent:5,error:false,altr:true,altt:true}, { terminal:"A",sent:5,error:false,altr:true,altt:true}, { terminal:"A",sent:5,error:false,altr:false,altt:true}, { terminal:"B",sent:6,error:false,altr:false,altt:false}, { terminal:"A",sent:6,error:false,altr:true,altt:false}, Comparison between the ASM and TLA\({}^{+}\) Methodologies In this chapter we discuss what we have learned from the different specification perspectives of the previous chapters, first at the theoretical level and then at the level of executable languages and tools. The considerations presented in this chapter should be seen more as speculative conjectures meant to stimulate further discussion than certain conclusions or proven results. 
The objective of the discussion, and indeed of the whole report, is to explore the complementarities between the different specification methods and, therefore, the possibility of combining them in some way that will strengthen the software engineering development process.

### ASMs and TLA\({}^{+}\)

Invoking a "geometrization" metaphor, each type of model can be seen as the specification of the boundary surface of a state space shaped like an infinite cone, whose vertex is rooted at the INIT state and which fans out in 2D or 3D space. The boundary and interior of this cone can be explored by model checkers like TLC or Apalache. Stated differently, TLA\({}^{+}\) models appear to be analogous to a set of simultaneous linear inequalities from elementary analytic geometry, which together define a certain region of the plane. An ASM model can be interpreted similarly, whereas a CASM model requires more data to run, and the result of a run is usually a single trajectory through that same space, starting from the same vertex. CoreASM and Quint specifications are also able to specify single trajectories within the cone, while TLC and Apalache state traces are analogous constructions derivable from TLA\({}^{+}\) models plus suitable constraints.

Methodologically, both frameworks embrace the concepts of abstraction and refinement. Both methodologies go out of their way to stress the importance of abstraction, i.e. of focusing on macro aspects of the application being specified rather than on the implementation details. In addition, both methods encourage the use of simple high-level models to start with, adding granularity in later iterations. As an example of refinement in TLA\({}^{+}\), Lecture 9 of Leslie Lamport's video course19 presents the specification of a simple version of the AB protocol,20 where some messages are lost randomly, while Lecture 10 adds the ability to detect whether a message was corrupted, as a refinement. In the cone metaphor, for both modelling methods iterative refinement increases the granularity of the state space, i.e. the "density" of the states within the cone and on its boundary surface, for a given fixed "cone volume".

Footnote 19: https://lamport.azurewebsites.net/video/videos.html

Footnote 20: In particular, of the Simplex version of that protocol.

Another important dimension for the comparison is given by global properties or invariants. Although ASMs can also define invariants, they appear to be used more often and more consistently in TLA\({}^{+}\). The reason could be that the specification of the behaviour boundary is itself a global property of the model. Hence, to gain greater purchase on the set of possible behaviours, identifying and then checking the invariants is a very useful and effective way to explore the large state space in search of bugs (i.e. states where a given invariant is not satisfied). By contrast, since an ASM/CASM model is already much more specific about a particular behaviour, its effectiveness is less dependent on the discovery of invariants, even though they are still relevant and potentially useful.

Another difference between the two methods that has important methodological implications can be attributed to the fact that ASMs are based on operational semantics whereas TLA\({}^{+}\) is based on declarative Boolean logic statements. Writing ASM specifications as pseudo-code is cognitively equivalent to writing the implementation code.
It is in fact a form of implementation, but more abstract. The most abstract is the ground model, with iterative refinement steps progressively approaching the implementation code, but each step is itself an algorithm that can be "executed" mathematically in one's mind (which we called _verification_ in Chapter 2). Quint was developed to achieve a similar Ux effect. There are four additional important aspects that derive more from the experience of some of the authors than from the findings of this report: * First, because an ASM model is derived directly from the stated requirements, if it is paired with an executable language such as CASM at each level of refinement the CASM specification can be executed to check whether the output matches the requirements, as was done in Section 5.2. This provides a fast process of _validation_ that increases the speed and confidence of the developer. Quint aims for a similar effect. * Second, the names of ASM variables and functions are written in a language that matches the language of the domain expert (or customer). To some extent this is true of TLA\({}^{+}\) as well, but the problem is that the semantics of TLA\({}^{+}\) are mathematical statements in Boolean logic, which is a type of abstraction that non-technical people and engineers find difficult to relate to. On the other hand, by using operational semantics ASM rules are much closer to the natural language statements with which the requirements themselves are expressed. The combination of operational semantics, understandability, and rapid validation cycle makes it easier for implementation engineers to start dabbling in ASM specifications and to try to use them. Quint's syntax is similarly aimed at programmers, while Apalache also relies on operational semantics [18]. * Third, when a change request or a new requirement emerges it is easy to modify the ground model and to then apply the modification to the different refinement levels, all the way to the code. In some cases this process can be automated and actually be performed by a compiler. * Finally, the fact that ASM specifications are easily understandable by all the stakeholders implies that such a document becomes the central documentation reference for the whole development team, including the customer. CASM and CoreASM are similar in this regard, although somewhat more technical. Since Quint is a new language, it is too early to assess how effective it will be in fulfilling this function. Probably the most useful feature of TLA\({}^{+}\) is the opportunity it affords to express invariants. Although the often infinite state space of most applications cannot be explored fully, checking the invariant(s) for representative finite state traces can still provide a high degree of confidence that the application will perform as desired. ### State Space Visualization To begin scoping out a possible formal relationship between ASM and TLA\({}^{+}\) models we take advantage of the simplicity of the AB protocol to visualize its state space explicitly. Fig. 24 shows a "swimlanes" view of the state trajectories of the two terminals that correspond to the error sequence of Fig. 9. We can develop a more efficient visualization for the system as follows. Since each terminal's automaton can be in one of 4 states, the system of two automata can be in 16 states at most, as shown in Table 3. 
The reachable states are shown in bold in different colours, where black denotes normal operation, red is an error state, and green is a state that corresponds to the error-free receipt of a message that was previously stored. State 3 (i.e. (1,3)) is not visited by the sequence corresponding to Fig. 9. Table 4 shows the state traces of the two terminals that are also depicted in the swimlanes figure. \begin{table} \begin{tabular}{c|c c c c c c c c c c c c c c c} Automaton & \multicolumn{5}{c}{States} & & & & & & & & & & & & & & & & \\ \hline A & 1 & **1** & **1** & **1** & **2** & 2 & 2 & 2 & **3** & 3 & 3 & 3 & **4** & 4 & 4 & 4 \\ B & 1 & **2** & **3** & **4** & **1** & 2 & 3 & 4 & **1** & 2 & 3 & 4 & **1** & 2 & 3 & 4 \\ \hline System & 1 & **2** & **3** & **4** & **5** & 6 & 7 & 8 & **9** & 10 & 11 & 12 & **13** & 14 & 15 & 16 \\ \end{tabular} \end{table} Table 3: **System states, reachable in bold (black: normal op; red: error; green: already stored)** Fig. 25 shows the state trace corresponding to the error sequence of Fig. 9 in the system's state space. Since this type of state space is not a metric space, how a trace is arranged does not matter as long as its topology is preserved. Thus, Fig. 26 shows the same information in a modified state space where all the reachable states have been bunched together for greater clarity. The "cone" is shown by the blue boundary, although this figure shows that when the whole reachable state space is traversed, as in this case, perhaps a rectangle is a better representative shape. On the right of Fig. 26 another view of the state trace is shown that uses the colour coding defined above. Fig. 27 shows the system's finite state machine (FSM) immersed in the global state space. For a given length of error sequence \(n\), the number of possible sequences is \(2^{n}\) of which, for \(n=22\), the Lynch sequence of Fig. 9 is one. Tables 5 and 6 show a set of elementary automata to see whether they could serve as a basis for some kind of automaton decomposition. This is by no means meant to be a representative set of all the possible patterns. Could the map from the set of error sequences to the set of corresponding automata be a homomorphism? A map \(\theta\colon S\to A\), where \(S\) is the set of error sequences and \(A\) is the set of corresponding automata, is a homomorphism if both these conditions hold [9]: \[\theta(s_{i}+s_{j}) =\theta(s_{i})+\theta(s_{j})\] \[\theta(s_{i}s_{j}) =\theta(s_{i})\theta(s_{j}), \tag{2}\] for suitably defined addition and multiplication operations on the set elements \(s_{i},s_{j}\in S\), where both indices range from \(1\) to \(2^{n}\). Glossing over whether or not the \(S\) and \(A\) sets in question have any algebraic structure (like a ring, group, etc), we can easily see that the simplest possible example of linear superposition for a simple-minded definition of addition operation does not work. In fact, adding \(s_{2}\) and \(s_{3}\) vectorially we get \(s_{4}\), but the automaton corresponding to \(s_{4}\) is very different from Figure 27: **System’s finite state machine for Lynch’s error sequence** Figure 26: **System state trace rearranged for better readability** the "addition" of the automata corresponding to \(s_{2}\) and \(s_{3}\) defined as the union of their edge sets. More seems to be required for this idea to work. Automata decomposition is studied by algebraic automata theory, an algebra subfield that grew out of semigroup theory about 60 years ago [19, 32]. 
Unfortunately, algebraic automata theory is so abstract that it is difficult to relate its results and insights to concrete applications. A finite-state automaton can be defined mathematically as a finite set of transformations acting on a finite set of states \(Q\). In general, these transformations can be composed by functional composition. The set of all finite sequences of transformations thus satisfies the axioms of a semigroup, meaning that it is closed with respect to a multiplication law (here, functional composition) that satisfies the associative property. However, not all of its members need have an inverse. By convention, the empty sequence yields an identity; although this implies that we have a monoid, the term semigroup is used anyway. Thus, the algebraic version of a finite-state automaton is a 'transformation semigroup', or 'ts', which is a direct generalization of the permutation group concept to semigroups. As for permutation groups, a potential cause of confusion arises from the fact that one element \(s\) of the semigroup \(S\) of a given ts acts as an operator on _all_ the states \(q\in Q\) _simultaneously_. In other words, the element \(s\in S\) should be seen as the whole function \(s\colon Q\to Q\) that is defined over the whole state set at once. By contrast, the execution of a single step of a given algorithm implemented by a given automaton, such as we have been discussing here, yields a 'state transition' and should be seen as a single value of such a function for a given starting state \(q\): \(s(q):=q^{\prime}\). This and similar points about the algebraic structure of automata are explored in more detail in [11].

\begin{table} \begin{tabular}{|c|l|l|l|} \hline **No.** & **Error Sequence** & **State Trace** & **Automaton** \\ \hline 1 & \((0,0,0,0,0,0,0,\cdots)\) & \(14141414\cdots\) & \\ & & \(41414141\cdots\) & \\ \hline 2 & \((0,0,E,0,0,0,0,\cdots)\) & \(14121414\cdots\) & \\ & & \(41413141\cdots\) & \\ \hline 3 & \((0,0,0,E,0,0,0,\cdots)\) & \(14141314\cdots\) & \\ & & \(41412141\cdots\) & \\ \hline 4 & \((0,0,E,E,0,0,0,\cdots)\) & \(14121414\cdots\) & \\ & & \(41412141\cdots\) & \\ \hline 6 & \((0,0,E,0,E,0,E,\cdots)\) & \(14121212\cdots\) & \\ & & \(41414141\cdots\) & \\ \hline \end{tabular} \end{table} Table 5: **Automaton motifs for representative error sequences**

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **No.** & **Error Sequence** & **State Trace** & **Automaton** \\ \hline 7 & \((0,0,0,E,0,E,0,E,0,\cdots)\) & \(141414141\cdots\) & \\ & & \(414121212\cdots\) & \\ \hline 8 & \((0,0,E,E,0,E,0,0,\cdots)\) & \(141214131\cdots\) & \\ & & \(41412121214\cdots\) & \\ \hline 9 & \((0,0,0,E,E,0,E,0,0,\cdots)\) & \(1414121214\cdots\) & \\ & & \(4141214131\cdots\) & \\ \hline 10 & \((0,0,E,0,E,E,0,0,\cdots)\) & \(141212141\cdots\) & \\ & & \(414131214\cdots\) & \\ \hline 11 & \((0,0,0,E,0,E,E,0,0,\cdots)\) & \(1414131214\cdots\) & \\ & & \(4141212141\cdots\) & \\ \hline 12 & \((0,0,E,E,E,E,\cdots)\) & \(14121212\cdots\) & \\ & & \(414121212\cdots\) & \\ \hline 13 & \((0,0,0,E,E,E,E,\cdots)\) & \(141412121\cdots\) & \\ & & \(414121212\cdots\) & \\ \hline \end{tabular} \end{table} Table 6: **Automaton motifs for representative error sequences (Cont'd)**

## Conclusions and Future Work

The search for a possible algebraic structure in this type of problem is motivated by a different kind of mapping.
Namely, if it were possible to "decompose" the ASM or TLA\({}^{+}\) specification of a given system into elementary sub-specifications in such a way that a homomorphism could be established between the elementary components of the overall specification and the corresponding elementary components of the general automaton, then the task of specifying, verifying, and validating complex software systems could be broken down into simpler tasks that could then be composed to achieve the general specification. Such a condition would clearly impose a significant constraint on the formal systems involved, but this does not necessarily imply a constraint on the _computation_ being specified. The potential benefits of composability seem significant enough to motivate further exploration in this direction. Such an algebra-based approach is likely to be more relevant to TLA\({}^{+}\) than to ASMs, because the latter already rely on a methodology that is fundamentally different from the concept of composability. The ASM methodology begins from a very abstract state-based domain model based on requirements described by the domain expert and adds structure and details by iterative refinement. Although TLA\({}^{+}\) also makes extensive use of iterative refinement, its declarative semantics seems to afford it greater structure at each stage, possibly making TLA\({}^{+}\) models better suited for decomposition.

## Acknowledgment

We are very grateful to Prof. Egon Börger for his feedback on Chapter 2 of this report.
2306.05561
Privacy- and Utility-Preserving NLP with Anonymized Data: A case study of Pseudonymization
This work investigates the effectiveness of different pseudonymization techniques, ranging from rule-based substitutions to using pre-trained Large Language Models (LLMs), on a variety of datasets and models used for two widely used NLP tasks: text classification and summarization. Our work provides crucial insights into the gaps between original and anonymized data (focusing on the pseudonymization technique) and model quality and fosters future research into higher-quality anonymization techniques to better balance the trade-offs between data protection and utility preservation. We make our code, pseudonymized datasets, and downstream models publicly available
Oleksandr Yermilov, Vipul Raheja, Artem Chernodub
2023-06-08T21:06:19Z
http://arxiv.org/abs/2306.05561v1
# Privacy- and Utility-Preserving NLP with Anonymized Data: A case study of Pseudonymization

###### Abstract

This work investigates the effectiveness of different pseudonymization techniques, ranging from rule-based substitutions to using pre-trained Large Language Models (LLMs), on a variety of datasets and models used for two widely used NLP tasks: text classification and summarization. Our work provides crucial insights into the gaps between original and anonymized data (focusing on the pseudonymization technique) and model quality and fosters future research into higher-quality anonymization techniques to better balance the trade-offs between data protection and utility preservation. We make our code, pseudonymized datasets, and downstream models publicly available.1

Footnote 1: [https://github.com/olexandryermilov/privacy-preserving-nlp](https://github.com/olexandryermilov/privacy-preserving-nlp)

## 1 Introduction

With the advances in artificial intelligence and data-hungry machine learning systems, privacy and compliant data governance have become increasingly important. Text documents, such as emails, court rulings, customer service chats, interview transcripts, and patient records, frequently contain personally identifiable information (PII), such as mentions of persons, locations, organizations, etc. While the collection and use of text data is necessary for improving products or services, conducting research, or providing personalized recommendations, it has to be done in a safe, responsible, and compliant way. However, access to text data becomes a challenge where data containing personally identifiable mentions is involved. Although it is widely accepted that no data is truly anonymous and that full anonymization is an unattainable target Rocher et al. (2019), pseudonymization is recognized by the GDPR as one of the ways (and a requirement) to reduce the risk of re-identification of a data subject European Commission (2016). Following Eder et al. (2022), we define _pseudonymization_ as recognizing entities bearing privacy-sensitive information and replacing them with realistic substitutes. With the right implementation and safeguards, pseudonymization can be a useful technique for protecting the privacy of individuals while still enabling data-driven technological advances such as NLP research, allowing researchers to work with sensitive data while reducing data privacy risks. However, there is a risk that the quality of texts is compromised by techniques such as pseudonymization, which can not only negatively affect downstream NLP tasks and analyses but also reduce the utility of anonymized data for other research. It is noteworthy that while privacy- and utility-preserving NLP has been a crucial topic in the medical domain, where the quality of clinical texts can often be compromised by de-identification, it has been largely overlooked in mainstream NLP research, barring a few recent works (Section 2). Therefore, in this work, we investigate the effectiveness of pseudonymization as a technique for working with NLP models. Specifically, we consider three different systems for pseudonymization: 1. **NER**, which uses named entity recognition (NER) models to detect text spans containing PII, and then uses a knowledge graph to replace the detected spans; 2. **Seq2Seq**, which formulates the task of pseudonymization as a sequence-to-sequence (Seq2Seq) transformation, using an encoder-decoder model; 3.
**LLM**, which leverages the zero-shot and few-shot learning capabilities of large, pre-trained language models (LLMs) for performing the task of pseudonymization.

We then use the aforementioned systems to pseudonymize training datasets for two widely used NLP tasks, text classification and summarization, and evaluate the performance of models (trained on these pseudonymized datasets) on the downstream tasks. Through our analyses, we provide crucial insights into the effectiveness of different pseudonymization techniques for data anonymization and their effect on downstream NLP tasks, from a privacy and utility perspective. Finally, we make our code, pseudonymized datasets, and downstream models publicly available to foster future research into privacy- and utility-preserving NLP.

## 2 Related Work

Until recently, pseudonymization had predominantly been researched in clinical NLP, focusing on techniques for replacing PII such as named entities in medical texts across different languages. For English medical texts, the system of Sweeney (1996) was one of the first pseudonymization systems, followed by numerous works such as Sweeney et al. (2005); Uzuner et al. (2007); Neamatullah et al. (2008); Meystre et al. (2010); Kushida et al. (2012); Carrell et al. (2013); Sanchez et al. (2013); Meystre (2015); Dernoncourt et al. (2017); Liu et al. (2017); Iwendi et al. (2020). The techniques proposed in related works range from simply replacing the detected text spans with placeholders, pseudonyms, or synthetic surrogates using lists, to lexical substitution with synonyms or hypernyms, to knowledge bases Lison et al. (2021); Pilan et al. (2022). Relatedly, _C_-sanitize Sanchez and Batet (2016), _t_-plausibility Anandan et al. (2012), and, more recently, Yue et al. (2021) have proposed frameworks for privacy-aware and -preserving document sanitization and pseudonymization. While numerous recent works such as the aforementioned ones have investigated the topic of pseudonymization, our work comes closest to Lampoltshammer et al. (2019); Obeid et al. (2019); Berg et al. (2020); Vakili et al. (2022) and Liu et al. (2023), which focus on analyzing different techniques of data anonymization or pseudonymization and their effect on downstream tasks. However, our work differs from those works, since they focus on different domains, different tasks, and different techniques.

## 3 Pseudonymization Systems

The general architecture of a pseudonymization system consists of two steps: a first sub-system recognizes entities bearing PII (detection), and a second sub-system replaces them with realistic substitutes (replacement). For this work, we restrict our analysis to three predominant categories of named entities: PERSON (PER), LOCATION (LOC), and ORGANIZATION (ORG). Using this general framework, we describe the three types of systems that are used in our experiments:

### NER-based Pseudonymization (NER-PS)

The NER-based system uses an off-the-shelf Named Entity Recognition (NER) system to first detect spans of named entities that belong to the aforementioned categories. We use two publicly available NER systems for the first stage: spaCy2 and FLAIR3. The spaCy NER is a fine-tuned RoBERTa model Liu et al. (2019), whereas the FLAIR NER is an LSTM-CRF model based on Flair embeddings Akbik et al. (2018).
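As a concrete illustration of the detection stage, the following minimal sketch (not the authors' implementation) uses a spaCy pipeline to extract PER/LOC/ORG spans and, for brevity, replaces them with enumerated placeholders in the spirit of the NER-S baseline used later in Section 4.1 rather than with Wikidata-based substitutes; the model name `en_core_web_trf` and the label mapping are assumptions.

```python
# Minimal sketch of the detection stage (not the authors' code).  Assumptions:
# the pipeline name ("en_core_web_trf", spaCy's transformer-based English model)
# and the mapping from spaCy's OntoNotes labels to the three PII categories.
import spacy

nlp = spacy.load("en_core_web_trf")

LABEL_MAP = {"PERSON": "PERSON", "ORG": "ORGANIZATION", "GPE": "LOCATION", "LOC": "LOCATION"}

def detect_pii_spans(text):
    """Return (start, end, category) character spans for PII named entities."""
    doc = nlp(text)
    return [(ent.start_char, ent.end_char, LABEL_MAP[ent.label_])
            for ent in doc.ents if ent.label_ in LABEL_MAP]

def replace_with_placeholders(text):
    """Placeholder-based replacement (the NER-S baseline of Section 4.1);
    NER-PS would instead sample a realistic substitute for each detected span."""
    counters, pieces, last = {}, [], 0
    for start, end, cat in detect_pii_spans(text):
        counters[cat] = counters.get(cat, 0) + 1
        pieces.append(text[last:start])
        pieces.append(f"{cat}_{counters[cat]}")
        last = end
    pieces.append(text[last:])
    return "".join(pieces)

print(replace_with_placeholders("John Smith moved from Kyiv to work at Acme Corp."))
# e.g. "PERSON_1 moved from LOCATION_1 to work at ORGANIZATION_1."
```

Swapping the placeholder for a candidate sampled from a knowledge base would turn this sanitization sketch into the pseudonymization variant described above.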
Footnote 2: We use spaCy v3.5.1: spacy.io/usage/v3-5

Footnote 3: We use FLAIR v0.12.2: github.com/flairNLP/flair

The detected named entity spans are then replaced with named entities having similar characteristics, such as gender and language of origin (as described in Wikidata) for PERs, and so on. We first generate a list of replacement candidates and then randomly sample a single item from this list under some predefined constraints (details in A.1). We refer to the two NER-based systems as **NER-PS\({}_{\text{(SPACY)}}\)** and **NER-PS\({}_{\text{(FLAIR)}}\)**.

### Seq2Seq Pseudonymization (Seq2Seq-PS)

The Seq2Seq-based system was developed by fine-tuning a BART-base model Lewis et al. (2020) on a parallel corpus of pseudonymized texts (created using the NER-PS system). An important thing to note is that this system does not exactly fit the two-step process outlined above, as it performs the full task in a single-step text-to-text transformation. Specifically, we developed two variants of this system using the same NER models as NER-PS. We refer to the two Seq2Seq-PS variants as **Seq2Seq-PS(SPACY)** and **Seq2Seq-PS(FLAIR)**, depending on which NER-PS system was used to create the parallel training data.

### LLM Pseudonymization (LLM-PS)

Following the aforementioned two-step architecture, the LLM-based system is based on a sequential chain of two LLMs: GPT-3 Brown et al. (2020) and ChatGPT. For the first step, we extract named entities using GPT-3 with a 1-shot prompt (details in Appendix A.3), and then perform 1-shot pseudonymization on the extracted named entities using ChatGPT.

Footnote 4: We use text-curie-001 as the GPT-3 model.

Footnote 5: We use gpt-3.5-turbo as the GPT-3.5 model.

We chose GPT-3 for the detection step because it works much faster on the long paragraphs of text that characterize both the text classification and summarization tasks. Despite it being considerably slower, we chose ChatGPT (GPT-3.5) for the replacement step, since the input text for the replacement sub-task is much smaller and we observed better qualitative performance with this model than with GPT-3.

## 4 Experiments

In this section, we experimentally evaluate the considered pseudonymization methods. First, we evaluate the negative impact of pseudonymization on the downstream tasks' quality. Next, we compare the privacy preservation quality of the different pseudonymization methods. Finally, we evaluate the consistency and privacy-preservation characteristics of pseudonymized texts through a text syntheticity detection experiment.

### Downstream Tasks Performance

Since pseudonymization may introduce additional noise into the processed data, we evaluate the impact of the various pseudonymization methods on target dataset quality for the respective downstream tasks. We first pseudonymize the texts for two downstream tasks, Summarization and Text Classification (Table 2), using the aforementioned methods, and then train models on the pseudonymized data and evaluate them on their respective task-specific metrics. For training, we fine-tune bart-base6 Lewis et al. (2020) for the Summarization task and bert-base-cased7 Devlin et al. (2019) for the Text Classification task. In both scenarios, we train the models for three epochs using _AdamW_ optimization Loshchilov and Hutter (2017) with learning rate \(\alpha=2\times 10^{-5}\) and batch size \(8\).
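For concreteness, a minimal sketch of this fine-tuning setup for the classification task is given below, using the Hugging Face `Trainer` (whose default optimizer is AdamW). It is not the authors' training script; the output directory name and the maximum sequence length are assumptions, and a pseudonymized copy of the dataset would be loaded in place of the original one.

```python
# Minimal sketch (not the authors' code) of the downstream classification setup:
# fine-tune bert-base-cased on an IMDB-style dataset for 3 epochs with AdamW,
# learning rate 2e-5 and batch size 8, as described above.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")          # swap in the pseudonymized copy here
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="clf-pseudonymized",     # assumed name
    num_train_epochs=3,                 # three epochs, as in the setup above
    learning_rate=2e-5,                 # AdamW is the Trainer default optimizer
    per_device_train_batch_size=8,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["test"],
                  tokenizer=tokenizer)
trainer.train()
```

The summarization setup would follow the same pattern with bart-base and a sequence-to-sequence model class.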
Footnote 6: [https://huggingface.co/facebook/bart-base](https://huggingface.co/facebook/bart-base)

Footnote 7: [https://huggingface.co/bert-base-cased](https://huggingface.co/bert-base-cased)

For evaluation, as a baseline, we use the quality obtained with the original (non-pseudonymized) texts under the same training process, to make sure that any difference in metrics is caused only by the difference in the training datasets. As an additional baseline, we compare the results of pseudonymization with two NER-based sanitizations (Table 1 for reference), denoted by **NER-S(SPACY)** and **NER-S(FLAIR)**. The sanitization method is the same as NER-PS (Section 3.1) except that the detected named entities are replaced with enumerated placeholders, e.g. **PERSON_1**, **LOCATION_2**, and **ORGANIZATION_3**, instead of Wikidata-based named entities. Evaluation results on both downstream tasks are presented in Table 3.

\begin{table} \begin{tabular}{l|l|l|l|l|l|l} \hline \hline **Task** & **Dataset name** & **train size** & **dev size** & **test size** & **domain** & **metrics** \\ \hline Summarization & CNN/DM Nallapati et al. (2016) & 286,817 & 13,368 & 11,487 & news & ROUGE-1/2/L \\ \hline Text classification & IMDB Maas et al. (2011) & 25,000 & N/A & 25,000 & movie reviews & F-score \\ \hline \hline \end{tabular} \end{table} Table 2: Details of the evaluated downstream tasks.

\begin{table} \begin{tabular}{l|l l l|l} \hline \hline & \multicolumn{3}{c|}{**Summarization**} & \multicolumn{1}{c}{**Classification**} \\ \cline{2-5} & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** & **F-score** \\ \hline **Original text** & **42.82** & **20.13** & **36.33** & **88.42** \\ \hline **NER-S(SPACY)** & 41.59 & 19.17 & 29.07 & 87.65 \\ **NER-S(FLAIR)** & 39.05 & 17.52 & 27.43 & 87.38 \\ \hline **NER-PS(SPACY)** & **41.93** & **19.38** & **29.36** & 88.06 \\ **NER-PS(FLAIR)** & 40.25 & 18.04 & 27.97 & 88.14 \\ \hline **Seq2Seq-PS(SPACY)** & 39.1 & 17.23 & 26.96 & 88.10 \\ **Seq2Seq-PS(FLAIR)** & 36.04 & 15.07 & 24.73 & 88.13 \\ \hline **LLM-PS** & 38.62 & 16.57 & 26.34 & **88.15** \\ \hline \hline \end{tabular} \end{table} Table 3: Results of downstream evaluation tasks: summarization (left) and text classification (right). The smaller the gap with the original text, the better the utility is preserved.

We observe that NER-based pseudonymization achieves the best results for the summarization task, and approaches with spaCy as the underlying NER system show better results than those with FLAIR. This is related to the fact that FLAIR is a better NER system, which makes more changes to the original text and thereby introduces more noise into the dataset. The effect is further compounded with LLM-PS, which performs an even greater number of edits, forcing the summarization model to learn different patterns than in the original dataset and leading to lower ROUGE scores. For the classification task, all pseudonymization approaches show similar results, although using FLAIR as the underlying system yields slightly better classification performance than spaCy. The difference in task formulations explains this small difference between methods: sentiment classification mostly relies on words with positive/negative sentiment, not on the named entities in the text (although named entities may be associated with positive/negative sentiment more than others Batra and Rao (2010), resulting in a correlation between them and the sentiment of the text). Hence, pseudonymization might have a very limited effect on the task-specific performance.
On the other hand, the summarization task is more sensitive to any errors introduced by the NER/replacement models, as false positives or false negatives might lead to inconsistent entity mentions and entity relationships, corrupting the data in ways that the summarization model may then learn.

### Privacy Preservation

Another risk with pseudonymization is that some named entities will still remain non-anonymized. To estimate this risk of false negatives, we evaluate our pseudonymization methods on a standard NER benchmark: the English CoNLL-2003 test set Tjong Kim Sang and De Meulder (2003). We pseudonymize the dataset and compare the resulting texts to the originals. We measure the percentage of named entities of each type in the original texts that get leaked into the pseudonymized texts. We observe that NER-based approaches show better results than Seq2Seq approaches, and FLAIR approaches show better results than their spaCy equivalents (Table 4), which confirms the observations of the previous experiment. Similar to the observations in Section 4.1, the former observation is related to the fact that the errors present in the NER systems are propagated into the Seq2Seq approaches due to the way they were trained.

### Text Syntheticity Detection

As mentioned above, pseudonymization may corrupt relationships and alignment among named entities and other artifacts in the text. For example, the United States never had a president named "John Smith." Due to such contextual distortions, pseudonymization can negatively affect the quality of processed texts in hard-to-predict ways. To estimate the degree to which pseudonymized texts are similar to natural ones, we carry out a text syntheticity detection experiment. We combine original and pseudonymized texts from the Summarization task into a single dataset and train a text classification model with the goal of distinguishing pseudonymized texts from their non-pseudonymized counterparts, using the same model and settings as for the Text Classification task (Section 4.1). The results are presented in Table 5. LLM-PS shows the best results for this experiment, which are about an order of magnitude better than those of the replacement-based pseudonymization methods. This happens because, in LLM-rewritten texts, named entities are in better agreement with the context, making LLM-PS the best-performing system for preserving the syntactic and semantic integrity of the original text.

\begin{table} \begin{tabular}{l|c|c|c|c} \hline & **PER** & **ORG** & **LOC** & **Mean** \\ \hline **NER-PS\({}_{\text{(SPACY)}}\)** & 23.00 & 37.9 & **19.48** & 27.23 \\ **NER-PS\({}_{\text{(FLAIR)}}\)** & **2.48** & **10.09** & 21.55 & **10.23** \\ \hline **Seq2Seq-PS\({}_{\text{(SPACY)}}\)** & 70.14 & 78.68 & 79.74 & 75.67 \\ **Seq2Seq-PS\({}_{\text{(FLAIR)}}\)** & 14.82 & 36.65 & 65.76 & 36.03 \\ \hline **LLM-PS** & 34.36 & 33.09 & 40.36 & 35.53 \\ \hline \end{tabular} \end{table} Table 4: Results of the privacy preservation experiment on the CoNLL-2003 test set. We report the False Negative Rate for each type of named entity. Lower is better.
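The leakage measurement of Section 4.2 amounts to a per-type false-negative rate; a minimal sketch of such a computation is shown below. It is illustrative only (not the authors' evaluation script) and assumes that gold entities are available as (surface string, type) pairs and that a leaked entity is one whose surface string still appears verbatim in the pseudonymized text.

```python
# Illustrative sketch of the leakage metric: the percentage of gold named
# entities from the original text that survive pseudonymization unchanged
# (a false negative of the anonymizer).  Not the authors' evaluation code.
from collections import Counter

def leakage_rates(examples):
    """examples: iterable of (pseudonymized_text, gold_entities) pairs,
    where gold_entities is a list of (surface, type) with type in {PER, ORG, LOC}."""
    total, leaked = Counter(), Counter()
    for pseudo_text, gold_entities in examples:
        for surface, etype in gold_entities:
            total[etype] += 1
            if surface in pseudo_text:      # entity survived pseudonymization
                leaked[etype] += 1
    return {etype: 100.0 * leaked[etype] / total[etype] for etype in total}

examples = [
    ("Maria Lopez flew to Lisbon for Acme Corp.",
     [("John Smith", "PER"), ("Kyiv", "LOC"), ("Acme Corp.", "ORG")]),
]
print(leakage_rates(examples))   # {'PER': 0.0, 'LOC': 0.0, 'ORG': 100.0}
```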
\begin{table} \begin{tabular}{l|c|c|c} \hline & **Precision** & **Recall** & **F-score** \\ \hline **NER-PS\({}_{\text{(SPACY)}}\)** & 99.12 & 97.86 & 98.49 \\ **NER-PS\({}_{\text{(FLAIR)}}\)** & 98.68 & 95.96 & 97.30 \\ \hline **Seq2Seq-PS\({}_{\text{(SPACY)}}\)** & 99.94 & 99.76 & 99.85 \\ **Seq2Seq-PS\({}_{\text{(FLAIR)}}\)** & 99.61 & 98.41 & 99.01 \\ \hline **LLM-PS** & **85.61** & **66.92** & **75.12** \\ \hline \end{tabular} \end{table} Table 5: Results of the text syntheticity detection experiment. Lower is better.

## 5 Conclusions

We investigate the effectiveness of pseudonymization for NLP research with privacy-sensitive data. We develop three different approaches for this task and evaluate them from three aspects: downstream task performance (on two downstream tasks: text summarization and text classification), privacy preservation, and text syntheticity detection. We find that the proposed approaches each have pros and cons for pseudonymization, so one must choose which task and objective (privacy vs. utility) is the most important for them. NER-based systems with FLAIR perform the best for privacy preservation and downstream task performance, whereas the LLM-based system shows the best results for preserving the integrity of the text.

### Limitations

While we endeavor in this work to shed light on the impact of various pseudonymization techniques, we recognize a major limitation of our work, especially of the LLM-based pseudonymization approach. Using closed-source LLMs may not be an acceptable solution for many settings, since it requires sending a (potentially sensitive) text to a third-party API, which, in the absence of appropriate legal safeguards and responsible-use agreements, defeats the purpose of privacy preservation. There are some more technical limitations of the work, such as the following:

* While this is a problem that affects sensitive texts in all languages, all the experiments were conducted on data in the English language only.
* LLMs are highly sensitive to prompts, as well as to the number and ordering of examples provided for few-shot learning. In this work, we experimented with a limited number of prompts for LLM-PS due to API cost constraints.
* For the data privacy detection experiment, the FLAIR NER system was trained using the CoNLL-2003 dataset, which might affect its performance for privacy protection tasks. This may also apply to the GPT-3 and ChatGPT models, as their authors do not state specifically on which data they were trained.
* We considered only a limited set of named entity types, specifically PERSON (PER), LOCATION (LOC), and ORGANIZATION (ORG), whereas it is well understood that PII encompasses a much broader range of data types (e.g., dates, phone numbers, etc.). We also do not consider sentiments associated with the named entities used for substitution in the downstream task of text classification. We plan to address these in future work.

### Ethics Statement

User data privacy and data anonymization are sensitive and very important matters. Through this work, we try to dive deeper into the challenges and opportunities of using pseudonymization as a technique to strike a suitable trade-off between privacy and utility preservation. The goal of this work is to expose the strengths and limitations of different techniques and their implications. The datasets, knowledge bases, and models that we work with have been publicly released for many years. All of these artifacts are considered to be in the public sphere from a privacy perspective.
We do not make any recommendations on using these on public or private datasets without proper due diligence for privacy, security, legal, and compliance measures. Another risk is that pseudonymization may corrupt the names of people, organizations, and locations and state them in an inappropriate context and therefore produce offensive texts. ## 6 Acknowledgements We express our gratitude to our colleagues Cortney Napoles and Leonardo Neves for their advice and to our managers Viktor Zamaruiev and Max Gubin for their constant support. To our communities: While we are writing this, our homeland Ukraine continues to resist the unprovoked Russian invasion. We are grateful to everyone who defends Ukraine, declares support for the people of Ukraine, and sends aid. Thank you!
2305.03440
Tight Bounds for Chordal/Interval Vertex Deletion Parameterized by Treewidth
In Chordal/Interval Vertex Deletion we ask how many vertices one needs to remove from a graph to make it chordal (respectively: interval). We study these problems under the parameterization by treewidth $tw$ of the input graph $G$. On the one hand, we present an algorithm for Chordal Vertex Deletion with running time $2^{O(tw)} \cdot |V(G)|$, improving upon the running time $2^{O(tw^2)} \cdot |V(G)|^{O(1)}$ by Jansen, de Kroon, and Wlodarczyk (STOC'21). When a tree decomposition of width $tw$ is given, then the base of the exponent equals $2^{\omega-1}\cdot 3 + 1$. Our algorithm is based on a novel link between chordal graphs and graphic matroids, which allows us to employ the framework of representative families. On the other hand, we prove that the known $2^{O(tw \log tw)} \cdot |V(G)|$-time algorithm for Interval Vertex Deletion cannot be improved assuming Exponential Time Hypothesis.
Michal Wlodarczyk
2023-05-05T11:35:52Z
http://arxiv.org/abs/2305.03440v1
# Tight Bounds for Chordal/Interval Vertex Deletion Parameterized by Treewidth

###### Abstract

In Chordal/Interval Vertex Deletion we ask how many vertices one needs to remove from a graph to make it chordal (respectively: interval). We study these problems under the parameterization by treewidth \(\mathbf{tw}\) of the input graph \(G\). On the one hand, we present an algorithm for Chordal Vertex Deletion with running time \(2^{\mathcal{O}(\mathbf{tw})}\cdot|V(G)|\), improving upon the running time \(2^{\mathcal{O}(\mathbf{tw}^{2})}\cdot|V(G)|^{\mathcal{O}(1)}\) by Jansen, de Kroon, and Wlodarczyk (STOC'21). When a tree decomposition of width \(\mathbf{tw}\) is given, then the base of the exponent equals \(2^{\omega-1}\cdot 3+1\). Our algorithm is based on a novel link between chordal graphs and graphic matroids, which allows us to employ the framework of representative families. On the other hand, we prove that the known \(2^{\mathcal{O}(\mathbf{tw}\log\mathbf{tw})}\cdot|V(G)|\)-time algorithm for Interval Vertex Deletion cannot be improved assuming Exponential Time Hypothesis.
## 1 Introduction

The best known running time for Interval Vertex Deletion is \(2^{\mathcal{O}(\mathbf{tw}\log\mathbf{tw})}n\)[66]. (While this algorithm has been described for the edge-deletion variant, we briefly explain in Section 5.1 how it can be adapted for vertex deletion.) We show that, unlike the chordal case, this running time is optimal under ETH. This gives a sharp separation between the two studied problems. Under the assumption of ETH, Interval Vertex Deletion cannot be solved in time \(2^{o(\mathbf{tw}\log\mathbf{tw})}n^{\mathcal{O}(1)}\) on \(n\)-vertex unweighted graphs of treewidth \(\mathbf{tw}\). In fact, we show a stronger lower bound that rules out the same running time with respect to a different graph parameter, called treedepth, which is never smaller than treewidth. Our lower bound is obtained via a reduction from \(k\times k\) Permutation Clique [55], which produces an instance of size \(2^{\mathcal{O}(k)}\) and treedepth \(\mathcal{O}(k)\).

**Related work.** The two considered \(\mathcal{H}\)-Vertex Deletion problems have been studied in several contexts. Both problems are FPT parameterized by the solution size \(k\), with the best-known running times \(\mathcal{O}(8^{k}(n+m))\) for \(\mathcal{H}=\mathtt{interval}\)[27] and \(2^{\mathcal{O}(k\log k)}n^{\mathcal{O}(1)}\) for \(\mathcal{H}=\mathtt{chordal}\)[28] (but the problem becomes W[2]-hard for \(\mathcal{H}=\mathtt{perfect}\)[41]). There are polynomial-time approximation algorithms with approximation factor \(8\) for \(\mathcal{H}=\mathtt{interval}\)[27] and \(k^{\mathcal{O}(1)}\) for \(\mathcal{H}=\mathtt{chordal}\)[48]. Observe that, in these two regimes, vertex deletion into chordal graphs seems harder than into interval graphs (although no lower bounds are known to justify such a separation formally); this contrasts with our results for the treewidth parameterization. Both studied problems admit exact exponential algorithms with running times of the form \(\mathcal{O}((2-\varepsilon)^{n})\)[18] as well as polynomial kernelizations [48, 3, 4]. The obstructions to being chordal (resp. interval) enjoy the Erdős-Pósa property: any graph \(G\) either contains \(k\) vertex-disjoint subgraphs which are not chordal (resp. not interval) or a vertex set \(X\) of size \(\mathcal{O}(k^{2}\log k)\) such that \(G-X\) is chordal [51] (resp. interval [2]). Vertex deletion into other subclasses of perfect graphs has been studied as well [1, 5, 6, 70].
For other modification variants, where instead of vertex deletions one considers removals, insertions, or contractions of edges, see, e.g., [17, 26, 27, 28, 39, 56, 72]. The concept of representative families, which plays an important role in our algorithm for ChVD, has found applications outside the context of treewidth as well [68, 73]. Our other tool, boundaried graphs, has revealed fruitful insights for various graph classes [9, 21, 45]. Organization of the paper.We begin by describing our technical contributions informally in Section 2. In Section 3 we provide formal preliminaries. Section 4 is devoted to establishing a connection between chordal graphs and graphic matroids, which is followed by the proof of Theorem 1. In Section 5 we prove our lower bound for Interval Vertex Deletion. We conclude in Section 6. ## 2 Techniques Chordal Vertex Deletion.The standard approach to design algorithms over a bounded-width tree decomposition is to assign a data structure to each node \(t\) in the decomposition, which stores information about partial solutions for the subgraph associated with the subtree of \(t\). Suppose that \(X\subseteq V(G)\) is a bag of \(t\), \(A\subseteq V(G)\setminus X\) denote the set of vertices appearing in the bags of the descendants of \(t\) (but not in \(X\)), and \(B\subseteq V(G)\) is the set of remaining vertices. We say that a subset \(S\subseteq V(G)\) is a _solution_ if \(G[S]\) is chordal; we want to _maximize_ the size of \(S\). Next, a pair \((S_{A}\subseteq A,S_{X}\subseteq X)\) is a _partial solution_ if \(G[S_{A}\cup S_{X}]\) is chordal. A set \(S_{B}\subseteq B\) is an _extension_ of a partial solution \((S_{A},S_{X})\) if \(S_{A}\cup S_{X}\cup S_{B}\) is a solution. Since \(X\) separates \(S_{A}\) from \(S_{B}\), the graph \(G[S_{A}\cup S_{X}\cup S_{B}]\) can be regarded as a result of _gluing_\(G[S_{A}\cup S_{X}]\) with \(G[S_{B}\cup S_{X}]\) alongside the _boundary_\(S_{X}\). For a node \(t\) and \(S_{X}\subseteq X\), we want to store a family of partial solutions \(\mathcal{G}_{t,S_{X}}\) so that for every possible \(S_{B}\subseteq B\): if \(S_{B}\) is an extension for some partial solution \((S_{A},S_{X})\), then there exists a partial solution \((S_{A}^{\prime},S_{X})\in\mathcal{G}_{t,S_{X}}\) for which (a) \(S_{B}\) is still a valid extension, and (b) \(S_{A}^{\prime}\) is at least as large as \(S_{A}\). We say that such a family satisfies the correctness invariant for \((t,S_{X})\). Jansen et al. [45] showed that any chordal graph \(H\) with a boundary of size \(k\) can be _condensed_ to a graph \(H^{\prime}\) on \(\mathcal{O}(k)\) vertices that exhibits the same behavior in terms of gluing. More precisely, the gluing product of \(H\) with any graph \(J\) is chordal if and only if the gluing product of \(H^{\prime}\) with \(J\) is chordal. Since there are \(2^{\mathcal{O}(\mathbf{tw}^{2})}\) graphs on \(\mathcal{O}(\mathbf{tw})\) vertices and \(2^{\mathcal{O}(\mathbf{tw})}\) choices for the boundary \(S_{X}\), it suffices to store only \(2^{\mathcal{O}(\mathbf{tw}^{2})}\) partial solutions. We take this idea one step further and show that it is actually sufficient to store only \(2^{\mathcal{O}(\mathbf{tw})}\) partial solutions. To this end, we investigate the properties of the class of chordal graphs with respect to the gluing operation and prove a homomorphism theorem relating it to graphic matroids. 
A _graphic matroid_ of a graph \(J\) is a set system \(\mathcal{I}\) over \(E(J)\) where a subset \(S\subseteq E(J)\) belongs to \(\mathcal{I}\) (and is called _independent_) when \(S\) contains no cycles. A _rank_ of a matroid is the largest size of an independent set; here this coincides with the size of any spanning forest in \(J\). In the following statement, \(\mathcal{G}_{X,B}\) is a family of graphs \(H\) that satisfy (a) \(V(H)\supseteq X\) and (b) \(H[X]=B\). For graphs \(H_{1},H_{2}\in\mathcal{G}_{X,B}\) we assume that \(V(H_{1})\cap V(H_{2})=X\) and define their gluing product as \(H_{3}=(H_{1},X)\oplus(H_{2},X)\) where \(V(H_{3})=V(H_{1})\cup V(H_{2})\) and \(E(H_{3})=E(H_{1})\cup E(H_{2})\). Consider a family of graphs \(\mathcal{G}_{X,B}\) for some pair \((X,B)\). There exists a graphic matroid \(M=(E,\mathcal{I})\) of rank at most \(|X|-1\) and a polynomial-time computable mapping \(\sigma:\mathcal{G}_{X,B}\to 2^{E}\) such that \((H_{1},X)\oplus(H_{2},X)\) is chordal if and only if \(\sigma(H_{1})\cap\sigma(H_{2})=\emptyset\) and \(\sigma(H_{1})\cup\sigma(H_{2})\in\mathcal{I}\). With this criterion at hand, we can employ the machinery of representative families to truncate the number of partial solutions to be stored for a node of a tree decomposition. Technical details aside, for a family \(\mathcal{S}\) of independent sets in a matroid \(M=(E,\mathcal{I})\), a subfamily \(\widehat{\mathcal{S}}\subseteq\mathcal{S}\) is called _representative_ for \(\mathcal{S}\) if for every independent set \(Y\) in \(M\): if there exists \(X\in\mathcal{S}\) so that \(X\cap Y=\emptyset\) and \(X\cup Y\in\mathcal{I}\), then there exists \(\widehat{X}\in\widehat{\mathcal{S}}\) so that \(\widehat{X}\cap Y=\emptyset\) and \(\widehat{X}\cup Y\in\mathcal{I}\). Fomin et al. [38] showed that for any family \(\mathcal{S}\) in a graphic matroid (more generally, in a linear matroid) of rank \(k\) there exists a representative family of size at most \(2^{k}\) and it can be constructed in time \(2^{\mathcal{O}(k)}\). We use Theorem 2.1 to translate this result into the language of chordal graphs and gluing. When \(\mathcal{G}_{t,S_{X}}\) is a family of partial solutions that satisfies the correctness invariant for \((t,S_{X})\), a representative family for \(\sigma(\mathcal{G}_{t,S_{X}})\) in the related graphic matroid \(M\) corresponds to a subfamily \(\widehat{\mathcal{G}}_{t,S_{X}}\subseteq\mathcal{G}_{t,S_{X}}\) that satisfies condition (a) of the correctness invariant and \(|\widehat{\mathcal{G}}_{t,S_{X}}|\leq 2^{\mathbf{tw}}\). In order to satisfy condition (b), we need to assign weights to the elements of the matroid \(M\), encoding the size of the largest partial solution mapped to each element. We can then utilize the weighted variant of representative families, which preserves the largest-weight elements [38]. By storing only the condensed forms of the partial solutions (having \(\mathcal{O}(\mathbf{tw})\) vertices), we also achieve a linear dependency on \(|V(G)|\). In order to prove Theorem 2.1, we give a novel criterion for testing chordality of a gluing product. When \(G\) originates from gluing two chordal graphs \(G_{1},G_{2}\) alongside boundary \(X\), then any hole in \(G\) must visit both \(V(G_{1})\setminus X\) and \(V(G_{2})\setminus X\), so it must traverse \(X\) multiple times. 
We show that if a hole \(H\) intersects at least two connected components of \(G[X]\), then it corresponds to a cycle in the graph obtained from \(G\) by contracting each of the connected components of \(G[X]\), \(G_{1}-X\), \(G_{2}-X\) into single vertices. Otherwise, let \(C\) be the unique connected component of \(G[X]\) that is intersected by the hole. We prove that there exists a vertex set \(S\subseteq V(C)\) that is disjoint from \(V(H)\) and \(C-S\) has two connected components \(C_{1},C_{2}\) satisfying \(N_{C}(C_{1})=N_{C}(C_{2})=S\) (below we refer to such components as _relevant_) and having non-empty intersections with \(V(H)\). Moreover, every vertex from \(V(H)\cap C\) belongs to some relevant component. Consider a graph \(\mathtt{Aux}(G,X,S)\) obtained from \(G\) by (1) removing the connected components of \(G[X]\) different than \(C\), (2) contracting relevant components of \(C-S\) into single vertices while removing the irrelevant ones, and (3) contracting the components of \(G_{1}-X\), \(G_{2}-X\) into single vertices. A detailed construction is given in Definition 4.10; see also Figure 1 on page 1. Then the hole \(H\) corresponds to a cycle in \(\mathtt{Aux}(G,X,S)\). The first scenario can be analyzed with this approach as well, by taking \(S=\emptyset\). We prove that considering all minimal vertex separators \(S\) in \(G[X]\) and checking acyclity of each auxiliary graph \(\mathtt{Aux}(G,X,S)\) yields a necessary and sufficient condition for \(G\) to be chordal. This criterion allows us to construct a graphic matroid encoding all the information about each of the graphs \(G_{1},G_{2}\) necessary to reconstruct the graphs \(\mathtt{Aux}(G,X,S)\) and to determine whether \(G\) is chordal. In order to bound the rank of this matroid, we investigate the structure of minimal vertex separators in a chordal graph and bound the size of a spanning forest in a certain graph obtained from the union of \(\mathtt{Aux}(G,X,S)\). A criterion of a similar kind is known for testing planarity of a gluing product of planar graphs when the boundary has a Hamiltonian cycle; then the corresponding auxiliary graph (defined in a different way) should be bipartite [13]. Our criterion can be also compared to the one used by Bonnet et al. [23] in their work on Bounded \(\mathcal{P}\)-Block Vertex Deletion. Here, the task is to remove the smallest number of vertices from a graph so that every remaining biconnected component has at most \(d\) vertices and belongs to the class \(\mathcal{P}\). They showed that when \(\mathcal{P}\) is a subclass of chordal graphs then Bounded \(\mathcal{P}\)-Block Vertex Deletion can be solved in time \(2^{\mathcal{O}(\mathtt{tw}\cdot d^{2})}n^{\mathcal{O}(1)}\) and otherwise it cannot be solved in time \(2^{\mathcal{O}(\mathtt{tw}\log\mathtt{tw})}n^{\mathcal{O}(1)}\) for fixed \(d\) unless ETH fails. Their positive result is also based on a criterion which determines whether a gluing product of two graphs has the desired property by checking if a union of two certain sets is independent in a graphic matroid. It handles cases similar to the first scenario considered in the outline above. However, while this criterion is necessary it is not sufficient and more information needs to be stored in a DP state, leading to the additional factor \(d^{2}\) in the exponent. In our setting the biconnected components can be arbitrarily large so such a factor is prohibitive. 
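To make the role of the graphic matroid concrete, the following minimal sketch (not the paper's algorithm) shows the independence test that the representative-families machinery relies on: an edge set is independent in the graphic matroid exactly when it is acyclic, which a union-find structure verifies in near-linear time. The mapping \(\sigma\) from condensed partial solutions to edge sets is the paper's construction and is not reproduced here.

```python
# Illustrative sketch: independence in a graphic matroid via union-find, and the
# shape of the gluing criterion (disjointness + joint independence).  Not the
# paper's algorithm; sigma itself is assumed to be given.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False          # x and y already connected: edge xy closes a cycle
        self.parent[rx] = ry
        return True

def is_independent(edges):
    """True iff the edge set is acyclic, i.e. independent in the graphic matroid."""
    uf = UnionFind()
    return all(uf.union(u, v) for (u, v) in edges)

def gluing_test(sigma_h1, sigma_h2):
    """Shape of the chordality criterion: disjointness plus joint independence."""
    return sigma_h1.isdisjoint(sigma_h2) and is_independent(sigma_h1 | sigma_h2)

# Toy example on the edges of a 4-cycle a-b-c-d-a.
h1 = {("a", "b"), ("b", "c")}
h2 = {("c", "d"), ("d", "a")}
print(is_independent(h1 | h2))    # False: the four edges close a cycle
print(gluing_test(h1, h2))        # False
```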
Interval Vertex Deletion.In order to prove Theorem 1 we present a parameterized reduction from \(k\times k\) Permutation Clique. Here, the input is a graph \(G\) on vertex set \([k]\times[k]\), and we ask whether there exists a permutation \(\pi\colon[k]\to[k]\) such that \((1,\pi(1)),(2,\pi(2)),\ldots,(k,\pi(k))\) forms a clique in \(G\). Lokshtanov et al. [55] proved that \(k\times k\) Permutation Clique cannot be solved in time \(2^{o(k\log k)}\) under ETH. So we seek a reduction from \(k\times k\) Permutation Clique to Interval Vertex Deletion that produces a graph of treewidth \(\mathcal{O}(k)\). Imagine an interval model of a complete graph \(Y\) on vertex set \([k]\) in which all the right endpoints of the intervals coincide and all the left endpoints are distinct. Choosing the order of the left endpoints encodes some permutation \(\pi\colon[k]\to[k]\) (see Figure 2 on page 2). We can extend this interval model by inserting a new vertex \(v\) only if \(N(v)\) corresponds to a set of intervals intersecting at a single point. This is possible only when \(N(v)=\pi([\ell])\) for some \(\ell\in[k]\). Furthermore, inserting to \(Y\) independent vertices \(v_{1},v_{2},\ldots,v_{k}\), such that \(|N(v_{i})|=i\) and \(N(v_{i})\subset N(v_{i+1})\), enforces the choice of permutation \(\pi\). We can thus encode a permutation \(\pi\) by an ascending family of sets \(N_{1}\subset N_{2}\subset\cdots\subset N_{k}=[k]\), satisfying \(N_{i}=\pi([i])\), which correspond to the neighborhoods of \(v_{1},v_{2},\ldots,v_{k}\) in \(Y\). On the other hand, any ascending family of sets for which the construction above gives an interval graph, must encode some permutation. On an intuitive level, a partial interval model of a size-\(k\) separator can encode one of \(k!\) permutations. We need a mechanism to verify that a chosen permutation \(\pi\) encodes a clique, i.e., that it satisfies \(\binom{k}{2}\) constraints of the form \((i,\pi(i))(j,\pi(j))\in E(G)\). To implement a single constraint, we construct a _choice gadget_, inspired by the reduction to Planar Vertex Deletion [60]. Such a gadget \(C_{i,j}\) is defined as a path-like structure, divided into blocks, so that each block has some special vertices adjacent to \(Y\) (see Figure 3 on page 3). We show that any minimum-size interval deletion set in \(C_{i,j}\) must 'choose' one block and leave its special vertices untouched while it can remove the remaining special vertices. We use this gadget to check if a permutation \(\pi\) encoded by an ascending family of sets \(N_{1}\subset N_{2}\subset\cdots\subset N_{k}\) satisfies the constraint \((i,\pi(i))(j,\pi(j))\in E(G)\). As \(\pi(i)\) is the only element in \(N_{i}\setminus N_{i-1}\), this information can be extracted from the tuple \((N_{i-1},N_{i},N_{j-1},N_{j})\). We create a single block in \(C_{i,j}\) for each valid tuple. Since the number of such tuples is \(2^{\mathcal{O}(k)}\), we need a choice gadget of exponential length, unlike the mentioned reduction which works in polynomial time. However, producing an instance of size \(2^{\mathcal{O}(k)}\) and treewidth \(\mathcal{O}(k)\) is still sufficient to achieve the claimed lower bound. ## 3 Preliminaries We write \([k]=\{1,2,\ldots,k\}\) and assume that \([0]=\emptyset\). We abbreviate \(X\setminus v=X\setminus\{v\}\). For a function \(w\colon X\to\mathbb{N}\) and \(S\subseteq X\) we use shorthand \(w(S)=\sum_{x\in S}w(x)\). Graphs.We consider finite, simple, undirected graphs. 
We denote the vertex and edge sets of a graph \(G\) by \(V(G)\) and \(E(G)\), respectively. For a set of vertices \(S\subseteq V(G)\), by \(G[S]\) we denote the graph induced by \(S\). We use shorthand \(G-v\) and \(G-S\) for \(G[V(G)\setminus v]\) and \(G[V(G)\setminus S]\), respectively. The open neighborhood \(N_{G}(v)\) of \(v\in V(G)\) is defined as \(\{u\in V(G)\mid\{u,v\}\in E(G)\}\). The closed neighborhood of \(v\) is \(N_{G}[v]=N_{G}(v)\cup\{v\}\). For \(S\subseteq V(G)\), we have \(N_{G}[S]=\bigcup_{v\in S}N_{G}[v]\) and \(N_{G}(S)=N_{G}[S]\setminus S\). When \(C\) is a subgraph of \(G\) we abbreviate \(G[C]=G[V(C)]\) and \(N_{G}(C)=N_{G}(V(C))\). For sets \(S_{1},S_{2}\subseteq V(G)\) we denote by \(E_{G}(S_{1},S_{2})\) the set of edges with one endpoint in \(S_{1}\) and one in \(S_{2}\). We say that \(S_{1},S_{2}\) are adjacent in \(G\) if \(E_{G}(S_{1},S_{2})\neq\emptyset\). A forest is a graph without cycles. A set \(S\subseteq V(G)\) is called a feedback vertex set if \(G-S\) is a forest. A clique in a graph \(G\) is a vertex set \(S\) such that for each distinct \(u,v\in S\) the edge \(uv\) belongs to \(E(G)\). A contraction of \(uv\in E(G)\) introduces a new vertex adjacent to all of \(N_{G}(\{u,v\})\), after which \(u\) and \(v\) are deleted. For \(S\subseteq V(G)\) such that \(G[S]\) is connected, we say we contract \(S\) if we simultaneously contract all edges in \(G[S]\) and introduce a single new vertex adjacent to \(N_{G}(S)\). Separators.For vertices \(u,v\in V(G)\) a vertex set \(S\subseteq V(G)\setminus\{u,v\}\) is called a \((u,v)\)-separator if \(u,v\) belong to different connected components of \(G-S\). A \((u,v)\)-separator is minimal when no proper subset of it is a \((u,v)\)-separator. A vertex set \(S\) is called a minimal vertex separator if \(S\) is a minimal \((u,v)\)-separator for some \(u,v\in V(G)\). **Lemma 3.1** (\(\star\)).: _Let \(u,v\) be vertices in a graph \(G\) and \(S\) be a \((u,v)\)-separator in \(G\). Denote by \(C_{u},C_{v}\) the connected components of \(G-S\) that contain respectively \(u\) and \(v\). Then \(S\) is minimal if and only if \(N_{G}(C_{u})=N_{G}(C_{v})=S\)._ Proof.: To see the first implication suppose w.l.o.g. that \(N_{G}(C_{u})\subsetneq S\). Let \(w\in S\setminus N_{G}(C_{u})\). The connected component of \(u\) in the graph \(G-(S\setminus w)\) is \(C_{u}\) because \(N_{G}(C_{u})\subseteq S\setminus w\). Therefore \(S\setminus w\) is also a \((u,v)\)-separator contradicting minimality of \(S\). To see the opposite implication suppose that \(S^{\prime}\subsetneq S\) is also a \((u,v)\)-separator. Let \(w\in S\setminus S^{\prime}\). But \(w\in N_{G}(C_{u})\cap N_{G}(C_{v})\) so \(u,v\) belong to the same connected component of \(G-S^{\prime}\). A vertex (or a vertex set) is called _simplicial_ if its open neighborhood is a clique. **Lemma 3.2** (\(\star\)).: _Let \(S\) be a minimal vertex separator in a graph \(G\). Then \(S\) does not contain any simplicial vertices._ Proof.: Suppose that \(S\) contains a simplicial vertex \(v\). Next, suppose there are two distinct connected components \(C_{1},C_{2}\) of \(G-S\) which are adjacent to \(v\). Let \(w_{1}\in N_{G}(v)\cap C_{1},w_{2}\in N_{G}(v)\cap C_{2}\). But then \(w_{1}w_{2}\in E(G)\) which contradicts the assumption that \(C_{1},C_{2}\) are distinct. Therefore there is at most one connected component of \(G-S\) adjacent to \(v\). But then \(S\setminus v\) separates the same pairs of vertices as \(S\) does. This means that \(S\) is not a minimal vertex separator. 
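The characterization of Lemma 3.1 translates directly into a simple check; the sketch below (illustrative only, using a plain adjacency-dictionary graph representation rather than any library) verifies whether a given set \(S\) is a minimal \((u,v)\)-separator by comparing the neighborhoods of the components of \(u\) and \(v\) in \(G-S\) with \(S\).

```python
# Illustrative sketch of the Lemma 3.1 test: S is a minimal (u,v)-separator iff
# the components C_u and C_v of G - S containing u and v satisfy
# N(C_u) = N(C_v) = S.  Graphs are adjacency-set dictionaries.
from collections import deque

def component_of(adj, start, removed):
    """Vertices reachable from `start` once the vertices in `removed` are deleted."""
    seen, queue = {start}, deque([start])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in removed and y not in seen:
                seen.add(y)
                queue.append(y)
    return seen

def neighborhood(adj, comp):
    """Open neighborhood N_G(comp) of a vertex set."""
    return {y for x in comp for y in adj[x]} - comp

def is_minimal_separator(adj, S, u, v):
    """Check that S separates u from v and is minimal, via Lemma 3.1."""
    S = set(S)
    C_u = component_of(adj, u, S)
    if v in C_u:
        return False                       # S is not even a (u,v)-separator
    C_v = component_of(adj, v, S)
    return neighborhood(adj, C_u) == S and neighborhood(adj, C_v) == S

# Toy example: the 4-cycle u-a-v-b-u; {a, b} is a minimal (u,v)-separator,
# while {a} alone does not separate u from v.
adj = {"u": {"a", "b"}, "v": {"a", "b"}, "a": {"u", "v"}, "b": {"u", "v"}}
print(is_minimal_separator(adj, {"a", "b"}, "u", "v"))  # True
print(is_minimal_separator(adj, {"a"}, "u", "v"))       # False
```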
Chordal and interval graphs.An interval graph is an intersection graph of intervals on the real line. In an interval model \(\mathcal{I}_{G}=\{I(v)\mid v\in V(G)\}\) of a graph \(G\), each vertex \(v\in V(G)\) corresponds to a closed interval \(I(v)\); there is an edge between vertices \(u\) and \(v\) if and only if \(I(v)\cap I(u)\neq\emptyset\). A _hole_ in a graph is an induced (i.e., chordless) cycle of length at least four. A graph is chordal when it does not contain any hole. An equivalent definition states that a chordal graph is an intersection graph of a family of subtrees in a tree [40]. This implies that any interval graph is chordal. For more background on these graph classes see surveys [16, 24]. The characterization of the two classes as intersection graphs of intervals/subtrees leads to the following observation. **Observation 3.3**.: _The classes of chordal and interval graphs are closed under vertex deletions and edge contractions._ An _asteroidal triple_ (AT) is a triple of vertices such that for any two of them there exists a path between them avoiding the closed neighborhood of the third. Interval graphs cannot contain ATs, which is a consequence of a linear ordering of any interval model. It turns out that this is the only property that separates the two graph classes. **Lemma 3.4** ([24]).: _A graph is interval if and only if it is chordal and does not contain an AT._ We collect two more useful facts about chordal graphs. **Lemma 3.5** ([24]).: _Every non-empty chordal graph contains a simplicial vertex._ When a chordal graph contains a cycle then it also contains a triangle. As a bipartite graph does not have any triangles, we obtain the following. **Observation 3.6**.: _If a graph is chordal and bipartite, then it is a forest._ A vertex set \(S\) in graph \(G\) is called a _chordal deletion set_ (resp. _interval deletion set_) if \(G-S\) is chordal (resp. interval). The Chordal/Interval Vertex Deletion problem is defined as follows. We are given a graph \(G\), a non-negative weight function \(w\colon V(G)\to\mathbb{N}\), an integer \(p\), and we ask whether there exists a chordal (resp. interval) deletion set \(S\) in \(G\) such that \(w(S)\leq p\). Boundaried graphs.For a set \(X\) and a graph \(B\) on vertex set \(X\), we define a family \(\mathcal{G}_{X,B}\) of graphs \(G\) that satisfy (a) \(V(G)\supseteq X\), (b) \(G[X]=B\). For graphs \(G_{1},G_{2}\in\mathcal{G}_{X,B}\) we define their gluing product \((G_{1},X)\oplus(G_{2},X)\) by taking a disjoint union of \(G_{1}\) and \(G_{2}\) and identifying vertices from \(X\). Note that two vertices from \(X\) are adjacent in \(G_{1}\) if and only if they are adjacent in \(G_{2}\). For \(X\subseteq V(G)\) a pair \((G,X)\) is called a boundaried graph. We say that two boundaried graphs \((G_{1},X),(G_{2},X)\) are compatible if \(G_{1},G_{2}\in\mathcal{G}_{X,B}\) for some \(B\). We remark that it is common in the literature to define a boundaried graph as a triple \((G,X,\lambda)\) where \(\lambda\colon X\to[|X|]\) is a labeling (cf. [9, 21]). Since we do not need to perform gluing of abstract boundaried graphs, but only ones originating from subgraphs of a fixed graph, this simpler definition is sufficient. As an example, consider a graph \(G\) and \(X\subseteq V(G)\). Then for any \(A\subseteq V(G)\setminus X\) the graph \(G[A\cup X]\) belongs to \(\mathcal{G}_{X,G[X]}\). 
When \(A,B\subseteq V(G)\setminus X\) are disjoint and non-adjacent then \(G[A\cup B\cup X]\) is isomorphic to \((G[A\cup X],X)\oplus(G[B\cup X],X)\). Tree decompositions. [Treewidth] A tree decomposition of a graph \(G\) is a pair \((\mathbb{T},\chi)\) where \(\mathbb{T}\) is a tree, and \(\chi\colon V(\mathbb{T})\to 2^{V(G)}\) is a function, such that: 1. for each \(v\in V(G)\) the nodes \(\{t\mid v\in\chi(t)\}\) form a non-empty connected subtree of \(\mathbb{T}\), 2. for each edge \(uv\in E(G)\) there is a node \(t\in V(\mathbb{T})\) with \(\{u,v\}\subseteq\chi(t)\). The _width_ of \((\mathbb{T},\chi)\) is defined as \(\max_{t\in V(\mathbb{T})}|\chi(t)|-1\). The _treewidth_ of a graph \(G\) (denoted \(\textbf{tw}(G)\)) is the minimum width of a tree decomposition of \(G\). A tree decomposition \((\mathbb{T},\chi)\) is called _nice_ if \(\mathbb{T}\) is a rooted tree with a root \(r\) where \(\chi(r)=\emptyset\), each node has at most two children, and each node is of one of the following types. 1. **Base node:** a leaf \(t\neq r\) in \(\mathbb{T}\) with \(\chi(t)=\emptyset\). 2. **Introduce node:** a node \(t\) having one child \(t^{\prime}\) for which \(\chi(t)=\chi(t^{\prime})\cup\{v\}\) for some \(v\not\in\chi(t^{\prime})\). 3. **Forget node:** a node \(t\) having one child \(t^{\prime}\) for which \(\chi(t)=\chi(t^{\prime})\setminus v\) for some \(v\in\chi(t^{\prime})\). 4. **Join node:** a node \(t\) having two children \(t_{1},t_{2}\) for which \(\chi(t)=\chi(t_{1})=\chi(t_{2})\). It is well known that any tree decomposition of \(G\) of width \(k\) can be transformed in linear time into a nice tree decomposition of width \(k\) and with \(\mathcal{O}(k\cdot|V(G)|)\) nodes [53]. When a rooted tree decomposition \((\mathbb{T},\chi)\) of \(G\) is clear from the context we denote by \(V_{t}\) the set of vertices occurring in the subtree rooted at \(t\in V(\mathbb{T})\) and define \(U_{t}=V_{t}\setminus\chi(t)\). [Treedepth] The treedepth of a graph \(G\) (denoted \(\textbf{td}(G)\)) is defined recursively as follows. \[\textbf{td}(G)=\begin{cases}0&\text{if $G$ is empty}\\ 1+\min_{v\in V(G)}(\textbf{td}(G-v))&\text{if $G$ is non-empty and connected}\\ \max_{i=1}^{d}(\textbf{td}(G_{i}))&\text{if $G$ is disconnected and $G_{1},\ldots,G_{d}$ are its components}\end{cases}\] As a direct consequence of this definition, inserting a vertex into a graph can increase its treedepth by at most one. It is well known that \(\textbf{tw}(G)\leq\textbf{td}(G)\) holds for every graph \(G\). Matroids.We provide only the basic background related to our applications. For more information about matroids we refer to the survey [59]. [Matroid] A pair \(M=(E,\mathcal{I})\) where \(E\) is a set and \(\mathcal{I}\subseteq 2^{E}\) is called a matroid if the following conditions hold: \(\emptyset\in\mathcal{I}\); if \(X\subseteq Y\) and \(Y\in\mathcal{I}\) then also \(X\in\mathcal{I}\); and if \(X,Y\in\mathcal{I}\) and \(|X|<|Y|\) then there exists \(e\in Y\setminus X\) such that \(X\cup\{e\}\in\mathcal{I}\). We say that a set \(X\subseteq E\) is independent in \(M\) when \(X\in\mathcal{I}\). The rank of \(M\) is the size of the largest independent set in \(M\). The simplest example is a \(k\)-uniform matroid in which a set \(X\subseteq E\) is independent when \(|X|\leq k\). Another important example is a linear matroid. Let \(A\) be a matrix over a field \(\mathbb{F}\). 
We define matroid \(M=(E,\mathcal{I})\) where \(E\) is the set of columns of \(A\) and \(X\subseteq E\) is independent in \(M\) when the corresponding columns are independent over \(\mathbb{F}\). We say that the matrix \(A\) is a representation of \(M\) over \(\mathbb{F}\). Given a graph \(G\), we define its graphic matroid \(M=(E(G),\mathcal{I})\) where \(X\subseteq E(G)\) is independent when \(X\) does not contain a cycle. It is well-known that every graphic matroid is linear and the oriented incidence matrix of \(G\) forms a representation of \(M\) over any field. **Lemma 3.11** ([59]).: _Given a graph \(G\) we can find a representation matrix of its graphic matroid over any field in polynomial time._ [Product family] Given two families of independent sets \(\mathcal{S}_{1},\mathcal{S}_{2}\) in a matroid \(M=(E,\mathcal{I})\) we define \[\mathcal{S}_{1}\bullet\mathcal{S}_{2}=\{X\cup Y\mid X\in\mathcal{S}_{1},Y\in \mathcal{S}_{2},X\cap Y=\emptyset,X\cup Y\in\mathcal{I}\}.\] Representative families.We say that a family of sets \(\mathcal{S}\) is a \(p\)-family if every set in \(\mathcal{S}\) has size \(p\). [Min/max \(q\)-representative family] Let \(M=(E,\mathcal{I})\) be a matroid, \(\mathcal{S}\) be a family of subsets of \(E\), and \(w\colon\mathcal{S}\to\mathbb{N}\) be a non-negative weight function. A subfamily \(\widehat{\mathcal{S}}\subseteq\mathcal{S}\) is min \(q\)-representative (resp. max \(q\)-representative) for \(\mathcal{S}\) if for every set \(Y\subseteq E\) of size at most \(q\), if there is a set \(X\in\mathcal{S}\) disjoint from \(Y\) with \(X\cup Y\in\mathcal{I}\), then there is a set \(\widehat{X}\in\widehat{\mathcal{S}}\) disjoint from \(Y\) with (a) \(\widehat{X}\cup Y\in\mathcal{I}\) and (b) \(w(\widehat{X})\leq w(X)\) (resp. \(w(\widehat{X})\geq w(X)\)). When all weights are zero, we obtain a simpler notion of a \(q\)-representative family. Observe that when \(X\) is a \(p\)-element set in a matroid of rank \(k\) and \(Y\) satisfies \(X\cap Y=\emptyset\) and \(X\cup Y\in\mathcal{I}\) then \(|Y|\leq k-p\). We make note of this fact. **Observation 3.14**.: _If \(\mathcal{S}\) is a \(p\)-family in a matroid of rank \(k\) and \(\widehat{\mathcal{S}}\subseteq_{\max\mathrm{rep}}^{k-p}\mathcal{S}\) then \(\widehat{\mathcal{S}}\subseteq_{\max\mathrm{rep}}^{k}\mathcal{S}\)._ The following lemmas have been stated in [38] for the unweighted version of representative families but with a remark that they work as well for the weighted version (as stated below). **Lemma 3.15** ([38, Lemma 3.1]).: _Let \(M=(E,\mathcal{I})\) be a matroid and \(\mathcal{S}\) be a family of subsets of \(E\). If \(\widehat{\mathcal{S}}^{\prime}\subseteq_{\max\mathrm{rep}}^{q}\widehat{\mathcal{S}}\) and \(\widehat{\mathcal{S}}\subseteq_{\max\mathrm{rep}}^{q}\mathcal{S}\) then \(\widehat{\mathcal{S}}^{\prime}\subseteq_{\max\mathrm{rep}}^{q}\mathcal{S}\)._ **Lemma 3.16** ([38, Lemma 3.2]).: _Let \(M=(E,\mathcal{I})\) be a matroid and \(\mathcal{S}\) be a family of subsets of \(E\). If \(\mathcal{S}=\mathcal{S}_{1}\cup\mathcal{S}_{2}\cup\dots\cup\mathcal{S}_{\ell}\) and \(\widehat{\mathcal{S}}_{i}\subseteq_{\mathrm{maxrep}}^{q}\mathcal{S}_{i}\), then \(\bigcup_{i=1}^{\ell}\widehat{\mathcal{S}}_{i}\subseteq_{\mathrm{maxrep}}^{q} \mathcal{S}\)._ **Lemma 3.17** ([38, Lemma 3.3]).: _Let \(M=(E,\mathcal{I})\) be a matroid of rank \(k\) and \(\mathcal{S}_{1}\) be a \(p_{1}\)-family of independent sets, \(\mathcal{S}_{2}\) be a \(p_{2}\)-family of independent sets, \(\widehat{\mathcal{S}}_{1}\subseteq_{\mathrm{maxrep}}^{k-p_{1}}\mathcal{S}_{1}\), \(\widehat{\mathcal{S}}_{2}\subseteq_{\mathrm{maxrep}}^{k-p_{2}}\mathcal{S}_{2}\). 
Then \(\widehat{\mathcal{S}}_{1}\bullet\widehat{\mathcal{S}}_{2}\subseteq_{\mathrm{maxrep}}^{k-p_{1}-p_{2}}\mathcal{S}_{1}\bullet\mathcal{S}_{2}\)._ The following theorem is the key to employ representative families in the design of single-exponential algorithms. We state it only in the maximization variant. **Theorem 3.18** ([38, Theorem 3]).: _Let \(M=(E,\mathcal{I})\) be a linear matroid of rank \(p+q=k\) given together with its representation matrix \(A_{M}\) over a field \(\mathbb{F}\). Let \(\mathcal{S}\) be a \(p\)-family of independent sets in \(M\). Then a max \(q\)-representative family \(\widehat{\mathcal{S}}\subseteq\mathcal{S}\) for \(\mathcal{S}\) with at most \(\binom{k}{p}\) elements can be found in \(\mathcal{O}\left(|\mathcal{S}|\cdot\binom{k}{p}\cdot p^{\omega}+|\mathcal{S}|\cdot\binom{k}{p}^{\omega-1}\right)\) operations over \(\mathbb{F}\)._ We present a more concise corollary suited for our applications. **Lemma 3.19**.: _Let \(M=(E,\mathcal{I})\) be a graphic matroid of rank \(k\). Let \(\mathcal{S}\) be a family of subsets of \(E\). Then \(\widehat{\mathcal{S}}\subseteq_{\mathrm{maxrep}}^{k}\mathcal{S}\) with at most \(2^{k}\) elements can be found in time \(\mathcal{O}\left(|\mathcal{S}|\cdot 2^{(\omega-1)k}\cdot k^{\omega}\right)\)._ Proof.: Thanks to Lemma 3.11 we can efficiently represent \(M\) over \(\mathbb{F}_{2}\). For \(p\in[k]\) let \(\mathcal{S}^{p}\subseteq\mathcal{S}\) be the family of independent sets in \(\mathcal{S}\) of size \(p\). We apply Theorem 3.18 to compute a max \((k-p)\)-representative family \(\widehat{\mathcal{S}}^{p}\subseteq_{\mathrm{maxrep}}^{k-p}\mathcal{S}^{p}\) of size at most \(\binom{k}{p}\) for each \(\mathcal{S}^{p}\). By Observation 3.14 we can write \(\widehat{\mathcal{S}}^{p}\subseteq_{\mathrm{maxrep}}^{k}\mathcal{S}^{p}\). From Lemma 3.16 we know that \(\widehat{\mathcal{S}}=\bigcup_{p=1}^{k}\widehat{\mathcal{S}}^{p}\) (plus \(\emptyset\) if \(\emptyset\in\mathcal{S}\)) is max \(k\)-representative for \(\mathcal{S}\). The sizes of \(\widehat{\mathcal{S}}^{p}\) sum up to \(2^{k}-1\) and the total running time can be upper bounded as in the statement with a trivial bound \(\binom{k}{p}\leq 2^{k}\). If the family \(\mathcal{S}\) has the special form of a product family, then instead of applying Theorem 3.18 directly, one can obtain a slightly better running time. Such families are of special importance for treewidth-based algorithms since they appear naturally in the computations for join nodes. In the following theorem, the input consists of \(\mathcal{S}_{1},\mathcal{S}_{2},\mathcal{S}_{1}\bullet\mathcal{S}_{2}\), and the weight function \(w\colon\mathcal{S}_{1}\bullet\mathcal{S}_{2}\to\mathbb{N}\), so it may have size \(\mathcal{O}(4^{k})\). **Theorem 3.20** ([37, Corollary 2]).: _Let \(M=(E,\mathcal{I})\) be a linear matroid of rank \(k\) given together with its representation matrix \(A_{M}\) over a field \(\mathbb{F}\). Let \(\mathcal{S}_{1},\mathcal{S}_{2}\) be two families of independent sets of \(M\) and the number of sets of size \(p\) in \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) be at most \(\binom{k+c}{p}\). Here, \(c\) is a fixed constant. Let \(\mathcal{S}^{p}_{r}\) be the subfamily of \(\mathcal{S}_{r}\) of sets of size \(p\), for \(r\in\{1,2\}\), \(p\in[k]\). Then for all pairs \(p,q\in[k]\) we can find \(\widehat{\mathcal{S}}^{p,q}\subseteq_{\mathrm{maxrep}}^{k-p-q}\mathcal{S}^{p}_{1}\bullet\mathcal{S}^{q}_{2}\) of size \(\binom{k}{p+q}\) in total \(\mathcal{O}\left(2^{(\omega-1)k}3^{k}\cdot k^{\omega}\right)\) operations over \(\mathbb{F}\)._ 
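Theorems 3.18 and 3.20 expect the representation matrix \(A_{M}\) as part of the input. For the graphic matroids used throughout this paper that input is cheap to produce: by Lemma 3.11 the vertex–edge incidence matrix over \(\mathbb{F}_{2}\) represents the graphic matroid (over \(\mathbb{F}_{2}\) the orientation is immaterial), and independence of a column set can be tested by Gaussian elimination. The sketch below illustrates this under those assumptions; the helper names are ours and the rank computation is the naive one, not anything optimized.

```python
def incidence_columns_gf2(n, edges):
    """Columns of the incidence matrix of a graph on vertices 0..n-1 over GF(2):
    the column of edge (u, v) has 1s exactly in rows u and v."""
    cols = []
    for (u, v) in edges:
        col = [0] * n
        col[u], col[v] = 1, 1
        cols.append(col)
    return cols


def gf2_rank(cols):
    """Rank over GF(2) of a list of 0/1 column vectors (naive elimination)."""
    mat = [list(c) for c in cols]
    n_rows = len(mat[0]) if mat else 0
    rank = 0
    for r in range(n_rows):
        piv = next((j for j in range(rank, len(mat)) if mat[j][r] == 1), None)
        if piv is None:
            continue
        mat[rank], mat[piv] = mat[piv], mat[rank]
        for j in range(len(mat)):
            if j != rank and mat[j][r] == 1:
                mat[j] = [a ^ b for a, b in zip(mat[j], mat[rank])]
        rank += 1
    return rank


def independent(n, edges, subset):
    """Edge indices in `subset` are independent in the graphic matroid iff the
    corresponding incidence columns are linearly independent over GF(2),
    i.e. iff the chosen edges form a forest."""
    cols = incidence_columns_gf2(n, [edges[i] for i in subset])
    return gf2_rank(cols) == len(subset)


# Triangle on vertices 0, 1, 2: any two edges are independent, all three are not.
edges = [(0, 1), (1, 2), (0, 2)]
assert independent(3, edges, [0, 1])
assert not independent(3, edges, [0, 1, 2])
```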
We remark that in the original statement in [37] the number of operations is upper bounded by \(\mathcal{O}\left((2^{\omega}+2)^{k}\cdot k^{\omega}+2^{(\omega-1)k}3^{k}\cdot k^{\omega}\right)\) but for every value of \(\omega\geq 2\) it holds that \(2^{\omega}+2\leq 2^{\omega-1}\cdot 3\). **Lemma 3.21**.: _Let \(M=(E,\mathcal{I})\) be a graphic matroid of rank \(k\). Let \(\mathcal{S}_{1},\mathcal{S}_{2}\) be two families of independent sets of \(M\), each of size at most \(2^{k}\). Then we can find \(\widehat{\mathcal{S}}\subseteq_{\mathrm{maxrep}}^{k}\mathcal{S}_{1}\bullet\mathcal{S}_{2}\) of size at most \(2^{k}\) in time \(\mathcal{O}\left(2^{(\omega-1)k}3^{k}\cdot k^{\omega}\right)\)._ Proof.: As observed before, we can efficiently represent \(M\) over \(\mathbb{F}_{2}\) and we can assume that \(\emptyset\not\in\mathcal{S}_{1}\bullet\mathcal{S}_{2}\) as otherwise it can be added at the end. For \(p\in[k],r\in\{1,2\}\), let \(\mathcal{S}^{p}_{r}\subseteq\mathcal{S}_{r}\) be the family of independent sets in \(\mathcal{S}_{r}\) of size \(p\). We first apply Theorem 3.18 for each pair \((r,p)\) to compute \(\widehat{\mathcal{S}}^{p}_{r}\subseteq_{\mathrm{maxrep}}^{k-p}\mathcal{S}^{p}_{r}\) of size \(\binom{k}{p}\). The total running time is bounded by \(\mathcal{O}(2^{\omega k}\cdot k^{\omega})\) which is bounded by \(\mathcal{O}\left(2^{(\omega-1)k}3^{k}\cdot k^{\omega}\right)\). Now we can apply Theorem 3.20 to \(\bigcup_{p=1}^{k}\widehat{\mathcal{S}}_{1}^{p}\) and \(\bigcup_{p=1}^{k}\widehat{\mathcal{S}}_{2}^{p}\). We obtain, for each pair \(p,q\in[k]\), a family \(\widehat{\mathcal{S}}^{p,q}\subseteq_{\mathrm{maxrep}}^{k-p-q}\widehat{\mathcal{S}}_{1}^{p}\bullet\widehat{\mathcal{S}}_{2}^{q}\). By Lemmas 3.15, 3.17, and Observation 3.14, we get that \(\widehat{\mathcal{S}}^{p,q}\subseteq_{\mathrm{maxrep}}^{k}\mathcal{S}_{1}^{p}\bullet\mathcal{S}_{2}^{q}\). Lemma 3.16 implies that \(\widehat{\mathcal{S}}^{\prime}=\bigcup_{p,q\in[k]}\widehat{\mathcal{S}}^{p,q}\subseteq_{\mathrm{maxrep}}^{k}\mathcal{S}_{1}\bullet\mathcal{S}_{2}\). The family \(\widehat{\mathcal{S}}^{\prime}\) contains at most \(k\cdot 2^{k}\) sets; we use Lemma 3.19 to find \(\widehat{\mathcal{S}}\subseteq_{\mathrm{maxrep}}^{k}\widehat{\mathcal{S}}^{\prime}\) of size at most \(2^{k}\) in time \(\mathcal{O}(2^{\omega k}\cdot k^{\omega+1})\) which is negligible compared to the running time from Theorem 3.20. The claim follows from Lemma 3.15. ## 4 Chordal Deletion We begin with a simple treewidth-preserving reduction from Feedback Vertex Set. **Lemma 4.1** (\(\star\)).: _Let \(G\) be a graph and \(\ell\in\mathbb{N}\). Let \(G^{\prime}\) be obtained from \(G\) by subdividing each edge. Then \(\textbf{tw}(G^{\prime})=\textbf{tw}(G)\) and \(G\) has a feedback vertex set (FVS) of size \(\ell\) if and only if \(G^{\prime}\) has a chordal deletion set of size \(\ell\)._ Proof.: When \(S\subseteq V(G)\) is a FVS in \(G\) then it is also a FVS in \(G^{\prime}\). Since \(G^{\prime}-S\) is acyclic, it is also chordal. In the second direction, consider a chordal deletion set \(S^{\prime}\subseteq V(G^{\prime})\) in \(G^{\prime}\). As \(G^{\prime}\) is bipartite, so is \(G^{\prime}-S^{\prime}\); since it is also chordal, by Observation 3.6 it must be acyclic. So \(S^{\prime}\) is a FVS in \(G^{\prime}\). If \(S^{\prime}\) contains a vertex \(w\) introduced by subdividing an edge \(uv\in E(G)\) then every cycle in \(G^{\prime}\) going through \(w\) also goes through \(u\) and \(v\). Therefore \((S^{\prime}\setminus w)\cup\{u\}\) is also a FVS in \(G^{\prime}\). 
We can thus assume that \(S^{\prime}\subseteq V(G)\) and it forms a FVS in \(G\). It remains to upper bound \(\textbf{tw}(G^{\prime})\) as clearly \(\textbf{tw}(G^{\prime})\geq\textbf{tw}(G)\). If \(\textbf{tw}(G)=1\) then \(G\) is a forest and so is \(G^{\prime}\). Suppose that \(\textbf{tw}(G)\geq 2\) and consider a tree decomposition of \(G\) of optimal width. We can transform it into a tree decomposition of \(G^{\prime}\) as follows: for each \(uv\in E(G)\) pick a node \(t\) whose bag contains both \(u,v\) and create a node \(t_{uv}\), adjacent only to \(t\), with a bag \(\{u,v,w\}\), where \(w\) is a vertex introduced on the edge \(uv\). Since all the created bags have size three, the width of the decomposition does not change. As a consequence, the base of the exponent \(c\) in Theorem 1 must be at least \(3\) under Strong Exponential Time Hypothesis [35] and \(c\) must be at least \(2^{\omega}+1\) if the current-best deterministic algorithm for Feedback Vertex Set parameterized by treewidth is optimal [71]. While we have no evidence that the mentioned algorithm should be optimal for deterministic time, we provide this comparison to indicate that breaching this gap for ChVD would imply the same for a more heavily studied problem. Minimal vertex separators.We set the stage for the proof of Theorem 2. First we need to develop some theory about minimal vertex separators in chordal graphs. Let \(\texttt{MinSep}(G)\) denote the set of minimal vertex separators in a graph \(G\). For a graph \(G\) and a (possibly empty) set \(S\subseteq V(G)\), we define \(\texttt{Comp}(G,S)\) to be the set of connected components \(C_{i}\) of \(G-S\) for which it holds that \(N_{G}(C_{i})=S\). Note that whenever \(G\) is disconnected then \(\emptyset\in\texttt{MinSep}(G)\) and \(\texttt{Comp}(G,\emptyset)\) is just the set of connected components of \(G\). According to Lemma 3, the set \(S\) is a minimal \((u,v)\)-separator if and only if \(u,v\) belong to some (distinct) components from \(\texttt{Comp}(G,S)\). For later use, we establish a relation between sets \(\texttt{MinSep}(G)\), \(\texttt{Comp}(G,S)\) in \(G\) and a graph obtained by a removal of a simplicial vertex. [\(\star\)] Let \(v\) be a simplicial vertex in \(G\) and \(S\in\texttt{MinSep}(G)\). If \(S\neq N_{G}(v)\) then \(S\in\texttt{MinSep}(G-v)\) and \(|\texttt{Comp}(G,S)|=|\texttt{Comp}(G-v,S)|\). Proof.: Suppose that \(S\neq N_{G}(v)\). By Lemma 3.2 we know that \(v\not\in S\). First, consider the case \(N_{G}(v)\subsetneq S\). Then \(\{v\}\) forms a connected component of \(G-S\) but \(\{v\}\not\in\mathsf{Comp}(G,S)\). Next, Lemma 3.1 implies that \(S\) is not a minimal \((v,u)\)-separator for any \(u\in V(G)\). Therefore \(S\in\mathtt{MinSep}(G-v)\) and \(|\mathsf{Comp}(G,S)|=|\mathsf{Comp}(G-v,S)|\). In the second case \(N_{G}(v)\not\subseteq S\). Let \(u\in N_{G}(v)\setminus S\) and \(C\) be the connected component of \(G-S\) which contains \(v\). Then \(u\in V(C)\). Since \(v\) is simplicial, we have \(N_{G}(v)\subseteq N_{G}[u]\). Therefore, \(C-v\) is connected, \(N_{G-v}(C\setminus v)=N_{G}(C)\), and so inserting \(v\) to \((G-v)-S\) does not affect the number of connected components nor their neighborhoods. This means that \(S\in\mathtt{MinSep}(G-v)\) and \(|\mathsf{Comp}(G,S)|=|\mathsf{Comp}(G-v,S)|\). We need a simple technical lemma about minimal vertex separators. 
**Lemma 4.4** (\(\star\)).: _Let \(G\) be a connected graph and \(V_{1},\ldots,V_{k}\subseteq V(G)\), \(k\geq 2\), be disjoint sets so that \(G[V_{i}]\) is connected, for \(i\in[k]\), and \(E_{G}(V_{i},V_{j})=\emptyset\), for \(i\neq j\). Then there exists a minimal vertex separator \(S\subseteq V(G)\setminus(V_{1}\cup\cdots\cup V_{k})\) in \(G\) which is a \((V_{i},V_{j})\)-separator for some \(i\neq j\) and each set \(V_{i}\) is contained in some component \(C\in\mathsf{Comp}(G,S)\)._ Proof.: Let \(S\subseteq V(G)\setminus(V_{1}\cup\cdots\cup V_{k})\) be an inclusion-minimal set with the following property: \(S\) separates sets \(V_{i},V_{j}\) for some \(i\neq j\). Such a set \(S\) must exist because \(N_{G}(V_{1})\) has this property. We argue that \(S\) satisfies the conditions of the lemma. Clearly \(S\) is a minimal \((v_{i},v_{j})\)-separator for each \(v_{i}\in V_{i}\), \(v_{j}\in V_{j}\). Suppose that for some \(h\in[k]\) the set \(V_{h}\) is not contained in any component from \(\mathsf{Comp}(G,S)\). Let \(C\) be the component of \(G-S\) that contains \(V_{h}\). Since \(C\not\in\mathsf{Comp}(G,S)\), we have \(N_{G}(C)\subsetneq S\). At least one of the sets \(V_{i},V_{j}\) is not contained in \(C\); assume w.l.o.g. that it is \(V_{i}\). Then \(N_{G}(C)\) is a \((V_{h},V_{i})\)-separator being a proper subset of \(S\), which contradicts the choice of \(S\). The claim follows. We will use the following concept which appears in the algorithm for ChVD by Jansen et al. **Definition 4.5** ([46, Def. 5.55]).: _For a graph \(G\) and a vertex set \(X\subseteq V(G)\) let the graph \(\mathsf{Condense}(G,X)\) be obtained from \(G\) by contracting the connected components of \(G-X\) into single vertices and then removing those of them which are simplicial. We say that \(G\) is condensed with respect to \(X\) if \(G=\mathsf{Condense}(\widehat{G},X)\) for some graph \(\widehat{G}\) or, equivalently, \(G=\mathsf{Condense}(G,X)\)._ Due to the following facts, condensation forms a handy tool for efficiently storing partial solutions for ChVD. **Lemma 4.6** ([46, Lem. 5.57]).: _Consider compatible boundaried graphs \((G,X)\), \((H,X)\) so that \(G,H\) are chordal. Let \(\widehat{G}=\mathsf{Condense}(G,X)\). Then \((G,X)\oplus(H,X)\) is chordal if and only if \((\widehat{G},X)\oplus(H,X)\) is chordal._ **Lemma 4.7** ([46, Lem. 5.53]).: _Consider a chordal graph \(G\) with a non-empty vertex subset \(X\subseteq V(G)\). If \(G\) is condensed with respect to \(X\), then \(|V(G)|\leq 2|X|-1\)._ **Observation 4.8**.: _Consider compatible boundaried graphs \((G,X)\), \((H,X)\). Let \(\widehat{G}=\mathsf{Condense}(G,X)\) and \(\widehat{H}=\mathsf{Condense}(H,X)\). Then \(\mathsf{Condense}((G,X)\oplus(H,X),X)=(\widehat{G},X)\oplus(\widehat{H},X)\)._ In this section we will exploit the following property of condensation. **Lemma 4.9** (\(\star\)).: _Consider a graph \(G\) with a vertex set \(X\) so that \(G[X]\) is chordal. Then \(G\) is chordal if and only if the following conditions hold: 1. for each connected component \(C\) of \(G-X\) the graph \(G[X\cup C]\) is chordal, 2. the graph \(\mathsf{Condense}(G,X)\) is chordal._ Proof.: The forward direction is clear as the class of chordal graphs is closed under vertex deletions and edge contractions. We prove the opposite direction by induction on the number \(k\) of the connected components in \(G-X\). For \(k=1\) the condition (1) suffices to obtain chordality of \(G\). 
Suppose now that \(k>1\) and consider a partition \(V(G)\setminus X=A\cup B\) where \(A\) induces a single connected component of \(G-X\) and \(B\) induces the rest of them. Let \(\widehat{G}_{A}=\mathsf{Condense}(G[A\cup X],X)\) and \(\widehat{G}_{B}=\mathsf{Condense}(G[B\cup X],X)\). From Observation 4.8 we know that \((\widehat{G}_{A},X)\oplus(\widehat{G}_{B},X)=\mathsf{Condense}(G,X)\); in particular this implies that \(\widehat{G}_{A},\widehat{G}_{B}\) are chordal. From inductive assumption we get that \(G[A\cup X],G[B\cup X]\) are chordal. We apply Lemma 4.6 (twice) to obtain that \(G=(G[A\cup X],X)\oplus(G[B\cup X],X)\) is chordal as well. In order to turn Lemma 4.9 into a more convenient criterion, we will compress information about a graph \(G\) with a vertex subset \(X\) into multiple auxiliary graphs, one for each minimal vertex separator in \(G[X]\). **Definition 4.10**.: _Consider a graph \(G\) with a vertex set \(X\) so that \(G[X]\) is chordal. For a set \(S\in\texttt{MinSep}(G[X])\) we construct the graph \(\texttt{Aux}(G,X,S)\) as follows:_ 1. _contract each_ \(C\in\texttt{Comp}(G[X],S)\) _into a vertex and remove the remaining vertices of_ \(X\) _(including all of_ \(S\)_),_ 2. _contract each connected component of_ \(G-X\) _into a vertex._ Note that \(\texttt{Aux}(G,X,\emptyset)\) is obtained by just contracting each connected component of \(G[X]\) and each connected component of \(G-X\). Moreover, observe that \(\texttt{Aux}(G,X,S)\) is always a bipartite graph because there can be no edges between two components from \(\texttt{Comp}(G[X],S)\) nor between two components of \(G-X\). See Figure 1 for an example of this construction. To make a connection between holes in \(G\) and cycles in \(\texttt{Aux}(G,X,S)\), we need a criterion to derive existence of a cycle from a closed walk with certain properties. In the following lemma we consider a cyclic order on a sequence of length \(k\). We define the successor operator as \(s(i)=i+1\), for \(i\in[k-1]\), and \(s(k)=1\). **Lemma 4.11** (\(\star\)).: _Let \(G\) be a bipartite graph with vertex partition \(V(G)=A\cup B\). Suppose there exists a sequence of vertices \((v_{1},\ldots,v_{k})\) in \(G\) such that:_ 1. _for_ \(i\in[k]\) _it holds_ \(v_{i}=v_{s(i)}\) _or_ \(v_{i}v_{s(i)}\in E(G)\)_,_ 2. _the multiset_ \(\{v_{1},\ldots,v_{k}\}\) _contains at most one occurrence of each vertex from_ \(A\)_,_ 3. _the set_ \(\{v_{1},\ldots,v_{k}\}\) _contains at least two vertices from_ \(B\)_._ _Then \(G\) contains a cycle._ Figure 1: On the left: graph \(G\) and set \(X\subseteq V(G)\) represented by black disks. The graph \(G[X]\) is drawn with solid edges. There are two minimal vertex separators in \(G[X]\): \(S_{1}=\{v\}\) and \(S_{2}=\{u,v\}\), sketched in gray. In the middle: the graph \(\texttt{Aux}(G,X,S_{1})\) with thick edges indicating a component that gets contracted into a single vertex; the gray vertices and edges are removed. On the right: the graph \(\texttt{Aux}(G,X,S_{2})\); note that \(|\texttt{Comp}(G[X],S_{2})|=2\) because the lower vertices of \(X\) are not adjacent to every vertex in \(S_{2}\). 
The graph \(\texttt{Aux}(G,X,S_{1})\) contains a cycle and this witnesses that \(G\) is not chordal. However, removing from \(G\) any single vertex among \(x,y,z\) results in a chordal graph. Proof.: We apply modifications to the sequence \((v_{1},\ldots,v_{k})\) while preserving conditions (1-3). First, if \(v_{i}=v_{s(i)}\) then remove \(v_{s(i)}\). This rule is clearly safe. Second, if \(v_{i}=v_{s(s(i))}\neq v_{s(i)}\) then remove \(v_{s(i)}\) and \(v_{s(s(i))}\). Due to condition (2) it must be \(v_{i}\in B\) and \(v_{s(i)}\in A\) so the set \(\{v_{1},\ldots,v_{k}\}\cap B\) stays invariant, which preserves condition (3). Each modification shortens the sequence, so after applying them exhaustively we obtain a sequence that cannot be further reduced. Due to condition (3) the length of the sequence cannot drop below 4. We claim that each edge from \(E(G)\) is now traversed at most once. Suppose otherwise that \(v_{i}v_{s(i)},v_{j}v_{s(j)}\) represent the same edge for \(i\neq j\). The indices \(i,j\) cannot be consecutive due to the second modification rule. But then some vertex of \(A\) must occur twice in the sequence which contradicts condition (2). As a result we obtain a non-trivial closed walk in \(G\) without repeated edges, which implies the existence of a cycle. We are ready to prove a proposition creating a link between chordality and acyclicity. **Proposition 4.12**.: _Consider a graph \(G\) with a vertex subset \(X\subseteq V(G)\) so that for each connected component \(C\) of \(G-X\) the graph \(G[X\cup C]\) is chordal. Then \(G\) is chordal if and only if for each \(S\in\texttt{MinSep}(G[X])\) the graph \(\texttt{Aux}(G,X,S)\) is acyclic._ Proof.: First we argue that if \(G\) is chordal then all graphs \(\texttt{Aux}(G,X,S)\) are acyclic. Because the class of chordal graphs is closed under vertex deletions and edge contractions, the graphs \(\texttt{Aux}(G,X,S)\) are chordal as well. Since each graph \(\texttt{Aux}(G,X,S)\) is also bipartite, by Observation 3.6 we obtain that \(\texttt{Aux}(G,X,S)\) is acyclic. Now suppose that \(G\) is not chordal. Let \(G^{\prime}=\texttt{Condense}(G,X)\) (recall Definition 4.5). By Lemma 4.9, the graph \(G^{\prime}\) is not chordal either, but for each vertex \(v\in V(G^{\prime})\setminus X\) the graph \(G^{\prime}[X\cup\{v\}]\) is chordal (because contraction preserves chordality). Note that \(\texttt{Aux}(G^{\prime},X,S)\) is an induced subgraph of \(\texttt{Aux}(G,X,S)\) for each \(S\in\texttt{MinSep}(G[X])\) (they may differ only due to removal of simplicial vertices), so it suffices to show that one of the graphs \(\texttt{Aux}(G^{\prime},X,S)\) has a cycle. As \(G^{\prime}\) is not chordal, it contains a hole \(H=(u_{1},\ldots,u_{k})\). We consider two cases: either \(V(H)\) intersects at least two connected components of \(G^{\prime}[X]\) or only one. In the first case, let \(\phi_{0}\colon V(G^{\prime})\to V(\texttt{Aux}(G^{\prime},X,\emptyset))\) be the mapping given by the contractions from Definition 4.10. Recall that \(V(G^{\prime})\setminus X\) is an independent set in \(G^{\prime}\) so \(\phi_{0}\) is the identity on this set. The sequence \((\phi_{0}(u_{1}),\ldots,\phi_{0}(u_{k}))\) meets the preconditions of Lemma 4.11 for \(A=V(G^{\prime})\setminus X\) and \(B=\phi_{0}(X)\) so \(\texttt{Aux}(G^{\prime},X,\emptyset)\) has a cycle. As \(G^{\prime}[X]=G[X]\) is disconnected, we have \(\emptyset\in\texttt{MinSep}(G[X])\). 
In the second case, let \(Y\subseteq X\) induce the only connected component of \(G^{\prime}[X]\) that intersects \(V(H)\). Let \(V_{1},\ldots,V_{\ell}\subseteq Y\) be the vertex sets of maximal subpaths of \(H\) within \(Y\). By the definition of a hole, we have \(E_{G^{\prime}}(V_{i},V_{j})=\emptyset\) for distinct \(i,j\in[\ell]\). It must be \(\ell\geq 2\) because for each \(v\in V(G^{\prime})\setminus X\) the graph \(G^{\prime}[X\cup\{v\}]\) is chordal and the hole \(H\) must visit at least two vertices from the independent set \(V(G^{\prime})\setminus X\). By Lemma 4.4, there exists a minimal vertex separator \(S\subseteq Y\setminus V(H)\) in \(G^{\prime}[Y]\) such that every set \(V_{i}\) is contained in some component from \(\texttt{Comp}(G^{\prime}[Y],S)\) and at least two components from \(\texttt{Comp}(G^{\prime}[Y],S)\) intersect \(V(H)\). Note that \(S\in\texttt{MinSep}(G[X])\). Let \(C_{S}\) be the union of the components from \(\texttt{Comp}(G^{\prime}[Y],S)\); note that \(V(H)\subseteq V(C_{S})\cup(V(G^{\prime})\setminus X)\). Let \(\phi_{S}\colon V(C_{S})\cup(V(G^{\prime})\setminus X)\to V(\texttt{Aux}(G^{\prime},X,S))\) be the mapping given by the contractions from Definition 4.10 which turn each component from \(\texttt{Comp}(G^{\prime}[Y],S)\) into a single vertex. Again, the sequence \((\phi_{S}(u_{1}),\ldots,\phi_{S}(u_{k}))\) meets the preconditions of Lemma 4.11 for \(A=V(G^{\prime})\setminus X\) and \(B=\phi_{S}(V(C_{S}))\) so \(\texttt{Aux}(G^{\prime},X,S)\) has a cycle. See Figure 1 for an illustration. Observe that whenever a component of \(G-X\) is simplicial then in every graph \(\mathtt{Aux}(G,X,S)\) the corresponding vertex has degree at most one and so it cannot be a part of any cycle. Therefore the simplicial components of \(G-X\) do not affect the criterion from Proposition 4.12. This agrees with the definition of \(\mathtt{Condense}(G,X)\) where the simplicial components are removed as meaningless. Signatures of boundaried graphs.The next step is to construct a graphic matroid \(M_{B}\) for a chordal graph \(B\) so that for any two graphs \(G_{1},G_{2}\in\mathcal{G}_{X,B}\) the information about chordality of \((G_{1},X)\oplus(G_{2},X)\) can be read from \(M_{B}\). Proposition 4.12 already relates chordality to acyclicity but the corresponding graphic matroids for \(G_{1},G_{2}\) are disparate. To circumvent this, we will further compress the information about cycles. Consider a graph \(B\). For \(S\in\mathtt{MinSep}(B)\), let \(\mathtt{Base}(B,S)\) be the complete graph on vertex set \(\mathtt{Comp}(B,S)\). The graph \(\mathtt{Base}(B)\) is a disjoint union of all the graphs \(\mathtt{Base}(B,S)\) for \(S\in\mathtt{MinSep}(B)\). That is, we treat the components from \(\mathtt{Comp}(B,S)\) as abstract vertices of a new graph which is a union of cliques. The following transformation is similar to the one used in the algorithm for Steiner Tree based on representative families [38]. For the sake of disambiguation, in the definition below we assume an implicit linear order on the vertices of \(B\); this order may be arbitrary. Since vertices of \(\mathtt{Base}(B)\) correspond to distinct subsets of \(V(B)\), which can be ordered lexicographically, fixing the order on \(V(B)\) yields an order on \(V(\mathtt{Base}(B))\). We can thus assume that also the vertices of \(V(\mathtt{Base}(B))\) are linearly ordered. **Definition 4.14**.: Consider a chordal graph \(B\) and \(Y\subseteq V(B)\). We define the _spanning signature_ \(\mathtt{Span}(B,Y)\subseteq E(\mathtt{Base}(B))\) as follows. 
For each \(S\in\mathtt{MinSep}(B)\) let \(C_{S,Y}\subseteq V(\mathtt{Base}(B,S))\) be given by components from \(\mathtt{Comp}(B,S)\) with a non-empty intersection with \(Y\). Let \(P_{S,Y}\subseteq E(\mathtt{Base}(B,S))\) be the path connecting the vertices of \(C_{S,Y}\) in the increasing order. Then \(\mathtt{Span}(B,Y)=\bigcup_{S\in\mathtt{MinSep}(B)}P_{S,Y}\). In other words, \(\mathtt{Span}(B,Y)\) is a disjoint union of paths in the graph \(\mathtt{Base}(B)\), where each path encodes the relation between \(Y\) and a respective minimal vertex separator in \(B\). The next lemma states that under certain conditions replacing a vertex \(v\) with a tree over \(N(v)\) (in particular: a path) does not affect acyclicity of the graph. Note that due to the precondition \(|N(u)\cap N(v)|\leq 1\) we never attempt to insert an edge that is already present. **Lemma 4.15** (\(\star\)).: _Let \(G\) be a bipartite graph with a vertex partition \(V(G)=A\cup B\) so that for each distinct \(u,v\in A\) it holds that \(|N_{G}(u)\cap N_{G}(v)|\leq 1\). Consider a graph \(G^{\prime}\) obtained from \(G\) by replacing each vertex \(v\in A\) by an arbitrary tree on vertex set \(N_{G}(v)\). Then \(G\) is acyclic if and only if \(G^{\prime}\) is acyclic._ Proof.: For a graph \(G\), let \(\mu(G)\) be the multiset of integers \((|V(C)|-|E(C)|)_{C\in\mathcal{C}}\) where \(\mathcal{C}\) is the family of connected components of \(G\). A graph \(G\) is acyclic if and only if \(\mu(G)\) contains only \(1\)'s. We show that the described modifications, performed in an arbitrary order, do not affect \(\mu(G)\) except for possibly removing some \(1\)'s. Let \(v\in A\), \(d=|N_{G}(v)|\), and \(C\) denote the connected component of \(v\). If \(v\) is isolated, then removing \(v\) translates into removing a single \(1\) from \(\mu(G)\). Otherwise, replacing \(v\) with a tree on \(N_{G}(v)\) transforms \(C\) into another connected graph \(C^{\prime}\). We remove one vertex and \(d\) edges, so \(|V(C)|-|E(C)|\) drops by \(d-1\). On the other hand, any tree over \(N_{G}(v)\) has exactly \(d-1\) edges. Due to the assumption \(|N_{G}(u)\cap N_{G}(v)|\leq 1\) for \(u\neq v\), every inserted tree is disjoint from previously inserted edges among \(B\). Hence, we insert exactly \(d-1\) new edges and \(|V(C)|-|E(C)|=|V(C^{\prime})|-|E(C^{\prime})|\). We also only remove vertices from \(A\) so we never remove an endpoint of an inserted edge. The claim follows by observing that \(\mu(G)\) contains an element different from \(1\) if and only if \(\mu(G^{\prime})\) does. This allows us to translate the criterion from Proposition 4.12 into a more convenient one, in which the vertex set of the auxiliary graph depends only on \(G[X]\) rather than \(G\). **Lemma 4.16**.: _Consider a graph \(G\) with a vertex subset \(X\subseteq V(G)\). Let \(\mathcal{C}\) denote the family of connected components of \(G-X\). Suppose that for each \(C\in\mathcal{C}\) the graph \(G[X\cup C]\) is chordal. Then \(G\) is chordal if and only if: 1. the sets \(\texttt{Span}(G[X],N_{G}(C))\), for different \(C\in\mathcal{C}\), are pairwise disjoint, 2. the union of sets \(\texttt{Span}(G[X],N_{G}(C))\), over \(C\in\mathcal{C}\), forms an acyclic edge set in \(E(\texttt{Base}(G[X]))\)._ Proof.: From Proposition 4.12 we know that \(G\) is chordal if and only if for each \(S\in\texttt{MinSep}(G[X])\) the graph \(\texttt{Aux}(G,X,S)\) is acyclic. We consider two cases. 
First, suppose that for some \(S\in\texttt{MinSep}(G[X])\) there are two vertices representing distinct components \(C_{1},C_{2}\in\mathcal{C}\) that share two common neighbors \(x,y\) in \(\texttt{Aux}(G,X,S)\). In other words, there are two components from \(\texttt{Comp}(G[X],S)\) that intersect both \(N_{G}(C_{1})\) and \(N_{G}(C_{2})\). Then \(\texttt{Aux}(G,X,S)\) contains a cycle of length \(4\), so \(G\) is not chordal. If \(\texttt{Span}(G[X],N_{G}(C_{1}))\) and \(\texttt{Span}(G[X],N_{G}(C_{2}))\) share an edge, then condition (1) fails, so suppose this is not the case. But then the paths \(P_{S,N(C_{1})}\) and \(P_{S,N(C_{2})}\) (recall Definition 4.14) are edge-disjoint and they both visit \(x\) and \(y\). As a consequence, \(x,y\) lie on a cycle contained in the edge set \(\texttt{Span}(G[X],N_{G}(C_{1}))\cup\texttt{Span}(G[X],N_{G}(C_{2}))\) so condition (2) fails. In summary, both \(G\) is not chordal and one of conditions (1, 2) does not hold. Next, suppose that for each \(S\in\texttt{MinSep}(G[X])\) and any two vertices representing distinct components \(C_{1},C_{2}\in\mathcal{C}\) the intersection of their neighborhoods in \(\texttt{Aux}(G,X,S)\) contains at most one element. This implies condition (1). Consider a graph \(H\) given by a disjoint union of all graphs \(\texttt{Aux}(G,X,S)\) over \(S\in\texttt{MinSep}(G[X])\). This graph meets the preconditions of Lemma 4.15. Replacing each \(\mathcal{C}\)-component-vertex in \(\texttt{Aux}(G,X,S)\) by the path \(P_{S,N(C)}\) transforms \(H\) into a subgraph of \(\texttt{Base}(G[X])\) with the edge set \(\bigcup_{C\in\mathcal{C}}\texttt{Span}(G[X],N_{G}(C))\). By Lemma 4.15, this graph is acyclic if and only if the graph \(H\) is. By Proposition 4.12, this condition is equivalent to \(G\) being chordal. The lemma follows. We are ready to define the graphic matroid encoding all the necessary information about where a hole can appear after gluing two chordal graphs. Recall that a graphic matroid of a graph \(G\) is a set system over \(E(G)\) where a subset \(S\subseteq E(G)\) is called independent when \(S\) contains no cycles. For a graph \(B\) on vertex set \(X\) we define matroid \(M_{B}\) as the graphic matroid of the graph \(\texttt{Base}(B)\). For a graph \(G\in\mathcal{G}_{X,B}\) the signature \(\texttt{Sign}(G,X)\subseteq E(\texttt{Base}(B))\) is defined as a union of \(\texttt{Span}(B,N_{G}(C))\) over all connected components \(C\) of \(G-X\). It follows from Lemma 4 that whenever \(G\) is chordal then \(\texttt{Sign}(G,X)\) is acyclic and so it forms an independent set in the matroid \(M_{G[X]}\). We can now give the existential part of Theorem 2. The mapping \(\sigma\colon\mathcal{G}_{X,B}\to 2^{E(M_{B})}\) therein is given here as \(\sigma(G)=\texttt{Sign}(G,X)\). Let \((G_{1},X)\) and \((G_{2},X)\) be compatible boundaried chordal graphs. Then \(G=(G_{1},X)\oplus(G_{2},X)\) is chordal if and only if the sets \(\texttt{Sign}(G_{1},X)\), \(\texttt{Sign}(G_{2},X)\subseteq E(\texttt{Base}(G[X]))\) are disjoint and \(\texttt{Sign}(G_{1},X)\cup\texttt{Sign}(G_{2},X)\) is acyclic. Furthermore, \(\texttt{Sign}(G,X)=\texttt{Sign}(G_{1},X)\cup\texttt{Sign}(G_{2},X)\). Proof.: Let \(C_{1},C_{2},\ldots,C_{\ell}\) denote the connected components of \(G_{1}-X\) and \(D_{1},D_{2},\ldots,D_{r}\) denote the connected components of \(G_{2}-X\). Clearly all graphs \(G_{1}[X\cup C_{i}]\) and \(G_{2}[X\cup D_{i}]\) are chordal. 
Let \(\mathcal{S}_{1}\) be the family of sets \(\{\mathtt{Span}(G[X],N_{G_{1}}(C_{i}))\}_{i=1}^{\ell}\) and \(\mathcal{S}_{2}\) be \(\{\mathtt{Span}(G[X],N_{G_{2}}(D_{i}))\}_{i=1}^{r}\). It follows from Lemma 4.16 that the sets in \(\mathcal{S}_{1}\) are pairwise disjoint and their union, which is \(\mathtt{Sign}(G_{1},X)\), is an acyclic edge set in \(E(\mathtt{Base}(G[X]))\). The same holds for \(\mathcal{S}_{2}\) and \(\mathtt{Sign}(G_{2},X)\). Again by Lemma 4.16, the graph \(G=(G_{1},X)\oplus(G_{2},X)\) is chordal if and only if the sets in the family \(\mathcal{S}_{1}\cup\mathcal{S}_{2}\) are pairwise disjoint and their sum is acyclic. This is equivalent to the condition that \(\mathtt{Sign}(G_{1},X),\mathtt{Sign}(G_{2},X)\) are disjoint and \(\mathtt{Sign}(G_{1},X)\cup\mathtt{Sign}(G_{2},X)\) is acyclic, as intended. By definition, \(\mathtt{Sign}(G,X)\) is the union of \(\mathtt{Span}(G[X],N_{G}(C))\) over all connected components \(C\) of \(G-X\). This union equals \(\mathtt{Sign}(G_{1},X)\cup\mathtt{Sign}(G_{2},X)\). The following lemma is the main ingredient in the running time analysis. As the bound on the representative family's size is exponential in the rank of a matroid2, it is necessary to bound the rank of \(M_{B}\). It is known that the number of minimal vertex separators in a chordal graph is bounded by the number of vertices but we need a strengthening of this fact. Footnote 2: We remark that Fomin et al. [38] also considered a case when the rank might be large and the exponential term is governed by a different parameter but it is not applicable in our case. **Lemma 4.19**.: _For a non-empty chordal graph \(B\), the rank of \(M_{B}\) is at most \(|V(B)|-1\)._ Proof.: Let \(k=|V(B)|\). The rank of \(M_{B}\) equals the size of a spanning forest in \(\mathtt{Base}(B)\). The vertex sets of connected components of \(\mathtt{Base}(B)\) are the sets \(\mathtt{Comp}(B,S)\) for \(S\in\mathtt{MinSep}(B)\). Therefore it suffices to estimate \[\sum_{S\in\mathtt{MinSep}(B)}(|\mathtt{Comp}(B,S)|-1)\leq k-1.\] We first prove the inequality for connected chordal graphs by induction on \(k\). For \(k=1\) the sum is zero. Consider \(k>1\). By Lemma 3.5, \(B\) contains a simplicial vertex. Let \(v\) be a simplicial vertex in \(B\) and suppose that the claim holds for the graph \(B-v\) (which is connected). Let \(S\) be a minimal vertex separator in \(B\). By Lemma 4.3 when \(S\neq N_{B}(v)\) then \(S\in\mathtt{MinSep}(B-v)\) and \(|\mathtt{Comp}(B,S)|=|\mathtt{Comp}(B-v,S)|\). In that case the summand coming from \(S\) is the same for \(B\) and \(B-v\). It remains to handle the case \(S=N_{B}(v)\). Clearly, \(\{v\}\in\mathtt{Comp}(B,S)\). If \(|\mathtt{Comp}(B,S)|=1\) then \(S\not\in\mathtt{MinSep}(B)\) (Lemma 3.1). If \(|\mathtt{Comp}(B,S)|=2\) then \(S\in\mathtt{MinSep}(B)\setminus\mathtt{MinSep}(B-v)\) and the sum grows by one. If \(|\mathtt{Comp}(B,S)|\geq 3\) then \(S\in\mathtt{MinSep}(B)\cap\mathtt{MinSep}(B-v)\) and \(|\mathtt{Comp}(B,S)|=|\mathtt{Comp}(B-v,S)|+1\) so the sum again grows by one. This concludes the proof of the inequality for connected chordal graphs. When \(B\) is disconnected, let \(B_{1},B_{2},\ldots,B_{t}\) denote its connected components and let \(k_{i}=|V(B_{i})|\). We have \(|\mathtt{Comp}(B,\emptyset)|-1=t-1\). Together with the sums for \(B_{1},B_{2},\ldots,B_{t}\) the total sum is at most \(\sum_{i=1}^{t}k_{i}-t+t-1=k-1\). The last thing to be checked is whether we can compute the signatures efficiently. 
To this end, we enumerate minimal vertex separators using Lemma 4.3. **Lemma 4.20** (\(\star\)).: _There is a polynomial-time algorithm that, given a graph \(G\) with a vertex subset \(X\subseteq V(G)\) such that \(G[X]\) is chordal, computes \(\mathtt{Sign}(G,X)\)._ Proof.: Let \(B=G[X]\). We show that one can enumerate \(\mathtt{MinSep}(B)\) in polynomial time. By Lemma 3.5 the graph \(B\) contains a simplicial vertex \(v\). This vertex can be found in polynomial time. We recursively enumerate \(\mathtt{MinSep}(B-v)\). By Lemma 4.3, if \(S\in\mathtt{MinSep}(B)\) then either \(S\in\mathtt{MinSep}(B-v)\) or \(S=N_{B}(v)\), so the output size increases by at most one. We can verify which elements of \(\mathtt{MinSep}(B-v)\cup\{N_{B}(v)\}\) are minimal vertex separators in \(G\) using Lemma 3.1; as a byproduct we obtain the sets \(\mathtt{Comp}(B,S)\). It remains to directly follow the definition of \(\mathtt{Sign}(G,X)\). Lemmas 4.18, 4.19, and 4.20 entail Theorem 2.1 but instead of working with that abstract statement we use these three lemmas directly when describing the final algorithm. Representative families for boundaried graphs.We translate the framework of representative families from the language of matroids to chordal graphs and gluing. Consider a family of chordal graphs \(\mathcal{G}\subseteq\mathcal{G}_{X,B}\) for some pair \((X,B)\) and a non-negative weight function \(w:\mathcal{G}\to\mathbb{N}\). We say that a subfamily \(\widehat{\mathcal{G}}\subseteq\mathcal{G}\) is max-representative for \(\mathcal{G}\) (and write \(\widehat{\mathcal{G}}\subseteq_{\max\mathrm{rep}}\mathcal{G}\)) if the following holds. For every graph \(H\in\mathcal{G}_{X,B}\), if there exist \(G\in\mathcal{G}\) so that \((H,X)\oplus(G,X)\) is chordal, then there exists \(\widehat{G}\in\widehat{\mathcal{G}}\) so that \((H,X)\oplus(\widehat{G},X)\) is chordal and \(w(\widehat{G})\geq w(G)\). Consider a family of chordal graphs \(\mathcal{G}\subseteq\mathcal{G}_{X,B}\) for some pair \((X,B)\) and a non-negative weight function \(w:\mathcal{G}\to\mathbb{N}\). Suppose that the matroid \(M_{B}\) has rank \(r\). Let \(\mathcal{S}=\{\mathtt{Sign}(G,X)\mid G\in\mathcal{G}\}\) and \(\tau\colon\mathcal{S}\to\mathcal{G}\) be given as \(\tau(Y)=\operatorname{argmax}\left\{w(G)\mid G\in\mathcal{G},\,\mathtt{Sign}(G,X)=Y\right\}\). Suppose that \(\widehat{\mathcal{S}}\subseteq_{\max\mathrm{rep}}^{r}\mathcal{S}\) with respect to matroid \(M_{B}\) and weight function \(w_{\mathcal{S}}(Y)=w(\tau(Y))\). Then \(\tau(\widehat{\mathcal{S}})\subseteq_{\max\mathrm{rep}}\mathcal{G}\). Proof.: Let \(H\in\mathcal{G}_{X,B}\) and \(G\in\mathcal{G}\) be such that \((H,X)\oplus(G,X)\) is chordal. Clearly \(H\) must be chordal as well. The set \(S=\mathtt{Sign}(G,X)\) belongs to \(\mathcal{S}\) and \(w_{\mathcal{S}}(S)\geq w(G)\). By Lemma 4.18 we have that \(\mathtt{Sign}(H,X)\cap S=\emptyset\) and \(\mathtt{Sign}(H,X)\cup S\) is acyclic. By the definition of a max-representative family for a graphic matroid, there exists a set \(\widehat{S}\in\widehat{\mathcal{S}}\) so that \(\mathtt{Sign}(H,X)\cap\widehat{S}=\emptyset\), \(\mathtt{Sign}(H,X)\cup\widehat{S}\) is acyclic, and \(w_{\mathcal{S}}(\widehat{S})\geq w_{\mathcal{S}}(S)\). Let \(\widehat{G}=\tau(\widehat{S})\). Again by Lemma 4.18 we infer that \((H,X)\oplus(\widehat{G},X)\) is chordal. Finally, \(w(\widehat{G})=w_{\mathcal{S}}(\widehat{S})\geq w_{\mathcal{S}}(S)\geq w(G)\). 
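The enumeration underlying Lemma 4.20 is easy to prototype: peel simplicial vertices as in Lemma 3.5, collect the neighborhoods of the peeled vertices taken in the current graph as candidates (Lemma 4.3), and keep those candidates that have at least two full components (Lemma 3.1). The sketch below is ours, in plain Python, and assumes (rather than verifies) that the input graph is chordal; it also returns \(\texttt{Comp}(B,S)\) for each separator, which is exactly the information that the construction of \(\mathtt{Span}\) consumes.

```python
def _components(adj, removed):
    """Connected components of the graph induced on V(B) minus `removed`."""
    seen, comps = set(removed), []
    for s in adj:
        if s in seen:
            continue
        comp, stack = set(), [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            comp.add(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps


def _is_simplicial(adj, v):
    """v is simplicial if its open neighborhood is a clique."""
    nb = list(adj[v])
    return all(b in adj[a] for i, a in enumerate(nb) for b in nb[i + 1:])


def minimal_separators_chordal(adj):
    """Enumerate MinSep(B) of a chordal graph B together with Comp(B, S),
    following the peeling argument behind Lemmas 3.5, 4.3 and 4.20."""
    remaining = {v: set(adj[v]) for v in adj}
    candidates = set()
    while remaining:
        # a simplicial vertex exists because every induced subgraph of B is chordal
        v = next(u for u in remaining if _is_simplicial(remaining, u))
        candidates.add(frozenset(remaining[v]))  # neighborhood in the current graph
        for w in remaining[v]:
            remaining[w].discard(v)
        del remaining[v]
    result = {}
    for S in candidates:
        full = [C for C in _components(adj, S)
                if {w for x in C for w in adj[x]} - C == set(S)]
        if len(full) >= 2:  # the criterion of Lemma 3.1
            result[S] = full
    return result


# Example: the path a-b-c has exactly one minimal separator, namely {b}.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
assert set(minimal_separators_chordal(adj)) == {frozenset({'b'})}
```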
**Lemma 4.23**.: _Consider a family of chordal graphs \(\mathcal{G}\subseteq\mathcal{G}_{X,B}\) for some pair \((X,B)\) and a non-negative weight function \(w:\mathcal{G}\to\mathbb{N}\). Suppose that \(|X|=k>0\), \(|\mathcal{G}|\leq 2^{k+c}\), and each \(G\in\mathcal{G}\) has at most \(ck\) vertices. Here, \(c\) is a fixed constant. Then a max-representative family \(\widehat{\mathcal{G}}\subseteq\mathcal{G}\) for \(\mathcal{G}\) of size at most \(2^{k-1}\) can be computed in time \(\mathcal{O}(2^{\omega k}\cdot k^{\omega})\)._ Proof.: Let \(\mathcal{S}=\{\mathtt{Sign}(G,X)\mid G\in\mathcal{G}\}\). This family can be computed in time \(2^{k}\cdot k^{\mathcal{O}(1)}\) thanks to Lemma 4.20. By Lemma 4.19 the rank \(r\) of \(M_{B}\) is at most \(k-1\). By Lemma 4.22 it suffices to find a max \(r\)-representative family for \(\mathcal{S}\) of the requested size. This can be done with Lemma 3.19 in time \(\mathcal{O}(2^{\omega k}\cdot k^{\omega})\). Since \(2<2^{\omega}\) the latter term is dominating in the running time. We also provide an efficient algorithm for processing large families of graphs that arise when handling a join node. For a graph family \(\mathcal{G}\) we write \(\mathcal{G}\cap\mathtt{chordal}\) to indicate the subfamily of chordal graphs in \(\mathcal{G}\). **Lemma 4.24**.: _Consider two families of chordal graphs \(\mathcal{G}_{1},\mathcal{G}_{2}\subseteq\mathcal{G}_{X,B}\) for some pair \((X,B)\). Suppose that \(|X|=k>0\), \(|\mathcal{G}_{1}|,|\mathcal{G}_{2}|\leq 2^{k}\), and each \(G\in\mathcal{G}_{1}\cup\mathcal{G}_{2}\) has \(\mathcal{O}(k)\) vertices. Let \(\mathcal{G}=\{(G_{1},X)\oplus(G_{2},X)\mid G_{1}\in\mathcal{G}_{1},G_{2}\in\mathcal{G}_{2}\}\cap\mathtt{chordal}\) and \(w:\mathcal{G}\to\mathbb{N}\) be a non-negative weight function. Then \(\widehat{\mathcal{G}}\subseteq_{\max\mathrm{rep}}\mathcal{G}\) of size at most \(2^{k-1}\) can be computed in time \(\mathcal{O}(2^{(\omega-1)k}3^{k}\cdot k^{\omega})\) when given \(\mathcal{G}_{1},\mathcal{G}_{2},\mathcal{G},w\)._ Proof.: Let \(\mathcal{S}=\{\mathtt{Sign}(G,X)\mid G\in\mathcal{G}\}\), \(\mathcal{S}_{1}=\{\mathtt{Sign}(G,X)\mid G\in\mathcal{G}_{1}\}\), and \(\mathcal{S}_{2}=\{\mathtt{Sign}(G,X)\mid G\in\mathcal{G}_{2}\}\). We claim that \(\mathcal{S}=\mathcal{S}_{1}\bullet\mathcal{S}_{2}\). For \(\ell\in\{1,2\}\) consider \(S_{\ell}\in\mathcal{S}_{\ell}\) and \(G_{\ell}\in\mathcal{G}_{\ell}\) such that \(\mathtt{Sign}(G_{\ell},X)=S_{\ell}\). Then \((G_{1},X)\oplus(G_{2},X)\) belongs to \(\mathcal{G}\) if and only if it is chordal which, by Lemma 4.18, is equivalent to the condition that \(S_{1}\cap S_{2}=\emptyset\) and \(S_{1}\cup S_{2}\) is independent in \(M_{B}\). This justifies the claim. By Lemma 4.19 the rank \(r\) of \(M_{B}\) is at most \(k-1\). We proceed similarly as in Lemma 4.23 but we use Lemma 3.21 to compute an \(r\)-representative family of size \(2^{k-1}\) for \(\mathcal{S}_{1}\bullet\mathcal{S}_{2}\) in time \(\mathcal{O}(2^{(\omega-1)k}3^{k}\cdot k^{\omega})\). Computing the signatures and the mapping \(\tau\colon\mathcal{S}\to\mathcal{G}\) takes time \(|\mathcal{G}|\cdot k^{\mathcal{O}(1)}=4^{k}\cdot k^{\mathcal{O}(1)}\) so the previous term is dominating in the running time. ### Dynamic programming We present a dynamic programming algorithm processing a tree decomposition. We begin with describing the states of the dynamic programming routine and their invariants. 
Although we do not store tables indexed by partial solutions but rather a family of partial solutions with weights (as in [38]), we keep the notion of'state' to refer to information stored at a node of a decomposition. States for DP.A state for a node \(t\in V(\mathbb{T})\) is a family of pairs \((\mathcal{H}_{t,X},h_{t,X})\) assigned to each \(X\subseteq\chi(t)\), where \(\mathcal{H}_{t,X}\subseteq\mathcal{G}_{X,G[X]}\) is a family of chordal graphs and \(h_{t,X}\colon\mathcal{H}_{t,X}\to\mathbb{N}\) is a non-negative weight function. Recall that \(V_{t}\) denotes the set of vertices occurring in the subtree rooted at \(t\in V(\mathbb{T})\) and \(U_{t}=V_{t}\setminus\chi(t)\). The intended meaning of \(H\subseteq\mathcal{H}_{t,X}\) and \(h_{t,X}(H)=s\) is that there should exists a set \(A\subseteq U_{t}\) of total weight \(s\) so that \((A,X)\) is a partial solution equivalent to \(H\). For each \(H\) we want to keep track of such a set \(A\) of maximal weight. For sets \(X,Y\) we write concisely \((A,B)\subseteq(X,Y)\) to denote \(A\subseteq X\), \(B\subseteq Y\). Correctness invariant.For \(t\in V(\mathbb{T})\) and \(X\subseteq\chi(t)\) we say that a pair \((\mathcal{H},h)\) satisfies the correctness invariant for \((t,X)\) if the following holds. 1. For each \(H\in\mathcal{H}\) there exists a set \(A\subseteq U_{t}\) so that \(\mathtt{Condense}(G[A\cup X],X)=H\), \(G[A\cup X]\) is chordal, and \(h(H)=w(A)\). 2. For each \((A,B)\subseteq(U_{t},V(G)\setminus V_{t})\) for which \(G[A\cup X\cup B]\) is chordal there exists \(H\in\mathcal{H}\) so that \((H,X)\oplus(G[B\cup X],X)\) is chordal and \(h(H)\geq w(A)\). As a consequence of this invariant, we have that for each pair \((A,B)\) from the second condition there exists \(\widehat{A}\subseteq U_{t}\) (a replacement for \(A\)) so that \(w(\widehat{A})\geq w(A)\), \(\mathtt{Condense}(G[\widehat{A}\cup X],X)\in\mathcal{H}\), and \((\mathtt{Condense}(G[\widehat{A}\cup X],X),X)\oplus(G[B\cup X],X])\) is chordal which implies that \(G[\widehat{A}\cup X\cup B]\) is chordal (Lemma 4.6). Since every graph in \(\mathcal{H}_{t,X}\) is condensed with respect to \(X\), Lemma 4.7 implies a bound on its size (we do not subtract one from \(2|X|\) to cover the case \(X=\emptyset\)). If \(t\in V(\mathbb{T})\), \(X\subseteq\chi(t)\), and \((\mathcal{H},h)\) satisfies the correctness invariant for \((t,X)\), then every graph in \(\mathcal{H}\) has at most \(2|X|\) vertices. We take advantage of the theory developed so far to keep the sizes of \(\mathcal{H}_{t,X}\) in check. Consider \(t\in V(\mathbb{T})\) and \(X\subseteq\chi(t)\). If a pair \((\mathcal{H},h)\) satisfies the correctness invariant for \((t,X)\) and \(\widehat{\mathcal{H}}\subseteq_{\mathrm{maxrep}}\mathcal{H}\) then \((\widehat{\mathcal{H}},h)\) also satisfies the correctness invariant for \((t,X)\). Proof.: The first condition is satisfied trivially as the quantification switches to a subset of \(\mathcal{H}\). Consider \((A,B)\subseteq(U_{t},V(G)\setminus V_{t}))\) from condition (2). By the assumption, there exists \(H\in\mathcal{H}\) so that \((H,X)\oplus(G[B\cup X],X)\) is chordal and \(h(H)\geq w(A)\). By the definition of a max-representative family there exists \(\widehat{H}\subseteq\widehat{\mathcal{H}}\) so that \((\widehat{H},X)\oplus(G[B\cup X],X)\) is chordal and \(h(\widehat{H})\geq h(H)\geq w(A)\). The claim follows. 
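Because every graph stored in a state is condensed with respect to \(X\), the size bound of Lemma 4.7, and hence Observation 4.25, applies to it. For concreteness, here is a minimal sketch of the \(\mathtt{Condense}\) operation of Definition 4.5 on an adjacency-dictionary representation; the helper names are ours and this is only an illustration, not the data structure an optimized implementation of the dynamic programming would use.

```python
def condense(adj, X):
    """Condense(G, X) as in Definition 4.5: contract every connected component
    of G - X into a single vertex and drop those contracted vertices whose
    neighborhood (a subset of X) is a clique, i.e. the simplicial ones."""
    X = set(X)
    comps, seen = [], set(X)
    for s in adj:                      # connected components of G - X
        if s in seen:
            continue
        comp, stack = set(), [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            comp.add(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    new_adj = {x: set(adj[x]) & X for x in X}   # start from G[X]
    for i, comp in enumerate(comps):
        nb = {w for v in comp for w in adj[v]} - comp   # N_G(C), contained in X
        if all(b in adj[a] for a in nb for b in nb if a != b):
            continue                    # simplicial component: removed
        name = ('comp', i)              # the contracted vertex
        new_adj[name] = set(nb)
        for x in nb:
            new_adj[x].add(name)
    return new_adj


# Example: X = {x, y} with x, y non-adjacent and one outside vertex seeing both;
# the contracted component is kept because {x, y} is not a clique.
adj = {'x': {'a'}, 'y': {'a'}, 'a': {'x', 'y'}}
assert ('comp', 0) in condense(adj, {'x', 'y'})
```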
Size invariant.For \(t\in V(\mathbb{T})\) and \(X\subseteq\chi(t)\) we say that \(\mathcal{H}\subseteq\mathcal{G}_{X,G[X]}\) satisfies the size invariant if \(|\mathcal{H}|\leq 2^{|X|}\). For a node \(t\in\mathbb{T}\) we say that its state satisfies the correctness or size invariant if all the pairs \((\mathcal{H}_{t,X},h_{t,X})_{X\subseteq\chi(t)}\) satisfies it. We move on to describing the dynamic programming routine for the three non-trivial types of nodes in a nice tree decomposition. In each case we begin with the algorithm, then prove the correctness invariant, and then analyze the running time together with the size invariant. [Introduce node] Let \((\mathbb{T},\chi)\) be a nice tree decomposition of \(G\) of width \(k\) and \(t\in V(\mathbb{T})\) be an introduce node with a child \(t^{\prime}\). Suppose that the state for \(t^{\prime}\) satisfying the correctness and size invariants is given. Then we can compute the state for \(t\) which satisfies the correctness and size invariants in time \(3^{k}k^{\mathcal{O}(1)}\). Proof.: We have \(\chi(t)=\chi(t^{\prime})\cup\{v\}\) for some vertex \(v\). Next, \(U_{t}=U_{t^{\prime}}\) and \(V_{t}=V_{t^{\prime}}\cup\{v\}\). Note that \(N_{G}(v)\cap U_{t}=\emptyset\). We define operation \(\mathtt{Introduce}(H,N)\) which takes a graph \(H\) and a set \(N\subseteq V(H)\) and inserts to \(H\) a new vertex \(v\) with neighborhood \(N\). If \(X\subseteq\chi(t)\) does not contain \(v\), we set \(\mathcal{H}_{t,X}=\mathcal{H}_{t^{\prime},X}\) and \(h_{t,X}=h_{t^{\prime},X}\). Consider \(X\subseteq\chi(t)\) containing \(v\); let \(X^{-}=X\setminus v\). If \(G[X]\) is not chordal we set \(\mathcal{H}_{t,X}=\emptyset\). Otherwise for each \(H^{\prime}\in\mathcal{H}_{t^{\prime},X^{-}}\) we compute \((H,X)=\mathtt{Introduce}(H^{\prime},N_{G}(v)\cap X^{-})\). If \(H\) is chordal we insert it to \(\mathcal{H}_{t,X}\) and set \(h_{t,X}(H)=h_{t^{\prime},X^{-}}(H^{\prime})\). Correctness.Suppose first that \(v\not\in X\). We check condition (1). Let \(H\in\mathcal{H}_{t,X}\). By the construction, \(H\in\mathcal{H}_{t^{\prime},X}\) and, by the inductive assumption, there is a set \(A\subseteq U_{t^{\prime}}=U_{t}\) so that \(\mathtt{Condense}(G[A\cup X],X)=H\), \(G[A\cup X]\) is chordal, and \(h_{t,X}(H)=h_{t^{\prime},X}(H)=w(A)\). To see condition (2), consider any \(A\subseteq U_{t}\) and \(B\subseteq V(G)\setminus V_{t}\). Then \(A\subseteq U_{t^{\prime}}\) and \(B\subseteq V(G)\setminus V_{t^{\prime}}\) so again the claim follows directly from the invariant for \(t^{\prime}\). Suppose now that \(v\in X\). We check condition (1). Every \(H\in\mathcal{H}_{t,X}\) is of the form \(H=\mathtt{Introduce}(H^{\prime},N_{G}(v)\cap X^{-})\) for some \(H^{\prime}\in\mathcal{H}_{t^{\prime},X^{-}}\). Moreover, \(H\) must be chordal by the construction. By the inductive assumption there is a set \(A\subseteq U_{t^{\prime}}=U_{t}\) so that \(\mathtt{Condense}(G[A\cup X^{-}],X^{-})=H^{\prime}\), \(G[A\cup X^{-}]\) is chordal, \(h_{t,X}(H)=h_{t^{\prime},X}(H^{\prime})=w(A)\). Since \(N_{G}(v)\cap A=\emptyset\) the vertex \(v\) does not affect which connected components of \(G[A]\) are simplicial in \(G[A\cup X]\). Therefore, the graph \(\mathtt{Condense}(G[A\cup X],X)\) can be obtained from \(\mathtt{Condense}(G[A\cup X^{-}],X^{-})\) by simply inserting the vertex \(v\), and so it equals \(H\). Finally, we need to check that \(G[A\cup X]\) is chordal. We apply criterion from Lemma 4.1 with respect to set \(X^{-}\). 
For every connected component \(C\) of \(G[A\cup X]-X^{-}\) the graph \(G[C\cup X^{-}]\) is either a subgraph of \(G[A\cup X^{-}]\) or it equals \(G[X]\)--in both cases it is chordal. The graph \(\mathtt{Condense}(G[A\cup X],X^{-})\) is either isomorphic to \(H\) or can be obtained from \(H\) by removal of \(v\) (when \(v\) is simplicial in \(H\)). Therefore this graph is also chordal, which implies that \(G[A\cup X]\) is chordal. We move on to condition (2) for the case \(v\in X\). Let \((A,B)\subseteq(U_{t},V(G)\setminus V_{t})\) be such that \(G[A\cup X\cup B]\) is chordal. Note that no such pair exists when \(G[X]\) is not chordal, so we only care about the case where \(G[X]\) is chordal. Let \(B^{+}=B\cup\{v\}\); then \((A,B^{+})\subseteq(U_{t^{\prime}},V(G)\setminus V_{t^{\prime}})\). Therefore, there exists \(H^{\prime}\in\mathcal{H}_{t^{\prime},X^{-}}\) so that \((H^{\prime},X^{-})\oplus(G[B^{+}\cup X^{-}],X^{-})\) is chordal and \(h_{t^{\prime},X^{-}}(H^{\prime})\geq w(A)\). Let \(H=\texttt{Introduce}(H^{\prime},N_{G}(v)\cap X^{-})\). By construction we have \(h_{t,X}(H)=h_{t^{\prime},X^{-}}(H^{\prime})\). Next, \(B\cup X=B^{+}\cup X^{-}\). Since \(N_{H}(v)\subseteq X\), the gluing product \((H,X)\oplus(G[B\cup X],X)\) is the same as \((H^{\prime},X^{-})\oplus(G[B\cup X],X^{-})\), which is chordal, as noted above. This also implies that \(H\) is chordal, so it gets inserted into \(\mathcal{H}_{t,X}\). This concludes the proof of the correctness invariant. Running time. If \(v\not\in X\) then \(\mathcal{H}_{t,X}=\mathcal{H}_{t^{\prime},X}\), and if \(v\in X\) then each graph from \(\mathcal{H}_{t^{\prime},X^{-}}\) is mapped to a single graph in \(\mathcal{H}_{t,X}\). In both cases the size invariant is preserved. For each \(X\subseteq\chi(t)\) we process at most \(2^{|X|}\) graphs, which in total gives \(\sum_{X\subseteq\chi(t)}2^{|X|}\leq 3^{k+1}\) graphs. By Observation 4.25 each graph has at most \(2(k+1)\) vertices and is processed in time \(k^{\mathcal{O}(1)}\). Before describing the routine for a forget node we prove that the condensing operation applied with respect to \(X\) and then to \(X\setminus v\) results in the same graph as when applying it directly to \(X\setminus v\). Let \(X\subseteq V(H)\) and \(v\in X\). Next, let \(H_{2}=\texttt{Condense}(H,X)\) and \(H_{3}=\texttt{Condense}(H_{2},X\setminus v)\). Then \(H_{3}=\texttt{Condense}(H,X\setminus v)\). Proof.: Let \(C\) be a connected component of \(H-X\) which is non-adjacent to \(v\). Then it gets contracted into a single vertex and is present in both \(H_{2},H_{3}\) as long as it is non-simplicial. Consider now the connected components of \(H-X\) which are adjacent to \(v\). Let \(S_{1},S_{2},\dots,S_{\ell}\) be those among them which are simplicial in \(H\) and \(C_{1},C_{2},\dots,C_{r}\) be those which are non-simplicial. Let \(V_{S,C,v}=\bigcup S_{i}\cup\bigcup C_{i}\cup\{v\}\) and \(V_{C,v}=\bigcup C_{i}\cup\{v\}\). If \(u\in N_{H}(S_{i})\) then, since \(v\in N_{H}(S_{i})\) and \(S_{i}\) is simplicial, we have \(u\in N_{H}(v)\). This implies that \(N_{H}(V_{S,C,v})=N_{H}(V_{C,v})\). In \(H_{2}\) all the components \(S_{i}\) are removed and \(C_{i}\) are contracted to vertices. In \(H_{3}\) the latter vertices are replaced with a new vertex \(v^{\prime}\) with neighborhood \(N_{H}(V_{C,v})\). In \(\texttt{Condense}(H,X\setminus v)\) we directly replace all components \(S_{i},C_{i}\) with a new vertex \(v^{\prime\prime}\) with neighborhood \(N_{H}(V_{S,C,v})\).
As observed above, these neighborhoods coincide, hence \(H_{3}=\texttt{Condense}(H,X\setminus v)\). [Forget node] Let \((\mathbb{T},\chi)\) be a nice tree decomposition of \(G\) of width \(k\) and \(t\in V(\mathbb{T})\) be a forget node with a child \(t^{\prime}\). Suppose that the state for \(t^{\prime}\) satisfying the correctness and size invariants is given. Then we can compute the state for \(t\) which satisfies the correctness and size invariants in time \((2^{\omega}+1)^{k}k^{\mathcal{O}(1)}\). Proof.: We have \(\chi(t^{\prime})=\chi(t)\cup\{v\}\) for some vertex \(v\in V_{t}\). Next, \(V_{t}=V_{t^{\prime}}\) and \(U_{t}=U_{t^{\prime}}\cup\{v\}\). Note that \(N_{G}(v)\subseteq V_{t}\). Let \(X\subseteq\chi(t)\), \(X^{+}=X\cup\{v\}\), and \(\mathcal{H}^{\prime}=\{\texttt{Condense}(H^{\prime},X)\mid H^{\prime}\in\mathcal{H}_{t^{\prime},X^{+}}\}\). For each \(H\in\mathcal{H}_{t^{\prime},X}\cup\mathcal{H}^{\prime}\) we define its weight \(h_{t,X}(H)\) as the maximum over \(h_{t^{\prime},X}(H)\) (considered only if \(H\in\mathcal{H}_{t^{\prime},X}\)) and \(\max\{h_{t^{\prime},X^{+}}(H^{\prime})+w(v)\mid H^{\prime}\in\mathcal{H}_{t^{\prime},X^{+}}\wedge\texttt{Condense}(H^{\prime},X)=H\}\). It follows from the construction that we take the maximum over a non-empty set. Finally, we compute \(\mathcal{H}_{t,X}\) with Lemma 4.23 as the max-representative family for \(\mathcal{H}_{t^{\prime},X}\cup\mathcal{H}^{\prime}\) with the weight function \(h_{t,X}\). Correctness. We show that the pair \((\mathcal{H}_{t^{\prime},X}\cup\mathcal{H}^{\prime},h_{t,X})\) satisfies the correctness invariant. Then the claim for \((\mathcal{H}_{t,X},h_{t,X})\) will follow from Lemma 4.26. We check condition (1). First consider \(H\in\mathcal{H}_{t^{\prime},X}\) satisfying \(h_{t,X}(H)=h_{t^{\prime},X}(H)\). By the inductive assumption there is a set \(A\subseteq U_{t^{\prime}}\subset U_{t}\) so that \(\texttt{Condense}(G[A\cup X],X)=H\), \(G[A\cup X]\) is chordal, and \(h_{t,X}(H)=w(A)\), as intended. Now consider \(H\in\mathcal{H}^{\prime}\) for which there exists \(H^{\prime}\in\mathcal{H}_{t^{\prime},X^{+}}\) so that \(H=\mathsf{Condense}(H^{\prime},X)\) and \(h_{t,X}(H)=h_{t^{\prime},X^{+}}(H^{\prime})+w(v)\). We know that there exists a set \(A\subseteq U_{t^{\prime}}=U_{t}\setminus v\) so that \(\mathsf{Condense}(G[A\cup X^{+}],X^{+})=H^{\prime}\), \(G[A\cup X^{+}]\) is chordal, and \(h_{t^{\prime},X^{+}}(H^{\prime})=w(A)\). The set \(A^{+}=A\cup\{v\}\subseteq U_{t}\) satisfies \(h_{t,X}(H)=w(A^{+})\) and \(G[A^{+}\cup X]=G[A\cup X^{+}]\) is chordal. From Lemma 4.28 we obtain that \(\mathsf{Condense}(G[A^{+}\cup X],X)=\mathsf{Condense}(H^{\prime},X)=H\). We move on to condition (2). Let \((A,B)\subseteq(U_{t},V(G)\setminus V_{t})\) be such that \(G[A\cup X\cup B]\) is chordal. First consider the case \(v\not\in A\). Then \((A,B)\subseteq(U_{t^{\prime}},V(G)\setminus V_{t^{\prime}})\) and there exists \(H\in\mathcal{H}_{t^{\prime},X}\) so that \((H,X)\oplus(G[B\cup X],X)\) is chordal and \(h_{t^{\prime},X}(H)\geq w(A)\). By construction \(h_{t,X}(H)\geq h_{t^{\prime},X}(H)\). Now suppose that \(v\in A\) and let \(A^{-}=A\setminus v\). We have \((A^{-},B)\subseteq(U_{t^{\prime}},V(G)\setminus V_{t^{\prime}})\) and so there exists \(H^{\prime}\in\mathcal{H}_{t^{\prime},X^{+}}\) so that \((H^{\prime},X^{+})\oplus(G[B\cup X^{+}],X^{+})\) is chordal and \(h_{t^{\prime},X^{+}}(H^{\prime})\geq w(A^{-})=w(A)-w(v)\). Let \(H=\mathsf{Condense}(H^{\prime},X)\in\mathcal{H}^{\prime}\).
Since \(N_{G}(v)\cap B=\emptyset\), we deduce that \((H^{\prime},X^{+})\oplus(G[B\cup X^{+}],X^{+})=(H^{\prime},X)\oplus(G[B\cup X],X)\). The graph \((H,X)\oplus(G[B\cup X],X)\) can be obtained from the one above by a series of edge contractions and possibly a vertex removal, so it is chordal as well. Finally, \(h_{t,X}(H)\geq h_{t^{\prime},X^{+}}(H^{\prime})+w(v)\geq w(A)\). Running time. Consider a non-empty \(X\subseteq\chi(t)\). By Observation 4.25 each graph \(H\in\mathcal{H}_{t^{\prime},X}\cup\mathcal{H}^{\prime}\) has at most \(2|X|\) vertices and is processed in time \(k^{\mathcal{O}(1)}\). By the assumption \(|\mathcal{H}_{t^{\prime},X}|\leq 2^{|X|}\) and \(|\mathcal{H}_{t^{\prime},X^{+}}|\leq 2^{|X|+1}\), so the input to Lemma 4.23 has size less than \(2^{|X|+2}\). The computation of the max-representative family \(\mathcal{H}_{t,X}\) takes time \(\mathcal{O}(2^{\omega\cdot|X|}\cdot|X|^{\omega})\). This family has size at most \(2^{|X|-1}\), so it satisfies the size invariant. The sum of the exponential terms in the total running time equals \(\sum_{X\subseteq\chi(t)}2^{\omega\cdot|X|}=(2^{\omega}+1)^{|\chi(t)|}\). [Join node] Let \((\mathbb{T},\chi)\) be a nice tree decomposition of \(G\) of width \(k\) and \(t\in V(\mathbb{T})\) be a join node with children \(t_{1},t_{2}\). Suppose that the states for \(t_{1},t_{2}\) satisfying the correctness and size invariants are given. Then we can compute the state for \(t\) which satisfies the correctness and size invariants in time \(\mathcal{O}\left((2^{\omega-1}\cdot 3+1)^{k}\cdot k^{\omega}\right)\). Proof.: We have \(\chi(t)=\chi(t_{1})=\chi(t_{2})\) and \(U_{t}=U_{t_{1}}\cup U_{t_{2}}\), where the union is disjoint. Consider \(X\subseteq\chi(t)\). Let \(\mathcal{H}^{\prime}=\{(H_{1},X)\oplus(H_{2},X)\mid H_{1}\in\mathcal{H}_{t_{1},X},H_{2}\in\mathcal{H}_{t_{2},X}\}\cap\mathsf{chordal}\). For each \(H\in\mathcal{H}^{\prime}\) we define its weight \(h_{t,X}(H)\) as \(\max\left(h_{t_{1},X}(H_{1})+h_{t_{2},X}(H_{2})\right)\) over \(\{(H_{1},X)\oplus(H_{2},X)=H\mid H_{1}\in\mathcal{H}_{t_{1},X},H_{2}\in\mathcal{H}_{t_{2},X}\}\). We compute \(\mathcal{H}_{t,X}\) with Lemma 4.24 as the max-representative family for \(\mathcal{H}^{\prime}\) with respect to the weight function \(h_{t,X}\). Correctness. We show the correctness invariant for the pair \((\mathcal{H}^{\prime},h_{t,X})\). Then the claim will follow from Lemma 4.26. Let us fix \(X\subseteq\chi(t)\). We check condition (1). Let \(H\in\mathcal{H}^{\prime}\) and \(H_{1}\in\mathcal{H}_{t_{1},X},H_{2}\in\mathcal{H}_{t_{2},X}\) be such that \(H=(H_{1},X)\oplus(H_{2},X)\) and \(h_{t,X}(H)=h_{t_{1},X}(H_{1})+h_{t_{2},X}(H_{2})\). By the assumption, there exist sets \(A_{1}\subseteq U_{t_{1}},A_{2}\subseteq U_{t_{2}}\) so that for \(i\in\{1,2\}\) it holds that \(\mathsf{Condense}(G[A_{i}\cup X],X)=H_{i}\), \(G[A_{i}\cup X]\) is chordal, and \(h_{t_{i},X}(H_{i})=w(A_{i})\). Observe that there are no edges between \(A_{1},A_{2}\), so \(G[A_{1}\cup A_{2}\cup X]=(G[A_{1}\cup X],X)\oplus(G[A_{2}\cup X],X)\). By Observation 4.8 we have \(\mathsf{Condense}(G[A_{1}\cup A_{2}\cup X],X)=\mathsf{Condense}(G[A_{1}\cup X],X)\oplus\mathsf{Condense}(G[A_{2}\cup X],X)=(H_{1},X)\oplus(H_{2},X)=H\). Since \(H\) is chordal by the construction, Lemma 4.9 implies that \(G[A_{1}\cup A_{2}\cup X]\) is chordal. It remains to check that \(w(A_{1}\cup A_{2})=w(A_{1})+w(A_{2})=h_{t,X}(H)\). Condition (2).
Let \((A,B)\subseteq(U_{t},V(G)\setminus V_{t})\) be such that \(G[A\cup X\cup B]\) is chordal, and let \(A_{1}=A\cap U_{t_{1}},A_{2}=A\cap U_{t_{2}}\). Note that \(B\cup A_{2}\subseteq V(G)\setminus V_{t_{1}}\) and there are no edges between \(U_{t_{1}}\) and \(U_{t_{2}}\). By the assumption, there exists \(H_{1}\in\mathcal{H}_{t_{1},X}\) so that \((H_{1},X)\oplus(G[B\cup A_{2}\cup X],X)\) is chordal and \(h_{t_{1},X}(H_{1})\geq w(A_{1})\). From condition (1) we obtain that there exists \(\widehat{A}_{1}\subseteq U_{t_{1}}\) so that \(\mathsf{Condense}(G[\widehat{A}_{1}\cup X],X)=H_{1}\), \(G[\widehat{A}_{1}\cup X]\) is chordal, and \(h_{t_{1},X}(H_{1})=w(\widehat{A}_{1})\). It follows from Lemma 4.6 that \(G[\widehat{A}_{1}\cup A_{2}\cup X\cup B]\) is chordal. Next, we consider \((A_{2},B\cup\widehat{A}_{1})\subseteq(U_{t_{2}},V(G)\setminus V_{t_{2}})\). There exists \(H_{2}\in\mathcal{H}_{t_{2},X}\) so that \((H_{2},X)\oplus(G[B\cup\widehat{A}_{1}\cup X],X)\) is chordal and \(h_{t_{2},X}(H_{2})\geq w(A_{2})\). Again from condition (1) we obtain \(\widehat{A}_{2}\subseteq U_{t_{2}}\) so that \(\mathsf{Condense}(G[\widehat{A}_{2}\cup X],X)=H_{2}\), \(G[\widehat{A}_{2}\cup X]\) is chordal, and \(h_{t_{2},X}(H_{2})=w(\widehat{A}_{2})\). As before, the graph \(G[\widehat{A}_{1}\cup\widehat{A}_{2}\cup X\cup B]\) is chordal. By Observation 4.8 we have that \(H=(H_{1},X)\oplus(H_{2},X)\) equals \(\mathsf{Condense}(G[\widehat{A}_{1}\cup\widehat{A}_{2}\cup X],X)\). Then \((H,X)\oplus(G[B\cup X],X)\) is chordal, which in particular means that \(H\) is chordal and belongs to \(\mathcal{H}^{\prime}\). Finally, we check that \(h_{t,X}(H)\geq h_{t_{1},X}(H_{1})+h_{t_{2},X}(H_{2})\geq w(A_{1})+w(A_{2})=w(A)\). Running time. Consider a non-empty \(X\subseteq\chi(t)\). By Observation 4.25 each graph \(H\in\mathcal{H}^{\prime}\) has at most \(2|X|\) vertices. By the assumption \(|\mathcal{H}_{t_{1},X}|,|\mathcal{H}_{t_{2},X}|\leq 2^{|X|}\), so we can use Lemma 4.24 to compute the max-representative family \(\mathcal{H}_{t,X}\) in time \(\mathcal{O}\left((2^{\omega-1}\cdot 3)^{|X|}\cdot|X|^{\omega}\right)\). This family has size at most \(2^{|X|-1}\), so it satisfies the size invariant. The sum of the exponential terms in the total running time equals \(\sum_{X\subseteq\chi(t)}(2^{\omega-1}\cdot 3)^{|X|}=(2^{\omega-1}\cdot 3+1)^{|\chi(t)|}\). Chordal Vertex Deletion can be solved in deterministic time \(\mathcal{O}(c^{k}k^{\omega+1}n)\) on \(n\)-vertex node-weighted graphs when a tree decomposition of width \(k\) is provided. The constant \(c\) equals \(2^{\omega-1}\cdot 3+1\). Proof.: Let \((\mathbb{T},\chi)\) be a tree decomposition of \(G\) of width \(k\). We can assume that this is a nice tree decomposition and \(|V(\mathbb{T})|=\mathcal{O}(nk)\). We fill the states for \(t\in V(\mathbb{T})\) in a standard bottom-up fashion, while maintaining the correctness and size invariants. When \(t\) is a base node, we have \(\chi(t)=V_{t}=\emptyset\) and it suffices to consider \(X=\emptyset\). The family \(\mathcal{H}_{t,\emptyset}\) then contains only an empty graph with weight zero. This satisfies both invariants trivially. We process the remaining types of nodes using Lemmas 4.27, 4.29, and 4.30. The bottleneck for the running time comes from processing a join node. After filling the state of the root node \(r\), we read the value of \(h_{r,\emptyset}(\bot)\), where \(\bot\) denotes the empty graph. We claim that this value equals the highest weight of a vertex set in \(G\) inducing a chordal graph.
We have \(\chi(r)=\emptyset\) and \(V_{r}=U_{r}=V(G)\). The family \(\mathcal{H}_{r,\emptyset}\) can contain only the empty graph. Let \(A\subseteq V(G)\) be a maximum-weight set inducing a chordal graph. From the correctness condition (2) we obtain that there exists \(H\in\mathcal{H}_{r,\emptyset}\) (clearly \(H=\bot\)) so that \(h_{r,\emptyset}(H)\geq w(A)\). From the correctness condition (1) for \(H=\bot\) we get that there exists \(\widehat{A}\subseteq V(G)\) so that \(G[\widehat{A}\cup\emptyset]\) is chordal and \(h_{r,\emptyset}(\bot)=w(\widehat{A})\). This implies that \(h_{r,\emptyset}(\bot)=w(A)\).
## 5 Interval Deletion
We switch our attention to Interval Vertex Deletion and show that in this case it is unlikely to achieve any speed-up over the existing \(2^{\mathcal{O}(\mathbf{tw}\log\mathbf{tw})}\cdot n\)-time algorithm. We prove Theorem 1 via a parameterized reduction from \(k\times k\) Permutation Clique, which is defined as follows.
\(k\times k\) Permutation Clique
**Input:** Graph \(G\) over the vertex set \([k]\times[k]\).
**Question:** Is there a permutation \(\pi\colon[k]\to[k]\) so that \((1,\pi(1)),(2,\pi(2)),\ldots,(k,\pi(k))\) forms a clique in \(G\)?
Permutation gadget. We will encode a permutation \(\pi\colon[k]\to[k]\) as a family of sets \(N_{1},N_{2},\ldots,N_{k}\) so that \(N_{i}=\pi([i])\) (i.e., \(N_{i}\) is the set of \(i\) numbers appearing first in \(\pi\)). First, we need a gadget to verify that such a family represents some permutation. **Definition 5.1**.: _For an integer \(k\), let \(Y_{k}\) be a graph on a vertex set \(\{y_{1},y_{2},\ldots,y_{k+2}\}\) so that \(\{y_{1},y_{2},\ldots,y_{k+1}\}\) induces a clique and \(y_{k+2}\) is adjacent only to \(y_{k+1}\)._ We need a simple observation that every linearly ordered family of sets can be represented by some permutation. **Lemma 5.2**.: _Let \(N_{1},\ldots,N_{\ell}\subseteq[k]\). Suppose that for each \(i,j\in[\ell]\) it holds that \(N_{i}\subseteq N_{j}\) or \(N_{j}\subseteq N_{i}\). Then there exists a permutation \(\pi\colon[k]\to[k]\) so that for each \(i\in[\ell]\) it holds that \(N_{i}=\pi([n_{i}])\) where \(n_{i}=|N_{i}|\)._ Proof.: Proof by induction on \(k\). For \(k=1\) the claim clearly holds, so consider \(k>1\). If a set \(N_{i}\) is empty then \(N_{i}=\pi(\emptyset)\) for any permutation, so we can assume that all the sets are non-empty. The family \((N_{i})_{i\in[\ell]}\) is linearly ordered, so after reordering the indices we can assume that \(N_{1}\subseteq N_{2}\subseteq\cdots\subseteq N_{\ell}\). Let \(e\) be an arbitrary element from \(N_{1}\) and \(\tau\colon[k]\setminus\{e\}\to[k-1]\) be an arbitrary bijection. We define \(N^{\prime}_{i}=\tau(N_{i}\setminus\{e\})\). Then \(N^{\prime}_{1}\subseteq N^{\prime}_{2}\subseteq\cdots\subseteq N^{\prime}_{\ell}\). By the inductive assumption, there exists a permutation \(\pi^{\prime}\colon[k-1]\to[k-1]\) such that \(N^{\prime}_{i}=\pi^{\prime}([n_{i}-1])\). We define \(\pi(1)=e\) and for \(i>1\) as follows: \(\pi(i)=\tau^{-1}(\pi^{\prime}(i-1))\). Then \(N_{i}=\{e\}\cup\tau^{-1}(N^{\prime}_{i})=\{\pi(1)\}\cup\tau^{-1}(\pi^{\prime}([n_{i}-1]))=\{\pi(1)\}\cup\pi([2,n_{i}])=\pi([n_{i}])\). We shall enforce a linear order on \(N_{1},\ldots,N_{k}\) by demanding that a particular supergraph of \(Y_{k}\) is interval. The corresponding interval model is depicted in Figure 2. **Lemma 5.3** (\(\star\)).: _Let \(N_{1},\ldots,N_{\ell}\subseteq[k]\)._
_Consider a graph \(G\) obtained from \(Y_{k}\) by inserting an independent set of vertices \(x_{1},\ldots,x_{\ell}\) so that \(N_{G}(x_{i})=\{y_{j}\mid j\in N_{i}\}\). Then \(G\) is interval if and only if there exists a permutation \(\pi\colon[k]\to[k]\) so that for each \(i\in[\ell]\) it holds that \(N_{i}=\pi([n_{i}])\) where \(n_{i}=|N_{i}|\)._ Proof.: Suppose there is no such permutation. By Lemma 5.2 there are \(i,j\in[\ell]\) so that neither \(N_{i}\subseteq N_{j}\) nor \(N_{j}\subseteq N_{i}\). Fix \(p_{i}\in N_{G}(x_{i})\setminus N_{G}(x_{j})\), and \(p_{j}\in N_{G}(x_{j})\setminus N_{G}(x_{i})\). We claim that \((x_{i},x_{j},y_{k+2})\) forms an AT in \(G\). Indeed, \((x_{i},p_{i},y_{k+1},y_{k+2})\) avoids \(N_{G}[x_{j}]\), \((x_{j},p_{j},y_{k+1},y_{k+2})\) avoids \(N_{G}[x_{i}]\), and \((x_{i},p_{i},p_{j},x_{j})\) avoids \(N_{G}[y_{k+2}]=\{y_{k+1},y_{k+2}\}\). Since \(G\) contains an AT, it is not interval. Now suppose that a permutation \(\pi\) satisfying the conditions of the lemma exists. We construct an interval model of \(G\) (see Figure 2). Let \(\varepsilon=\frac{1}{3\ell}\). A vertex \(y_{i}\in V(Y_{k})\) where \(i\in[k]\) is assigned the interval \([\pi^{-1}(i),k+2]\). Vertex \(y_{k+1}\) is assigned the interval \([k+1,k+4]\) and vertex \(y_{k+2}\) is assigned the interval \([k+3,k+4]\). For \(i\in[\ell]\) we consider the vertex \(x_{i}\) with neighborhood specified by \(N_{i}=\pi([n_{i}])\). Its interval is given as \([n_{i}+\frac{i-1}{\ell}+\varepsilon,n_{i}+\frac{i-1}{\ell}+2\varepsilon]\). Note that the intervals of distinct \(x_{i},x_{j}\) are disjoint, as intended. The interval of \(x_{i}\) is contained in \((n_{i},n_{i}+1)\), so it intersects an interval of the form \([\pi^{-1}(j),k+2]\) exactly when \(\pi^{-1}(j)\leq n_{i}\). The latter inequality is equivalent to \(j\in\pi([n_{i}])=N_{i}\). The lemma follows.
Figure 2: Illustration for Lemma 5.3. The intervals for vertices of \(Y_{4}\) are blank, ordered from bottom to top. They encode permutation \((2,4,3,1)\). The black intervals represent vertices \(x_{1},x_{2},x_{3},x_{4},x_{5}\) with neighborhoods encoding sets \(\{2\}\), \(\{2,4\}\) (twice), \(\{2,4,3\}\), and \(\{2,4,3,1\}\).
Choice gadget. We need to verify that \((i,\pi(i))(j,\pi(j))\in E(G)\) for each \(1\leq i<j\leq k\). As \(\pi(i)\) is the only element in \(N_{i}\setminus N_{i-1}\), the information whether \((i,\pi(i))(j,\pi(j))\in E(G)\) can be extracted from the tuple \((N_{i-1},N_{i},N_{j-1},N_{j})\). We construct a gadget that forces a solution to select one such valid tuple. We use the following convention to describe the gadgets. When \(P\) is a graph with a distinguished vertex named \(v\) and a graph \(H\) is constructed using explicit vertex-disjoint copies of the graph \(P\), referred to as \(P_{1},P_{2},\ldots,P_{\ell}\), we refer to the copy of \(v\) within the subgraph \(P_{i}\) as \(P_{i}[v]\). We construct the choice gadget as a path-like structure consisting of blocks, each equipped with four special vertices. These are the only vertices that later get connected to the permutation gadget. On the intuitive level, a solution should choose one block, leave its special vertices untouched, and remove the remaining special vertices. See Figure 3 for an illustration. The graph \(P\) is obtained from a path \((u_{1},u_{2},\ldots,u_{9})\) by appending to \(u_{2}\) two subdivided edges, one subdivided edge to \(u_{7}\), and inserting the edge \(u_{4}u_{8}\). The choice gadget of order \(s\) is a graph constructed as follows.
We begin with a vertex set \(\bigcup_{i=1}^{s}\{v_{i}^{1},v_{i}^{2},v_{i}^{3}\}\cup\{v_{\mathrm{left}},v_{\mathrm{right}}\}\). For each pair \((x,y)\) of the form \((v_{i}^{1},v_{i}^{2}),(v_{i}^{2},v_{i}^{3}),(v_{i}^{3},v_{i}^{1}),(v_{i}^{3},v_{i+1}^{1})\), as well as for \((v_{\mathrm{left}},v_{1}^{1})\) and \((v_{s}^{3},v_{\mathrm{right}})\), we create two subdivided edges between \(x\) and \(y\). We refer to the subgraph given by the two subdivided edges between \(x,y\) as \(\langle x,y\rangle\). We refer to the union of \(\langle v_{i}^{1},v_{i}^{2}\rangle,\langle v_{i}^{2},v_{i}^{3}\rangle,\langle v_{i}^{3},v_{i}^{1}\rangle\) as \(Q_{i}\). Next, for each \(i\in[s]\) we create four copies of the graph \(P\), denoted \(P_{i}^{1},P_{i}^{2},P_{i}^{3},P_{i}^{4}\). We insert edges between \(v_{i}^{2}\) and \(P_{i}^{1}[u_{1}],P_{i}^{2}[u_{1}],P_{i}^{3}[u_{1}],P_{i}^{4}[u_{1}]\). We refer to the vertices \(P_{i}^{\alpha}[u_{8}]\), \(P_{i}^{\alpha}[u_{9}]\), \(\alpha\in[4]\), as respectively \(h_{i}^{\alpha}\), \(g_{i}^{\alpha}\).
Figure 3: Top: the choice gadget \(H_{5}\) with the subgraph \(Q_{1}\) highlighted in green. The copies of \(P\) are sketched symbolically with dashed lines and the squares represent vertices \(g_{i}^{\alpha}\). The red disks and squares represent a solution constructed in Lemma 5.6(2). This solution 'chooses' \(i=2\), leaves untouched the four vertices \(g_{2}^{\alpha}\), and removes \(h_{2}^{\alpha}\) as well as \(g_{i}^{\alpha}\) for \(i\neq 2\). Bottom left: the graph \(P\) and vertices named \(h,g\). Two vertex-disjoint non-interval subgraphs of \(P\) have green edges. Bottom right: a closer look at the first two blocks of \(H_{5}\) with two copies of \(P\) drawn in detail. The subgraph highlighted in green witnesses that if a minimum-size solution removes \(g_{i}^{\alpha}\) for at least one \(\alpha\in[4]\) then it must also remove \(v_{i}^{2}\), which is exploited in Lemma 5.6(3).
The choice gadget is designed to enforce a special structure of minimum-size interval deletion sets. Let \(H_{s}\) be the choice gadget of order \(s\) and \(X\) be an interval deletion set in \(H_{s}\). Then for each \(i\in[s]\) and \(\alpha\in[4]\) it holds that \(|V(P_{i}^{\alpha})\cap X|\geq 2\) and \(|V(Q_{i})\cap X|\geq 2\). Proof.: The graph \(P\) contains two vertex-disjoint non-interval subgraphs, as witnessed by ATs: one induced by \(u_{2},u_{3},u_{4}\) and the two subdivided edges appended to \(u_{2}\), the second one induced by \(u_{5},u_{6},u_{7},u_{8},u_{9}\) and the subdivided edge appended to \(u_{7}\) (see Figure 3). Therefore any copy of \(P\) in \(H_{s}\) must contain at least two vertices from \(X\). Next, observe that no single vertex intersects all three holes in \(Q_{i}\). Therefore any interval deletion set must contain at least two vertices from \(V(Q_{i})\). We prove several properties of the choice gadget which are analogous to the properties of the gadget used by Pilipczuk in the lower bound for Planar Vertex Deletion [60]. However, in that construction every block has only one special vertex with edges leaving the gadget, while in our case there are four special vertices. We also need to ensure that when the special vertices in some block are not being removed then a solution can remove their neighbors in the gadget. (Inserting a planar graph attached to a single vertex of \(G\) does not affect planarity of \(G\), but the analogous property does not hold for the class of interval graphs.) The special structure of the graph \(P\) allows us to resolve these two issues.
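For concreteness, the following sketch (our own illustration, not code from the paper) builds the graph \(P\) and the choice gadget \(H_{s}\) as adjacency dictionaries, following the description above; a subdivided edge between \(x\) and \(y\) is taken to be a path of length two through a fresh vertex, and a subdivided edge appended to a vertex is a pendant path of length two.

```python
from collections import defaultdict

def add_edge(adj, a, b):
    adj[a].add(b)
    adj[b].add(a)

def add_subdivided_edge(adj, a, b, tag):
    mid = ("sub", a, b, tag)            # fresh internal vertex of the subdivided edge
    add_edge(adj, a, mid)
    add_edge(adj, mid, b)

def build_P(adj, cid):
    """One copy of P: a path u1..u9, two subdivided pendant edges at u2,
    one at u7, and the extra edge u4u8. Returns the u-vertices of this copy."""
    u = {j: (cid, "u", j) for j in range(1, 10)}
    for j in range(1, 9):
        add_edge(adj, u[j], u[j + 1])
    add_subdivided_edge(adj, u[2], (cid, "leaf", 1), 0)
    add_subdivided_edge(adj, u[2], (cid, "leaf", 2), 0)
    add_subdivided_edge(adj, u[7], (cid, "leaf", 3), 0)
    add_edge(adj, u[4], u[8])
    return u

def build_choice_gadget(s):
    """Choice gadget H_s; returns its adjacency map and the special
    vertices (h_i^alpha, g_i^alpha) = (copy of u8, copy of u9)."""
    adj = defaultdict(set)
    v = {(i, a): ("v", i, a) for i in range(1, s + 1) for a in (1, 2, 3)}
    v_left, v_right = "v_left", "v_right"
    pairs = [(v_left, v[(1, 1)]), (v[(s, 3)], v_right)]
    for i in range(1, s + 1):
        pairs += [(v[(i, 1)], v[(i, 2)]), (v[(i, 2)], v[(i, 3)]), (v[(i, 3)], v[(i, 1)])]
        if i < s:
            pairs.append((v[(i, 3)], v[(i + 1, 1)]))
    for x, y in pairs:                       # two subdivided edges per pair form a hole of length four
        add_subdivided_edge(adj, x, y, 0)
        add_subdivided_edge(adj, x, y, 1)
    special = {}
    for i in range(1, s + 1):
        for alpha in range(1, 5):
            u = build_P(adj, ("P", i, alpha))
            add_edge(adj, v[(i, 2)], u[1])   # attach this copy of P via its vertex u1
            special[(i, alpha)] = (u[8], u[9])
    return adj, special
```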
**Lemma 5.6** (\(\star\)).: _Let \(H_{s}\) be the choice gadget of order \(s\)._ 1. _The minimal size of an interval deletion set in_ \(H_{s}\) _is_ \(10s\)_._ 2. _For every_ \(i\in[s]\) _there exists a minimum-size interval deletion set_ \(X\) _in_ \(H_{s}\) _such that_ \(\{h_{i}^{1},h_{i}^{2},h_{i}^{3},h_{i}^{4}\}\subseteq X\) _and_ \(\{g_{j}^{1},g_{j}^{2},g_{j}^{3},g_{j}^{4}\}\subseteq X\) _for each_ \(j\neq i\)_._ 3. _For every minimum-size interval deletion set_ \(X\) _in_ \(H_{s}\) _there is_ \(i\in[s]\) _such that_ \(\{g_{i}^{1},g_{i}^{2},g_{i}^{3},g_{i}^{4}\}\cap X=\emptyset\)_._ 4. _If_ \(s\leq 2^{k}\) _then_ \(\textbf{td}(H_{s})\leq\textbf{td}(H_{1})+k\)_, where_ \(\textbf{td}(G)\) _stands for the treedepth of_ \(G\)_._ Proof.: Part (1). All the \(4s\) copies of \(P\), as well as subgraphs \(Q_{1},\ldots,Q_{s}\), are vertex-disjoint in \(H_{s}\). The lower bound follows from Lemma 5.5 whereas the upper bound is a consequence of the next part of the lemma. Part (2). The construction is depicted on Figure 3. The set \(X\) comprises: * \(v_{j}^{1},v_{j}^{2}\) for \(j<i\), * \(v_{i}^{1},v_{i}^{3}\), * \(v_{j}^{2},v_{j}^{3}\) for \(j>i\), * \(P_{j}^{\alpha}[u_{4}],P_{j}^{\alpha}[u_{9}]\) for \(\alpha\in[4]\), \(j\neq i\), * \(P_{i}^{\alpha}[u_{2}],P_{i}^{\alpha}[u_{8}]\) for \(\alpha\in[4]\). One can easily verify that \(|X|=10s\) and each connected component of \(H_{s}-X\) is either a path or a star with at most two subdivided edges. These graphs are interval. Part (3). Suppose that there exists an interval deletion set \(X\) of size \(10s\) such that for each \(i\in[s]\) there is \(\alpha\in[4]\) so that \(g_{i}^{\alpha}\in X\). From Lemma 5.5 we know that \(|X\cap V(P_{i}^{\alpha})|\geq 2\). From a counting argument we infer that in fact it must be \(|X\cap V(P_{i}^{\alpha})|=2\). The vertices \(P_{i}^{\alpha}[u_{4}],P_{i}^{\alpha}[u_{5}],P_{i}^{\alpha}[u_{6}],P_{i}^{\alpha }[u_{7}],P_{i}^{\alpha}[u_{8}]\) induce \(C_{5}\) so one of them must belong to \(X\). Together with \(g_{i}^{\alpha}=P_{i}^{\alpha}[u_{9}]\) these are the two vertices of \(X\cap V(P_{i}^{\alpha})\). The vertices \(v_{i}^{2},P_{i}^{\alpha}[u_{1}],P_{i}^{\alpha}[u_{2}]\) and the two subdivided edges appended to \(P_{i}^{\alpha}[u_{2}]\) induce a graph with an AT (see Figure 3). As no more vertices from \(V(P_{i}^{\alpha})\) belong to \(X\) apart from the two described above, it must be \(v_{i}^{2}\in X\). This argument works for every \(i\in[s]\). We count the already allocated vertices: \[\left|\bigcup_{i\in[s],\alpha\in[4]}(X\cap V(P_{i}^{\alpha}))\right|+|\{v_{i}^{2} \mid i\in[s]\}|=8s+s=9s.\] Since \(|X|=10s\), there are exactly \(s\) vertices remaining in \(X\). But there are \(s+1\) vertex-disjoint holes yet to be hit: \(\langle v_{\mathrm{left}},v_{1}^{1}\rangle,\langle v_{1}^{3},v_{2}^{1}\rangle, \langle v_{2}^{3},v_{3}^{1}\rangle,\ldots,\langle v_{s}^{3},v_{\mathrm{right}}\rangle\). This means that \(X\) cannot be an interval deletion set in \(H_{s}\). Part (4). Clearly \(\mathbf{td}(H_{s})\leq\mathbf{td}(H_{s+1})\) so it suffices to prove the claim for \(s=2^{k}\) by induction on \(k\). For \(k=0\) we get equality. For \(k>0\) there exists a vertex \(v\in V(H_{2^{k}})\) so that \(H_{2^{k}}-v\) has two connected components, each being a subgraph of \(H_{2^{k-1}}\). By the definition of treedepth we get \(\mathbf{td}(H_{2^{k}})\leq\mathbf{td}(H_{2^{k-1}})+1\). Lokshtanov et al. [55] proved that \(k\times k\) Permutation Clique cannot be solved in time \(2^{o(k\log k)}\) assuming ETH. 
According to the reduction below, this also rules out a running time of the form \(2^{o(\mathbf{td}\log\mathbf{td})}\cdot n^{\mathcal{O}(1)}\) for Interval Vertex Deletion, where \(\mathbf{td}\) is the treedepth of the input graph. As \(\mathbf{tw}(G)\leq\mathbf{td}(G)\), this entails the same hardness for treewidth, which proves Theorem 1. There is an algorithm that, given an instance \((G,k)\) of \(k\times k\) Permutation Clique, runs in time \(2^{\mathcal{O}(k)}\) and returns an equivalent unweighted instance \((H,p)\) of Interval Vertex Deletion such that \(|V(H)|=2^{\mathcal{O}(k)}\) and \(\mathbf{td}(H)=\mathcal{O}(k)\). Proof.: For \(1\leq i<j\leq k\) and \(x\neq y\in[k]\) let \(\mathcal{S}_{i,x,j,y}\) be the family of tuples \((S_{1},S_{2},S_{3},S_{4})\) of subsets of \([k]\) satisfying:
* \(S_{1}\subset S_{2}\subseteq S_{3}\subset S_{4}\),
* \(|S_{1}|=i-1\),
* \(S_{2}\setminus S_{1}=\{x\}\),
* \(|S_{3}|=j-1\),
* \(S_{4}\setminus S_{3}=\{y\}\).
Furthermore, for \(1\leq i<j\leq k\), let \(\mathcal{S}_{i,j}\) be the union of \(\mathcal{S}_{i,x,j,y}\) over all pairs \(x\neq y\in[k]\) such that \((i,x)(j,y)\in E(G)\). Let \(s_{i,j}=|\mathcal{S}_{i,j}|\) and \(\rho_{i,j}\colon[s_{i,j}]\to\mathcal{S}_{i,j}\) be an arbitrary bijection. Clearly \(s_{i,j}\leq 4^{k}k^{2}\). The graph \(H\) consists of a permutation gadget \(Y_{k}\) and, for each \(1\leq i<j\leq k\), a choice gadget \(C_{i,j}\) of order \(s_{i,j}\). For \(S\subseteq[k]\) we use the shorthand \(Y_{k}[S]=\{y_{i}\mid i\in S\}\). For \(\ell\in[s_{i,j}]\) and \((S_{1},S_{2},S_{3},S_{4})=\rho_{i,j}(\ell)\) the vertices \(C_{i,j}[g_{\ell}^{1}]\), \(C_{i,j}[g_{\ell}^{2}]\), \(C_{i,j}[g_{\ell}^{3}]\), \(C_{i,j}[g_{\ell}^{4}]\) get connected to the vertex sets \(Y_{k}[S_{1}],Y_{k}[S_{2}],Y_{k}[S_{3}],Y_{k}[S_{4}]\), respectively. This finishes the construction of \(H\). The number of vertices in \(H\) is clearly \(2^{\mathcal{O}(k)}\) and the construction can be performed in time polynomial in the size of \(H\). We set \(p=10\cdot\sum_{1\leq i<j\leq k}s_{i,j}\). If \((G,k)\) admits a solution, then \(H\) has an interval deletion set of size \(p\). Proof.: Let \(\pi\colon[k]\to[k]\) be a permutation encoding a clique in \(G\). By the construction, for each \(1\leq i<j\leq k\) we have \((\pi([i-1]),\pi([i]),\pi([j-1]),\pi([j]))\in\mathcal{S}_{i,j}\). Let \(\ell\in[s_{i,j}]\) be the index mapped to this tuple by \(\rho_{i,j}\). By Lemma 5.6(2) the choice gadget \(C_{i,j}\) has an interval deletion set \(X_{i,j}\subseteq V(C_{i,j})\) of size \(10s_{i,j}\) such that \(\{C_{i,j}[h_{\ell}^{1}],C_{i,j}[h_{\ell}^{2}],C_{i,j}[h_{\ell}^{3}],C_{i,j}[h_{\ell}^{4}]\}\subseteq X_{i,j}\) and \(\{C_{i,j}[g_{r}^{1}],C_{i,j}[g_{r}^{2}],C_{i,j}[g_{r}^{3}],C_{i,j}[g_{r}^{4}]\}\subseteq X_{i,j}\) for each \(r\neq\ell\). In other words, \(X_{i,j}\) contains all vertices in \(C_{i,j}\) which are adjacent to \(Y_{k}\) except for the \(C_{i,j}\)-copies of \(g_{\ell}^{1},g_{\ell}^{2},g_{\ell}^{3},g_{\ell}^{4}\), and \(X_{i,j}\) also contains the neighbors of \(C_{i,j}[g_{\ell}^{1}],C_{i,j}[g_{\ell}^{2}],C_{i,j}[g_{\ell}^{3}],C_{i,j}[g_{\ell}^{4}]\) in \(C_{i,j}\). We set \(X=\bigcup_{1\leq i<j\leq k}X_{i,j}\). Then the only connected component of \(H-X\) which is not a connected component of any \(C_{i,j}-X_{i,j}\) is given by \(Y_{k}\) together with an independent set of the vertices described above. The neighborhood of each such vertex in \(Y_{k}\) is of the form \(Y_{k}[\pi([k^{\prime}])]\) for some \(0\leq k^{\prime}\leq k\).
By Lemma 5.3 this component is an interval graph. This shows that \(X\) is indeed an interval deletion set. If \(H\) has an interval deletion set of size at most \(p\), then \((G,k)\) admits a solution. Proof.: Let \(X\) be an interval deletion set in \(H\). By Lemma 5.6(1) a minimum-size interval deletion set in \(C_{i,j}\) has size \(10s_{i,j}\). As the choice gadgets are vertex-disjoint subgraphs of \(H\), the set \(X\) must contain exactly \(10s_{i,j}\) vertices from \(V(C_{i,j})\). This also implies that \(V(Y_{k})\cap X=\emptyset\). Let \(X_{i,j}=V(C_{i,j})\cap X\). By Lemma 5.6(3) there exists \(\ell\in[s_{i,j}]\) such that \(\{C_{i,j}[g_{\ell}^{1}],C_{i,j}[g_{\ell}^{2}],C_{i,j}[g_{\ell}^{3}],C_{i,j}[g_{\ell}^{4}]\}\cap X_{i,j}=\emptyset\). Therefore for each pair \((i,j)\) there is a tuple \((S_{i,j}^{1},S_{i,j}^{2},S_{i,j}^{3},S_{i,j}^{4})\in\mathcal{S}_{i,j}\) so that vertices from \(C_{i,j}\) with neighborhoods \(Y_{k}[S_{i,j}^{1}],Y_{k}[S_{i,j}^{2}],Y_{k}[S_{i,j}^{3}],Y_{k}[S_{i,j}^{4}]\) are present in \(H-X\). By Lemma 5.3 there exists a single permutation \(\pi\colon[k]\to[k]\) so that each set \(S_{i,j}^{\alpha}\) is of the form \(\pi([|S_{i,j}^{\alpha}|])\). By the definition of the family \(\mathcal{S}_{i,j}\) this implies that \((i,\pi(i))(j,\pi(j))\in E(G)\) for each pair \((i,j)\). Hence there is a \(k\)-clique in \(G\). The treedepth of \(H\) is at most \(|Y_{k}|=k+2\) plus \(\mathbf{td}(H-Y_{k})\), which equals the maximum of \(\mathbf{td}(C_{i,j})\) over all employed choice gadgets \(C_{i,j}\). As \(s_{i,j}\leq 4^{k}k^{2}\), Lemma 5.6(4) implies that \(\mathbf{td}(C_{i,j})\leq 2k+2\log_{2}k+\mathcal{O}(1)\). This concludes the proof of the proposition.
### Upper bound
Saitoh et al. [66] have presented an algorithm for Interval Edge Deletion (and for several related graph classes) with running time \(2^{\mathcal{O}(\mathbf{tw}\log\mathbf{tw})}\cdot n\) and stated that they expect it to also work for the vertex-deletion variant [66, §6]. We briefly describe their approach and justify that vertex deletion can indeed be incorporated. Instead of working with real-line interval models, one can represent an interval model in an abstract way. For a set \(X\) its _interval representation_ is a linear order \(\pi\) over the set \(LR_{X}=L_{X}\cup R_{X}\cup\{\bot,\top\}\) where \(L_{X}=\{\ell_{x}\mid x\in X\}\), \(R_{X}=\{r_{x}\mid x\in X\}\), such that \(\bot<_{\pi}\ell_{x}<_{\pi}r_{x}<_{\pi}\top\) for each \(x\in X\). An interval is a pair of elements from \(LR_{X}\). The interval graph \(G_{\pi}\) of an interval representation \(\pi\) over \(V\) is defined by adding an edge \(uv\) whenever \((\ell_{u},r_{u})\) and \((\ell_{v},r_{v})\) intersect in \(\pi\). It is clear that a graph \(G\) is interval if and only if there exists an interval representation \(\pi\) over \(V\) such that \(G=G_{\pi}\). Let \((\mathbb{T},\chi)\) be a rooted tree decomposition of \(G\). The state of \(t\in V(\mathbb{T})\) is a set of triples \((\pi,I,c)\) where \(\pi\) is an interval representation over \(\chi(t)\) such that \(G_{\pi}\) is a subgraph of \(G[\chi(t)]\), \(I\) is a set of intervals from \(LR_{\chi(t)}\), and \(c\in\mathbb{N}\).
For each interval representation \(\tau\) over \(V_{t}\) for which \(G_{\tau}\) is a subgraph of \(G[V_{t}]\), we define its abstraction as a triple \((\pi,I,c)\) where \(\pi\) is the restriction of \(\tau\) to \(\chi(t)\), for each connected component \(C\) of \(G[U_{t}]\) there is an interval \((\ell_{c},r_{c})\in I\) so that \((\ell_{c},r_{c})\) is an inclusion-wise minimal interval from \(LR_{\chi(t)}\) that contains all the intervals from \(C\), and \(c=|E(G[V_{t}])-E(G_{\tau})-E(G[\chi(t)])|\) counts the number of edges from \(G[V_{t}]\) with at most one endpoint in \(\chi(t)\) which are not present in \(G_{\tau}\). We write \(I\sqsubseteq_{\pi}I^{\prime}\) if every interval from \(I\) is contained in some interval from \(I^{\prime}\). We say that \((\pi,I,c)\) dominates \((\pi,I^{\prime},c^{\prime})\) if \(I\sqsubseteq_{\pi}I^{\prime}\) and \(c\leq c^{\prime}\). Saitoh et al. show that when \((\pi,I,c)\) dominates \((\pi,I^{\prime},c^{\prime})\) there is no need to store \((\pi,I^{\prime},c^{\prime})\) in the state for \(t\). To see this, consider an interval representation \(\tau^{\prime}\) over \(V(G)\) which corresponds to some valid solution, so that \((\pi,I^{\prime},c^{\prime})\) is an abstraction of \(\tau^{\prime}\) restricted to \(V_{t}\). Since there are no edges between \(U_{t}\) and \(V(G)\setminus V_{t}\), for every \(v\in V(G)\setminus V_{t}\) and connected component \(C\) of \(G[U_{t}]\) the interval of \(v\) must be disjoint from the interval spanned by \(C\) with respect to \(\tau^{\prime}\). Because \(I\sqsubseteq_{\pi}I^{\prime}\), we can modify \(\tau^{\prime}\) to obtain a new interval representation \(\tau\) over \(V(G)\) which coincides on \(U_{t}\) with some partial solution whose abstraction is \((\pi,I,c)\). As \(c\leq c^{\prime}\), the number of edges deleted with respect to \(\tau\) does not grow compared to \(\tau^{\prime}\). The last observation is that when \(|\chi(t)|=k\) and no triple stored at \(t\) dominates another triple, then their number is \(2^{\mathcal{O}(k\log k)}\). In order to adapt the algorithm for vertex deletion, we can store tuples \((X,\pi,I,c)\) where \(X\subseteq\chi(t)\) is the set of vertices which are not deleted, \(\pi\) is an interval representation over \(X\) so that \(G_{\pi}=G[X]\), \(I\) is a set of intervals from \(LR_{X}\), and \(c\in\mathbb{N}\). Now a partial solution is a triple \((A,X,\tau)\) where \(A\subseteq U_{t},X\subseteq\chi(t)\), and \(\tau\) is an interval representation over \(A\cup X\) so that \(G_{\tau}=G[A\cup X]\). The abstraction of \((A,X,\tau)\) is \((X,\pi,I,c)\) where \(\pi\) is the restriction of \(\tau\) to \(X\), for each connected component \(C\) of \(G[A]\) there is an interval \((\ell_{c},r_{c})\in I\) so that \((\ell_{c},r_{c})\) is an inclusion-wise minimal interval from \(LR_{X}\) that contains all the intervals from \(C\), and \(c=|U_{t}|-|A|\) counts the number of vertices deleted so far. As before, we say that \((X,\pi,I,c)\) dominates \((X,\pi,I^{\prime},c^{\prime})\) if \(I\sqsubseteq_{\pi}I^{\prime}\) and \(c\leq c^{\prime}\). By the same argument as before, we can neglect the tuples which are dominated and bound the number of tuples stored for a pair \((t,X)\) by \(2^{\mathcal{O}(k\log k)}\). Since there are \(2^{k}\) choices for \(X\), the general upper bound follows. It is easy to see that both algorithms can also be extended to incorporate weights.
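The pruning rule used in both variants reduces to a pairwise dominance test between stored records sharing the same endpoint order \(\pi\). A minimal sketch of this test (our own illustration; the encoding of \(\pi\) as a position map and the record layout are assumptions) could look as follows.

```python
def dominates(pi_pos, I1, c1, I2, c2):
    """Return True if the record (pi, I1, c1) dominates (pi, I2, c2):
    c1 <= c2 and every interval of I1 is contained, with respect to the
    endpoint order pi, in some interval of I2. `pi_pos` maps every endpoint
    symbol (e.g. ('l', x) or ('r', x)) to its position in pi; intervals are
    (left, right) pairs of such symbols."""
    def contained(a, b):            # interval a lies inside interval b
        return pi_pos[b[0]] <= pi_pos[a[0]] and pi_pos[a[1]] <= pi_pos[b[1]]
    return c1 <= c2 and all(any(contained(a, b) for b in I2) for a in I1)

def prune(records):
    """Keep a maximal set of non-dominated records (quadratic-time sketch);
    all records are assumed to share the same endpoint order pi."""
    kept = []
    for pi_pos, I, c in records:
        if not any(dominates(kp, ki, kc, I, c) for kp, ki, kc in kept):
            kept = [(kp, ki, kc) for kp, ki, kc in kept
                    if not dominates(pi_pos, I, c, kp, ki, kc)]
            kept.append((pi_pos, I, c))
    return kept
```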
## 6 Conclusion and open problems We have obtained ETH-tight bounds for vertex-deletion problems into the classes of chordal and interval graphs, under the treewidth parameterization. The status of the corresponding edge-deletion problems remains unclear (see [66]). The related problem, Feedback Vertex Set, can be solved using representative families within the same running time as our algorithm for ChVD [37]. However, it admits a faster deterministic algorithm based on the determinant approach [71] and an even faster randomized algorithm based on the Cut & Count technique [35]. Could ChVD also be amenable to one of those techniques? Our algorithm for ChVD is based on a novel connection between chordal graphs and graphic matroids, which might come in useful in other settings. In particular, we ask whether this insight can be leveraged to improve the running time for ChVD parameterized by the solution size \(k\), where the current-best algorithm runs in time \(2^{\mathcal{O}(k\log k)}n^{\mathcal{O}(1)}\)[29]. A direct avenue for a potential improvement would be to reduce the problem in time \(2^{\mathcal{O}(k)}n^{\mathcal{O}(1)}\) to the case with treewidth \(\mathcal{O}(k)\) and then apply Theorem 1.1. Such a strategy has been employed in the state-of-the-art algorithm for Planar Vertex Deletion parameterized by the solution size [47].
2302.07681
Disturbances from Single Event Upsets in the GRACE Follow-On Laser Ranging Interferometer
The Gravity Recovery And Climate Experiment - Follow On (GRACE-FO) satellite mission (2018-now) hosts the novel Laser Ranging Interferometer (LRI), a technology demonstrator for proving the feasibility of laser interferometry for inter-satellite ranging measurements. The GRACE-FO mission extends the valuable climate data record of changing mass distribution in the system Earth, which was started by the original GRACE mission (2002-2017). The mass distribution can be deduced from observing changes in the distance of two low-earth orbiters employing interferometry of electromagnetic waves in the K-Band for the conventional K-Band Ranging (KBR) and in near-infrared for the novel LRI. This paper identifies possible radiation-induced Single Event Upset (SEU) events in the LRI phase measurement. We simulate the phase data processing within the Laser Ranging Processor (LRP) and use a template-based fitting approach to determine the parameters of the SEU and subtract the events from the ranging data. Over four years of LRI data, 29 of such events were identified and characterized.
Malte Misfeldt, Pallavi Bekal, Vitali MΓΌller, Gerhard Heinzel
2023-02-15T14:16:28Z
http://arxiv.org/abs/2302.07681v1
# Disturbances from Single Event Upsets in the GRACE Follow-On Laser Ranging Interferometer
###### Abstract
The Gravity Recovery And Climate Experiment - Follow On (GRACE-FO) satellite mission (2018-now) hosts the novel Laser Ranging Interferometer (LRI), a technology demonstrator for proving the feasibility of laser interferometry for inter-satellite ranging measurements. The GRACE-FO mission extends the valuable climate data record of changing mass distribution in the system Earth, which was started by the original GRACE mission (2002-2017). The mass distribution can be deduced from observing changes in the distance of two low-earth orbiters employing interferometry of electromagnetic waves in the K-Band for the conventional K-Band Ranging (KBR) and in near-infrared for the novel LRI. This paper identifies possible radiation-induced Single Event Upset (SEU) events in the LRI phase measurement. We simulate the phase data processing within the Laser Ranging Processor (LRP) and use a template-based fitting approach to determine the parameters of the SEU and subtract the events from the ranging data. Over four years of LRI data, 29 of such events were identified and characterized.
GRACE-FO; Laser Ranging Interferometer; LRI; Single Event Upset; Bitflip; Cosmic Radiation
+ Footnote †: journal: Advances in Space Research, 0273-1177/© 2023
Data processing of the inter-satellite range rate observations from KBR or LRI in addition to observations from the Global Positioning System (GPS) receivers, accelerometers, and star cameras as well as precise modeling of the ocean and solid Earth tides and other known effects yields monthly gravity maps of the Earth as the main scientific mission results (Wahr et al., 2004; Tapley et al., 2004). Comparing individual months and the long-term mean gravity reveals trends and annual hydrological signals for climate studies, such as accelerated ice sheet melting, groundwater storage depletion, closure of the sea-level rise budget, and more (Tapley et al., 2019). The successful commissioning of the LRI instrument was an essential step towards the Laser Interferometer Space Antenna (LISA) mission, which will use comparable inter-satellite laser ranging technology between three spacecraft in deep space for the detection of gravitational waves (Amaro-Seoane et al., 2017). In this paper, we investigate the ranging data of the LRI for so-called Single Event Upsets (SEUs), which are short-lived disturbances in the phase measurement due to the interaction of charged particles or cosmic radiation with the onboard electronics. Section 2 discusses the space environment in the polar low-earth orbit and introduces different classifications of radiation effects on electronics. The LRI architecture is explained in section 3 with special attention on the Laser Ranging Processor (LRP), in which the SEUs occur. We simulate the digital filtering chain within the LRP in section 4 and create templates, which are then used to detect actual SEUs in the measured phase data in section 5. The identified SEUs are discussed in section 6, and the results are summarized and concluded in section 7.
## 2 Space Environment
The space radiation environment affects the electronics aboard spacecraft. Therefore space electronics are usually shielded or hardened against this radiation (Stassinopoulos and Raymond, 1988). The space environment encountered by the spacecraft is influenced by Earth's magnetic field and sources from outer space. The radiation effects from the sun are characterized by its 11-year cycle, during which the sun emits a stream of particles with varying flux called the solar wind. It consists of electrons, protons, and heavy ions (Nwankwo et al., 2020). Galactic cosmic rays are another source of particle flux composed of high-energy protons. They originate outside the solar system, from the depths of our galaxy (Blasi, 2013).
The Earth's magnetic field traps these charged particles, and they follow the magnetic field lines (Van Allen, 1959). Depending on the species of particles, they populate different regions of the magnetic field, like the Van Allen radiation belts (Bosser, 2017). These form a system of two concentric belts ranging from approximately 1000 km to over 60 000 km in altitude (Metrailler et al., 2019). The probability for radiation-related incidents in space electronics is related to spatial variations of Earth's magnetic field. Over the past years, in-situ measurements were performed by several space missions and combined in the so-called CHAOS model (named after the space missions CHAMP, Orsted, and SAC-C, Olsen et al. (2006)). The currently available version 7 of the CHAOS model also includes the SWARM mission results and ground data (Finlay et al., 2020). The region over the southern Atlantic exhibits a low magnetic field intensity at the altitude of a Low Earth Orbit (LEO), which is commonly called the South-Atlantic Anomaly (SAA). Here, the inner Van Allen belt approaches Earth's surface. Like GRACE-FO, satellites in a LEO orbit usually fly below the belt but may pass through the SAA. It is known for its high radiation levels and is the site of frequent radiation-related events on satellite electronics. One such effect is the SEU, which occurs within the SAA region in roughly 50% of the total cases (Zhang et al., 2021). When a single charged particle interacts with an electronic component like a transistor, it leaves a trail of electron-hole pairs within the semiconductor that generates a current pulse (Todd and Uznanski, 2015). This interaction causes either a hard error or a soft error: hard errors cause severe malfunction, up to a defect of the device, while soft errors are temporary and non-destructive. Hence, SEUs are soft errors. They may influence the value of the bit stored by a memory cell (Todd and Uznanski, 2015). This bitflip prevails until a new bit value is passed into the memory cell. On the other hand, a Single Event Latchup (SEL) is a hard error that short circuits the electronics and can be disastrous (Rivetta et al., 2001). The GRACE satellite, the predecessor to GRACE-FO, experienced failure of one of the Instrument Control Units onboard one of its spacecraft in 2002, which is possibly the result of a SEL (Pritchard et al., 2002).
## 3 LRI Architecture
The LRI is a single instrument distributed on two equally equipped spacecraft, called GF-1 and GF-2, and it measures the biased range between the spacecraft. It is operated in an active-transponder configuration (Sheard et al., 2012): one of the two units (the reference unit) sends out a laser beam with approx. 25 mW optical power, which is stabilized to a reference cavity using the Pound-Drever-Hall technique (Drever et al., 1983; Thompson et al., 2011). The frequency of the emitted light field appears Doppler shifted by a frequency \(f_{\mathrm{D}}<3\) MHz due to the relative motion of the two spacecraft when it is sensed on the distant transponder spacecraft (Sheard et al., 2012).
On the transponder unit, the incoming beam has only pico- to nanowatts of optical power due to the divergence of Gaussian beams and a small aperture at reception. The transponder laser is controlled by a feedback loop such that the incoming beam is reproduced with a well-defined phase relation but amplified in power before being sent back to the reference spacecraft. The transponder unit also intentionally introduces a frequency offset of \(f_{\mathrm{off}}=10\) MHz. A second Doppler shift on the way back is sensed on the reference spacecraft. Ultimately, the interference between the local oscillator and round-trip beams is measured on the reference side and reads \(f_{\mathrm{R}}=2f_{\mathrm{D}}+f_{\mathrm{off}}\) in terms of the beat frequency. Since the frequency offset \(f_{\mathrm{off}}\) is known, range and gravity information in the form of Doppler shifts \(f_{\mathrm{D}}\) can be extracted from the measured frequency \(f_{\mathrm{R}}\). The LRI on the transponder spacecraft, in principle, measures zero phase variations except for a well-defined phase ramp, due to the aforementioned feedback loop implementing the frequency offset. Within the LRI, the main computing engine is called the LRP, which was built by the Jet Propulsion Laboratory (JPL) (Bachman et al., 2017). It hosts the phase readout electronics alongside control loops for the laser, cavity, steering mirror and more. In this article, we focus on the data acquisition and processing chain, which we assume to function as depicted in figure 1. The phase of the interfering light on both spacecraft is sensed by a Quadrant Photodiode (QPD), allowing ranging and beam tilt information to be retrieved from the four phase channels per spacecraft (Sheard et al., 2012). The photocurrents are converted into voltages within the optical bench electronics and digitized at a rate of approximately 40 MHz before the phase information is extracted using an all-digital phase-locked loop within the LRP, see e. g. Ware et al. (2006); Wand et al. (2006). The nominal clock rates of the digitization are 38.656 000 MHz for GF-1 and 38.656 792 MHz for GF-2. The phase extraction is divided between an FPGA, where an IQ-demodulation, filtering and decimation to 9.664 kHz is performed (9.664 198 kHz on GF-2), and a processor, which extracts the ranging phase \(\varphi=\arctan(I/Q)\); the phase is then further decimated in a 2-step low-pass-filtering and decimation chain. The whole phase extraction and decimation chain runs individually on each of the four phase channels on both spacecraft. The decimation in the processor comprises two FIR filters (A and B of length \(l_{A}\) and \(l_{B}\)) and two decimators by a factor of 100 and 10, respectively, to derive the final data rate of 9.664 Hz (9.664 198 Hz on GF-2), at which the phase data is transmitted to ground. Filtering before decimation is needed to prevent aliasing (Ware et al., 2006) of higher frequencies into the measurement band of 2 mHz to 0.1 Hz (Dahl et al., 2017). The two filters A and B each comprise a few hundred registers (labelled \(m\)) and corresponding filter coefficients (Ware et al., 2006), with \(l_{A}>l_{B}\). The filter coefficients \(c_{A/B}\) contain the impulse response of such a filter. The phase delay of all three filters adds up, giving a combined filter delay of 28 802 038 clock ticks \(\approx\) 0.745 s (Wen et al., 2019). After discussions with the JPL, the manufacturer of the LRP, we identify the two FIR filters A and B in the processor as the most probable source of radiation-induced SEUs: the FPGA can be expected to be better hardened against radiation than the memory of the processor, and currently available space-qualified FPGAs even feature error detection and correction implemented in the hardware, see e. g. the RTG4 FPGA Series (Microchip Technology Inc., 2022). In the following, we will use approximate values for the frequencies (e. g. 40 MHz instead of 38.656 MHz) in the text and sketches for brevity, while the simulations and data analysis use the exact values.
## 4 Simulation of Events
In a time-domain simulation, the output of the FIR filtering chain was computed. A block diagram of the simulation is shown in figure 2. The filter response at a single time step is given by the sum over all the products of the register values \(m_{i}\), containing the data \(\varphi\), and their corresponding filter coefficients \(c^{i}_{A/B}\). For the next time step, the register values are shifted one sample to the right, and the register \(m_{0}\) receives a new value from the input phase data. We simulate the effect of SEU-induced bitflips with a trivial filter input being \(\varphi\equiv 0\), i. e., without any ranging signal, in order to obtain just the disturbance from a bitflip, and we assume that this disturbance adds to the regularly filtered signal due to the linearity of FIR filters. Hence, upon a bitflip, we set the \(m\)-th register from 0 to 1 during execution of the simulation. If the SEU occurs in filter A, it will then propagate through the subsequent filter and decimation stages. Manipulation of the \(0^{\text{th}}\) register in filter A is equivalent to setting one sample of the input phase \(\varphi\) to one. However, manipulation of higher registers cannot be replaced by equivalent input data \(\varphi\). All intermediate data streams are computed, where \(F_{A}\) denotes the output of the first filter, which is then decimated by a factor of 100 (denoted \(D_{100}\)). The second filter output is \(F_{B}\), and its decimated outcome at a 10 Hz data rate is called \(D_{10}\). We identified the defining parameters of a bitflip to be
* The affected filter (A or B).
* The occurrence time of the SEU, expressed as a sample number or tick \(k_{\text{A/B}}\) at the filter's clock rate. Due to the fixed decimation rates from \(F_{A}\) or \(F_{B}\) to the 10 Hz output data rate (\(1000=100\cdot 10\) or 10, respectively), the output for a varying \(k\) repeats. Thus, if the SEU occurs in filter A we use \(k_{\text{A}}\in[0,1000)\subseteq\mathbb{N}_{0}\) and for an SEU in filter B we use \(k_{\text{B}}\in[0,10)\subseteq\mathbb{N}_{0}\). Now, \(k\) can be regarded as the sub-sample time between two data samples of the 10 Hz output data.
* The affected register number \(m_{\text{A}}\in[0,l_{\text{A}})\subseteq\mathbb{N}_{0}\) or \(m_{\text{B}}\in[0,l_{\text{B}})\subseteq\mathbb{N}_{0}\) of the filter. We usually provide this number in % of the full filter length \(l_{\text{A/B}}\).
* The bit number \(b\in[0,64)\subseteq\mathbb{N}_{0}\) of the presumed 64-bit register that flipped (i. e. the \(2^{b}\) magnitude of the flipped bit). For simulation, \(b=0\) is usually used, since this parameter is a linear scale factor that can easily be estimated through a least squares algorithm.
We simulate the bitflips with \(\varphi=0\) as initial condition and the bit flipping from zero to one. However, one could also initialize \(\varphi=1\) and flip from one to zero. This results in the same shapes of the output data but with inverted sign.
Figure 1: Phasemeter processing chain. The optical signal is converted to a voltage by the QPD and its electronics. The voltage is filtered and digitized at a rate of approximately 40 MHz before the FPGA, which demodulates the signal. The phase is extracted in the processor, where further filtering and decimation takes place as well. The low-pass FIR filters in the processor are of length \(l_{\text{A}}\) and \(l_{\text{B}}\), respectively, and the subsequent decimations are by a factor of 100 and 10. All shown elements are implemented independently for the four phase channels on each spacecraft. Red lines denote optical signals, blue denotes analog electronics and green denotes digital signals.
Figure 3 shows an exemplary simulation result for an SEU in the first register (\(m=0\)) of the first filter (A) at time \(k_{A}=0\).
The phase is extracted in the processor, where further filtering and decimation takes place as well. The low-pass FIR filters in the processor are of length \(l_{\text{A}}\) and \(l_{\text{B}}\), respectively, and the subsequent decimations are by a factor of 100 and 10. All shown elements are implemented independently for the four phase channels on each spacecraft. Red lines denote optical signals, blue is analog electronic and green are digital signals. 0) of the first filter (A) at time \(k_{A}=0\). Orange and green are the intermediate data streams after the first filter, red is the second filter's output, and cyan is the final \(10\,\mathrm{Hz}\) output data. A larger injection sample, \(k_{A}>0\), would cause a delay of \(F_{A}\) and thus a slightly different shape and amplitude of the subsequent data streams due to the different sampling of \(F_{B}\). When a low register number \(m\) is affected by the bitflip it implies that almost the complete filter impulse response is visible in the immediate output, as shown by the solid lines of figure 4, where the red line depicts the immediate output of filter B at \(100\,\mathrm{Hz}\) and blue denotes the decimated data at \(10\,\mathrm{Hz}\). A higher register number \(m\) yields cropped filter responses in the immediate output, as shown by the dashed lines in figure 4. For a high register number \(m\) in filter A, the \(10\,\mathrm{Hz}\) output data would not appear cropped since the cropped and decimated filter output \(D_{100}\) is filtered once more in \(F_{B}\), which ultimately dominates the shape of the output \(D_{10}\). For a fixed register number \(m\), the output \(D_{10}\) can have very different shapes, depending on the time or sample \(k\) at which the SEU was induced in the data. There are 1000 unique patterns in the \(10\,\mathrm{Hz}\) output data stream for an SEU in the first filter A and ten patterns for the second filter B, according to the sampling rate decimation factors. The ten patterns of filter B are approximately a subset of the 1000 patterns of filter A since the output \(D_{100}\) of filter A is approximately only a single peak which is then fed into filter B. Two Look-Up Tables (LUTs) for events either in filter A or B were created from the simulations, where the injection sample number \(k\) and the register number \(m\) at which the SEU was injected into the filter were varied over the parameter space. The resulting output data after the second decimation, i. e., at \(10\,\mathrm{Hz}\), is stored in the LUTs. The full 3D-LUTs have the dimensions \(1000\times l_{A}\times 15\) and \(10\times l_{B}\times 15\), respectively, where the first dimension represents the injection sample number \(k\), the second Figure 4: Simulated data showing the effect of an SEU in a higher register number for the second filter. The solid lines depict the response of an impulse travelling through the full filter (i. e., for \(m=0\)), while the dashed lines show the response for an SEU that affects the register \(m=50\%\) in the middle of the filter. Color coding as in figure 3. Note, that the magnitude here is larger than in figure 3, because this SEU was simulated in the second instead of the first FIR filter. These examples show artificial filter coefficients, as the exact coefficients employed in-flight can unfortunately not be disclosed here. Figure 3: Simulated data throughout the filtering chain for an SEU in the first FIR filter with injection sample and register number \(k=m=0\) and a magnitude \(a=1\). 
The input data \(\psi\) is zero and thus not shown. The output of the first filter \(F_{A}\) (orange) is sampled at \(10\,\mathrm{kHz}\), the first decimation \(D_{100}\) (green) and the output of the second filter \(F_{B}\) (red) at \(100\,\mathrm{Hz}\) and the final output \(D_{10}\) (cyan) is sampled at \(10\,\mathrm{Hz}\). Both time-axes are in units of seconds, but note the different scale. These examples show artificial filter coefficients, as the exact coefficients employed in-flight can unfortunately not be disclosed here. Figure 2: Block Diagram of the two FIR filter stages as implemented for the simulation. Green denotes clock signals, orange denotes the phase data and blue denotes memory cells. The FIR filter coefficients (\(c_{A/B}^{\lambda}\)) are multiplied with the data points in the registers and the filtered result is the sum over all multiplications. dimension is the register number \(m\) and the third dimension is the total number of data points of the complete filter response at \(10\,\mathrm{Hz}\) in \(D_{10}\). The individual rows of the LUTs are denoted as \(\mathrm{LUT}_{\mathrm{A/B}}^{k,m}\). For better readability, we will omit the subscript A/B in the following, where we usually mean that all the equations are evaluated independently for both LUTs. Since the true LRP-internal filter coefficients are only available project-internally, we use exemplary FIR filters to show the principle in figures 3 and 4. The following analysis of actual flight-data however uses the true in-flight LRP filter coefficients. ## 5 Detection of SEUs in LRI Phase Data The SEU detection algorithm is part of a larger framework developed at the Albert-Einstein-Institute in Hanover to automatically process and analyze LRI data in near-real-time (Misfeldt, 2019). It features an outlier-detection, originally developed to remove thruster-induced phase jumps (Abich et al., 2019) but was now extended to identify SEUs. The overall process is two-fold: first, all phase disturbance events are detected and categorized. All events where the first derivative of the measured phase (or the phase rate) exhibits steps larger than \(\pm 30\,\mathrm{mHz}\), or where the Differential Wavefront Sensing (DWS) combination shows outliers larger than \(2\times 10^{-4}\,\mathrm{cycles/s}\), are marked as potential phase disturbance events. Subsequently, modeling and subtraction are performed. The criterion for deciding whether a phase disturbance is an SEU and not a true phase jump due to optical or mechanical disturbances is that an SEU occurs in a single channel only since the filtering and decimation of the four channels are performed separately. In contrast, a phase jump affects all four channels, and further, an SEU produces a short-lived peak (after propagating through the filter, the disturbance vanishes), while a phase jump causes a persistent step in the ranging data (caused by a non-zero integral of fast laser frequency variations, cf. Misfeldt (2019)). A short segment of \(N\leq 30\) samples of the affected channel is extracted from the measured phase data once an SEU candidate is identified. The mean over the three unaffected channels is subtracted from the affected channel to remove the common (ranging) signal and extract a clean signature of the SEU. We call this extracted bitflip signal \(\varphi(t_{i})\) or \(\varphi_{i}\), where \(t_{i}\) are the discrete-time samples and \(i\) is the sample number. 
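As an aside before detailing the fit, the time-domain simulation of section 4 can be sketched in a few lines. The snippet below is a minimal illustration only: the filter lengths, the Hann-window coefficients, and the decimation bookkeeping are assumptions for demonstration, since the true in-flight LRP coefficients cannot be disclosed.

```python
import numpy as np

def simulate_bitflip(c_a, c_b, dec_a=100, dec_b=10, m=0, k=0, n_ticks=60_000):
    """Propagate a single flipped register (filter A, register m, input tick k)
    through the two FIR stages and decimations of figure 2, with zero phase input."""
    reg_a = np.zeros(len(c_a))           # registers of filter A
    reg_b = np.zeros(len(c_b))           # registers of filter B
    d10 = []                             # final, twice-decimated output stream
    for tick in range(n_ticks):
        reg_a = np.roll(reg_a, 1)        # shift register contents by one sample
        reg_a[0] = 0.0                   # new input sample (phi = 0)
        if tick == k:
            reg_a[m] = 1.0               # the SEU: one stored sample is corrupted
        if tick % dec_a == dec_a - 1:    # keep every dec_a-th output of filter A
            f_a = c_a @ reg_a
            reg_b = np.roll(reg_b, 1)
            reg_b[0] = f_a
            if (tick // dec_a) % dec_b == dec_b - 1:
                d10.append(c_b @ reg_b)  # output of filter B after second decimation
    return np.array(d10)

# illustrative low-pass coefficients (placeholders, not the flight filters)
c_a = np.hanning(400); c_a /= c_a.sum()
c_b = np.hanning(120); c_b /= c_b.sum()
template = simulate_bitflip(c_a, c_b, m=0, k=0)   # one row of a would-be LUT
```

Looping such a call over \(k\) and \(m\) yields the kind of look-up table described above; an SEU in filter B would be obtained analogously by injecting into `reg_b` instead.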
For example, if an SEU occurs in channel C, then \(\varphi(t_{i})=\varphi_{C}(t_{i})-\left(\varphi_{A}(t_{i})+\varphi_{B}(t_{i})+\varphi_{D}(t_{i})\right)/3\). This expression additionally suppresses common-mode noises like laser frequency noise (on the reference side). The measurement noise of the LRI is discussed in more detail in Muller et al. (2022). We introduce our model for the SEU phase \[\eta_{i}^{k,m}(a)=\eta^{k,m}(a,t_{i})=a\cdot\mathrm{LUT}^{k,m}(t_{i})\,, \tag{1}\] which essentially is an LUT entry scaled by an amplitude \(a\), and the residuals \[r_{i}^{k,m}(\mathbf{\vartheta})=r\left((a,c_{2},c_{1},c_{0})^{\mathsf{T}},t_{i}\right)=\varphi_{i}-\eta_{i}^{k,m}(a)-c_{2}\cdot t_{i}^{2}-c_{1}\cdot t_{i}-c_{0}\,, \tag{2}\] where we further subtract a second order polynomial in time, which may still be present in the data \(\varphi\) due to insufficient (ranging) signal removal or similar effects. This equation defines the regression coefficients \(\mathbf{\vartheta}=(a,c_{2},c_{1},c_{0})^{\mathsf{T}}\). To assess which of the \(k\times m\) models in the LUTs matches the data best, we employ the framework of maximum likelihood estimation. First, we compute the likelihood of \(\mathbf{\vartheta}\) given the measured data \(\varphi\) as (Koch, 1999) \[\mathcal{L}^{k,m}(\varphi\,|\,\mathbf{\vartheta})=\frac{1}{\sqrt{|2\pi\Sigma|}}\cdot\exp\left(-\frac{1}{2}\,r^{k,m}(\mathbf{\vartheta})^{\mathsf{T}}\,\Sigma^{-1}\,r^{k,m}(\mathbf{\vartheta})\right). \tag{3}\] The covariance matrix \(\Sigma\) will be discussed later. The best fitting model \(\eta^{k,m}(a)\) can be identified by the maximum value of the likelihood function \(\mathcal{L}\) over the parameter space or, equivalently, by the minimum of its negative logarithm \[\ell^{k,m}(\varphi\,|\,\mathbf{\vartheta})=-\ln\mathcal{L}^{k,m}(\varphi\,|\,\mathbf{\vartheta}) \tag{4}\] \[=\frac{1}{2}\ln\left(|2\pi\Sigma|\right)+\frac{1}{2}r^{k,m}(\mathbf{\vartheta})^{\mathsf{T}}\cdot\Sigma^{-1}\cdot r^{k,m}(\mathbf{\vartheta}). \tag{5}\] The parameter space is discrete for the parameters \(k\) and \(m\) and continuous for \(\mathbf{\vartheta}\). Hence we minimize the negative log-likelihood \(\ell^{k,m}\) for all \(k\), \(m\) through generalized least squares, i. e., by estimating \[\hat{\boldsymbol{\vartheta}}=\operatorname*{argmin}_{\boldsymbol{\vartheta}}r^{k,m}(\boldsymbol{\vartheta})^{\mathsf{T}}\cdot\Sigma^{-1}\cdot r^{k,m}(\boldsymbol{\vartheta})\;. \tag{6}\] Ultimately, the best estimate for the SEU model is determined by finding the minimum of \(\ell^{k,m}\!\left(\varphi\,|\,\hat{\boldsymbol{\vartheta}}\right)\) in the two-dimensional \(k\times m\)-sized grid. The covariance matrix \(\Sigma\), which is needed to compute the generalized least squares (cf. equation (6)), is derived from the expectation value \(\mathrm{E}\) of the measurement noise \(n\) as \[\Sigma_{ij}=\mathrm{E}[n_{i}\cdot n_{j}]=R_{n}(t_{i}-t_{j})\;. \tag{7}\] Here, the expectation value \(\mathrm{E}\) can be computed through the unbiased correlation function of the (real-valued) data \(n\) of length \(N\) as \[R_{n}(\tau)=\left\{\begin{array}{ll}\dfrac{1}{N-\tau}\sum_{i=0}^{N-\tau-1}n_{i+\tau}n_{i}\,,&\tau\geq 0\\ R_{n}(-\tau)\,,&\tau<0\;.\end{array}\right. \tag{8}\] The correlation function is obtained from the autocorrelation of actual phase data in the absence of an SEU event. Shown in figure 5 is the mean over 20 000 autocorrelations of consecutive data segments of 30 samples length for the two spacecraft in both roles. 
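A minimal sketch of this template fit, assuming a phase segment `phi` with time stamps `t`, one LUT row `lut`, and the noise autocorrelation `R` are already available (all names are illustrative, not taken from the flight software):

```python
import numpy as np

def fit_template(phi, t, lut, R):
    """Generalized least squares for theta = (a, c2, c1, c0), cf. Eqs. (2)-(7).
    phi : extracted bitflip signal (N samples), t : sample times,
    lut : one LUT row evaluated at t, R : noise autocorrelation at integer lags."""
    N = len(phi)
    Sigma = np.array([[R[abs(i - j)] for j in range(N)] for i in range(N)])   # Eq. (7)
    X = np.column_stack([lut, t**2, t, np.ones(N)])   # model a*LUT + c2*t^2 + c1*t + c0
    Si = np.linalg.inv(Sigma)
    theta = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ phi)                     # Eq. (6)
    r = phi - X @ theta
    return theta, r @ Si @ r   # the quadratic form entering Eq. (5)

# Scanning fit_template over all LUT rows (k, m) and picking the smallest quadratic
# form is equivalent to minimizing the negative log-likelihood, since the
# determinant term in Eq. (5) is identical for every template.
```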
A trend was removed from the phase data before computing each autocorrelation. The functions differ slightly in shape between GF-1 and GF-2, and the magnitude varies insignificantly between different days in different roles (reference or transponder). The values of the solid lines are used as the correlation function \(R_{n}(\tau)\) to form the covariance matrix \(\Sigma\) of the measurement noise \(n\). From the fitted amplitudes \(a\) of the LUT rows, we directly obtain the amplitude and sign of the SEU as it occurred before the filtering. We can further compute the bit number \(b\) of the affected bit by \[b=\log_{2}\left(10\cdot 2^{24}\cdot a\right)\;, \tag{9}\] where \(1/(10\cdot 2^{24})\) is the least significant bit in units of phase cycles in the LRI phase measurement (Wen et al., 2019). Ideally, \(b\) yields an integer number. The above computation is done individually for all templates in the two filters' LUTs (\(\mathrm{LUT}_{\mathrm{A}}^{k,m}\) and \(\mathrm{LUT}_{\mathrm{B}}^{k,m}\)). We compare the two minimal values of the log-likelihood over the LUTs to identify the most likely filter (A or B) in which the SEU occurred. ## 6 Discussion Over the analyzed mission time ranging from June 2018 until the end of December 2022, in which the LRI was in science mode for more than 75% of the time, we identified 29 SEU events in the LRI phase data, whose parameters are shown in table 1. A time series of an exemplary SEU event (#1 in the table), the fitted model, and the corresponding residuals are shown in figure 6. Of all events, GF-1 recorded 14 events, while GF-2 recorded 15 events. As the reference/transponder role can be switched, 16 were detected on the transponder unit, and 13 on the reference unit of the LRI. The distribution over the four channels is almost equal (A: 8 events, B: 6, C: 8, D: 7). Filter A shows more events (19 vs. 10 in filter B). This is expected, since filter A has more registers, \(l_{A}>l_{B}\), i. e., a physically larger area in the electronics that can be hit by radiation. Figure 5: Exemplary autocorrelation for a single channel phase combination \(\varphi_{A}-(\varphi_{B}+\varphi_{C}+\varphi_{D})/3\) of GF-1 and GF-2 on two different days with different roles. Figure 6: Event #1: Example of a good SEU fitting result. The blue trace shows the isolated segment from the phase data of channel D on GF-2 on 2018-July-09 around 18:25 UTC. An SEU in bit \(b=60\) of the register at 67% of \(l_{B}\) of filter B (dashed orange) was subtracted, which yields the green residuals (scale according to right axis). The noise after subtraction is on the order of \(10^{-5}\) cycles. Since the LRI was in science mode for more than 85% of the time in 2019-2021, approximately nine events can be expected annually. It was not in science mode for long periods in 2018 and 2022; thus, fewer events were observed. The smallest observed event occurred in the 29th bit (event #28). Thus we expect that there are actually more SEU events at low bit numbers, but they are not detectable in the LRI noise. The subtraction of the SEU signature from the ranging data, in general, works well, since the rms of the residuals is on the order of some \(10^{-5}\) cycles in most cases, which is the noise level of the phase measurement system (cf. Muller et al. (2022)). The distribution of the ground-track position of the spacecraft at the time of the SEU events (shown in figure 7) reveals an expected clustering within the South-Atlantic Anomaly, where almost 50% of the events take place. 
This is consistent with results from the literature (Zhang et al., 2021). We did explicitly exclude the possibility that an SEU could also alter the filter coefficients. A bitflip in the coefficients would cause a different filter gain and noise suppression. However, the exact effects also strongly depend on the architecture and implementation in the LRP. \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \# & Event Time & SC & Role & Ch & FIR & \(k\) & \(m\) & Dir & Bit No. \(b\) & 95\% CI & Residuals \\ & [UTC] & & & & & & [\%] & & int+frac & Bit No. \(b\) & [cycles rms] \\ \hline 1 & 09-Jul-2018 18:25:01 & GF-1 & T & D & B & 9 & 66 & \(\uparrow\) & \(60\,+1.25\times 10^{-12}\) & \(\pm 4.29\times 10^{-12}\) & \(1.33\times 10^{-5}\) \\ \hline 2 & 20-Jan-2019 09:31:55 & GF-1 & R & D & A & 296 & 31 & \(\uparrow\) & \(35\,-1.97\times 10^{-2}\) & \(\pm 1.49\times 10^{-4}\) & \(9.09\times 10^{-6}\) \\ \hline \(\vdots\) & & & & & & & & & & & \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters of the 29 SEU events identified in the LRI phase data. ### Non-Integer Bit Numbers Some events show bit numbers \(b\) that are not integer within the 95% confidence interval, marked with different colors in table 1. Though non-integer bit numbers seem counter-intuitive at first, they can be explained by considering simultaneous bitflips in separate bits. This increases or decreases the signal's amplitude and thus the retrieved bit number \(b\) (cf. equation (9)). The allowed fractional bit numbers obtained from our fit only depend on the separation in bits between the affected bits: \[\mathcal{O}_{\pm}(n)=\log_{2}(2^{b}\pm 2^{b-n})-\log_{2}(2^{b})\;. \tag{10}\] The sign of the \(2^{b-n}\)-term denotes the direction of the lower bit at position \(b\!-\!n\) with respect to the flip direction of the upper bit \(b\), which is indicated in table 1. The first 12 allowed fractional bit values are shown in table 2. Note that \(\mathcal{O}_{+}(1)\) and \(\mathcal{O}_{-}(2)\) are degenerate, and also that a flip of bits \(b\) and \(b\!-\!1\) in different directions cannot be distinguished from a single flip of the \(b\!-\!1\)-th bit. Comparing the allowed fractional bit numbers from table 2 with the values in column "Bit No. \(b\)" of table 1, several events can be explained by multiple bitflips. For instance, we observe \(\mathcal{O}_{+}(4)\) (the 30th and 26th bit flipped in the same direction) for event #5. All these events are marked green. The numbers of table 2 cannot directly explain the events marked yellow. However, these fractional bit positions can be explained when considering even more than two bitflips simultaneously. The fractional bit number of event #10 is \(-0.159\approx\mathcal{O}_{-}(3)+\mathcal{O}_{+}(5)\), which denotes bitflips in the 31st, 28th and 26th bit. Any assessment of how likely it is that a single particle's impact induces two bits to flip strongly depends on the exact architecture and physical arrangement of the memory cells, which is unknown to the authors. ### Other Events There are two events where the residuals still show a comparatively large rms value (#3 and #9; marked gray). Their residuals look like another SEU event, separated from the initial one by a few milliseconds. Event #3 is shown as an example in figure 8. A short experiment of feeding these residuals again into the fitting algorithm did not succeed, likely because these two events lived simultaneously within the filter and cannot be treated as a simple superposition of two independent events. 
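The bit-number arithmetic of equations (9) and (10) is easy to check numerically; the following short sketch (the function names are ours, chosen for illustration) reproduces the fractional offsets listed in table 2:

```python
import numpy as np

def bit_number(a):
    """Eq. (9): bit position from a fitted amplitude; the phase LSB is 1/(10*2**24) cycles."""
    return np.log2(10 * 2**24 * abs(a))

def frac_offset(n, sign=+1):
    """Eq. (10): O_+/-(n) = log2(2**b +/- 2**(b-n)) - log2(2**b) = log2(1 +/- 2**-n)."""
    return np.log2(1 + sign * 2.0**(-n))

for n in range(1, 5):
    print(n, round(frac_offset(n, +1), 5), round(frac_offset(n, -1), 5))
# 1  0.58496  -1.0
# 2  0.32193  -0.41504
# 3  0.16993  -0.19265
# 4  0.08746  -0.09311   (matching the first rows of table 2)
```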
\begin{table} \begin{tabular}{c c c} \hline \(n\) & \(\mathcal{O}_{+}(n)\) & \(\mathcal{O}_{-}(n)\) \\ \hline 1 & 0.58496 & \(-\)1 \\ 2 & 0.32193 & \(-\)0.41504 \\ 3 & 0.16993 & \(-\)0.19265 \\ 4 & 0.08746 & \(-\)0.09311 \\ 5 & 0.04439 & \(-\)0.04580 \\ 6 & 0.02237 & \(-\)0.02272 \\ 7 & 0.01123 & \(-\)0.01131 \\ 8 & 0.00562 & \(-\)0.00565 \\ 9 & 0.00282 & \(-\)0.00282 \\ 10 & 0.00141 & \(-\)0.00141 \\ 11 & 0.00070 & \(-\)0.00070 \\ 12 & 0.00035 & \(-\)0.00035 \\ \hline \end{tabular} \end{table} Table 2: Fractional bit number for two bitflips at the same time as a function of the separation between bit numbers. The number \(n\) denotes the position \(b-n\) of the second bit, relative to the one at position \(b\) (with \(b\geq n\)). Figure 8: Event #3: The residuals are shaped like a second SEU. Figure 7: World map showing the location of the GRACE-FO spacecraft at the occurrence of SEUs (diamonds). The color coding depicts the strength of the magnetic field in \(\mu\)T at an altitude of 490 km above Earth's surface, as derived from the CHAOS-7 model for January 2021 (Finlay et al., 2020). There is evidence for an increased number of SEUs in the region of the South-Atlantic Anomaly. Extending the LUTs to also incorporate such events was beyond the scope of this study and would exponentially increase the size of the LUTs and the computation time. ## 7 Conclusion In this paper, we presented an approach to identify, extract and model SEU-induced disturbances in the measured phase data of the LRI in the GRACE-FO mission. We explained the filtering within the LRP, where we expect SEUs to show an effect in the measured phase through flipped bits in the registers of the low-pass FIR filters. Further, we showed simulated data and discussed the parameters needed to describe an SEU. Ultimately, we found 29 events in more than three years of LRI ranging data. The events cluster in the South-Atlantic Anomaly. Some of the events seem to originate from multiple bits flipping simultaneously or possibly even with a slight time delay. Radiation-induced SEUs in the LRI phase data are rare and short-lived events. Thus, we expect that their non-removal has little to no impact on the retrieved gravity fields. Nevertheless, LRI data products with removed SEUs can be found at [https://www.aei.mpg.de/grace-fo-ranging-datasets](https://www.aei.mpg.de/grace-fo-ranging-datasets). This study shows that it is possible to identify and remove this particular noise source in post-processing. Future instruments might overcome this source of short measurement disturbances by implementing radiation-hardened memory or incorporating error correction algorithms. ## Funding This work has been supported by: the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, Project-ID 434617780, SFB 1464); the Clusters of Excellence "QuantumFrontiers: Light and Matter at the Quantum Frontier: Foundations and Applications in Metrology" (EXC-2123, project number 390837967); the European Space Agency in the framework of Next Generation Geodesy Mission development and ESA's third-party mission support for GRACE-FO (RFP/3-17121/21/I-DT-lr); and the Max Planck Society (MPG) for future mission support (M.IF.A.QOP18108) and in the framework of the LEGACY cooperation on low-frequency gravitational-wave astronomy (M.IF.A.QOP18098). ## Acknowledgments The authors would like to thank the LRI team at JPL for helpful regular discussions and insights.
2310.19876
Connecting the avoided quantum critical point to the magic-angle transition in three-dimensional Weyl semimetals
We theoretically study the interplay of short-ranged random and quasiperiodic static potentials on the low-energy properties of three-dimensional Weyl semimetals. This setting allows us to investigate the connection between the semimetal to diffusive metal "magic-angle" phase transition due to quasiperiodicity and the rare-region induced crossover at an avoided quantum critical point (AQCP) due to disorder. We show that in the presence of both random and quasiperiodic potentials the AQCP becomes lines of crossovers, which terminate at magic-angle critical points in the quasiperiodic, disorder-free limit. We analyze the magic-angle transition by approaching it along these lines of avoided transitions, which unveils a rich miniband structure and several AQCPs. These effects can be witnessed in cold-atomic experiments through potential engineering on semimetallic band structures.
J. H. Pixley, David A. Huse, Justin H. Wilson
2023-10-30T18:00:01Z
http://arxiv.org/abs/2310.19876v1
Connecting the avoided quantum critical point to the magic-angle transition in three-dimensional Weyl semimetals ###### Abstract We theoretically study the interplay of short-ranged random and quasiperiodic static potentials on the low-energy properties of three-dimensional Weyl semimetals. This setting allows us to investigate the connection between the semimetal to diffusive metal "magic-angle" phase transition due to quasiperiodicity and the rare-region induced crossover at an avoided quantum critical point (AQCP) due to disorder. We show that in the presence of both random and quasiperiodic potentials the AQCP becomes lines of crossovers, which terminate at magic-angle critical points in the quasiperiodic, disorder-free limit. We analyze the magic-angle transition by approaching it along these lines of avoided transitions, which unveils a rich miniband structure and several AQCPs. These effects can be witnessed in cold-atomic experiments through potential engineering on semimetallic band structures. ## I Introduction There is a significant push to discover and understand the nature of gapless topological materials. This has been fueled by the experimental discovery of three-dimensional (3D) topological Dirac and Weyl semimetals in weakly correlated narrow gap semiconductors [1; 2; 3; 4; 5; 6; 7; 8; 9], as well as their observation in several strongly correlated materials [10; 11; 12; 13; 14; 15; 16; 17; 18]. However, the Fermi energy does not typically coincide with the Weyl or Dirac touching points in the band structure, making the effects on the low-energy thermodynamic properties indirect. Nonetheless, nodal touching points have been identified using a combination of ARPES experiments [1; 2; 3; 19; 20] and _ab initio_ calculations [21; 22], while their manifestation in transport arises through a negative magnetoresistance [23; 24; 25]. These measurements provide a systematic means to identify the existence of Dirac and Weyl nodes in several weakly correlated material candidates. Recently, the demonstration of a 3D Weyl semimetal in an ultracold atom experiment using artificial spin-orbit coupling [26] opens the door to a new level of control over Weyl semimetals. These systems are tunable; filling is controlled by the number of atoms in the trap, and disorder and lattice imperfections are removed altogether. Therefore, perturbations can be turned on at will to determine the fate of Weyl semimetals experimentally while opening the door to study effects that are out of reach in solid-state compounds. Due to the interplay of topology and a vanishing pseudogap density of states, single particle perturbations can have several non-trivial effects. In particular, the effects of disorder on noninteracting Weyl semimetals have been well studied [27; 28]. Within perturbative (e.g., self-consistent Born [29; 30], large-\(N\)[31], and renormalization group [32; 33]) treatments of the problem, a disorder-driven quantum critical point was found. However, when taking into account the non-perturbative effects of disorder, rare regions of the random potential give rise to power-law quasibound states where the disorder is atypically large and cannot be treated perturbatively [34]. These rare states were found to endow the Weyl semimetal with a finite density of states at the Weyl node, destabilizing the Weyl semimetal phase into a diffusive metal for any weak random potential [28]. 
As a result, it was shown that the putative critical point is rounded out into a crossover, dubbed an avoided quantum critical point (AQCP) [35; 36; 37; 38]. Instanton fluctuation calculations (about the saddle point) for a single Weyl cone found that this picture is modified [39], and these results were interpreted in terms of a non-trivial scattering phase shift [40]. However, it was then later shown that such phase shifts are inherently problematic as their conclusions violate Levinson's Theorem [41] and instead the AQCP is the correct description [42]. This was also consistent with a numerical study of a disordered single Weyl cone in the continuum limit that showed the transition remains strongly avoided with essentially the same kind of AQCP as previously studied lattice models with multiple Weyl cones [43]. On the other hand, the fate of Weyl semimetals in the presence of quasiperiodicity that does not have any rare regions (due to the potential being infinitely long-range correlated) was only considered recently. It was numerically shown [44], and then rigorously proven [45] that the Weyl semimetal phase is stable to a quasiperiodic potential. As a result, quasiperiodicity drives a bona fide semimetal to diffusive metal phase transition at a non-zero critical quasiperiodic strength [44]. At this transition the Weyl velocity goes to zero continuously, the density of states becomes non-analytic, and the single particle wavefunctions at the Dirac node energy delocalize in momentum space. Studies of similar effects in two-dimensional Dirac semimetal models [46; 47; 48] have linked this quantum phase transition with the magic-angle phenomena originally discovered in twisted bilayer graphene [49] (and extended to incorporate incommensurate effects [46; 50]). Thus, the transition that was originally sought in disordered Weyl semimetals was uncovered in the quasiperiodic limit by removing rare regions from the problem. We therefore refer to the critical point due to a quasiperiodic potential as a "magic-angle transition" (MAT); here the "angle" refers to the incommensurate wave vector characterizing the quasiperiodic potential. In the following manuscript, we make a direct link between the avoided transition and the magic-angle quantum critical point in 3D. We do so by considering how the avoided transition is connected to the magic-angle condition, of a vanishing velocity, by studying the interplay of disorder and quasiperiodicity on equal footing. A closely related problem has been studied in two-dimensional Dirac semimetal models and is pertinent to understand the role of twist disorder in magic-angle graphene experiments [50; 51; 52; 53; 54; 55], which have attracted a great deal of attention. However, in two dimensions the marginal relevance of disorder removes the AQCP from the problem and does not allow a direct link between the two effects to be exposed. Through numerical calculations of the density of states of an inversion-broken 3D Weyl semimetal using the kernel polynomial method (KPM) we show that the AQCP becomes a line of crossovers that terminate at the MAT, as shown in the phase diagram of Fig. 1. We study the fate of the analytic properties of the zero energy density of states when disorder is added to the quasiperiodic Weyl semimetal model, demonstrating the interplay of incommensurate induced miniband formation and non-perturbative rare region effects. Last, the critical properties of the magic-angle transition are determined by approaching it along the crossover line of avoided transitions, which allows us to provide an accurate estimate of the power law nature of the vanishing Weyl velocity. The remainder of the paper is organized as follows: The model and the method used are introduced in Sec. II, we determine the phase diagram of the model in Sec. III, and the critical properties along the line of crossovers in Sec. IV. Finally in Sec. V we discuss the implications of our results, its detection using ultracold atoms, and conclude. ## II Model and method To investigate the interplay of the effects of disorder (D) and quasiperiodicity (Q) on Weyl semimetals we add two separate potentials to a lattice model of an inversion symmetry broken Weyl semimetal given by \[H = \sum_{\mathbf{r},\mu}\left(it_{\mu}\psi_{\mathbf{r}}^{\dagger}\sigma_{\mu}\psi_{\mathbf{r}+\hat{\mu}}+\mathrm{H.c.}\right)+\sum_{\mathbf{r}}\psi_{\mathbf{r}}^{\dagger}V(\mathbf{r})\psi_{\mathbf{r}} \tag{1}\] where \(\mu=x,y,z\), the potential is a sum of two separate contributions from randomness (that we denote with a \(D\) for disorder) and quasiperiodicity (denoted with a \(Q\)) \[V(\mathbf{r})=V_{D}(\mathbf{r})+V_{Q}(\mathbf{r}), \tag{2}\] which we parameterize below.
The model lives on the simple cubic lattice of linear size \(L\) and we average over twisted boundary conditions to reduce finite size effects. The hopping is then given by \(t_{\mu}=te^{i\theta_{\mu}/L}/2\) where the twist in the \(\mu\) direction \(\theta_{\mu}\) is randomly sampled between \(0\) and \(2\pi\). In the absence of the potentials the band structure is given by \[E_{0}(\mathbf{k})=\pm t\sqrt{\sum_{\mu=x,y,z}\sin^{2}(k_{\mu}+\theta_{\mu}/L)} \tag{3}\] with 8 Weyl cones labeled by \(\mathbf{K}_{W}\) at the time-reversal invariant momenta (for no twist) in the Brillouin zone. Near each Weyl point \(\mathbf{K}_{W}\) the dispersion is given by \[E_{0}(\mathbf{k})\approx\pm v_{0}(\mathbf{K}_{W})|\mathbf{k}-\mathbf{K}_{W}| \tag{4}\] Figure 1: **Phase diagram in disorder (\(W_{D}\)) and quasiperiodic (\(W_{Q}\)) potential strength at the Weyl node energy (\(E=0\)).** Solid blue lines are stable Weyl semimetal phases that terminate at magic-angle transitions (the Weyl semimetal at larger \(W_{Q}\) is an inverted semimetal phase). At any non-zero \(W_{D}\) the model is in the diffusive metal phase; dark red marks the semimetal regime \(\rho(E)\approx\rho(0)+\rho^{\prime\prime}(0)E^{2}/2\), where \(\rho(0)\) is nonzero but exponentially small, and light red is past the AQCP, where \(\rho(0)\sim\mathrm{O}(1)\). The location of the peak in \(\rho^{\prime\prime}(0)\) as a function of \(W_{D}\) (for fixed \(W_{Q}\)) provides an estimate of the AQCP (and the quasiperiodic transition at \(W_{D}=0\)) that we label as \(W_{A}(W_{Q})\). We compare two choices of the distribution of the disorder potential \(P[V]\), Gaussian and binary, showing that their distinction is insignificant near the magic-angle transitions; the latter has been shown to weaken the avoidance for \(W_{Q}=0\) [56]. This analysis is done on a system size of \(L=89\), KPM expansion order \(N_{C}=2^{10}\), and 100 samples. Sufficiently close to the magic angle transitions additional minibands appear that give rise to more structure in \(W_{A}(W_{Q})\), see Sec. IV; that additional miniband structure is not shown here. with a velocity \(v_{0}({\bf K}_{W})=\pm t\) that depends on the helicity. The disorder potential \(V_{D}({\bf r})\) is sampled independently at each site from a probability distribution \(P[V]\). In the following we consider two different distributions. To enhance rare region effects we consider a Gaussian distribution with zero mean and standard deviation \(W_{D}\). To suppress rare regions and enhance the critical scaling properties we also consider a binary distribution where the value of the potential is equally likely to be \(\pm W_{D}\). In the absence of the quasiperiodic potential the semimetal phase of this model is unstable to rare region effects that induce a diffusive metal phase at infinitesimal disorder strength. The resulting perturbative transition is avoided and rounded into a crossover. By varying the tails of the distribution \(P[V]\) we can control the probability to generate rare events; removing the tails, as in the binary case, quantitatively suppresses (but does not eliminate) rare region effects [56]. All results shown are for the case of Gaussian disorder unless otherwise specified. 
The quasiperiodic potential is given by \[V_{Q}({\bf r})=W_{Q}\sum_{\mu=x,y,z}\cos(Q_{L}r_{\mu}+\phi_{\mu}) \tag{5}\] where the quasiperiodic wavevector is taken as a rational approximant \(Q_{L}=2\pi F_{n-2}/L\) where the system size is given by the \(n\)th Fibonacci number \(L=F_{n}\) and in the thermodynamic limit \(Q_{L}\to Q=2\pi[2/(\sqrt{5}+1)]^{2}\). The random phases \(\phi_{\mu}\) are randomly sampled between \(0\) and \(2\pi\) as the origin of the quasiperiodic potential is arbitrary. In the absence of the random potential this model has been shown to host several "magic-angle" transitions between Weyl semimetal and diffusive metal phases as a function of increasing \(W_{Q}\). Near the magic-angle transition \(W_{c}\), in the semimetallic phase, the velocity of the Weyl cone vanishes as \[v(W_{Q})\sim|W_{Q}-W_{c}|^{\beta/d}, \tag{6}\] where \(\beta\approx 2\) and \(d=3\) is the spatial dimension [44]. The plane wave Weyl eigenstates delocalize in momentum space and the level statistics become consistent with random matrix theory when one enters the diffusive metallic phase. In the following manuscript we use disorder to round out the critical properties of the quasiperiodic induced transition, which allows us to approach the MAT from a new direction. To characterize the system we numerically compute the density of states (DOS) that is given by \[\rho(E)=\frac{1}{L^{3}}\sum_{i}\delta(E-E_{i}) \tag{7}\] where \(E_{i}\) are the eigenenergies of \(H\). The DOS is computed using the kernel polynomial method (KPM) by expanding it in terms of Chebyshev polynomials to order \(N_{C}\) and evaluating the expansion coefficients with sparse matrix-vector multiplication. For the system sizes of \(L=55,89\) considered here, \(N_{C}\) is the most dominant finite size effect and therefore we try to converge our results with \(N_{C}\). The analytic properties of the density of states are investigated via assuming the DOS is always analytic and Taylor expanding \[\rho(E)=\rho(0)+\frac{1}{2}\rho^{\prime\prime}(0)E^{2}+\ldots \tag{8}\] and we directly compute the second derivative of the DOS with KPM at the Weyl node energy (\(E=0\)) [36]. If the DOS becomes nonanalytic then \(\rho^{\prime\prime}(0)\to\infty\), whereas if the system undergoes a crossover it will remain finite. For a stable Weyl semimetal phase we have \(\rho(0)=0\) and \(\rho^{\prime\prime}(0)=N_{W}/(2\pi^{2}v^{3})\), where \(v\) is the velocity of the Weyl cone and \(N_{W}\) denotes the number of Weyl points in the band structure. Using this relation, an estimate of the velocity in the Weyl semimetal phase \(W_{D}=0\) is shown in Fig. 2. Importantly, this also implies that when \(v\to 0\) the DOS becomes non-analytic as \(\rho^{\prime\prime}(0)\to\infty\) signalling a MAT. Thus, Fig. 2 also demonstrates the existence of three MATs taking place at \(W_{M,1}\approx 0.38t\) into the DM phase, out of the DM phase to a reentrant SM at \(W^{\prime}_{M,1}\approx 0.395t\) and then a transition back to the DM phase at \(W_{M,2}\approx 0.6345t\) for this range of \(W_{Q}\) and \(Q/2\pi=[2/(\sqrt{5}+1)]^{2}\). We note that the reentrant semimetal phase for \(W>W^{\prime}_{M,1}\) occurs by inverting the positive and negative energy bands, which we refer to as an inverted Weyl semimetal phase. 
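To make the model and observable concrete, a rough sketch (not the production KPM code used in this work) that assembles the Hamiltonian of Eqs. (1)-(5) for a very small rational approximant and histograms its spectrum could look as follows; the system size, seed, and binning are illustrative choices only.

```python
import numpy as np

def weyl_hamiltonian(L, W_D, W_Q, t=1.0, seed=0):
    """Eq. (1) on an L^3 cubic lattice (two orbitals per site), periodic boundary
    conditions with a random twist; L should be a Fibonacci number for Eq. (5)."""
    rng = np.random.default_rng(seed)
    sig = [np.array([[0, 1], [1, 0]], complex),
           np.array([[0, -1j], [1j, 0]], complex),
           np.array([[1, 0], [0, -1]], complex)]
    fib = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
    Q_L = 2 * np.pi * fib[fib.index(L) - 2] / L   # rational approximant of Q
    theta = rng.uniform(0, 2 * np.pi, 3)          # boundary twist
    phi = rng.uniform(0, 2 * np.pi, 3)            # random phases of V_Q
    H = np.zeros((2 * L**3, 2 * L**3), complex)

    def idx(x, y, z):
        return (x % L) + L * ((y % L) + L * (z % L))

    for x in range(L):
        for y in range(L):
            for z in range(L):
                r = np.array([x, y, z])
                i = idx(x, y, z)
                V = W_D * rng.standard_normal() + W_Q * np.sum(np.cos(Q_L * r + phi))
                H[2*i:2*i+2, 2*i:2*i+2] += V * np.eye(2)          # Eqs. (2) and (5)
                for mu in range(3):
                    j = idx(*(r + np.eye(3, dtype=int)[mu]))
                    hop = 1j * (t / 2) * np.exp(1j * theta[mu] / L) * sig[mu]
                    H[2*i:2*i+2, 2*j:2*j+2] += hop                 # i t_mu sigma_mu
                    H[2*j:2*j+2, 2*i:2*i+2] += hop.conj().T        # + H.c.
    return H

# crude density of states for a tiny approximant (the paper uses KPM at L = 55, 89)
E = np.linalg.eigvalsh(weyl_hamiltonian(L=8, W_D=0.1, W_Q=0.2))
rho, edges = np.histogram(E, bins=100, density=True)
```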
## III Phase diagram As the Weyl semimetal phase is stable in the presence of a quasiperiodic potential we find it most natural to explain the structure of the phase diagram at the Weyl node energy by adding disorder to the quasiperiodic Weyl semimetal model. From this perspective, we can safely use perturbation theory to determine a new low energy effective model in the quasiperiodic renormalized semimetal phase. We note that for \(W^{\prime}_{M,1}<W_{Q}<W_{M,2}\) the inverted semimetal phase requires a rather high order in perturbation theory to be described. For \(W_{D}=0\) we can evaluate the self energy of the single particle Green function by treating \(W_{Q}\) perturbatively [46], similar to what is done to describe twisted bilayer graphene [49]. Focusing on the present case with \(Q\) close to \(\pi\) we only need to consider internode scattering. This results in a renormalized velocity \(v(W_{Q})\), with a perturbative expression to leading order [46] \[\frac{v(W_{Q})}{v_{0}}\approx\frac{1-2(2-\cos(Q))\alpha^{2}}{1+6\alpha^{2}} \tag{9}\] where the dimensionless coupling constant \(\alpha=W/[2t\sin(Q)]\) and a magic-angle condition occurs where \(v(W_{Q}=W_{\rm MA})=0\). We note that sufficiently high orders in perturbation theory are required to describe the data in Fig. 2. Nonetheless, our numerical results confirm beyond perturbation theory that the quasiperiodic potential produces a magic-angle transition where the velocity vanishes. At the same time, away from the magic-angle transition the quasiperiodic potential carves out a mini Brillouin zone (mBZ), with an effective band structure on an emergent moire lattice that is qualitatively described by perturbation theory (see Ref. [46] for an explicit construction of the band structure along these lines). The band gap in the density of states in Fig. 3(a) for \(W_{D}=0;W_{Q}=0.2t\) demonstrates the stability of the Weyl semimetal phase at low energies and the presence of the mBZ. As we will demonstrate below, our numerical results in a portion of the weakly disordered semimetal phases of the model can thus be interpreted as introducing disorder to a Weyl semimetal that lives on the mBZ with a renormalized velocity \(v(W_{Q})\). At larger quasiperiodic strength, in particular, in the reentrant semimetallic phase with \(W_{Q}\gtrsim 0.5t\) this is modified due to the inversion of the bands and the lack of a true band gap, an example of which is shown for \(W_{D}=0;W_{Q}=0.55t\) in Fig. 3(d). In each case, introducing disorder smoothly fills in these band gaps, pseudogaps, and fine features while rounding out the sharp structure that is due to quasiperiodicity. To determine the location of the AQCP at finite disorder and quasiperiodic potential strength we evaluate \(\rho^{\prime\prime}(0)\) for fixed \(W_{Q}\) as a function of \(W_{D}\) to determine the location of the peak in \(\rho^{\prime\prime}(0)\) as shown in Fig. 3(b,e), which provides an accurate estimate of the AQCP crossover location \(W_{A}(W_{Q})\). Importantly, this data is converged in system size \(L\) and KPM expansion order \(N_{C}\) and there is no divergence of \(\rho^{\prime\prime}(0)\) upon increasing either \(L\) or \(N_{C}\) demonstrating the cross over nature of the AQCP. Doing this across the parameter regime results in the phase diagram shown in Fig. 1. We note that the phase boundary is obtained for a fixed system size \(L=89\) and KPM expansion order \(N_{C}=2^{10}\) and very close to the MAT at small \(W_{D}\) it could be weakly shifted. 
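For orientation, the leading-order expression in Eq. (9) quoted above can be evaluated directly; the short sketch below (our own illustration, using the incommensurate wavevector of Sec. II) locates the coupling where the renormalized velocity first vanishes.

```python
import numpy as np

Q = 2 * np.pi * (2 / (np.sqrt(5) + 1))**2       # incommensurate wavevector of Eq. (5)

def v_ratio(W_Q, t=1.0):
    """Leading-order renormalized velocity v(W_Q)/v_0 of Eq. (9)."""
    alpha = W_Q / (2 * t * np.sin(Q))
    return (1 - 2 * (2 - np.cos(Q)) * alpha**2) / (1 + 6 * alpha**2)

# magic-angle condition v = 0  =>  alpha = 1/sqrt(2*(2 - cos Q))
W_MA = 2 * np.sin(Q) / np.sqrt(2 * (2 - np.cos(Q)))
print(W_MA)   # about 0.58 t at this order, illustrating why higher orders are
              # needed to reproduce the numerical transitions near 0.38 t and 0.63 t
```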
Remarkably, the AQCP smoothly connects from the termination of the small diffusive metal phase due to the first magic-angle transition at \(W_{Q}=W^{\prime}_{M,1}\) to the second magic-angle transition near \(W_{Q}=W_{\text{MA},2}\approx 0.63t\). Comparing the crossover boundary in Fig. 1 with the estimates of the velocity of the disorder free model \(v(W_{Q})\) shown in Fig. 2 demonstrates that the line of avoided transitions \(W_{A}(W_{Q})\) is simply parameterized by the relation \[W_{A}(W_{Q})\propto v(W_{Q}), \tag{10}\] for \(W_{Q}<W_{M,1}\). This relation inside the semimetal phase demonstrates that in the low energy limit the only relevant scale left in the problem is the Weyl cone velocity \(v(W_{Q})\), which we comment on in more detail at the end of this section. In Fig. 1 we compare the line of AQCPs between Gaussian and binary disorder distributions. The distinction between these two distributions is only significant at sufficiently weak \(W_{Q}\). This can be understood as follows: As we increase the quasiperiodic potential strength (with \(W_{D}=0\)) a semimetal miniband forms around zero energy for \(W_{Q}\gtrsim 0.15t\) with a hard gap [e.g. Fig. 3(a)] and a new effective mini Brillouin zone of linear size \((\pi-Q)/a\) (where \(a=1\) denotes the lattice spacing). As a result, an emergent unit cell develops that goes from size \(a\) to \(a_{\text{MB}}=a\frac{2\pi}{(\pi-Q)}\approx 8.5a\). As the magic-angle transition is approached further, lower energy minibands continue to appear [46]. If we then project the Hamiltonian onto the lowest energy miniband, which is separated from the rest of the states by a hard gap, via the projection operator \(\hat{P}_{\text{MB}}=\sum_{E_{n}\in MB}|E_{n}\rangle\langle E_{n}|\) (which sums over energy eigenstates with energies within the miniband), we can then compute Wannier states on the lowest energy miniband. Thanks to the hard gaps and the absence of band topology, these are exponentially localized to Wannier centers [57] labeled by \(\mathbf{R}\) on the moire lattice with the Wannier functions \(W_{\mathbf{R}}(\mathbf{r})\). Applying this unitary operation plus a projection onto the lowest energy miniband maps the disorder potential in the Hamiltonian to \(\sum_{\mathbf{r}}V_{D}(\mathbf{r})\psi_{\mathbf{r}}^{\dagger}\psi_{\mathbf{r}}\rightarrow\sum_{\mathbf{R}}\tilde{V}_{D}(\mathbf{R})\psi_{\mathbf{R}}^{\dagger}\psi_{\mathbf{R}}\) where \(\mathbf{R}\) labels the Wannier centers and \[\tilde{V}_{D}(\mathbf{R})=\sum_{r\in a_{\text{MB}}}V(\mathbf{r})|W_{\mathbf{R}}(\mathbf{r})|^{2} \tag{11}\] is the coarse-grained random potential on the scale of the moire unit cell. Figure 2: **Renormalized velocity and related scales**: Comparison of the renormalized velocity \(v(W_{Q})\) with the location of the avoided transition \(W_{A}(W_{Q})\) and the dependence of the density of states \(A(W_{Q})\), see Eq. (12). The velocity \(v(W_{Q})\) of the renormalized semimetal is obtained from the disorder free limit using \(\rho^{\prime\prime}(0)\propto v^{-3}\) (for system size \(L=144\) and \(N_{C}=2^{10}\) from Ref. [44]), \(A(W_{Q})\) is extracted from the rare region dependence of \(\log\rho(0)\sim-A(W_{Q})/W_{D}^{2}\) (from the data that is converged in \(L\) and \(N_{C}\), see Eq. (12)), and \(W_{A}(W_{Q})\) denotes the line of AQCPs determined from the maximum in \(\rho^{\prime\prime}(0)\) as a function of \(W_{D}\) (obtained from a system size of \(L=89\) and KPM expansion order \(N_{C}=2^{10}\)).
Sufficiently close to each MAT the formation of minibands enriches the picture beyond the relations implied by these data; that requires sufficiently large \(L\) and/or \(N_{C}\) to observe, see Sec. IV. As a result, the sharp distinction between the Gaussian potential, which has large local fluctuations, and the binary distribution, which does not, is lost after coarse graining over this larger unit cell. This conclusion is consistent with the lack of a distinction between Gaussian and binary disorder in the vicinity of each magic-angle transition. Now turning on a finite disorder strength at a fixed value of \(W_{Q}\) (that remains in the semimetal phase), we expect that a non-zero density of states will be induced at the Weyl node energy due to rare-region effects. As shown in Fig. 3(c) and (f) we find that the DOS is converged in \(N_{C}\) and goes like \[\log\rho(0)\sim-\frac{A(W_{Q})}{W_{D}^{2}} \tag{12}\] (for each value \(W_{Q}\) that is in the semimetal phase of the model for \(W_{D}=0\)). From fits to this rare region form, as shown in Figs. 3(c) and (f), we extract \(A(W_{Q})\). At weak disorder and quasiperiodicity strength we find good agreement with the identification \[A(W_{Q})\propto v(W_{Q})^{2} \tag{13}\] as demonstrated in Fig. 2 by comparing to the numerically estimated value of \(v(W_{Q})\) from \(\rho^{\prime\prime}(0)\). However, we do find a distinction at larger quasiperiodic strengths \(W_{Q}\gtrsim 0.5t\), where the Weyl semimetal miniband is no longer isolated from the rest of the states by a hard gap, such as in Fig. 3(d) (this minigap closure occurs near \(W_{Q}\approx 0.5t\) and is not shown), which alters the shape of the cut-off function and energy dramatically. Correspondingly, the prefactor in the DOS \(A(W_{Q})\) does not simply follow \(v(W_{Q})\) in this regime (\(W_{Q}\gtrsim 0.5t\)). The above data and the results in Eqs. (10) and (13) suggest that in the semimetal phase there is only one relevant scale, \(v(W_{Q})\), as described in the following picture. For a Weyl semimetal in the presence of disorder, the low-energy continuum model \(H=v{\bf k}\cdot\sigma+V({\bf x})\) has one dimensionless parameter that controls the physics: \(\alpha_{D}=W_{D}/(v/a)\) for disorder strength \(W_{D}\), velocity \(v\), and cutoff (lattice) scale \(a\). As the length scale is not varied in this problem, only \(v=v(W_{Q})\) varies in the low-energy theory, suggesting \(W_{A}\sim v(W_{Q})\) as we find. In a similar manner, the density of states should be exponentially suppressed by \(\log\rho(0)\sim-1/\alpha_{D}^{2}\sim-v(W_{Q})^{2}/W_{D}^{2}\). This simple single parameter which controls the low-energy theory is consistent with all of the data, allowing us to even discover properties of the low-energy Hamiltonian. It further lends credence to the statement that what we are witnessing is physics occurring due to the Weyl point itself and not other structure imposed by, say, the myriad of gaps opened by subtle tight-binding model effects or by the fine structure emergent at higher energies due to quasiperiodicity. Figure 3: **Evolution of the density of states**: \(\rho(E)\) (a), (d) (with \(L=89\) and \(N_{C}=2^{13}\)); \(\rho^{\prime\prime}(0)\) (b), (e); and \(\rho(0)\) (c), (f) as a function of \(W_{D}\) and \(W_{Q}\) for various values of the KPM expansion order \(N_{C}\), \(L=55\) (open symbols), and \(L=89\) (closed symbols). 
The values of the quasiperiodic potential are: \(W_{Q}=0.2t\) in the semimetal phase (Top row) and \(W_{Q}=0.55t\) in the inverted semimetal phase (Bottom row). These data are averaged over 1000 samples for \(N_{C}=2^{9},2^{10},2^{11}\) and 5000 samples for \(N_{C}=2^{12},2^{13}\). Straight lines in (c) and (f) are fits to the rare region form in Eq. (12). ## IV Approaching the MAT along the LINE of AQCPs Having identified the cross-over boundaries marked by the line of AQCPs, defined by the disorder strength \(W_{D}=W_{A}(W_{Q})\) that terminates at the magic-angles i.e. \(W_{A}(W_{\rm MA})=0\), we now study the critical properties of the magic-angle transition. The inclusion of disorder allows us to approach the magic-angle transition from the line of avoided transitions, which are effectively parameterizing the path through parameter space of maximal correlation length (as a function of \(W_{D}\)) for each value of \(W_{Q}\). For concreteness we focus on the second magic-angle that occurs for \(W_{\rm MA,2}/t\approx 0.63\) and approach it from below. Fig. 4(a) and (b) shows the energy dependence of the density of states along the line of avoided transitions terminating at \(W=W_{\rm MA,2}\). At each avoided critical point the density of states develops the scaling at finite energy [\(|E|>|E^{*}|\), where \(E^{*}\) marks the cross over energy set by the finite value of \(\rho(0)\)] \[\rho(|E|>|E^{*}|)\sim|E| \tag{14}\] where this power law is consistent with the one-loop renormalization group results that produces a dynamic exponent \(z=3/2\)[32]. We note that this energy dependence is seen at each avoided transition (i.e. maximum in \(\rho^{\prime\prime}(0)\) versus \(W_{D}\)) as also seen in Fig. 3(a) and (d). However, at the AQCP and sufficiently low energy, the density of states is non-zero \(\rho(0)>0\) (as exemplified in Figs. 3(c,f) and Eq. (12)), which rounds out this power law. At the MAT on the other hand (with \(W_{D}=0\)), \(\rho(0)\) is non-zero as seen in Fig. 4(b). One can clearly see that the slope of the linear part of the density of states that follows Eq. (14) is monotonically increasing as we approach the MAT, which is directly reflected in the behavior of \(\rho(E)\) and \(\rho^{\prime\prime}(0)\) at the lowest energies, that we now come to. First, we recall that \(\rho^{\prime\prime}(0)\) diverges at the magic-angle transition [44], by approaching this singular behavior from finite disorder strength, it allows us to approach the magic-angle transition in a unique way to probe the critical scaling properties. We first focus on \(\rho^{\prime\prime}(0)\) along \(W_{A}(W_{Q})\) at large enough KPM expansion order to resolve the second miniband opening near \(W_{Q}\approx 0.62t\) (\(W_{D}=0\)) in Fig. 5(a). Before the second miniband opens (\(W_{Q}<0.62t\)) we see a single clear maximum in \(\rho^{\prime\prime}(0)\). In contrast after the second miniband has opened (\(W_{Q}>0.62t\)), we see the AQCP becomes significantly sharper, leaving behind a second peak. While we are able to converge both of these peaks in \(N_{C}\) for \(W_{Q}=0.625t\) as shown Fig. 5(b) this is not possible as we get closer to the MAT. As an example, we show \(W_{Q}=0.63t\) in Fig. 5(c), while we are able to converge the weaker peak at larger \(W_{D}\) we cannot converge the sharper weak at weaker disorder strength. In order to associate these two peaks with AQCPs in the first and second miniband we show the evolution of \(\rho(E)\) at fixed \(W_{Q}=0.63t\) as a function of \(W_{D}\) in Fig. 6. 
Importantly, we find that at each peak in \(\rho^{\prime\prime}(0)\), \(\rho(E)\) follows the expected AQCP "scaling" form in Eq. (14). As the value of \(\rho^{\prime\prime}(0)\) is significantly larger and not fully converged in \(N_{C}\) for the second miniband we now turn to how this is beginning to diverge as we come to the MAT. Focusing on \(\rho^{\prime\prime}(0)\) along the avoided line we plot it versus \(W_{A}(W_{Q})\) as we approach the MAT in Fig. 7(a) for both the dominant peak and the subleading peak while the inset shows the location of each maximum in \(\rho^{\prime\prime}(0)\). As we previously described, the dominant peak we are able to converge in \(N_{C}\) when we are far enough away Figure 4: **Density of states along the line of AQCPs that is defined by \(W_{A}(W_{Q})\)**. We show \(\rho(E)\) for \(L=89\) at larger \(W_{D}=W_{A}(W_{Q})\) in (a) (with \(N_{C}=2^{13}\)) and closer to the transition at a much lower energy scale in (b) (with \(N_{C}=2^{14}\)) as \(W_{A}\to 0\), (recall \(W_{A}(W_{Q})\) is shown in the phase diagram in Fig. 1). The data is shown starting from \(W_{Q}=0.55t\) and terminating at the magic-angle transition at \(W_{Q}^{c}=0.6345t\) for \(W_{D}=0\). The finite but low energy dependence along the line of AQCPs is consistent with the expected behaviour in Eq. (14). However, as we go from panel (a) to (b) we can see that a consequence of the second miniband opening up for \(W_{Q}\approx 0.62t\) realizes a dramatically renormalized \(\rho^{\prime\prime}(0)\) seen through the shape near zero energy until we hit the MAT and a finite density of states is generated (\(W_{D}=0\)). from the MAT. This regime, which is controlled by the first miniband, is well described by the partial power law \[\rho^{\prime\prime}(0)\sim\frac{1}{(W_{A})^{2}}\quad\text{in miniband 1}, \tag{15}\] and this also describes the well converged peak (that is also associated with miniband one). We pause here, to note that if this power law where to hold all the way to \(W_{A}=0\) we would in fact find our results to not be internally consistent. To understand why consider the following: we established in Eq. (10) that \(W_{A}\sim v(W_{Q})\), but this implies that the density of states diverges like \(\rho^{\prime\prime}(0)\sim 1/v(W_{Q})^{2}\), a slower divergence than in the clean limit \(W_{D}=0\) where \(\rho^{\prime\prime}(0)\sim 1/v(W_{Q})^{3}\), a contradiction. This issue is alleviated, however, by considering how the second miniband enhances \(\rho^{\prime\prime}(0)\). The nature in which \(\rho^{\prime\prime}(0)\) is increasing in the second miniband on the other hand is stronger, where our limited numerical data yields the partial power law \[\rho^{\prime\prime}(0)\sim\frac{1}{(W_{A})^{2.5}}\quad\text{in miniband 2}. \tag{16}\] Importantly, our results are now internally consistent with the limit of \(W_{D}=0\). To look at this divergence in a different way we consider \(\rho^{\prime\prime}(0)\) as a function of \(N_{C}\) along \(W_{A}\) from where we can, to where we can't converge \(\rho^{\prime\prime}(0)\). Precisely at the MAT \(W_{D}=0,W_{Q}=0.6345t\) we find \(\rho^{\prime\prime}(0)\) diverges with \(N_{C}\) as \(\rho^{\prime\prime}(0)\sim(N_{C})^{2.5}\) for the largest \(N_{C}\) that we have accessed. 
This brings us to argue that as we approach the MAT each miniband produces a partial power law like divergence of \(\rho^{\prime\prime}(0)\) with \(1/(W_{A})\) that is described by \[\rho^{\prime\prime}(0)\sim\frac{1}{(W_{A})^{\beta_{n}}} \tag{17}\] where \(\beta_{n}\) depends on the \(n\)th miniband. As two of us conjectured in Ref. [46], there should be an infinite sequence of minibands opening up as we get exponentially closer to the MAT that each correspond to a given order in perturbation theory that is dictated by the irrational nature of the incommensurate wavevector \(Q\) in Eq. (5). Here it is the sequence \(F_{3n}/2\), where \(F_{m}\) are Fibonacci numbers; the sequence represents the denominators of the continued fraction of \(\sqrt{5}\). As a result, we conjecture that along the line of AQCPs there is an infinite sequence of \(\beta_{n}\)'s, obtaining \(\beta_{3}\) in our problem though remains a challenging computational task. Figure 5: **Evolution of the AQCP on approach to the magic-angle transition along \(W_{A}(W_{Q})\).** (a) Focusing on \(\rho^{\prime\prime}(0)\) on approach to the transition for \(N_{C}=2^{13}\) we see that the original peak that we associate to the AQCP splits off leaving behind a much weaker second peak at larger \(W_{D}\) that can be associated with an approximate AQCP due to the parts of the band outside of the second miniband. (b) We study the evolution of the peaks for \(W_{Q}=0.625t\) as a function of the expansion order demonstrating a converged AQCP peak at this system size (\(L=89\)). However, for \(W_{Q}=0.63t\) as we get closer to the MAT we are unable to converge the peak, see also Fig. 7(a). Figure 6: **Demonstration of AQCPs in the first and second miniband for \(W_{Q}/t=0.63\)**. We show the density of states \(\rho(E)\) as a function of energy \(E\) for \(L=89\) and \(N_{C}=2^{13}\) at the first maximum in \(\rho^{\prime\prime}(0)\) in black depicting the scaling \(\rho(E)\sim|E|\) in the second miniband. The location of the second weaker peak in \(\rho^{\prime\prime}(0)\) is marked in red that depicts the scaling \(\rho(E)\sim|E|\) in the first miniband. For reference, see Fig. 5(a) for the structure of \(\rho^{\prime\prime}(0)\). At each peak in \(\rho^{\prime\prime}(0)\) we find the low energy dependence follows the AQCP form in Eq. (14) allowing us to identify the signatures of two AQCPs as a function of \(W_{D}\) sufficiently close to the MAT. ## V Discussion In this manuscript, we made a direct link between disorder-driven avoided quantum criticality and the semimetal-to-diffusive metal magic-angle phase transitions tuned by quasiperiodicity. By viewing the problem as adding disorder to the quasiperiodic model, we constructed a complete phase diagram. The quasiperiodic potential renormalizes the Weyl semimetal parameters and away from the magic-angle transitions the Weyl semimetal survives. Adding disorder to this system fills in the band gaps and pseudogaps due to quasiperiodicity, introduces a finite density of states at the Weyl node due to rare regions of the random potential, and rounds out the magic-angle transition into a crossover. The line of crossovers is parameterized by the Weyl cone velocity renormalized by the quasiperiodic potential. Last, the divergence of the Weyl velocity at the magic-angle transition was computed accurately by approaching the transition along the lines of avoided critical points. 
We expect that the disordered and quasiperiodic Weyl semimetal model studied here can be realized in future ultracold-atom experiments that use 3D spin-orbit coupling to realize a Weyl semimetal phase. Disorder can be introduced using several approaches (e.g., speckle patterns [58], programmable potentials [59; 60], digital mirror devices [61]), while quasiperiodicity can be achieved through a second optical lattice incommensurate with the first. The phase transition and crossovers can be measured through time-of-flight imaging of wave packet dynamics [62] as well as through the spectral function measured using radiofrequency spectroscopy [63; 64]. It will be exciting to see if the transition and its connection to avoided quantum criticality can be exposed in future experiments. Alternatively, circuit quantum electrodynamics setups have been proposed by one of us to also be able to realize this phenomenon and to observe its effect in spectroscopic transport measurements of the junctions [65; 66].

Future theoretical work will need to utilize a graphics processing unit (GPU) implementation of the KPM to make further progress. Seeing further minibands open up as we get exponentially close to the magic-angle transition requires systematically increasing the system size beyond the \(L=89\) considered here, which reveals the second miniband at these expansion orders. In fact, the sequence of perturbation-theory orders needed to see the next gap open up [46] gives us a clue for this: if we take \(L=3,5\), the first miniband has been opened up, but the unit cell size is the entire size of the system (hence it cannot be disordered); for \(L=13,21\), the second miniband has formed and the first miniband can begin to be disordered. For \(L=55,89\) the second miniband can be disordered, while the third miniband has only formed as one unit cell. (Note: we skip even numbers since this degenerate case puts the new miniband gap precisely at zero energy, and no new miniband has yet been formed.) Following this pattern, in order to begin to see disorder effects in the third miniband, we would need to access \(L=233,377\) (which are accessible with GPU implementations of the KPM). While continuing this process indefinitely will quickly lead to prohibitively large system sizes, this may also be improved by a renormalization scheme for \(W_{D}/t\ll 1\) that focuses computational effort on the lowest available miniband, which we leave for future work.

Figure 7: **Divergence of \(\rho^{\prime\prime}(0)\) on approach to the MAT along the line of AQCPs.** (a) The solid data points represent peaks in \(\rho^{\prime\prime}(0)\) that are converged in \(N_{C}\) and follow \(\rho^{\prime\prime}(0)\sim 1/W_{A}^{2}\) (blue solid line is a fit). The open symbols are not yet converged [though they are close to converged, as depicted in (b)], and we show their \(N_{C}\) dependence, with a fit to the largest \(N_{C}=2^{16}\) (grey line) yielding \(\rho^{\prime\prime}(0)\sim 1/W_{A}^{2.5}\). (Inset) The locations of the leading AQCPs \(W_{A}(W_{Q})\) are shown as black symbols, while the location of the subdominant peak is shown as magenta circles. (b) Dependence of the peak in \(\rho^{\prime\prime}(0)\) on the KPM expansion order \(N_{C}\) as we approach the MAT along the line of AQCPs. At the MAT (\(W_{D}=0\)) we find this diverges like \(\rho^{\prime\prime}(0)\sim(N_{C})^{2.5}\) (fit to the largest three expansion orders shown as a black line). All data here are obtained for a system size \(L=89\).
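The miniband bookkeeping discussed above (orders \(F_{3n}/2\), together with the odd Fibonacci system sizes \(L=3,5,13,21,55,89,233,377,\dots\)) can be generated programmatically. The short sketch below is only an illustration of that counting; the grouping into "generations" reflects our reading of the discussion, not output of the KPM code.

```python
from itertools import islice

def fibonacci():
    """Yield the Fibonacci numbers 1, 1, 2, 3, 5, 8, ..."""
    a, b = 1, 1
    while True:
        yield a
        a, b = b, a + b

fib = list(islice(fibonacci(), 20))

# Orders at which successive minibands open: F_{3n}/2 = 1, 4, 17, 72, ...
# (the denominators of the continued-fraction convergents of sqrt(5)).
miniband_orders = [fib[3 * n - 1] // 2 for n in range(1, 5)]
print("miniband orders F_{3n}/2:", miniband_orders)      # [1, 4, 17, 72]

# Odd Fibonacci numbers give the usable system sizes (the even ones are skipped
# because they put the new miniband gap exactly at zero energy).
system_sizes = [f for f in fib if f % 2 == 1 and f >= 3]
print("candidate system sizes L:", system_sizes[:8])     # [3, 5, 13, 21, 55, 89, 233, 377]

# Pairing them as in the text: (3, 5) -> miniband 1 forms; (13, 21) -> miniband 2
# forms and miniband 1 can be disordered; (55, 89) -> miniband 2 can be disordered;
# (233, 377) -> disorder effects in miniband 3 become accessible.
for n, (l1, l2) in enumerate(zip(system_sizes[0:8:2], system_sizes[1:8:2]), start=1):
    print(f"generation {n}: L = {l1}, {l2}")
```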
## Acknowledgments We thank Sankar Das Sarma, Sarang Gopalakrishnan, and Elio Konig for useful discussions. J.H.P. is partially supported by NSF CAREER Grant No. DMR-1941569 and the Alfred P. Sloan Foundation through a Sloan Research Fellowship. Part of this work was performed at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452 (J.H.W, J.H.P.) as well as the Kavli Institute of Theoretical Physics that is supported in part by the National Science Foundation under Grants No. NSF PHY-1748958 and PHY-2309135 (J.H.W, J.H.P.). D.A.H. was supported in part by NSF QLCI grant OMA-2120757. The authors acknowledge the following research computing resources: the Beowulf cluster at the Department of Physics and Astronomy of Rutgers University, and the Amarel cluster from the Office of Advanced Research Computing (OARC) at Rutgers, The State University of New Jersey ([https://it.rutgers.edu/oarc](https://it.rutgers.edu/oarc)).
2306.15014
Searches for supersymmetric particles with prompt decays with the ATLAS detector
Supersymmetry (SUSY) provides elegant solutions to several problems in the Standard Model and searches for SUSY particles are an important component of the LHC physics program. The latest results from electroweak and strong SUSY searches are reported here, conducted by the ATLAS experiment at the CERN LHC. The searches target multiple final states and different assumptions about the decay mode of the produced SUSY particles, including searches for both R-parity conserving models and R-parity violating models, and their possible connections with the recent observation of the flavour and muon g-2 anomalies. The talk will also highlight the employment of novel analysis techniques, including advanced machine learning techniques and special object reconstruction, that are necessary for many of these analyses to extend the sensitivity reach to challenging regions of the phase space.
Francesco Giuseppe Gravili
2023-06-26T18:55:37Z
http://arxiv.org/abs/2306.15014v1
# Searches for supersymmetric particles with prompt decays with the ATLAS detector

###### Abstract

Supersymmetry (SUSY) provides elegant solutions to several problems in the Standard Model and searches for SUSY particles are an important component of the LHC physics program. The latest results from electroweak and strong SUSY searches are reported here, conducted by the ATLAS experiment at the CERN LHC. The searches target multiple final states and different assumptions about the decay mode of the produced SUSY particles, including searches for both R-parity conserving models and R-parity violating models, and their possible connections with the recent observation of the flavour and muon _g-2_ anomalies. The talk will also highlight the employment of novel analysis techniques, including advanced machine learning techniques and special object reconstruction, that are necessary for many of these analyses to extend the sensitivity reach to challenging regions of the phase space.

PRESENTED AT DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects, Michigan State University, USA, 27-31 March 2023. Copyright 2023 CERN for the benefit of the ATLAS Collaboration. CC-BY-4.0 license.

## 1 Introduction

The Standard Model (SM) of fundamental interactions is the underlying theory of elementary particles and their interactions. Despite the many experimental results precisely confirming the predictions of this theory, there are still some open questions within the model. SUperSYmmetry (SUSY) is a Beyond Standard Model (BSM) physics theory, predicting the existence of fermionic (bosonic) supersymmetric partners for the bosons (fermions) in the SM, differing by half a unit of spin. A new quantum number, _R_-parity, is introduced: \(R=(-1)^{3(B-L)+2s}\), with \(B\) and \(L\) the baryon and lepton numbers and \(s\) the spin of the particle. Depending on whether this quantum number is conserved or violated, two scenarios are introduced: _R_-parity-violating (RPV) and _R_-parity-conserving (RPC) models. The superpartners of the SM Higgs (_higgsinos_) and electroweak (EWK) gauge bosons (collectively referred to as _electroweakinos_) mix to form _chargino_ (\(\tilde{\chi}^{\pm}_{i}\), \(i=1,2\)) and _neutralino_ (\(\tilde{\chi}^{0}_{j}\), \(j=1,\ldots,4\)) mass eigenstates. The \(\tilde{\chi}^{0}_{1}\) is usually assumed to be the lightest supersymmetric particle (LSP) in the SUSY decay chains. The latest results from ATLAS SUSY searches are highlighted, both for _R_-parity-conserving and _R_-parity-violating models. Several final states are targeted, in the context of SUSY prompt decays, i.e. \(c\tau<{\cal O}(1)\) mm. Searches exploit the full LHC Run 2 proton-proton collision dataset collected with the ATLAS [1] detector at the CERN LHC, at \(\sqrt{s}=13\) TeV and corresponding to an integrated luminosity of 139 fb\({}^{-1}\).

## 2 _R_-parity-violating searches

A BSM search for pair production of supersymmetric particles with RPV decays into final states with high jet multiplicity, at least one isolated light lepton and either zero or at least three _b_-tagged jets is presented [2]. The RPV analysis models involve either a baryon-number-violating \(\lambda^{\prime\prime}_{323}\) coupling or a lepton-number-violating \(\lambda^{\prime}\) coupling. \(\lambda^{\prime\prime}_{323}\) is assumed to be dominant, and similar final states apply to \(\lambda^{\prime\prime}_{313}\) as well.
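The R-parity assignment introduced above is what separates the RPC and RPV scenarios discussed here. As a quick numerical illustration of \(R=(-1)^{3(B-L)+2s}\), the short check below evaluates it for a few states; the particle list is chosen by us purely for illustration.

```python
from fractions import Fraction

def r_parity(baryon, lepton, spin):
    """R = (-1)^(3(B-L)+2s); returns +1 for SM particles, -1 for superpartners."""
    exponent = 3 * (Fraction(baryon) - Fraction(lepton)) + 2 * Fraction(spin)
    assert exponent.denominator == 1, "3(B-L)+2s must be an integer"
    return -1 if exponent % 2 else +1

examples = {
    "quark      (B=1/3, L=0, s=1/2)": ("1/3", 0, "1/2"),
    "squark     (B=1/3, L=0, s=0)  ": ("1/3", 0, 0),
    "electron   (B=0,   L=1, s=1/2)": (0, 1, "1/2"),
    "neutralino (B=0,   L=0, s=1/2)": (0, 0, "1/2"),
    "gluon      (B=0,   L=0, s=1)  ": (0, 0, 1),
}
for name, quantum_numbers in examples.items():
    print(name, "->", r_parity(*quantum_numbers))
# SM particles come out +1; superpartners (squark, neutralino) come out -1.
```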
With this choice of model parameters, the final-state signatures include \(\tilde{\chi}^{0}_{1/2}\to tbs\) and \(\tilde{\chi}^{\pm}_{1}\to bbs\) with a Branching Ratio (BR) of 100%, and \(\tilde{\chi}^{0}_{1}\to q\bar{q}\ell/\nu\) with an equal probability to produce any of the four first- and second-generation leptons. Events are split into two categories according to their lepton content: two leptons with the same electric charge (SS) or a single lepton. Due to the presence of many jets in the final states, a multi-bin fit is performed in each lepton category, and a key variable, based on a machine-learning discriminant, is introduced to improve the sensitivity. No significant excess is observed over SM expectations, and results are interpreted in the framework of simplified models. In the model with \(\tilde{g}\to t\bar{t}\tilde{\chi}^{0}_{1}\to t\bar{t}tbs\), gluino masses up to 2.38 TeV are excluded at 95% confidence level (CL). Top squark masses up to 1.36 TeV are excluded in a model with direct stop production and RPV decays of the LSP. Concerning the direct production of electroweakinos, higgsino (wino) masses between 200 (197) GeV and 320 (365) GeV are excluded. The introduction of a bilinear lepton-number-violating term makes it possible to test the sensitivity to the direct pair production of light, nearly mass-degenerate higgsinos, \(\tilde{\chi}^{0}_{2}\), \(\tilde{\chi}^{\pm}_{1}\) and \(\tilde{\chi}^{0}_{1}\). Mass splittings are below 2 GeV. The dominant production processes feature same-sign dilepton and trilepton final states. The inclusive production and all possible allowed higgsino decays are considered in this analysis [3]. Observed data are compatible with SM predictions, and limits are set on the parameters of this scenario: after a statistical combination of the two orthogonal signal regions (SRs), one for each final state, mass-degenerate higgsinos are excluded up to masses of 440 GeV. These are the first experimental constraints on bilinear RPV (bRPV) models with degenerate higgsino masses.

## 3 \(R\)-parity-conserving searches: _Strong_ sector

A search for new phenomena in final states with one or more hadronically decaying \(\tau\)-leptons, \(b\)-jets and missing transverse momentum is presented [4]. This signature is sensitive to models in which the new particles have preferential decay modes into third-generation SM particles, and to leptoquarks (LQ). In the latter case, the analysis is optimized for scalar LQ, while an additional interpretation is provided for vector LQ. The analysis covers the single-\(\tau\) and di-\(\tau\) channels separately. Hadronically decaying \(\tau\)-leptons are distinguished from quark- and gluon-initiated jets using a recurrent neural network. A combination of high-level discriminating variables, as well as tracking and calorimeter measurements, is given as input. In the case of the supersymmetric model, masses up to 1.4 TeV are excluded for top squarks decaying via \(\tilde{\tau}\) sleptons into nearly massless gravitinos \(\tilde{G}\), across a wide range of \(\tilde{\tau}\) masses. In the case of _up_-type and _down_-type scalar LQ, masses up to about 1.25 TeV are excluded. On the other hand, for vector LQ, masses up to about 1.8 TeV are excluded. Another key search involves the direct pair production of gluinos decaying via off-shell third-generation squarks into the lightest neutralino [5].
The final state signature is multiple \(b\)-jets and high missing transverse momentum; potentially, additional jets and/or an isolated electron or muon. Three different benchmark simplified models scenarios are introduced: the first two, referred to as \(Gtt\) and \(Gbb\), feature exclusively gluino decays to the LSP via off-shell top or bottom quarks. The last one, referred to as \(Gtb\), takes into account different branching ratios for the \(t\bar{t}\), \(b\bar{b}\), \(t\bar{b}\) and \(b\bar{t}\) decays. Two alternative methodologies are used to define SRs: for \(Gtb\) models, a standard cut-and-count approach is used, well suited to subsequent reinterpretation of the results. For \(Gtt\) and \(Gbb\) models, a neural network methodology classifies events into 4 different output scores (\(Gtt\) or \(Gbb\) signal event, \(t\bar{t}\) or \(Z\)+jets background event), exploiting correlations between the input discriminating variables to maximise the exclusion power. No significant excess over the expected SM predictions is observed, and exclusion limits are set on the SUSY particles involved in the models: for the exclusively \(Gtt\) and \(Gbb\) gluino decays, masses below 2.44 and 2.35 TeV are excluded at 95% CL for massless neutralinos, respectively. Other exclusion limits on gluino masses are coming from a general gauge mediated (GGM) analysis, featuring many jets in the final states, in combination with a highly energetic photon and a gravitino. It is considered as the LSP, coming from the decay of a next-to-lightest SUSY particle (NLSP), typically the lightest neutralino [6]. The decay topologies \(\gamma/Z\) and \(\gamma/h\) are targeted in the analysis, with the first ATLAS Run 2 results for the latter one. Good agreement is found between observed data and expected SM backgrounds in all the SRs, and pair-produced gluinos with masses up to 2.2 TeV are excluded for most of the NLSP masses investigated. ## 4 \(R\)-parity-conserving searches: _Ewk_ sector In the context of gauge mediated supersymmetry models (GMSM), one of the latest ATLAS result is about the direct pair production of higgsinos, decaying into a light gravitino either via a Higgs or \(Z\) boson [7]. The final state features a photon pair coming from the Higgs boson, a \(b\bar{b}\) pair coming from the other Higgs or \(Z\) boson and missing transverse momentum associated with the two gravitinos. Events are required to pass diphoton triggers and three different SRs are defined. This choice allows to gain sensitivity to different mass hypotheses and decay modes, differing in the requirements on the invariant mass of the \(b\bar{b}\) pair and on the missing transverse momentum. The first two SRs target small higgsino mass ranges, with the invariant mass requirement being consistent to the Higgs/\(Z\) boson mass. The last SR is designed for higher mass higgsino decays, consequently demanding high missing transverse momentum. SR observed yields are found to be consistent with SM expectations, and exclusion limits are set on pure-higgsino branching ratio \(BR(\tilde{\chi}\to h\tilde{G})\) against the higgsino mass, assuming the two aforementioned decay topologies. This analysis fills the gap left by previous analysis signatures in that phase space. 
A statistical combination of the SRs is performed to derive limits on cross section for higgsino pair production: cross sections above 1 pb are excluded at 95% CL for masses higher than 150 GeV, and the theoretical prediction for the pure higgsino cross section is excluded at 95% CL for neutralino masses below 320 GeV. In terms of complementary analyses, the search for electroweak production of chargino pairs with decays into a \(W\) boson (both leptonic and hadronic consequent decays are allowed) and the lightest neutralino is presented [8]. Single lepton triggers are used to identify candidate events. The requirement of at least one _large-Radius_ jet allows to probe boosted \(W\) boson hadronic decays. Three SRs are defined, using the transverse mass to target regions sensitive to the increasing mass difference between the lightest chargino and neutralino, making them mutually exclusive. No significant deviation from the SM expectations are observed in any of the SRs, and chargino masses between 260 and 520 GeV are excluded for massless neutralino. Previous ATLAS searches targeted the low- and high-mass areas, while the current one covers the intermediate region. Concerning sleptons, the direct pair production of electroweakinos decaying via intermediate \(\tilde{\tau}\) sleptons into final states with hadronically decaying \(\tau\)-leptons is presented [9]. Following a standard cut-and-count based approach, using kinematic variables with good signal-to-background separation, exclusion limits are extended for high \(\tilde{\chi}_{1}^{\pm}/\tilde{\chi}_{2}^{0}\) degenerate masses up to 1160 GeV for massless lightest neutralino. Previous results are improved by 340-400 GeV. Finally, a general overview of SUSY results for the direct \(\tilde{\mu}\) pair production is presented. Interesting regions of phase space are highlighted, consistent with _g-2_ anomaly under different assumptions of SUSY parameters, as well as possible future strategies in order to tackle the \(\tilde{\mu}\)-corridor [10]. ## 5 Summary The latest ATLAS searches for supersymmetric particles with prompt decays are described, both for RPV and RPC scenarios. The searches used proton-proton collision data at \(\sqrt{s}=13\) TeV, corresponding to an integrated luminosity of 139 fb\({}^{-1}\). Observed data are found to be in agreement with SM background expectations. Results are interpreted in terms of exclusion limits on the masses of the particles considered or on the branching ratios associated with the decays. Analyses are employing new techniques, with machine learning algorithms increasingly being used. Sensitivity is greatly improved, or set for the first time, with respect to the previous published results in all the considered SUSY scenarios.
2307.02641
Active Class Selection for Few-Shot Class-Incremental Learning
For real-world applications, robots will need to continually learn in their environments through limited interactions with their users. Toward this, previous works in few-shot class incremental learning (FSCIL) and active class selection (ACS) have achieved promising results but were tested in constrained setups. Therefore, in this paper, we combine ideas from FSCIL and ACS to develop a novel framework that can allow an autonomous agent to continually learn new objects by asking its users to label only a few of the most informative objects in the environment. To this end, we build on a state-of-the-art (SOTA) FSCIL model and extend it with techniques from ACS literature. We term this model Few-shot Incremental Active class SeleCtiOn (FIASco). We further integrate a potential field-based navigation technique with our model to develop a complete framework that can allow an agent to process and reason on its sensory data through the FIASco model, navigate towards the most informative object in the environment, gather data about the object through its sensors and incrementally update the FIASco model. Experimental results on a simulated agent and a real robot show the significance of our approach for long-term real-world robotics applications.
Christopher McClurg, Ali Ayub, Harsh Tyagi, Sarah M. Rajtmajer, Alan R. Wagner
2023-07-05T20:16:57Z
http://arxiv.org/abs/2307.02641v1
# Active Class Selection for Few-Shot ###### Abstract For real-world applications, robots will need to continually learn in their environments through limited interactions with their users. Toward this, previous works in few-shot class incremental learning (FSCIL) and active class selection (ACS) have achieved promising results but were tested in constrained setups. Therefore, in this paper, we combine ideas from FSCIL and ACS to develop a novel framework that can allow an autonomous agent to continually learn new objects by asking its users to label only a few of the most informative objects in the environment. To this end, we build on a state-of-the-art (SOTA) FSCIL model and extend it with techniques from ACS literature. We term this model Few-shot Incremental Active class SeleCiOn (FIASco). We further integrate a potential field-based navigation technique with our model to develop a complete framework that can allow an agent to process and reason on its sensory data through the FIASco model, navigate towards the most informative object in the environment, gather data about the object through its sensors and incrementally update the FIASco model. Experimental results on a simulated agent and a real robot show the significance of our approach for long-term real-world robotics applications. ## 1 Introduction A primary challenge faced by robots deployed in the real world is continual adaptation to dynamic environments. Central to this challenge is object recognition (Ayub and Wagner, 2020), a task typically requiring labeled examples. In this work, we address the problem of parsimonious object labelling wherein a robot may request labels for a small number of objects about which it knows least. In recent years, several works have been directed toward the problem of Few-Shot Class Incremental Learning (FSCIL) (Tao et al., 2020; Ayub and Wagner, 2020) to develop models of incremental object learning that can learn from limited training data for each object class. The literature has made significant progress toward developing robots that can continually learn new objects from limited training data while preserving knowledge of previous objects. However, existing methods make strong assumptions about the training data that are rarely true in the real world. For example, FSCIL assumes that in each increment the robot will receive a fully labeled image dataset for the object classes in that increment, and the robot will not receive more data for these classes again (Tao et al., 2020; Ayub and Wagner, 2020; d). In real world environments, however, robots will most likely encounter many unlabeled objects in their environment, and they will have to direct their learning toward a smaller subset of those unknown objects. Active learning is a subfield of machine learning that focuses on improving the learning efficiency of models by selectively seeking labels from within a large unlabeled data pool (Settles, 2009; Ayub and Fendley, 2022). Related to active learning is active class selection (ACS) in which a model seeks labels for specific object classes (Lomasky et al., 2007). ACS can allow autonomous robots operating in real-world environments to focus their learning objects about which they know least. Most ACS models, however, have been designed for batch learning, i.e., they require all the previous training data to be available when learning in an increment (Lomasky et al., 2007). 
Further, both active learning and ACS techniques have previously been tested on static datasets rather than with real agents/robots (Lomasky et al., 2007; Yoo and Kweon, 2019; Siddiqui et al., 2020). In this paper, we combine ideas from ACS and FSCIL to develop a framework that can allow an autonomous agent roaming in its environment to continually adapt by learning about the most informative objects through interaction with its human users. Toward this, we build on a state-of-the-art (SOTA) FSCIL model and extend it with techniques from ACS literature. We term this model Few-shot Incremental Active class SeleCiOn (FIASco). We further integrate a potential field-based navigation technique with our model to develop a complete framework that can allow an agent to process and reason about its sensory data, navigate towards the most informative object in the environment, gather the data for the object through its sensors and incrementally update the FIASco model. We perform extensive evaluations of our approach in a simulated Minecraft environment and with a real robot in a laboratory setting. The main contributions of the paper are as follows: (1) We develop a novel framework extending FSCIL techniques with ideas from ACS and integrating it with autonomous agents. (2) Our experiments on a simulated and a real autonomous agent demonstrate the effectiveness and applicability of our framework for the long-term deployment of robots in real-world environments. Our code is available at [https://github.com/chrismcclurg/FSCIL-ACS](https://github.com/chrismcclurg/FSCIL-ACS). ## 2 Background **Class-incremental learning (CIL)** considers the problem where labeled data is provided to the learner in increments of full classes. When applied to neural networks, CIL results in catastrophic forgetting, where the model forgets the previously learned classes and classification accuracy erodes (Kirkpatrick et al., 2017). A limitation of recent CIL methods is the reliance on storing a portion of the data from prior classes when learning new classes (Rebuffi et al., 2017; Castro et al., 2018; Wu et al., 2019). These methods, often storing high-dimensional images, are not practical in situations when the system has limited memory. To avoid storing real images, some CIL methods use a regularization loss term to prevent the weights of the model from changing drastically when learning new classes (Kirkpatrick et al., 2017; Li and Hoiem, 2018; Dhar et al., 2019). Other CIL methods regenerate images of the old classes with generative models (Ostapenko et al., 2019; Ayub and Wagner, 2021). In a preliminary experiment, we compare the performance of a recent clustering approach (CBCL-PR) (Ayub and Wagner, 2020;a;b) against three popular CIL algorithms in a few-shot class-incremental learning setting: iCARL, PODNet, and DER. **i**CaRL (Rebuffi et al., 2017) stores exemplars in memory, uses a regularization term called distillation-loss (Hinton et al., 2015), and Nearest Class Mean (NCM) to classify data (Mensink et al., 2013; Dehghan et al., 2019). PODnet (Douillard et al., 2020) stores proxy vectors in memory, uses a spatially-based distillation-loss, and also uses an NCM classifier. DER (Yan et al., 2021) uses a two-stage approach that freezes previously learned representations and then augments the model with features from a fine-tuned extractor. The results of this preliminary experiment are contained in the appendix. Figure 1: Pepper learns about the environment by actively selecting classes to incrementally train on. 
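Several of the baselines above (iCaRL, PODNet) classify with a Nearest Class Mean rule over class representatives. A minimal sketch of NCM classification on feature vectors is shown below; the random "features" and class names are placeholders standing in for CNN embeddings, not part of the original work.

```python
import numpy as np

class NearestClassMean:
    """Classify a feature vector by the nearest class-mean (prototype)."""

    def __init__(self):
        self.means = {}   # class label -> running mean feature vector
        self.counts = {}  # class label -> number of examples seen

    def partial_fit(self, features, label):
        """Update the class mean incrementally, without storing past features."""
        n = self.counts.get(label, 0)
        mean = self.means.get(label, np.zeros_like(features, dtype=float))
        self.means[label] = (n * mean + features) / (n + 1)
        self.counts[label] = n + 1

    def predict(self, features):
        return min(self.means, key=lambda c: np.linalg.norm(features - self.means[c]))

# Toy usage with random vectors standing in for extracted image features.
rng = np.random.default_rng(0)
ncm = NearestClassMean()
for _ in range(20):
    ncm.partial_fit(rng.normal(loc=0.0, size=8), "class_a")
    ncm.partial_fit(rng.normal(loc=3.0, size=8), "class_b")
print(ncm.predict(rng.normal(loc=3.0, size=8)))   # typically "class_b"
```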
**Few-shot class-incremental learning (FSCIL)** adapts the class-incremental learning problem by limiting the number of training examples per class. Specifically, the data is first divided among training and test sets such that \(x_{i}\in(X^{train}\cup X^{test})\), \(y_{i}\in(y^{train}\cup y^{test})\). Then the training data is divided into increments \(x_{i}^{train}\in(D_{0}^{train}\cup D_{1}^{train}\cup...D_{n}^{train})\), \(y_{i}^{train}\in(C_{0}\cup C_{1}\cup...C_{n})\) such that each increment is composed of a unique set of classes (i.e., \(\forall i,j\ni i\neq j,C_{i}\cap C_{j}=\emptyset\)). In the \(i\)-th increment, the model only trains on the corresponding training data \(\{D_{i}^{train},C_{i}\}\). The model is then evaluated on a test set that includes all classes seen so far (i.e., \(\bigcup_{j=1}^{i}D_{j}^{test},\bigcup_{j=1}^{i}C_{j}\)). The size of an increment is \(D_{0}^{train}\) containing \(N_{b}\) of full classes. A problem setting which contains \(N\) classes per increment and \(k\) examples per class is known as \(N\)-way \(k\)-shot learning. In FSCIL, the problem is typically formatted with 100 full classes in the first increment, and then \(10\)-way \(5\)-shot learning for the remaining increments (Tao et al., 2020). In a preliminary experiment, we compare the performance of CBCL-PR (Ayub and Wagner, 2020;x,ab) against five other FSCIL algorithms: TOPIC, SPPR, Decoupled-DeepEMD, CEC, and FACT. TOPIC (Tao et al., 2020) represents knowledge with a neural gas network in order to preserve the topology of the feature space. SPPR (Zhu et al., 2021) uses prototype learning, including random episode selection to adapt the feature representation and a dynamic relation projection between old and new classes. Decoupled-DeepEMD (Zhang et al., 2020) decouples the training of the embedding and the classifier; the embedding is trained on the initial increment of 100 full classes, while the subsequent increments replace class-specific classifiers with new mean embeddings. CEC (Zhang et al., 2021) trains an additional graph model to adapt prototypes of old and new classes. FACT (Zhou et al., 2022) is the current state-of-art, which uses prototypes to limit the embedding space of old classes, reserving space for new classes. The results of this preliminary experiment are contained in the appendix. **Active Class Selection (ACS)** considers the problem where the learner can improve learning efficiency by requesting more data from a specific class (Lomasky et al., 2007). In prior work, ACS was piloted to enable an artificial nose to efficiently learn to discriminate vapors (Lomasky et al., 2007). In a batch learning setting, the learner used feedback from the previous batch to influence the class distribution among samples in the next class. A recent approach to ACS, PAL-ACS, demonstrated high performance by generating pseudo-examples, transforming an ACS problem into an active learning problem (Kottke et al., 2021). This study was, however, limited to synthetic data. **Active incremental learning** considers the problem where incremental learning and active learning are combined. In active learning, the learner may actively request labels for training data. One study assumed labels are no longer provided in the CIL setting (Belouadah et al., 2020). Another study allowed a learner to incrementally select points for labeling from a point cloud (Lin et al., 2020). A third study allowed a learner to incrementally select examples for annotation by a human expert (Brust et al., 2020). 
In these studies, the incremental learner selects training data to label, which defines the active learning problem. In contrast, this paper uses incremental learning to select classes to receive additional training instances, which is an active class selection problem. ## 3 Model Description Our goal is to develop a model (FIASco) that can not only learn incrementally, but can also select - from observed classes in a novel environment- classes which to receive more training instances. This problem is a modified class-incremental learning problem, whereas the next training class is determined by environmental availability and agent affinity. To learn incrementally, we ran preliminary experiments (see appendix) to identify CBCL-PR (Ayub and Wagner, 2020; 2023) as the most promising approach for this problem. The identified approach not only produces SOTA results on few-shot incremental learning benchmarks, but also represents object classes as clusters, which have intrinsic statistics that can be used to to select the next training class in an environment. An overview of the model is shown in Figure 2. In this section, we describe the components of FIASco, including incremental learning with clustering (Section 3.1), active class selection with cluster statistics (Section 3.2), and navigation using a potential field created by cluster-averaged statistics of the observed classes in the environment (Section 3.3). ### Incremental Learning with Clusters In each increment, the learner receives the training examples (images) for new classes. Feature vectors of the images are generated using a pre-trained convolutional neural net as a feature extractor. Clusters are created from feature vectors that are within a tolerable distance of one another, enabling discrimination between classes and consolidation of these classes into long-term memory. For more details of this clustering approach, please see the appendix or related literature (Ayub and Wagner, 2020;x,ab). ### Active Class Selection with Cluster Statistics We extend the learning approach to use feedback from cluster statistics. Specifically, the cluster space allows for measures - cluster weight, class weight, and cluster variance - to guide the selection of new samples for training. Cluster weight is the number of training examples included in an individual cluster within a class. Likewise, class weight is the number of training examples per class. Cluster variance is calculated in a recursive manner such that prior training data is not needed. As defined by Welford's method, the \(n\)-th update (\(n>1\)) of a cluster's variance is \(s_{n}^{2}\)(Welford, 1962; Knuth, 2014): \[(n-1)s_{n}^{2}-(n-2)s_{n-1}^{2}=(x_{n}-\bar{x}_{n})(x_{n}-\bar{x}_{n-1}) \tag{1}\] These internal measures give direct feedback for active class selection (ACS). Recall that previous ACS methods use results from the previous batch as feedback to specify the distribution of classes in the next batch. In incremental Figure 3: _Left._ An example distribution of data, where each point represents a training instance plotted in a two-dimensional feature space. _Middle._ The clustering process groups similar training instances, extracting useful information such as cluster weight and cluster variance. _Right._ The cluster-averaged class weight and class variance can be used for determining the next class to request. Figure 2: This flow summarizes the training phase of FIASco. 
An agent uses a fixed feature extractor to obtain and cluster feature vectors from training images (solid line). The resulting centroids are used to fit a linear SVM, which is then used for predicting real objects. Cluster statistics are used to inform the agent which real objects to pursue and request more examples (training images). The training process combines few-shot class-incremental learning with active class selection. _*Please see the appendix for additional info on clustering or pseudo-rehearsal._ learning, the learner does not control the size of new batches. Therefore, class selection is instead an ordering of preferred classes: 1. _Low Class Weight:_ Prioritize classes with lower class weight. The intuition for this ordering is that adding instances to a class with fewer instances will likely add useful information (new clusters), increasing overall accuracy. 2. _Low Cluster Weight:_ Prioritize classes with lower average cluster weight. The intuition for this ordering is that adding instances to classes with undeveloped clusters (outliers) will be more likely to impact (shift/ add weight to) the class-specific space, increasing overall accuracy. 3. _Low Cluster Variance:_ Prioritize classes with lower average cluster variance. The intuition for this ordering is that adding instances from classes with less noise will likely add valuable information with minimal overall noise. 4. _High Cluster Variance:_ Prioritize classes with higher average cluster variance. The intuition for this ordering is that adding instances from classes with more uncertainty will likely provide more distinct clusters within the class. To further illustrate these measures, consider a distribution of two classes of data, as shown in Figure 3. Each instance of data is initially plotted in the two-dimensional vector space (left). The clustering process (middle) extracts useful cluster information, such as weight and variance. Finally, the extracted information can be cluster-averaged per each class of data (right). Which class of data should be requested next for the purpose of training? According to the low class weight metric, class A should be requested \((4.0<7.0)\). According to the low cluster weight metric, class B should be requested \((3.5<4.0)\). For low cluster variance, class A should be requested \((0.5<1.3)\). Of course, class B should be requested for high cluster variance \((1.3>0.5)\). ### Navigation from Active Class Selection Integrating our incremental ACS approach on an autonomous agent requires developing a method for navigation to move towards the most informative data samples. The selected method for navigation was a potential field approach, simplified from (Koren et al., 1991). Figure 4 shows a potential field created from agent observations in the simulation. Motivated to apply these methods on a real robot that can make some inference about distal objects (\(d\leq d_{far}\)) and then identify objects at a closer distance (\(d\leq d_{close}<d_{far}\)), the learner is given similar characteristics. In experiment, distances for class identification and feature extraction were set to \(d_{far}\) and \(d_{close}\), respectively. Objects within distance \(d_{far}\) would be included in the learner's internal potential field, where the true class label would be known by the robot (i.e., close enough to ask a person for the true labels). 
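The cluster bookkeeping behind the Section 3.2 orderings, namely Welford's recursive variance update of Eq. (1) together with the cluster-averaged class summaries, can be maintained without storing past samples. The sketch below is our illustration of that bookkeeping; the numbers echo the class A/B example of Figure 3 and are illustrative only.

```python
import math

class ClusterStats:
    """Running statistics for one cluster, updated via Welford's method (Eq. 1)."""

    def __init__(self):
        self.n = 0          # cluster weight: number of samples in the cluster
        self.mean = 0.0
        self.m2 = 0.0       # sum of squared deviations; variance = m2 / (n - 1)

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)   # (x_n - xbar_n)(x_n - xbar_{n-1})

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

def class_summary(clusters):
    """Cluster-averaged statistics that drive the four orderings of Section 3.2."""
    return {
        "class_weight": sum(c.n for c in clusters),
        "avg_cluster_weight": sum(c.n for c in clusters) / len(clusters),
        "avg_cluster_variance": sum(c.variance for c in clusters) / len(clusters),
    }

# Toy 1-D example loosely echoing Figure 3: class A is one tight cluster of 4
# samples, class B has two noisier clusters of 3 and 4 samples.
def make_class(cluster_sizes, spread, centers):
    clusters = []
    for size, center in zip(cluster_sizes, centers):
        c = ClusterStats()
        for i in range(size):
            c.update(center + spread * math.sin(7.0 * i))  # deterministic stand-in noise
        clusters.append(c)
    return clusters

class_a = make_class(cluster_sizes=[4], spread=0.5, centers=[0.0])
class_b = make_class(cluster_sizes=[3, 4], spread=1.5, centers=[10.0, 15.0])
print("A:", class_summary(class_a))
print("B:", class_summary(class_b))
# These summaries mirror the Figure 3 decisions: low class weight -> A (4 < 7);
# low cluster weight -> B (3.5 < 4.0); low cluster variance -> A; high -> B.
```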
For the \(i\)-th object in the potential field, an attractive or repulsive force \(f_{i}\) was assigned based on the order of class priority determined in ACS. The potential field is then defined by equation (2), where the \(i\)-th observation is made at (\(x_{i}\), \(y_{i}\)) and the robot position is (\(x_{0}\), \(y_{0}\)):

\[(F_{x},F_{y})=\left(\sum_{i=1}^{n_{i}}\frac{f_{i}}{x_{i}-x_{0}},\ \sum_{i=1}^{n_{i}}\frac{f_{i}}{y_{i}-y_{0}}\right) \tag{2}\]

Objects are only learned when the robot is within the distance \(d_{close}\), where an image can be taken and features extracted for training.

Figure 4: A view of the agent in simulation (A) and the potential field created for navigation (B).

A common problem with potential fields is that the agent can get stuck in a local minimum. Past solutions to this local-minimum problem have included adding small, random perturbations or adjusting the gain of a particular contribution to the potential field (Arkin, 1989). In our simulated experiment, the number of time steps spent inside a relative location is counted. If the learner exceeds a specified count limit, it is directed back to the start position. Every time the learner returns to the start position, it is sent in a new direction (i.e., if the learner came from the North, it is randomly sent East, South, or West). In our experiment with a real robot, sensor error also presented problems. That is, not only is there the possibility of getting stuck in a local minimum, but an undetected obstacle can also prevent movement of the robot. To mitigate the effects of sensing error, rather than using a continually adapting potential field, the robot observed its surroundings once and then used A* path planning (Hart et al., 1968) to reach the location of the selected class. This navigation method has less benefit for actively selecting classes; please see the results from Section 5.2 for a discussion, or the appendix for more info on the A* method. Figure 5 shows the A* path planning from robot observations in the environment.

## 4 Experiment: FSCIL-ACS in Minecraft

Our first experiment is an image classification task within the Minecraft simulation environment. We aim to show that a simulated robot can use internal feedback based on what it has learned about the environment (cluster space) to more efficiently seek unknown objects in the environment.

### Experimental Setup

**Overview.** A robot in Minecraft is given two-minute intervals to search the environment for new visual examples of objects. The robot navigates with an internal potential field, created from objects within an observable distance (\(d<d_{far}=15\)). The robot can observe visual examples of an object _only_ when it stands over that object (\(d<d_{close}=1\)). After the interval of searching, the robot processes the visual examples by updating its cluster space (FIASco) or retraining on all of the previous training data (SVM). Finally, the robot makes predictions on the test data (a static subset of the original dataset) and classification accuracy is recorded. The robot's affinity to different classes of items is updated using the ACS methods described in Section 3.2, which directly affects the future potential field for navigation. The experiment continues for 360 minutes. Please see the supplemental material for experiment replication notes. **Baselines.** Cluster-based ACS methods were compared with a batch learner using 'uniform' and 'redistricting' class selection.
The 'uniform' method randomly sets the class order so that all classes have an equal opportunity to be prioritized. The'redistricting' method uses cross-validation to determine the most volatile (changing predictions when new samples are added in the validation stage) classes to prioritize. The cluster-based ACS methods are described in Figure 5: A view of the robot in the environment (A) and the A* path created for navigation (B). Section 3.2. Note that 'uniform' is also run for FIASco and that 'high cluster variance' is most similar to the previous'redistricting' method without the time-consuming validation step. **Environment.** Minecraft was used because it offers a large number of items and user control to create maps, enabling a realistic, yet constrained spatio-temporal situation for an agent (Johnson et al., 2016). The experiment map (Figure 6, left) contained four buildings. These buildings housed four unique groups of classes, grouped by the similarity of class-averaged feature vectors (centroids). Within a building, items were randomly, uniquely assigned to one of the thirty containers (Figure 6, middle). These containers served as the link to real-world items. As an agent approached the location of a container, it would observe a certain type of Minecraft item. This observation was then mapped to a class of the training dataset. While in the proximity of a container, the agent could choose to learn about the class by standing directly over the container. In this case, the agent would receive a random 5-9 instances of a class for training, after which the container would be empty. The container does not restock until the next round of exploration, after the agent trains and updates its class affinity. **Data.** Two datasets were used for training and testing of the image classifier: CIFAR-100 (Krizhevsky et al., 2009) and the Grocery Store (Klasson et al., 2019) datasets. CIFAR-100 contains 60,000 32x32 images, evenly distributed among 100 classes. The classes include various types of objects, such as "beaver" or "rocket." The Grocery Store dataset contains 5,125 348x348 pixel images, non-uniformly distributed among 81 classes. The classes include various goods found in grocery stores, such as types of fruits, vegetables, and packages. Both datasets were modified to have a 90:10 stratified train-test split. Please see the Appendix for more information about the data selection. **Implementation.** The fixed feature extractor in this experiment was a Resnet-34 model pre-trained with Imagenet. The test was run with ten random seeds and the average was determined. For clustering, the distance threshold \(D\) and number of pseudo-exemplars \(N_{P}\) were determined by validation. For the CIFAR-100 test, the values for \(D\) and \(N_{P}\) were set to \(17\) and \(5\), respectively. For the Grocery Store test, the values for \(D\) and \(N_{P}\) were \(15\) and \(40\), respectively. For batch learning, a support vector machine with a linear kernel was used (Boser et al., 1992) to make test predictions given all extracted features. ### Experimental Results Results are shown in Figure 7. The metric used for comparison was average incremental accuracy. Note that the accuracy computed in this experiment is different from the preliminary study: rather than testing over only _seen_ classes, the learner is tested over _all_ classes in the environment. 
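Average incremental accuracy here is simply the test accuracy over all classes in the environment, averaged over the evaluation points. A two-line sketch of that bookkeeping follows; the array contents are made up for illustration.

```python
import numpy as np

# Accuracy over *all* environment classes at each evaluation point (made-up values).
accuracy_per_increment = np.array([0.12, 0.21, 0.30, 0.38, 0.43, 0.47])

average_incremental_accuracy = accuracy_per_increment.mean()
print(f"average incremental accuracy: {average_incremental_accuracy:.1%}")
```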
The highest performer in the CIFAR-100 test was FIASco with 'low class weight' ACS (\(44.2\%\)), an improvement of \(3.7\%\) over the best case of batch learning 'uniform' ACS. The highest performer in the Grocery Store test was FIASco using 'low class weight' ACS (\(63.4\%\)), an improvement of \(5.3\%\) over the best case of batch learning 'uniform' ACS. Figure 6: _Left_. The map depicts the layout of the simulation environment. _Middle_. A single building has 30 containers of items, which are arranged randomly. _Right_. The agent navigates between buildings, looking for particular items. ## 5 Experiment: FSCIL-ACS with Pepper In the final experiment, a Softbank Pepper robot was tasked with an image classification in an indoor environment. We aim to demonstrate that a real robot can use active class selection to more efficiently seek unknown objects (see Figure 1). ### Experimental Setup **Overview.** The robot is given sixty iterations to search the environment for new visual examples of objects. An iteration consists of the robot (1) relocating, (2) searching, (3) choosing an object, and (4) receiving training examples. To relocate, the robot first rotates with range sensors to define a localized map; an end location is chosen among the free space, and A* path planning is used (Hart et al., 1968). To search, the robot uses a top camera, which provides up to 2560x1080 pixel resolution at 5 fps. After taking images of the surrounding area, the robot uses the YOLO algorithm (Redmon et al., 2016) pre-trained on the Microsoft COCO dataset (Lin et al., 2014) for object localization. To choose an object, the robot uses centroids for (initially weaker) classification with active class selection to pick the most desirable class. To receive training examples, the robot shows the human experimenter an image of the desired class, for which the human can give the true label of the predicted class, as well as ten visual examples. After every iteration, the robot updates its cluster space of learned classes. The robot's affinity to different classes of items is updated using the ACS methods. At the end of every three iterations, the robot makes predictions on the test data and classification accuracy is recorded. **Baselines.** Cluster-based ACS methods (Section 3.2) were compared with a batch learner using 'uniform' class selection, which randomly sets the class order so that all classes have an equal opportunity to be prioritized. **Environment.** This test was completed in an indoor environment, where items were purchased from a local grocery store to represent classes in the Grocery Store dataset (Klasson et al., 2019). Black cloths were used to cover tables and serve as a backdrop for items. Please see supplemental materials for images of the included classes. **Data.** The Grocery Store dataset (Klasson et al., 2019) was used for training and testing of the image classifier, as in Section 4. The continually-trained image classifier was used for object recognition of the real objects in the experiment. A subset of 41 classes of the Grocery store dataset was used, comprised of items that could be primarily Figure 7: Test prediction accuracy over time in Minecraft simulation. Note that SVM classifier is a batch learner, while FIASco, CEC, and FACT do not re-use training data. Average incremental accuracy is indicated. stored at room temperature. The dataset was modified to have a 90:10 stratified train-test split. 
Real items were distributed randomly by their coarse labels, such that similar items were grouped together (e.g., Red Delicious and Yellow Delicious apples). Please see the Appendix for more information about the data selection. **Implementation.** The fixed feature extractor in this experiment was a Resnet-34 model pre-trained with Imagenet. For clustering, the distance threshold \(D\) and number of pseudo-exemplars \(N_{P}\) were determined by validation. For this test, the values for \(D\) and \(N_{P}\) were \(15\) and \(40\), respectively. For batch learning, a support vector machine with a linear kernel was used (Boser et al., 1992) to make test predictions given all extracted features. ### Experimental Results Results are shown in Figure 8. The metric used for comparison was average incremental accuracy. The accuracy computed in this experiment is the same as in Section 4: the learner is tested over _all_ classes in the environment. The highest performer in the test was FIASco with 'high cluster variance' ACS (\(60.7\%\)), an improvement of \(0.4\%\) over the best case of batch learning 'uniform' ACS. While both experiments have a learner using the same measures to prioritize classes, there is a difference in the value of particular measures (e.g., high cluster variance). This difference is likely due to the slight change in process, where the robot learner is making an initially weak prediction about the detected object classes before requesting a class (see Section 5.1). Hence, a wrong prediction about a class with a high variance may actually provide valuable insight into the divisions of nearby classes. In terms of average incremental accuracy, the FIASco model does not show as much improvement over ACS with SVM, as compared to the simulated experiment. This result is likely due to the limitations in real navigation, noted in Section 3.3. When the agent moved in simulation, the potential field was updated at every time step, calculating attractive weights for each new position of observed class. In the real environment, the robot made one full turn to observe its surroundings, then followed a path prescribed by A*. As the robot moved, new class observations were not included as options for the robot. The reason for this change was due to our particular robot being susceptible to drift error and sensor noise; we choose to reduce the sensing demand such that the robot would not get itself stuck as frequently. Note that the navigation method is kept constant in each experiment, so the comparison of ACS methods still holds true. In future studies, it would be helpful to improve the robot controller so that the more reactive navigation method could be used. ## 6 Conclusion To the authors' knowledge, active class selection (ACS) has not previously been combined with few-shot incremental learning (FSCIL). This paper extends an incremental learner to use cluster statistics as feedback for actively selecting classes to learn. We have shown that the selected incremental learner (CBCL-PR) is not only state-of-the-art in a pure few-shot class incremental learning setting, but also that the cluster space is valuable to intrinsically motivate the learner to select specific classes. In both Minecraft simulation and real indoor environments, a robot that used cluster statistics for active class selection out-performed uniform batch-learning. A challenge of any (machine) learner is to gather labeled data for supervised training. 
We lay the groundwork for more efficient gathering and usage of labeled data, relaxing previous assumptions that have hindered the feasibility of robot learning. As opposed to previous methods in FSCIL, we do not rely on a prescribed class order, nor require training on half the dataset prior to incremental learning. These assumptions are both unrealistic and not applicable to Figure 8: Test prediction accuracy over iterations in indoor environment with Pepper. Note that SVM classifier is a batch learner, while FIASco does not re-use training data. a robot learning in a new environment. As opposed to previous methods in ACS, we incorporate more current efforts of incremental learning such that computational complexity is more favorable in the long-term (see Appendix). Future work should build on the merging of active class selection and incremental learning. The most obvious reason is that it is critical to bridge the gap between robot and agent learning. Additionally, there is opportunity to further advance the state-of-art in FSCIL-ACS. For instance, in the context of clustering, a combination of statistics could be used to guide class selection. More broadly, alternative internal measures could be used as feedback for class selection. Regardless, the advantages in combining ACS and FSCIL motivate a new direction for robot learning. #### Acknowledgments This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-21-1-0197.
2307.12452
Characterizing non-Markovian Quantum Process by Fast Bayesian Tomography
To push gate performance to levels beyond the thresholds for quantum error correction, it is important to characterize the error sources occurring on quantum gates. However, the characterization of non-Markovian error poses a challenge to current quantum process tomography techniques. Fast Bayesian Tomography (FBT) is a self-consistent gate set tomography protocol that can be bootstrapped from earlier characterization knowledge and be updated in real-time with arbitrary gate sequences. Here we demonstrate how FBT allows for the characterization of key non-Markovian error processes. We introduce two experimental protocols for FBT to diagnose the non-Markovian behavior of two-qubit systems on silicon quantum dots. To increase the efficiency and scalability of the experiment-analysis loop, we develop an online FBT software stack. To reduce experiment cost and analysis time, we also introduce a native readout method and warm boot strategy. Our results demonstrate that FBT is a useful tool for probing non-Markovian errors that can be detrimental to the ultimate realization of fault-tolerant operation on quantum computing.
R. Y. Su, J. Y. Huang, N. Dumoulin. Stuyck, M. K. Feng, W. Gilbert, T. J. Evans, W. H. Lim, F. E. Hudson, K. W. Chan, W. Huang, Kohei M. Itoh, R. Harper, S. D. Bartlett, C. H. Yang, A. Laucht, A. Saraiva, T. Tanttu, A. S. Dzurak
2023-07-23T23:10:18Z
http://arxiv.org/abs/2307.12452v2
# Characterizing non-Markovian Quantum Process by Fast Bayesian Tomography ###### Abstract To push gate performance to levels beyond the thresholds for quantum error correction, it is important to characterize the error sources occurring on quantum gates. However, the characterization of non-Markovian error poses a challenge to current quantum process tomography techniques. Fast Bayesian Tomography (FBT) is a self-consistent gate set tomography protocol that can be bootstraped from earlier characterization knowledge and be updated in real-time with arbitrary gate sequences. Here we demonstrate how FBT allows for the characterization of key non-Markovian error processes. We introduce two experimental protocols for FBT to diagnose the non-Markovian behavior of two-qubit systems on silicon quantum dots. To increase the efficiency and scalability of the experiment-analysis loop, we develop an online FBT software stack. To reduce experiment cost and analysis time, we also introduce a native readout method and warm boot strategy. Our results demonstrate that FBT is a useful tool for probing non-Markovian errors that can be detrimental to the ultimate realization of fault-tolerant operation on quantum computing. ## I Introduction The development of fault-tolerant and scalable quantum computers relies on achieving quantum gate fidelities that are beyond the error correction thresholds [1; 2; 3]. However, qubits, the fundamental building blocks of quantum computers, reside in a noisy environment and are manipulated with imperfect control sequences. Over the past few decades, a range of quantum characterization, verification, and validation (QCVV) techniques have been developed to diagnose errors in quantum circuits. Randomized benchmarking (RBM) [4; 5; 6], a widely accepted metric in the community, is experimentally simple and efficient in characterizing gate performance. Although fidelity will indicate the overall performance of the gates, it does not provide information about the types of errors that degrade the gates. Fully diagnosing the errors quantitively with quantum tomography protocols would help further mitigate errors in the gate set. Quantum process tomography (QPT) [7] reconstructs a single gate process by applying an informationally complete ensemble of state preparation and measurement (SPAM) before and after the gate operation. However, it is not a calibration-free protocol, and yields biased results when SPAM are noisy. Self-consistent methods for gate set tomography [8; 9; 10] overcome this problem by reconstructing the whole gate set, including the noisy SPAM channels, so that no prior calibrations are required. Fast Bayesian tomography (FBT) [11] is a self-consistent gate set process tomography method which harnesses the power of Bayesian inference. It is a flexible and agile tool that al lows arbitrary random sequences to be fed to the model, and FBT can be bootstrapped from the knowledge obtained from earlier characterization. Since the model is updated iteratively, sequence by sequence, the experiment and analysis can run simultaneously in realtime. In this paper, we use spin qubits to demonstrate that FBT can be used to probe and characterize non-Markovian errors in spin qubits. Noticeably, spatial-temporally correlated noise is prevalent on silicon quantum devices [12; 13], being one of the major sources of non-Markovian behavior in our experiment. Such non-Markovian errors can impede the performance of quantum error correction [14; 15] and challenge most QCVV methods. 
Ideally, the performance of a gate is independent of its position in a quantum circuit and of the lab time at which it is performed. In other words, the gate processes are expected to be memory-less Markovian processes, which is a fundamental premise for quantum gate process tomography protocols. Methods to probe context-dependent quantum errors have been proposed [16; 17; 18], but it remains challenging at the quantum circuit level with gate set process tomography. A promising way to study non-Markovian dynamics of gate processes is via process tensor tomography (PTT) [19; 20], which reconstructs the process tensor to obtain the spatio-temporal correlation of the gate processes in a sequence context. However, PTT is experimentally resource intensive and difficult to scale up to larger quantum systems. As with other protocols, without special treatment, non-Markovian errors degrade the performance of FBT. In addition, it has been unclear how FBT could identify the non-Markovian nature of the quantum process. We use a device consisting of a pair of spin qubits as a test bed. From the early stages of device characterization, we have observed that gate performance drifts slowly over hours-long experiments; in addition, performance appears to depend on sequence length. In this paper, we propose two experimental designs for FBT to characterize the non-Markovian behavior of the gate processes. Based on the time scale over which the noise is effective, we study the behavior in the intra-sequence and inter-sequence regimes separately. The intra-sequence regime explores variations in the gate error process across different gate sequence lengths. The inter-sequence regime tracks the time evolution of the gate noise process by correlating intermediate FBT results with lab time. To minimize experimental costs for timing-sensitive experiments, we study the validity of using native measurement results (here, parity readouts) as the input for FBT. Moreover, we discuss several bootstrapping strategies for the initial priors of the gate set, which are critical to lowering the experimental cost. To make gauge-variant metrics more consistent, we eliminate gauge ambiguity by implementing gauge optimization for FBT. ## II Results ### Online FBT setup To minimize the delay between experiment and analysis, we develop a web-based online FBT service, which acts as the infrastructure for the characterization experiments. Details about the FBT protocol can be found in Ref. [11], with an outline of the protocol and further developments detailed in section III. We call FBT an "online" protocol because FBT updates the model as the experimental data becomes available. From an engineering standpoint, deploying the FBT service online provides benefits for the experiment-analysis loop. It frees up the computation load from the experimental setup and makes it possible to run multiple characterization experiments in parallel using an in-house high-performance computer. As shown in Fig. 1, the FBT server communicates with the experimental setup and client machines through web application programming interfaces (APIs). To start an online FBT analysis session, the experimental setup or client machines provide information about the whole gate set, including ideal operators and previous knowledge about the gates for bootstrapping the initial prior for each gate's noise channels. Once the FBT server has finished bootstrapping the initial channels, it is ready for incoming updates from the client machines. 
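To make the experiment-analysis loop concrete, a minimal sketch of what a client-side update loop could look like is given below. The server URL, endpoint names, and payload fields are purely illustrative assumptions for this sketch; they are not the actual API of the FBT software stack.

```python
# Hypothetical client-side loop for an online FBT session.
# The server URL, endpoint names, and payload fields are illustrative
# assumptions only; they are not taken from the FBT software stack.
import requests

SERVER = "http://fbt-server.local:8000"   # assumed in-house analysis server

def start_session(gate_labels, prior_info):
    """Register the gate set and prior knowledge; returns a session id."""
    r = requests.post(f"{SERVER}/sessions",
                      json={"gates": gate_labels, "prior": prior_info})
    r.raise_for_status()
    return r.json()["session_id"]

def push_update(session_id, sequence, counts, shots):
    """Send one random sequence and its measured outcome to the server."""
    payload = {"sequence": sequence,        # e.g. ["x1", "dcz", "x2", ...]
               "outcome": counts / shots,   # measured outcome probability
               "shots": shots}
    requests.post(f"{SERVER}/sessions/{session_id}/updates", json=payload)

def latest_estimate(session_id):
    """Fetch the most recent posterior estimate of the gate set."""
    return requests.get(f"{SERVER}/sessions/{session_id}/estimate").json()
```

Because each update is a small, independent request, the analysis can run concurrently with data acquisition, and several experiments can share the same analysis server.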
FBT updates the model on the fly based on measurement results from the client. Gate set post-processing includes gauge optimization and completely positive and trace-preserving (CPTP) projection, which will only be performed every \(N\) updates (the post-processing interval \(N\) is customizable) and will not overwrite the original Bayesian statistics. Figure 1: High-level schematic of the web-based online FBT service. (a) On the top is the scanning electron micrograph (SEM) image of a device similar to the one used in the experiment. A pair of qubits is hosted by few-electron quantum dots under gates P1 and P2. The interstitial gate J1 controls the exchange coupling between the qubits. On the left is the antenna that delivers high-frequency microwaves, which controls the qubits magnetically through electron-spin resonance (ESR). The single-electron transistor (SET) is the charge sensor observing the electron tunneling events. A field-programmable gate array (FPGA) manipulates the sequential pulsing on gate electrodes and the modulation of high-frequency microwaves. (b) Flow chart of the web-based FBT analysis service. The model can be bootstrapped from earlier characterization results. As the posterior of the last update becomes the prior of the next update, the model is iteratively updated when the measurement outcome arrives. Post-processing on the estimated channels, including CPTP projection and gauge optimization, is performed over the gate set. (c) An example screenshot of the live-update report webpage displayed on a client end, showing the Hinton diagrams of the estimated channels, error process, and received experiment data. (d) A detailed diagram of the channel prior bootstrap and Bayesian update procedures. The live updated webpage displays the most recent update of the gate processes and error metrics. As a crucial component of the online FBT service, shown in Fig. 1(d), bootstrapping the initial prior sets the starting point for the analysis. In realistic experimental runs, the level of knowledge about the system evolves as more characterization data becomes available. Information obtained from other characterizations, such as Rabi oscillations, randomized benchmarking, and even other process tomography methods, can all serve as prior information for bootstrapping the initial gate prior. The software stack also provides various prior bootstrapping strategies, presented in Table 1 as a brief guideline for experimentalists. Details about bootstrapping techniques are discussed in section III.7. ### Characterizing non-Markovian gate process with FBT Based on our earlier device characterization, we observed that the gates could perform inconsistently depending on the context. Here we study this inconsistency on two different time scales. On the inter-sequence time scale, we observe that gate performance gradually drifts over the course of hours-long experiments. On the intra-sequence time scale, gates perform differently in sequences with different lengths. This behavior was observed on silicon spin qubits in RB experiments on a single qubit [21]. In that work, the presence of non-exponential RB decays was interpreted as an indication of the impact of non-Markovian errors when running long sequences. In the presence of non-Markovian noise, process tomography protocols find it challenging to reconstruct such context-dependent processes. 
Though standard process tomography protocols do not capture the non-Markovian dynamics, \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Strategy** & \multicolumn{2}{|c|}{**Prior Knowledge**} & **Usage Scenario** & **Boot time** & \begin{tabular}{c} **Estimation** \\ **time** \\ \end{tabular} \\ \hline Blind & \(\bar{x},\Gamma_{x}\): Guessed channel statistics. & No prior knowledge & 70 min & 50 min \\ cold boot & \begin{tabular}{c} General choice will be \\ depolarisation channel \\ \end{tabular} & about the new device & 70 min & 50 min \\ \hline Fidelity constrained & \(\bar{x},\Gamma_{x}\): Guessed channel statistics. & Managed to run first & \multirow{2}{*}{50 min} & \multirow{2}{*}{12 min} \\ cold boot & \(\bar{f},\sigma_{f}^{2}\): Gate fidelity statistics. & RBM, based on fidelity & & \\ \hline Partially trusted & \(\bar{x}\): Mean gate process matrix & Finished some FBT/ & \multirow{2}{*}{35 min} & \multirow{2}{*}{6 min} \\ warm boot & from finished FBT/GST results. & GST analysis already. & & \\ \hline Fully trusted & \(\bar{x},\Gamma_{x}\): Gate statistic & Experiment & \multirow{2}{*}{0 min} & \multirow{2}{*}{6 min} \\ warm boot & from finished FBT/GST results. & changed since last & & \\ \cline{3-3} & & FBT run. & & \\ \hline \end{tabular} \end{table} Table 1: Comparison of different prior bootstrapping strategies. Conditions for the time consumption estimation: 1. Server computer for analyzing FBT has a 28-core CPU (Intel Xeon W-2175) and 32GB RAM. 2. Time consumptions (wall time) in this table are estimated on FBT analysis on random sequences simulation based on a two-qubit toy model. 3. Boot times depend on the initial guess statistics and the number of samples (typically need 1000 samples) for cold boot and partially trusted warm boot methods. 4. Total estimation time varies, which is strongly dependent on approximation error sampling (here set to 100 samples) and the closeness between the initial guessed model and the true model. one might hope to learn how badly the estimation is being affected. GST [9; 10] provides model violation as the metric of the goodness of fit. A large model violation in a data set is one indicator that the underlying process could be highly non-Markovian. However, this metric does not provide any information about what types of errors appear non-Markovian. One feature of FBT is that it can take arbitrary sequences as input to update its model. This freedom in the choice of sequences allows us to design experiments to uncover gate error inconsistencies under different contexts. We focus on the intra-sequence and inter-sequence regimes individually and investigate the non-Markovian nature of the gate processes. For the intra-sequence regime, we design a sequence-length-dependent experiment to probe how the error changes as we run different lengths of sequences. For the inter-sequence regime, we track the slow drifting of the gate performances by warm booting the model and running experiments in batches within a fixed time window. #### ii.2.1 Sequence length dependency To look into the sequence length dependency of gate errors, we run a series of experiments with different fixed sequence lengths (length in the number of primitive gates) and analyze each individually. The fixed sequence lengths are chosen to be 8, 16, 32, 64, and 128. Each experiment consists of 5000 randomly generated sequences with the same length, generally enough to ensure informational completeness data for two-qubit system's tomographic characterization. 
Each sequence is repeated for 100 shots, and no extra projections are needed (see section III.5). Though 5000 sequences with 100 shots only take around 20 minutes to acquire, the device is still impacted by undesired slow sub-Hertz noise. Figure 2: (a) Illustration of the sequence length dependency experiment. To investigate the dependence of gate errors on the sequence length, we perform five independent experiments, where each experiment has 5000 random gate sequences with a fixed gate length of \(L=8,16,32,64,128\). To minimize the impact of slow drifting, the sequences are run in a rasterized manner. (b) Extracted Hamiltonian error from the final estimated results of the experiment \(L\)=32. (c) Plots of the amplitude of the Hamiltonian error components (\(H_{IZ}\), \(H_{ZI}\), \(H_{ZZ}\)) as a function of sequence length. Entries of the error parameters are indicated by correspondingly colored frames in (b). Error bars (99.7% confidence interval) of the Hamiltonian errors are obtained by resampling the noise channel with the statistics from the last FBT update. There will be significant inter-sequence inconsistency if the shot-repeating loop is nested inside the sequence-selection loop, for example, if we were to acquire all repeated shots for one sequence and then loop to the next sequence. To minimize the inter-sequence inconsistency for this experiment, we repeat the list of sequences in a rasterized manner, as also used in Ref. [22]. Sequence rasterization means that all sequences are looped over for one single shot, and then the next shot repeats all sequences. This can be implemented on the FPGA by programming the sequence compilation loop to be embedded in the shot-repeat loop. We choose the last update of each experiment to investigate the sequence-length-dependent errors. FBT natively estimates the noise residual PTM (Pauli transfer matrix) for non-ideal gate processes, which describes the input-output relationship in the Pauli basis. To make it more physically intuitive, we convert the final estimated gate noise channel to elementary error generators (see section III.3). Since we find that the Hamiltonian error captures 77% to 97% of the error generator, we mainly focus on the Hamiltonian errors. As shown in Fig. 2(b), the dominant error is the \(H_{IX}\) error on \(x_{1}\) (see Methods III.1 for gate definitions), which indicates an over-rotation error. Smaller errors, including the \(H_{IZ},H_{ZI},H_{ZZ}\) errors, appear to have a sequence length dependency. Notably, we find that the \(H_{IZ}\) error on the \(x_{2}\) gate has a strong sequence length dependency, which likely originated from the Larmor frequency shift induced by transient microwave effects [21; 23; 24; 25]. Figure 3: (a) Illustration of the slow drift tracking experiment. Sequences with various lengths are executed, batch by batch. Each batch of sequences is looped in a rasterized manner, which runs through 80 sequences for 100 shots. The whole tracking experiment runs over 500 batches of sequences, and each batch is finished in a roughly 0.54-minute time window. (b) Entanglement infidelities (colored solid lines on top of each plot) and their top-6 most significant error generator infidelities (stacked colored areas under the solid line) of the gates \(x_{1},x_{2}\), DCZ as a function of lab time for each physical gate. #### ii.2.2 Slow drift tracking The slow-drift tracking experiment aims to observe sub-Hertz noise, which manifests as gate performance metrics drifting over hours-long experiments. 
The validity of this experiment-analysis is built on the assumption that the system is relatively stable within a few-minutes-long time window, and drifts so slowly that it allows the changes of the error parameters to be smooth and continuous. Within each "stable" window, we sample as much data as possible. We rule out the length dependency errors by running various lengths of random sequences. With the arguments in hand, we use the experiment design as shown in Fig. 3(a). Random sequence batches, which consist of various lengths of random sequences, are streamed into an FPGA after the previous batch is finished. Each batch of sequences was executed in a rasterized manner as in section II.2.1. To track the drift of the error parameters from the beginning, the FBT prior was bootstrapped with an educated guess that is closest to the first batch of experiment. The best practice for this is a warm boot. The details about the warm boot strategy are discussed in section III.7. Since warm boot already sets the model with the best prior information, the estimated results of the first batch of experimental data can be approximately treated as the pseudo-transient state at its corresponding lab time. The following batches are analyzed in lab time order so that we can correlate the batch analysis with lab time. Fig. 3(b) shows gate entanglement infidelities [26] and their generator infidelities as a function of lab time. Stochastic errors contribute most to gate infidelity for all three gates. \(S_{ZI}\) type error on \(x_{1}\) and \(S_{IZ}\) error on \(x_{2}\) are due to the idling on the spectator qubit when operating the single qubit gates. The Hamiltonian error on \(H_{IZ}\) on \(x_{2}\) gate could have a similar physical origin as we find in section II.2.1. The stochastic errors on DCZ vary quite significantly, possibly due to fridge temperature fluctuations or charge movement in silicon. ## III Methods ### Device and gate implementation Shown in Fig. 1(a) (same as device A and B in [27]), the devices (A and B) we use are Silicon-Metal-Oxide-Semiconductor (SiMOS) devices with a purified \({}^{28}\)Si substrate and aluminum gate electrodes. Few-electron double quantum dots are formed at the interface of Si/SiO\({}_{2}\) under gates P1 and P2. An external static magnetic field is applied in-plane and creates around 20 GHz Zeeman splittings. Two spins can be addressed individually due to the Zeeman energy difference (around 10 MHz for A, and 22 MHz for B). The exchange coupling between the two spins is controlled by gate J1. The spin readout is achieved by tunneling one electron from one dot to another, where the tunneling event is observed by a single-electron-transistor (SET) when the two spins are in odd parity [28; 29]. In this work, the gate set includes five elementary gates, which are \(X_{Q1}^{\pi/2}\), \(X_{Q2}^{\pi/2}\), \(Z_{Q1}^{\pi/2}\), \(Z_{Q2}^{\pi/2}\) and controlled phase gate CZ (in device A) or decoupled-controlled phase gate DCZ (in device B) [30; 31]. For clarity, we label them as \(\{x_{1},x_{2},z_{1},z_{2}\), CZ (or DCZ)\(\}\) individually. Here \(x_{1}\) and \(x_{2}\) are implemented by electron-spin resonance by pulsing modulated oscillating magnetic field, which is delivered by the ESR antenna. Gates \(z_{1}\) and \(z_{2}\) are software-implemented virtual gates, which are implemented by changing the rotating frame of each resonator that tracks qubit frequency [32]. 
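Virtual Z gates of this kind are typically realized purely in software by advancing the phase reference of the qubit's rotating frame, so that subsequent microwave pulses are emitted with the accumulated phase offset. The sketch below illustrates this bookkeeping; it is a generic illustration (with an assumed sign convention and pulse representation), not the control code used in this work.

```python
# Generic bookkeeping for software-defined virtual Z gates (illustrative only;
# the sign convention and the pulse representation are assumptions).
import numpy as np

class RotatingFrame:
    def __init__(self, n_qubits):
        self.phase = np.zeros(n_qubits)          # accumulated frame phase per qubit

    def virtual_z(self, qubit, angle):
        """A Z rotation costs no pulse time: simply advance the frame phase."""
        self.phase[qubit] += angle

    def x_half_pulse(self, qubit):
        """An X(pi/2) pulse is emitted with the current frame phase as offset."""
        return {"qubit": qubit,
                "rotation": np.pi / 2,
                "drive_phase": self.phase[qubit] % (2 * np.pi)}

frame = RotatingFrame(n_qubits=2)
frame.virtual_z(qubit=1, angle=np.pi / 2)        # z2: pure software phase update
pulse = frame.x_half_pulse(qubit=1)              # the following x2 pulse carries the offset
print(pulse)
```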
Both the CZ and DCZ gates are implemented by pulsing the J1 gate, with the key difference that the DCZ has a \(\pi\)-pulse on both qubits sandwiched in the middle of the two J1 pulses. Pulsing on J1 introduces an extra Stark shift on each of the qubits, which is compensated by a software phase correction for the CZ gate, while it is canceled out with the DCZ gate. Residual \({}^{29}\)Si nuclear spin flips result in jumps of the spin resonance frequency, and we have implemented feedback protocols to track and correct the Larmor frequencies. In addition, we have feedback on the Rabi frequencies and exchange coupling. Feedback protocols are interleaved between the main experiment runs. ### Fast Bayesian tomography protocol Before introducing the new improvements to FBT, in this subsection, we review the basics of the protocol [11]. FBT is a self-consistent method that uses Bayesian inference to reconstruct the whole gate set simultaneously. A gate set is a full mathematical description of the capability of a quantum system, including state initializations, gate operations, and measurements. Generally, an experiment begins with initialization of the quantum system to state \(\rho\), then follows a sequence \(S_{k}\) composed of the unitary gates, and finally a measurement projects onto the effect \(E\). Collecting the counts of the states yields the probability of the specified outcome of that experiment. This is described by: \[m_{k}=\langle\langle E||\prod_{i\in S_{k}}G_{i}\,||\rho\rangle\rangle \tag{1}\] where \(m_{k}\) is the outcome probability, \(\langle\langle E||\) and \(||\rho\rangle\rangle\) are the vectorized measurement operator and preparation density matrix, and \(\prod_{i\in S_{k}}\) represents the sequential operation of the gates in sequence \(S_{k}\). To account for errors in the model, a noisy gate \(\tilde{G}_{i}\) is modeled as an ideal gate \(G_{i}\) followed by a noise channel \(\Lambda_{i}\). Since the SPAM processes (\(\langle\langle\tilde{E}||\) and \(||\tilde{\rho}\rangle\rangle\)) are imperfect, we also associate them with noise channels to capture the errors, represented by \(\Lambda_{E}\) and \(\Lambda_{\rho}\) respectively. So the noisy gate set is described by: \[\left\{\begin{aligned} &\langle\langle\tilde{E}||=\langle\langle E||\,\Lambda_{E}\\ &\tilde{G}_{i}=G_{i}\Lambda_{i}\\ &||\tilde{\rho}\rangle\rangle=\Lambda_{\rho}\,||\rho\rangle\rangle\end{aligned}\right. \tag{2}\] The tomographic reconstruction of the noise channels requires solving a non-linear problem, which is computationally challenging. This problem can be solved by linearizing Eq. 1, which decomposes the noise channel into \(\Lambda_{i}=I+\varepsilon_{i}\) and drops higher-order terms [33]. For a sequence with a length of \(N_{k}\), we have \[\begin{split}m_{k}\approx\langle\langle E||\prod_{s\in S_{k}}G_{s}\,||\rho\rangle\rangle\\ +\langle\langle E||\,\varepsilon_{E}\prod_{s\in S_{k}}G_{s}\,||\rho\rangle\rangle\\ +\sum_{j=1}^{N_{k}}\langle\langle E||\left[\prod_{i=j+1}^{N_{k}}G_{i}\right]\varepsilon_{j}G_{j}\left[\prod_{i=1}^{j-1}G_{i}\right]||\rho\rangle\rangle\\ +\langle\langle E||\prod_{s\in S_{k}}G_{s}\varepsilon_{\rho}\,||\rho\rangle\rangle\\ =m_{\text{ideal}}+A_{k}x\end{split} \tag{3}\] where \(m_{\text{ideal}}\) is the ideal output and \(x\) is the vectorized form of the noise channel residuals \(\varepsilon_{i}\). However, this linearized model only works for gates with high fidelity and fails when errors are large. Based on the linearized model, we construct a Bayesian model to estimate the noise channel parameters. 
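Before turning to the Bayesian estimation, a small numerical sketch may help make the superoperator bookkeeping of Eq. (1) explicit. The example below builds single-qubit Pauli transfer matrices for an ideal X(π/2) gate and evaluates the outcome probability of a short sequence; it is a toy illustration, not part of the FBT implementation.

```python
# Numerical sketch of Eq. (1): m_k = <<E|| prod_i G_i ||rho>> in the Pauli basis.
# Single-qubit toy example with an ideal X(pi/2) gate; purely illustrative.
import numpy as np
from functools import reduce

# Normalized Pauli basis {I, X, Y, Z} / sqrt(2)
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
basis = [p / np.sqrt(2) for p in paulis]

def vectorize(op):
    """Vectorize a Hermitian operator in the normalized Pauli basis."""
    return np.array([np.trace(b.conj().T @ op).real for b in basis])

def ptm(unitary):
    """Pauli transfer matrix of the unitary channel rho -> U rho U^dagger."""
    return np.array([[np.trace(bi.conj().T @ unitary @ bj @ unitary.conj().T).real
                      for bj in basis] for bi in basis])

x_half = np.cos(np.pi / 4) * np.eye(2) - 1j * np.sin(np.pi / 4) * paulis[1]  # X(pi/2)
rho = vectorize(np.array([[1, 0], [0, 0]]))       # prepare |0><0|
effect = vectorize(np.array([[1, 0], [0, 0]]))    # measure the |0><0| effect

sequence = [ptm(x_half)] * 2                      # two X(pi/2) gates = X(pi)
m_k = effect @ reduce(lambda acc, g: g @ acc, sequence) @ rho
print(round(m_k, 6))                              # ~0.0: |0> is flipped to |1>
```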
Firstly, let each noise channel residual parameter be distributed as a Gaussian variable, then \(x\) will become a multi-variable Gaussian \(||\varepsilon_{i}\rangle\rangle\sim\mathcal{N}(||\bar{\varepsilon}_{i} \rangle\rangle\,,\Gamma_{i})\). If we are bootstrapping the multi-variable Gaussian from an educated guessed prior, then the FBT model is: \[m_{k}\approx\langle\langle E||\,\bar{\Lambda}_{E}\prod_{s\in S_{k}} \bar{\Lambda}_{s}G_{s}\bar{\Lambda}_{\rho}\,||\rho\rangle\rangle+\langle\langle E ||\,\varepsilon_{E}\prod_{s\in S_{k}}\bar{\Lambda}_{s}G_{s}\bar{\Lambda}_{\rho} \,||\rho\rangle\rangle+\\ \sum_{j=1}^{N_{k}}\left\langle\langle E||\,\bar{\Lambda}_{E} \left[\,\prod_{i=j+1}^{N_{k}}\bar{\Lambda}_{i}G_{i}\right]\varepsilon_{j}G_{j} \left[\prod_{i=1}^{j-1}\bar{\Lambda}_{i}G_{i}\right]\bar{\Lambda}_{\rho}\,|| \rho\rangle\right\rangle+\langle\langle E||\,\bar{\Lambda}_{E}\prod_{s\in S_{k }}\bar{\Lambda}_{s}G_{s}\varepsilon_{\rho}\,||\rho\rangle\rangle\\ =\bar{m}_{k}+\bar{A}_{k}x+\epsilon_{k}+\eta_{k} \tag{4}\] Based on this, the FBT protocol updates itself iteratively. For example, by forwarding the posterior of the \((k-1)^{th}\) update as the prior for the new coming \(k^{th}\) update, the model infers the expected outcome \(\bar{m}_{k}\) of \(k^{th}\) sequence and the linear model \(\bar{A}_{k}\) which acts on the centralized residual error parameters \(x\). Besides gate set errors, FBT also models two error processes individually, which are approximation error and sampling error. We note that, the approximation error captures the error due to model linearization, which avoids overfitting issues when gate errors are relatively large. Since estimating the approximation error requires sampling over the estimated channels, which is time-consuming, it becomes unnecessary and can be dropped off when it becomes much smaller than shot noise [11]. ### Decomposing into elementary error generators FBT represents noise channels in PTM, which makes channel parameterization easier. However, it is not intuitive to understand the physical sources of the errors. Error taxonomy method for small Markovian errors [34] decomposes the error generators into elementary error generators, which allows us to correlate the physical error sources from the estimated noise channels. To decompose the noise channels estimated by FBT, PTM of noise channels \(\Lambda\) need to be converted to error generator \(\mathcal{L}\) by matrix logarithm: \[\mathcal{L}=\text{logm}(\Lambda) \tag{5}\] Error generators can be projected to error subspaces. The elementary error generator can be categorized into four classes: Hamiltonian, Pauli-stochastic, Pauli-correlation, and active error generators (denoted as H, S, C, and A individually). The coefficients of elementary error generators can be extracted by its dual basis. For instance, for Hamiltonian error generators: \[H_{P}^{\prime}[\cdot]=-\frac{i}{2d^{2}}[P,\cdot] \tag{6}\] where \(P\) denotes Pauli operators and \(d\) is the dimension of Hilbert space. Then the coefficient of the elementary Hamiltonian error generator can be calculated by: \[h_{P}=Tr(H_{P}^{\dagger}\mathcal{L}) \tag{7}\] where \(H_{P}^{\dagger}\) is the dual Hamiltonian generator in Pauli basis. 
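As a concrete illustration of Eqs. (5)-(7), the sketch below takes the matrix logarithm of a toy noise-channel PTM and projects it onto Hamiltonian-type generators. The least-squares projection used here is a simplifying assumption whose normalization may differ from the dual-basis convention of Ref. [34]; it is meant only to show the mechanics.

```python
# Sketch of extracting Hamiltonian error coefficients from a noise-channel PTM
# (Eqs. 5-7). The least-squares projection and its normalization are
# simplifying assumptions, not the exact dual-basis convention of Ref. [34].
import numpy as np
from scipy.linalg import expm, logm

paulis = {"I": np.eye(2), "X": np.array([[0, 1], [1, 0]]),
          "Y": np.array([[0, -1j], [1j, 0]]), "Z": np.array([[1, 0], [0, -1]])}
basis = [p / np.sqrt(2) for p in paulis.values()]   # normalized single-qubit basis

def ptm_of_map(channel):
    """PTM of a superoperator given as a function rho -> channel(rho)."""
    return np.array([[np.trace(bi.conj().T @ channel(bj)).real
                      for bj in basis] for bi in basis])

def hamiltonian_generator(P):
    """PTM of the generator H_P: rho -> -i [P, rho]."""
    return ptm_of_map(lambda rho: -1j * (P @ rho - rho @ P))

# Toy noise channel: small coherent Z over-rotation, Lambda = exp(theta * H_Z)
theta = 0.02
Lambda = expm(theta * hamiltonian_generator(paulis["Z"]))

L_est = np.real(logm(Lambda))                       # Eq. 5: error generator
for name in ["X", "Y", "Z"]:
    G = hamiltonian_generator(paulis[name])
    h = np.trace(G.T @ L_est) / np.trace(G.T @ G)   # projection onto H_P
    print(name, round(h, 4))                        # recovers ~0.02 on Z only
```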
We can also evaluate the elementary error generator infidelity contributions to the entanglement infidelity by: \[\epsilon_{\text{ent}}=\epsilon_{J}+\theta_{J}^{2}+O(|L_{S}|^{2}) \tag{8}\] where the Jamiolkowski probability \(\epsilon_{J}\) and Jamiolkowski amplitude \(\theta_{J}^{2}\) are: \[\epsilon_{J}=\Sigma_{P}S_{P} \tag{9}\] \[\theta_{J}^{2}=\Sigma_{P}H_{P}^{2} \tag{10}\] The error taxonomy, as a post-processing procedure for FBT, allows us to reinterpret the outcomes in a more sensible way. The coefficients of the elementary error generators reveal the dominant errors that limit us from reaching higher fidelity and indicate the potential physical origins of the errors. ### Informational completeness Treating tomography protocols as black boxes, simply taking in measurement results and outputting the reconstructed mathematical description of the model, often raises the question of how much data is required to reconstruct the model faithfully. The minimal information required by traditional state, process, and measurement tomography should span the whole Hilbert-Schmidt space. For example, reconstruction of an \(n\)-qubit state requires a minimum of \(4^{n}\) projective measurements that are orthogonal to each other. Under the picture of self-consistent tomographies, like FBT, the requirement for informational completeness is no longer as straightforward as in process tomography based on linear-inversion reconstruction. We shall see that each measurement outcome is a collective effect of a subset of unknown parameters from the gates used in that sequence. Since FBT is a Bayesian method, we can update our estimates without requiring informational completeness. Though quantitatively determining the minimum number of experiments for FBT is beyond the scope of this work, we list the following empirical principles: * Distance between the initial guess and the true model: large distances require more updates to make the final estimation reach a decent level of accuracy. * The initial guess of the uncertainty in the model parameters needs to balance the speed of convergence towards a result while being loose enough to allow for significant updates on the model with incoming new data. * Randomness of the sequence set: unbiased appearance of each gate improves the chances of all parameters reaching the target level of precision with a finite number of experiments. Eventually, the precision will be limited by finite sampling and the non-Markovianity of the system. In this work, we aim to lower the cost of experiments without compromising the estimation accuracy, by exploiting the advantages of FBT. The next section approaches this target by reducing unnecessary measurement projections, while Section III.7 discusses how the characterization information can be maximally used by the initial prior bootstrap. ### Native measurement for FBT Most process tomography protocols assume that each qubit can be measured individually, which means each qubit gives 1 bit of classical information for each shot. However, for a multi-qubit system, it is more straightforward to measure the qubits pairwise. Parity readout [28] based on the Pauli exclusion principle yields one bit of classical binary information to determine whether a pair of spins is parallel or antiparallel. Earlier work [29] has shown that we can access the complete two-qubit measurement basis by repeating the main sequence with different projection sequences. 
For QPT, it is necessary to have an informationally complete measurement to guarantee linear inversion. However, for self-consistent methods like FBT, as we have discussed in section III.4, it is not necessary to measure multiple projections for each random circuit to satisfy informational completeness. We show here that the measurement for FBT can be formulated either with multiple projections or directly using a single native measurement outcome. To clarify the problem, we first look into it through Eq. 1. A sequence repeated with \(N_{E}\) projections can be written as: \[\left[\begin{array}{c}m_{k0}\\ m_{k1}\\ \cdots\\ m_{kN_{E}}\end{array}\right]=\left[\begin{array}{c}\langle\langle E_{0}||\\ \langle\langle E_{1}||\\ \cdots\\ \langle\langle E_{N_{E}}||\end{array}\right]\prod_{i\in S_{k}}\Lambda_{i}G_{i}\Lambda_{\rho}\,||\rho_{0}\rangle\rangle \tag{11}\] As indicated by the model, every shot of the experiment starts with state initialization, then executes the main sequence, and ends with a measurement. Suppose we have only one native measurement; the rest can be implemented by performing projection sequences before the native measurement. Alternatively, the equation above can be written as: \[\left[\begin{array}{c}m_{k0}\\ m_{k1}\\ \cdots\\ m_{kN_{E}}\end{array}\right]=\left[\begin{array}{c}\langle\langle E_{0}||\,\Lambda_{E}\\ \langle\langle E_{0}||\,\Lambda_{E}\prod_{i\in P_{1}}\Lambda_{i}G_{i}\\ \cdots\\ \langle\langle E_{0}||\,\Lambda_{E}\prod_{i\in P_{N_{E}}}\Lambda_{i}G_{i}\end{array}\right]\times\prod_{i\in S_{k}}\Lambda_{i}G_{i}\Lambda_{\rho}\,||\rho_{0}\rangle\rangle \tag{12}\] By joining each projection sequence to the main sequence individually, the original \(M\)-sequence dataset is now unpacked into \(N_{E}\times M\) sequences with native measurement. To prove that feeding FBT with a single projection does not harm informational completeness, we compare three ways of using the projective measurement results as the input to FBT: * 1. FBT receives \(M\) updates with the original main sequences; each update takes multiple projections as a vector input. * 2. FBT receives \(M\) updates; we keep one of the projections and join that projection sequence to the main sequence, and each update takes that projection's result as a scalar input. * 3. FBT receives \(N_{E}\times M\) updates, utilizing all projections' results and inputting them to FBT as in 2. Figure 4: Validity of native measurement for FBT. (a) Illustration of three ways of feeding projective measurement outcomes to FBT. Each main sequence is repeated twice, once for measuring odd parity and once for even parity. Cases A and B take one of the two projections for the FBT analysis; case B absorbs the projection sequence into the main sequence, while case C takes both projections as a full parity basis. (b) Final estimated gate noise residual channel of \(x_{2}\) and measurement channel for cases A and C. (c) Traces of channel parameters of the \(x_{1}\) gate for each case mentioned in (a). Case D mixes data from cases A and B as a reference, which processes twice the amount of sequences. Without loss of generality, we demonstrate with parity readout, which is used in all the experiments shown in the previous sections. For instance, for a pair of electron spins, if tunneling of one electron from one dot to its neighbor occurs, this indicates an odd-parity state. To measure the probability of the opposite parity, we invert the parity by applying a \(\pi\) pulse on one of the qubits, which is implemented by a projection sequence of \(x_{2}-x_{2}\). 
For a parity readout natively reading out odd parity, the projections can be represented as: \[\begin{split}\langle\langle E_{Odd}||&=\frac{1}{2}(\langle\langle\uparrow\downarrow||+\langle\langle\downarrow\uparrow||)\\ \langle\langle E_{Even}||&=\frac{1}{2}(\langle\langle\uparrow\uparrow||+\langle\langle\downarrow\downarrow||)\\ &=\frac{1}{2}(\langle\langle\uparrow\downarrow||+\langle\langle\downarrow\uparrow||)(G_{x_{2}}G_{x_{2}})\end{split} \tag{13}\] As shown in Fig. 4, the testing experiment contains 4220 random sequences as main sequences, and each sequence is repeated twice to get both the even and odd parity projections, which means 8440 different sequences in total were executed for this experiment. Cases A and B each use one of the projections individually. However, for case B, the projection sequence is seen as part of the main sequence, so FBT takes in a single native measurement outcome, as in case A. While case C routinely takes two projections as a complete parity measurement basis, case D is a reference case, which utilizes all 8440 sequences, but FBT sees the projection sequences as part of the main sequences, like cases A and B. Based on the four cases of analysis of the testing experiment, the estimated parameters' accuracy is not sacrificed even in the single-projection native measurement cases (comparing A and B to C), which indicates that multi-projection parity measurement does not benefit the accuracy of estimation. From Fig. 4(b), it is also noticeable that both cases A and C show that \(x_{2}\) has quite a significant over-rotation error. However, for case C, the \(x_{2}\) over-rotation error appears on the measurement channel, which is not desirable when trying to diagnose the intrinsic errors in the measurement channel. Therefore, a single native measurement as input for FBT does not harm the accuracy of the estimation. By dropping the non-native measurements, the experiment and analysis costs are reduced to \(1/N_{E}\), and mixing of gate errors into the measurement channel can be avoided. ### Gauge optimization for FBT Under the self-consistent picture, the representation of the reconstructed gate set is not unique. That means multiple alternative representations of the gate set yield the same experimental outcomes. The transformation between those equivalent representations is called a gauge transformation: \[\begin{cases}\langle\langle\tilde{E}^{\prime}_{0}||=\langle\langle\tilde{E}_{0}||\,S\\ \tilde{G}^{\prime}_{i}=S^{-1}\tilde{G}_{i}S\\ ||\tilde{\rho}^{\prime}_{0}\rangle\rangle=S^{-1}\,||\tilde{\rho}_{0}\rangle\rangle\end{cases} \tag{14}\] where the transformation matrix \(S\) is arbitrary as long as \(S\) is invertible and trace-preserving, which maintains the CPTP property of the transformed gate set. In principle, there is no preferred choice of gauge for a gate set. However, metrics that indicate the overall performance of the gates, like fidelity, are gauge-variant. This is particularly impactful in the case where the state preparation and measurement are being studied as noisy channels themselves, as we do here. In this case, we are prescribing the initialization and measurement quantization axes. We resolve the issue of gauge ambiguity for our FBT protocol by fixing the gauge to be the one that optimizes some choice of metric for gate fidelity. In this work, we choose the target of the gauge optimization to be minimizing a "distance" metric between the transformed gate set and the ideal gate set. 
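To make the gauge freedom of Eq. (14) concrete, the sketch below applies a random trace-preserving, invertible \(S\) to a toy single-qubit gate set in PTM form and checks that all predicted outcome probabilities are unchanged. The particular parametrization of \(S\) (identity plus a small perturbation with its first row fixed) is an illustrative choice, not the one used in our implementation.

```python
# Numerical check of gauge invariance (Eq. 14): a trace-preserving, invertible S
# leaves all predicted outcome probabilities unchanged. Toy single-qubit example.
import numpy as np

rng = np.random.default_rng(0)
d2 = 4                                   # single-qubit PTM dimension

# Toy noisy gate set in PTM form (entries are illustrative, not fitted values)
rho = np.array([1 / np.sqrt(2), 0.0, 0.0, 1 / np.sqrt(2)])   # ~|0><0| preparation
E = np.array([1 / np.sqrt(2), 0.0, 0.0, 1 / np.sqrt(2)])     # ~|0><0| effect
G = np.eye(d2) + 0.05 * rng.standard_normal((d2, d2))
G[0, :] = [1, 0, 0, 0]                   # keep the gate trace-preserving

# Gauge matrix: trace-preserving (first row = e_1) and invertible
S = np.eye(d2) + 0.1 * rng.standard_normal((d2, d2))
S[0, :] = [1, 0, 0, 0]
S_inv = np.linalg.inv(S)

E_p, G_p, rho_p = E @ S, S_inv @ G @ S, S_inv @ rho

for length in [1, 2, 5]:
    m = E @ np.linalg.matrix_power(G, length) @ rho
    m_p = E_p @ np.linalg.matrix_power(G_p, length) @ rho_p
    assert np.isclose(m, m_p)            # identical predictions in either gauge
```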
Though there are a few metrics, like fidelity, the diamond norm, etc., that can be chosen as the objective function, minimizing the weighted Frobenius distance between the estimated gate set and the ideal gate set would be the optimal option [10]: \[\begin{split}\operatorname*{argmin}_{S}g(\mathcal{G},\mathcal{G}^{\prime})=w_{G}\sum_{i}||\tilde{G}_{i}-\tilde{G}^{\prime}_{i}||^{2}_{\mathcal{F}}\\ +w_{S}(||\tilde{\rho}-\tilde{\rho}^{\prime}||^{2}_{\mathcal{F}}+||\tilde{E}-\tilde{E}^{\prime}||^{2}_{\mathcal{F}})\end{split} \tag{15}\] where \(||\cdot||_{\mathcal{F}}\) denotes the Frobenius norm, and \(w_{G}\) and \(w_{S}\) are the unitary gate weight and SPAM weight, respectively. Typically we set \(w_{G}/w_{S}\gg 1\), because the SPAM errors cannot be amplified. Similar to the implementation in pyGSTi [35], the gauge transformation matrix \(S\) is constrained to be trace-preserving and invertible. The weighted sum of Frobenius distances, as the objective function, is minimized by the L-BFGS-B method [36] from the _Scipy_ python package. ### Initial prior bootstrap strategies One major advantage of the Bayesian method is that we can incorporate as much knowledge as we have into the model prior before we start the analysis. This cuts both ways: the bootstrapping strategy for the initial prior of the channels is critical -- a good educated guess can help obtain trustworthy results with low experimental and computational cost, while a poor guess could lead to convergence problems. There are several reasons why a good initial guess is important for FBT. Firstly, we can feed FBT with "more than enough" updates to guarantee the accuracy of the estimation, but they would also take a long time to run experimentally, which in turn leads to slow drifts becoming significant. Since fewer experiments are always more desirable, an educated guess for the initial prior allows the model to be updated with less data, but without compromising the accuracy. Secondly, a decent initial guess allows the model to start with a lower approximation error. The approximation error, which captures the error from linearizing the model, is expensive to sample for each update and cannot be dropped before meeting the turn-off threshold [11]. Thus, a wise choice of the bootstrap strategy, as briefly summarized in Table 1, is critical for FBT. In practice, the quantum system we are characterizing is not entirely unknown to us. As more characterization data becomes available, we have more trustworthy information to bootstrap the initial prior statistics for FBT. In the worst case, we do not have any useful information about the whole gate set and have to cold boot blindly with roughly estimated noise channels based on metrics like quality factors and the visibility of Rabi oscillations, where a depolarizing noise channel is usually the conventional choice. In this case, we do have to feed more sequences to FBT to get reliable results. Another cold boot strategy -- cold boot with fidelity [11] estimated from earlier RBM experiments -- provides a tighter error bound than a blind cold boot. This significantly reduces the approximation error for the first updates and does not require a large number of sequences to feed FBT. Routine calibration and diagnosis with FBT will accumulate abundant results in the database, which is also a source of knowledge for bootstrapping the prior. In this work, we introduce warm boot strategies to leverage prior information from historical process tomography results. 
We can either trust the earlier analysis fully (full warm boot) or partially (partial warm boot), depending on the device setup. The full warm boot is applicable when no significant changes happen to the system configurations. The new analysis fully inherits the complete gate set statistics of the previous analysis. Minor updates of the setup or slow drift can impact the gate performance locally, but we can still partially trust the earlier results by overwriting a different uncertainty based on educated guesses. Algorithm 1 shows how the partial warm boot initializes the prior with the previous estimation results and a new guess of the covariance matrix. ```
Input: \(\bar{x}\): Estimated noise channel mean from previous analysis
Input: \(\Gamma_{x}\): Guessed covariance matrix, \(x\sim\mathcal{N}(\bar{x},\Gamma_{x})\)
for \(i=0\) to \(N_{sample}\) do
    Gaussian sample a process matrix \(\mathcal{X}\)
    CPTP project \(\mathcal{X}\)
    Save \(\mathcal{X}\) to \(P_{f}(\mathcal{X})\)
end
\(\bar{x^{\prime}}\gets mean(P_{f}(\mathcal{X}))\)
\(\Gamma^{\prime}_{x}\gets cov(P_{f}(\mathcal{X}))\)
Output: \((\bar{x^{\prime}},\Gamma^{\prime}_{x})\)
``` **Algorithm 1** Initial prior bootstrap ## IV Discussion The web-based online analysis platform developed in this work opens up the possibility of real-time gate set calibration in the future. We characterize the non-Markovian quantum process with two specially designed FBT experiments, on different noise-effective time scales, on a pair of spin qubits in a silicon quantum dot system. We observed that the Hamiltonian error dominates, and some of its components appear stronger in longer sequences. The slow drift tracking experiment shows that qubit fidelity can vary over a long experiment time, which can be captured by FBT. To reduce the experimental cost and acquire more information per unit of time, we verified that the native measurement method does not compromise the estimation accuracy and proposed the warm boot for the initial prior bootstrap to speed up the analysis. Though our experiment-analysis protocol for observing non-Markovian error is not a rigorous tomographic method for reconstructing non-Markovian quantum process dynamics, it clearly indicates the types of errors that have non-Markovian behavior. To close the loop between characterization and correction, we will need to correlate the estimated errors back to instrumental control parameters in future work. Scaling up the quantum system will also bring challenges to FBT. The real-time online feedback may no longer be valid as the analysis time scales non-linearly with the number of qubits. Refining the model and reducing the number of parameters is a potential solution to this challenge. Overall, in this work, we have demonstrated the potential of FBT for tackling non-Markovian noise and made it ready to be an online real-time feedback tool for building future fault-tolerant quantum computers. ## V Acknowledgement We acknowledge technical discussions with Matthew Otten from HRL Laboratories. We acknowledge support from the Sydney Quantum Academy, the Australian Research Council (FL190100167, CE170100012, and IM230100396), the US Army Research Office (W911NF-23-10092), and the NSW Node of the Australian National Fabrication Facility. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, expressed or implied, of the Army Research Office or the US Government. 
The US Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
2306.13465
3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation
Despite that the segment anything model (SAM) achieved impressive results on general-purpose semantic segmentation with strong generalization ability on daily images, its demonstrated performance on medical image segmentation is less precise and not stable, especially when dealing with tumor segmentation tasks that involve objects of small sizes, irregular shapes, and low contrast. Notably, the original SAM architecture is designed for 2D natural images, therefore would not be able to extract the 3D spatial information from volumetric medical data effectively. In this paper, we propose a novel adaptation method for transferring SAM from 2D to 3D for promptable medical image segmentation. Through a holistically designed scheme for architecture modification, we transfer the SAM to support volumetric inputs while retaining the majority of its pre-trained parameters for reuse. The fine-tuning process is conducted in a parameter-efficient manner, wherein most of the pre-trained parameters remain frozen, and only a few lightweight spatial adapters are introduced and tuned. Regardless of the domain gap between natural and medical data and the disparity in the spatial arrangement between 2D and 3D, the transformer trained on natural images can effectively capture the spatial patterns present in volumetric medical images with only lightweight adaptations. We conduct experiments on four open-source tumor segmentation datasets, and with a single click prompt, our model can outperform domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, specifically by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, colon cancer segmentation, and achieve similar performance for liver tumor segmentation. We also compare our adaptation method with existing popular adapters, and observed significant performance improvement on most datasets.
Shizhan Gong, Yuan Zhong, Wenao Ma, Jinpeng Li, Zhao Wang, Jingyang Zhang, Pheng-Ann Heng, Qi Dou
2023-06-23T12:09:52Z
http://arxiv.org/abs/2306.13465v2
# 3DSAM-adapter: Holistic Adaptation of SAM from 2D to 3D for Promptable Medical Image Segmentation ###### Abstract Despite that the segment anything model (SAM) achieved impressive results on general-purpose semantic segmentation with strong generalization ability on daily images, its demonstrated performance on medical image segmentation is less precise and not stable, especially when dealing with tumor segmentation tasks that involve objects of small sizes, irregular shapes, and low contrast. Notably, the original SAM architecture is designed for 2D natural images, therefore would not be able to extract the 3D spatial information from volumetric medical data effectively. In this paper, we propose a novel adaptation method for transferring SAM from 2D to 3D for promptable medical image segmentation. Through a holistically designed scheme for architecture modification, we transfer the SAM to support volumetric inputs while retaining the majority of its pre-trained parameters for reuse. The fine-tuning process is conducted in a parameter-efficient manner, wherein most of the pre-trained parameters remain frozen, and only a few lightweight spatial adapters are introduced and tuned. Regardless of the domain gap between natural and medical data and the disparity in the spatial arrangement between 2D and 3D, the transformer trained on natural images can effectively capture the spatial patterns present in volumetric medical images with only lightweight adaptations. We conduct experiments on four open-source tumor segmentation datasets, and with a single click prompt, our model can outperform domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, specifically by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, colon cancer segmentation, and achieve similar performance for liver tumor segmentation. We also compare our adaptation method with existing popular adapters, and observed significant performance improvement on most datasets. Our code and models are available at: [https://github.com/med-air/3DSAM-adapter](https://github.com/med-air/3DSAM-adapter). ## 1 Introduction Foundation models trained on massive data have demonstrated impressive ability on various general tasks [1; 2; 3], and are envisaged to impact downstream domains, especially where data collection and labeling are expensive. Segment anything model (SAM) [4] is the one from the computer vision field, which has shown success for general-purpose promptable object segmentation. As is known, such powerful discrimination ability relies on the coverage of distributions as exhibited in training data. This is also the underlying reason for the reported suboptimal performance when applying SAM to domain-specific tasks such as medical image segmentation [5; 6; 7; 8; 9; 10; 11]. For instance, Huang et al. [7] extensively tested SAM on 52 medical datasets and observed limited performance on objects with irregular shapes or limited contrast. Mazurowski et al. [5] evaluated SAM under different prompt settings, and showed unstable results upon single-point prompts on 3D medical images. Therefore, adaptation for out-of-distribution domains is needed, but how to design the adapter is yet unclear. The success of SAM can be partly ascribed to the powerful prompt engineering, which originates from large language models [12] with recent variants improving model generalizability across tasks. 
SAM uses the form of positional prompts manifested in a point click or bounding box, so that the model predicts a segmentation mask for where the prompt is provided. The problem with such positional prompts is the lack of high-level semantic awareness, because the segmenter tends to discriminate nearby structures relying on local features such as edges, shapes, or contrast [13]. This works for SAM's pre-training data, where the objects mostly have clear boundaries, but is not suitable for medical images, because the boundaries between tumors and their surrounding tissues are often ambiguous. Therefore, adaptation or model redesign is required to transfer SAM to specific applications with domain knowledge considered. Recent works on parameter-efficient adaptation methods have tried to update pre-trained parameters via learning task-specific vision prompts [14], specifying a small proportion of parameters to tune [15], or incorporating lightweight plug-and-play adapters [16; 17]. Promising results have been achieved even though just a small number of parameters are fine-tuned or added, which motivates us to also seek an efficient version of **3DSAM-adapter** for medical imaging. How to adapt SAM from 2D to 3D for medical image segmentation should be carefully considered from all aspects in order to fit volumetric data. Related work includes methods adapting foundation models from 2D images to 3D videos [18]; for example, Pan et al. [19] modify a standard adapter by incorporating depth-wise convolutions to impose spatial-temporal reasoning. Domain experts in medical imaging have also proposed 2D to 3D adapters; for example, Wang et al. [20] develop a 3D convolution-based adapter together with the Fast Fourier Transform to extract spatial information. Wu et al. [21] replicate the weights learned from 2D images to help fuse spatial information of the third dimension. However, their main backbones (especially the transformer-based ones) are still 2D, whereas the 3D information is compensated for through additional fusion structures. This may work for video data, where the temporal dimension is essentially different from the spatial dimensions, but is definitely sub-optimal for medical images, where the 3-dimensional spatial information is isotropic. To date, how to effectively adapt parameters pre-trained on 2D images to capture 3D spatial information has not been explored, due to the challenging requirement of holistically modifying the network, which would affect many pre-trained weights. In addition, raising the dimension can greatly increase the number of tokens in the transformer blocks, which also potentially leads to numerical and memory issues. In this paper, we propose a new parameter-efficient adaptation method to holistically adapt SAM from 2D to 3D for medical image segmentation. First, for the image encoder at the input level, we precisely design the modification scheme to allow the original 2D transformer to support volumetric inputs, while keeping as many pre-trained weights as possible reusable. We find that weights pre-trained on 2D images can still capture some 3D spatial patterns through parameter-efficient fine-tuning. Second, at the prompt encoder level, instead of using positional encoding as the prompt representation, we propose a visual sampler from the image embedding to serve as the representation of the point prompt, and further use a set of global queries to eliminate noisy prompts. 
This strategy proves effective in overcoming the over-smoothing issue caused by the drastic increase in the number of image tokens that accompanies the raised dimension, and it also improves the model's robustness to inaccurate prompts. Last, for the mask decoder at the output level, we emphasize a lightweight design and add multi-layer aggregation. We conduct experiments on medical tumor segmentation datasets with comprehensive comparisons with domain SOTA approaches including nn-UNet [22], as well as recent adapters in general. The results show our method can outperform existing methods by a large margin. The method is also robust to the number and position of the prompts. For instance, a single point at the margin of the tumor can also serve as a prompt for accurate segmentation. Our major contributions are summarized as follows: * We propose a holistic 2D to 3D adaptation method via carefully designed modification of the SAM architecture, which adds only 7.79% more parameters and keeps most of the pre-trained weights reusable while performing well for volumetric medical image segmentation. * We introduce a novel parameter-efficient fine-tuning method to effectively capitalize on a large image model pre-trained on 2D images for 3D medical image segmentation with only 16.96% tunable parameters (including newly added parameters) of the original model. * We conducted experiments with four datasets for medical image segmentation. Results show that our 3DSAM-adapter significantly outperforms nn-UNet [22] on three of the four datasets (by 8.25% for kidney tumor, 29.87% for pancreas tumor, and 10.11% for colon cancer) and is comparable on the liver tumor. We also demonstrate superior performance of our proposed method over recent parameter-efficient fine-tuning methods such as ST-adapter [19]. ## 2 Related Works **Foundation models in computer vision.** With the breakthroughs in deep learning models, most modern vision models follow the pre-training and fine-tuning paradigm [23; 24]. Large and generalizable foundation models have seen significant interest in computer vision, benefiting from pre-training techniques comprising self-supervised learning [25; 26], contrastive learning [27; 28], language-vision pre-training [29; 30], etc. Recently, SAM [4], pre-trained on over 11M images, stands out as a generalist foundation model for image segmentation and shows powerful zero-shot capabilities of segmenting anything in the wild in an interactive and promptable manner. One of the concurrent works, SEEM [31], presents a more universal prompting scheme to support semantic-aware open-set segmentation. SegGPT [32] further pursues broad in-context segmentation tasks in images or videos. **Parameter-efficient model fine-tuning.** With the widespread use of foundation models, the topic of parameter-efficient fine-tuning has attracted considerable attention. Existing efficient-tuning methods can be classified into three categories [33]. Addition-based methods insert lightweight adapters [16; 19; 20] or prompts [12; 34] into the original model and only tune these parameters. Specification-based methods [15; 35] select a small proportion of the original parameters to tune. Reparameterization-based methods [36] use low-rank matrices to approximate the parameter updates. Recently, a few works have adapted pre-trained image models to video understanding [18; 19] or volumetric segmentation [20]. 
However, these methods interpret the additional dimension as a "word group" and use special modules to aggregate the information along that dimension. We consider all three dimensions to be isotropic and directly adapt the trained transformer blocks to capture 3D patterns. **Tumor segmentation in medical imaging.** Tumor segmentation is one of the most common yet challenging tasks in computer-aided medical image analysis. Recent achievements in deep neural networks have contributed significantly to performance improvements on applications for different anatomical regions, such as the liver [37; 38], kidney [39], pancreas [40], and colon [41]. However, precise tumor segmentation is still challenging even for state-of-the-art segmentation networks such as nnU-Net [22], UNETR++ [42], and 3D UX-Net [43], because tumors usually have notable properties of small size, irregular shape, low contrast, and ambiguous boundaries. Unsurprisingly, in recently reported SAM applications on medical images, SAM obtained much worse and more unstable results on tumor segmentation tasks compared with other anatomical structures such as 3D organs. Therefore, in this paper, we will focus on evaluating our proposed adapter in tumor segmentation scenarios, in order to address the most significant weakness of the original SAM. ## 3 Methods In this section, we introduce how we adapt the original SAM architecture for volumetric medical image segmentation. Fig. 1 presents the overview of our method. We first give a brief overview of SAM, and then explain the technical details for adapting the image encoder, prompt encoder, and mask decoder, respectively. ### 3.1 Overview of SAM SAM [4] is a large promptable segmentation model with impressive performance and generalization ability on segmenting daily objects. The model consists of three components, i.e., the image encoder, prompt encoder, and mask decoder. The image encoder utilizes the structure of the Vision Transformer (ViT) [44] to transform the original images into image embeddings. The prompt encoder encodes prompts (points, boxes, etc.) into embedding representations; it is designed to be lightweight, summing a frozen positional encoding and a learnable embedding for each prompt type. The mask decoder comprises a prompt self-attention block and bidirectional cross-attention blocks (prompt-to-image attention and vice versa). After the attention blocks, the feature map is up-sampled and transformed into segmentation masks by an MLP. However, the original structure is designed for 2D natural image segmentation. When transferred to volumetric images, it has to make predictions in a slice-wise manner, which fails to capture the inter-slice spatial information. The model also exhibits performance degradation when tested on medical images due to the domain gap between natural and medical images. Therefore, task-specific adaptation and fine-tuning are required. ### 3.2 Adapting Image Encoder for Volumetric Inputs The original SAM is based on a 2D ViT, which excels at capturing global patterns in natural 2D images. However, many widely adopted medical imaging modalities, such as CT and MRI, are 3D volumes. 3D information is vital for applications such as organ segmentation and tumor quantification, since their representative patterns need to be captured from a 3D perspective. Purely relying on 2D views can result in low accuracy due to ambiguous boundaries and non-standard scanning poses. 
Existing methods adapting 2D pre-trained models for 3D applications usually process the images in a slice-wise manner and then use an additional spatial adapter or temporal module to fuse the 2D information [18]. The major parts of the backbone, such as the transformer blocks, are still built in 2D. This can work well on video-related tasks but is a suboptimal solution for medical image analysis, as volumetric medical images are isotropic in terms of spatial resolution and carry inherent 3D information. It would be problematic to process the third (depth) dimension differently from the width and height. To this end, we design our adaptation method based on two criteria: 1) enabling the model to learn 3D spatial patterns directly, and 2) inheriting most of the parameters from the pre-trained model while keeping the newly added parameters small and easy to tune. The devil is in the details, as illustrated in Fig. 1. The original SAM is based on the transformer, which comprises multiple attention blocks and thereby naturally supports inputs of varying token size. Meanwhile, volumetric medical images are usually isotropic, and the spatial relationships among voxels are very similar to those of the 2D case. Therefore, we hypothesize that a network trained to learn 2D spatial features can be easily adapted to capture 3D patterns as well. The remaining questions are how to initialize the tokens for 3D patches and how to inform the model of the new positional information in a parameter-efficient way. Specifically, we carefully modify each module of the network as follows (a sketch of the first two modifications is given below):

* **Patch embedding.** We take advantage of the combination of \(1\times 14\times 14\) and \(14\times 1\times 1\) 3D convolutions as an approximation of the \(14\times 14\times 14\) convolution. We initialize the \(1\times 14\times 14\) convolution with the weight of the pre-trained 2D convolution and keep it frozen during the fine-tuning phase. For the newly introduced \(14\times 1\times 1\) 3D convolution, depth-wise convolution is used to further reduce the number of tunable parameters.
* **Positional encoding.** The pre-trained ViT contains a lookup table of size \(c\times H\times W\) with the positional encoding. We additionally initialize a tunable lookup table of size \(c\times D\) with zeros. The positional encoding of a 3D point \((d,h,w)\) is the summation of the embedding in the frozen lookup table at \((h,w)\) and the embedding in the tunable lookup table at \((d)\).
* **Attention block.** Attention blocks can be directly modified to fit 3D features. For 2D inputs, the size of the queries is \([B,HW,c]\), which can be easily adapted to \([B,DHW,c]\) for 3D ones with all the pre-trained weights inherited. We use a sliding-window mechanism similar to the Swin Transformer [45] to reduce the memory cost caused by the dimension raising.
* **Bottleneck.** As convolution layers are usually easier to optimize than transformers [46], we replace all the 2D convolutions in the bottleneck with 3D ones and train them from scratch.

Figure 1: Overview of our proposed method for 3DSAM-adapter. The original ViT is modified to support volumetric inputs. The prompt encoder is redesigned to support 3D point prompts, and the mask decoder is updated to a 3D CNN with multi-layer aggregation to generate 3D segmentation.

With the above modifications, we can elegantly upgrade the 2D ViT to a 3D ViT while keeping most of the parameters reusable.
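To make the patch-embedding and positional-encoding changes concrete, the following PyTorch sketch shows one possible implementation. It is our own minimal illustration rather than the released 3DSAM-adapter code: the class and argument names (e.g., `PatchEmbed3D`, `pretrained_2d_weight`), the single-channel input, and the shape conventions are assumptions.

```python
import torch
import torch.nn as nn

class PatchEmbed3D(nn.Module):
    """Approximate a 14x14x14 patch embedding with a frozen 1x14x14 conv
    (initialized from the pre-trained 2D weights) followed by a tunable
    depth-wise 14x1x1 conv. Names, shapes, and channel handling are illustrative."""

    def __init__(self, in_chans=1, embed_dim=768, pretrained_2d_weight=None):
        super().__init__()
        # In-plane embedding: reuse the pre-trained 2D 14x14 convolution weights.
        self.proj_2d = nn.Conv3d(in_chans, embed_dim,
                                 kernel_size=(1, 14, 14), stride=(1, 14, 14))
        if pretrained_2d_weight is not None:
            # pretrained_2d_weight: [embed_dim, in_chans, 14, 14] -> add a depth axis.
            self.proj_2d.weight.data.copy_(pretrained_2d_weight.unsqueeze(2))
        for p in self.proj_2d.parameters():
            p.requires_grad = False  # keep the inherited weights frozen

        # Depth aggregation: new depth-wise convolution, trained from scratch.
        self.proj_depth = nn.Conv3d(embed_dim, embed_dim,
                                    kernel_size=(14, 1, 1), stride=(14, 1, 1),
                                    groups=embed_dim)

    def forward(self, x):            # x: [B, C, D, H, W]
        x = self.proj_2d(x)          # [B, c, D, H/14, W/14]
        x = self.proj_depth(x)       # [B, c, D/14, H/14, W/14]
        return x

def add_positional_encoding(tokens, pos_2d, pos_depth):
    """tokens: [B, c, D', H', W']; pos_2d: frozen [c, H', W'] table from the 2D model;
    pos_depth: tunable [c, D'] table initialized with zeros."""
    pos = pos_2d.unsqueeze(1) + pos_depth.unsqueeze(-1).unsqueeze(-1)  # [c, D', H', W']
    return tokens + pos.unsqueeze(0)
```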
Fully fine-tuning the 3D ViT can be memory-intensive. To address this issue, we propose to leverage the lightweight adapter [16] for efficient fine-tuning. The originally proposed adapter is composed of a down-projection linear layer and an up-projection linear layer, which can be represented as \(\text{Adapter}(\mathbf{X})=\mathbf{X}+\sigma(\mathbf{X}W_{down})W_{up}\), where \(\mathbf{X}\in\mathbb{R}^{N\times c}\) is the original feature representation, \(W_{down}\in\mathbb{R}^{c\times m}\) and \(W_{up}\in\mathbb{R}^{m\times c}\) denote the down-projection and up-projection layers, respectively, and \(\sigma(\cdot)\) is the activation function. As illustrated in Fig. 2, we append a depth-wise 3D convolution after the down-projection layer so the adapter can better leverage 3D spatial information. During the training phase, we only tune the parameters of the convolutions, spatial adapters, and normalization layers, while keeping all other parameters frozen. This freezing scheme makes the training process memory-efficient. Fine-tuning the adapter and normalization layers helps the model narrow the domain gap between natural images and medical images.

### Prompt Encoding by Visual Sampler

The original SAM leverages positional embedding to represent the prompt. The same positional embedding based on Fourier features [47] is applied to both the prompt and the image, so that the prompt and the image embedding corresponding to the same position have the same positional encoding. The prompt embedding then cross-attends with the image embedding and is thereby transformed from pure positional features into semantic features. This cross-attention works well for 2D cases but may cause over-smoothing issues [48] when applied to 3D feature maps. Raising to 3D causes a drastic increase in the number of tokens, so that the attention distribution tends toward uniform, which makes it hard for the prompt embedding to extract sufficient semantic information. Another issue for medical images is that offering accurate prompt guidance requires substantial professional knowledge, as many background points closely resemble foreground ones. The model can easily fail if the prompts are given inaccurately. It is desirable for the system to be more tolerant of noisy prompts, thereby supporting non-expert users or even using the predicted masks of a coarse segmentation algorithm as prompts. A third issue is that the form of the prompt can be limited in 3D cases, as bounding boxes are difficult to draw for volumetric images. A desirable method should work even with one point per volume as the prompt, which is a more challenging setting. Meanwhile, other prompts widely used in the medical domain (e.g., scribbles) can be transformed into points through sampling. To this end and following [31], we propose to use a visual sampler instead of positional encoding to represent the prompt. The whole process is illustrated in Fig. 3. Given the coordinates of the points, we directly interpolate from the feature map to fetch the embeddings, thereby guaranteeing that the prompts share the same semantic features as the image embeddings. We initialize a few tokens as global queries and then apply self-attention among the global queries and these point embeddings. After that, we apply cross-attention from the image embeddings to these global queries only. As the numbers of point prompts and global queries are both quite small, this alleviates the over-smoothing issue.
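A rough sketch of this prompt encoder is given below (our own illustration, not the authors' code): point embeddings are fetched from the 3D feature map by trilinear interpolation and then refined jointly with a small set of learnable global queries. All names, dimensions, and the coordinate-normalization convention (feature-grid corners aligned with volume corners) are assumptions.

```python
import torch
import torch.nn.functional as F

def sample_point_embeddings(feat, points, vol_shape):
    """feat: [B, c, D', H', W'] image embedding; points: [B, P, 3] voxel
    coordinates (z, y, x) in the original volume of size vol_shape = (D, H, W).
    Returns [B, P, c] embeddings interpolated at the prompt locations."""
    D, H, W = vol_shape
    # Normalize to [-1, 1] and reorder to (x, y, z), as grid_sample expects.
    norm = points / points.new_tensor([D - 1, H - 1, W - 1]) * 2 - 1
    grid = norm[..., [2, 1, 0]].reshape(points.shape[0], -1, 1, 1, 3)
    sampled = F.grid_sample(feat, grid, align_corners=True)   # [B, c, P, 1, 1]
    return sampled.flatten(2).transpose(1, 2)                  # [B, P, c]

class PromptEncoder3D(torch.nn.Module):
    """A few learnable global queries are refined by self-attention together
    with the sampled point embeddings; the mask decoder would later
    cross-attend to these queries only. Hyper-parameters are illustrative."""

    def __init__(self, dim=256, num_queries=8):
        super().__init__()
        self.queries = torch.nn.Parameter(torch.randn(num_queries, dim))
        self.attn = torch.nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, point_emb):                    # point_emb: [B, P, dim]
        q = self.queries.expand(point_emb.shape[0], -1, -1)
        tokens = torch.cat([q, point_emb], dim=1)    # joint self-attention
        out, _ = self.attn(tokens, tokens, tokens)
        return out[:, :self.queries.shape[0]]        # updated global queries
```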
Besides, this design brings higher tolerance towards noisy points, as the global queries serve as prototypes and only point embeddings with matching features have high similarity to them. We also include pure background images with false-positive prompts during the training phase. At each interaction, if there is no foreground pixel, we randomly sample 10 points from the background as the prompts; otherwise, we randomly sample about 40 points from the foreground. This training strategy enhances the model's robustness towards noisy prompts, and the model performs well even if the user gives only one point during the inference phase. Note that we discard the original interactive training and inference mode of SAM, where points are added progressively, with the next point selected in the misclassified area of the mask predicted in the previous step. It is not as convenient as in the 2D setting to detect the misclassified area in volumetric segmentation, where the user may need to check every slice. Therefore, we apply a simpler scheme for training and inference, where one or a few point prompts are given all at once.

Figure 2: Spatial adapter.

Figure 3: Structure of our prompt encoder based on visual sampler and global-query cross-attention.

### Lightweight Mask Decoder

The mask decoder of SAM is designed to be lightweight, with stacks of convolution layers. We replace all the 2D convolutions with 3D convolutions to directly generate 3D masks. The initial decoder is designed without any progressive upsampling or skip connections. This works well for natural images, where objects are usually large and boundaries are clear. However, for volumetric medical image segmentation, it is widely acknowledged that U-shaped networks with skip connections at multiple levels are critical [22], as the objects in medical images are usually tiny and have ambiguous boundaries, requiring higher-resolution details to be discriminated. To alleviate this issue while maintaining the lightweight property, we utilize a multi-layer aggregation mechanism [49] in our decoder, where the intermediate outputs of the encoder are concatenated together to produce a mask feature map while the whole structure remains lightweight. To better leverage the information from the original resolution, after upsampling the mask feature map to the original resolution, we concatenate it with the original image and use another 3D convolution to fuse the information and generate the final mask. We remove the multi-mask generation and ambiguity awareness of the original SAM, as our goal is to fine-tune SAM for a specific downstream task. The backbone of the mask decoder is lightweight and mainly comprises 3D convolutional layers, which are optimization-friendly, so we train all of its parameters from scratch. Overall, we have introduced a holistic scheme to adapt SAM for medical image segmentation. The adaptation of the image encoder focuses on how to efficiently take advantage of the pre-trained parameters to extract 3D spatial features, while the modifications of the prompt encoder and mask decoder mainly resolve the computational issues raised by dimension lifting and also help the model better align with domain-specific demands. Combining both transforms SAM into a potentially powerful tool for volumetric medical image segmentation.
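A minimal sketch of such a multi-layer-aggregation decoder is shown below; it is our own illustration, with the prompt cross-attention omitted for brevity, and the channel sizes, number of aggregated levels, and names are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskDecoderMLA3D(nn.Module):
    """Lightweight 3D mask decoder with multi-layer aggregation: intermediate
    encoder features are projected, resized to a common grid, concatenated and
    fused, then upsampled to full resolution and combined with the raw image."""

    def __init__(self, in_dims=(768, 768, 768, 768), mid=64):
        super().__init__()
        self.proj = nn.ModuleList([nn.Conv3d(c, mid, 1) for c in in_dims])
        self.fuse = nn.Conv3d(mid * len(in_dims), mid, 3, padding=1)
        self.head = nn.Conv3d(mid + 1, 1, 3, padding=1)  # +1 channel for the raw image

    def forward(self, feats, image):
        # feats: list of [B, c_i, d_i, h_i, w_i] intermediate encoder outputs
        # image: [B, 1, D, H, W] original-resolution volume
        size = feats[0].shape[2:]
        agg = torch.cat(
            [F.interpolate(p(f), size=size, mode="trilinear", align_corners=False)
             for p, f in zip(self.proj, feats)], dim=1)
        x = F.relu(self.fuse(agg))
        x = F.interpolate(x, size=image.shape[2:], mode="trilinear",
                          align_corners=False)
        return self.head(torch.cat([x, image], dim=1))   # logits, [B, 1, D, H, W]
```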
## 4 Experiments

### Setup

**Datasets.** We focus our experiments on tumor segmentation, as this is reported to be the most challenging task for the original SAM when applied to medical images. To this end, we employ four public datasets for volumetric tumor segmentation: 1) the Kidney Tumor Segmentation challenge 2021 dataset (KiTS21) [39], 2) the Pancreas Tumor Segmentation task of the 2018 MICCAI Medical Segmentation Decathlon challenge dataset (MSD-Pancreas) [50], 3) the Liver Tumor Segmentation Challenge 2017 dataset (LiTS17) [38], and 4) the Colon Cancer Primaries Segmentation task of the 2018 MICCAI Medical Segmentation Decathlon challenge dataset (MSD-Colon) [50]. These public datasets contain 300, 281, 118 and 126 abdominal CT scans, respectively. The original datasets (except MSD-Colon) include both organ and tumor segmentation labels, but we use only the tumor labels for training and testing. The datasets are randomly split into 70%, 10%, and 20% for training, validation, and testing. More details can be found in the released code and the descriptions in the appendix.

Figure 4: Structure of our lightweight mask decoder with multi-layer aggregation.

**Implementation Details.** We implement our method and benchmark baselines in PyTorch and MONAI. We use SAM-B for all experiments and comparisons, which utilizes ViT-B as the backbone of the image encoder. The model is trained with batch size 1 on an NVIDIA A40 GPU, using the AdamW [51] optimizer with a linear scheduler for 200 epochs. We set an initial learning rate of \(1e{-4}\), momentum of 0.9 and weight decay of \(1e{-5}\). We preprocess the data to have isotropic spacing of \(1mm\). For data augmentation, we perform random rotation, flip, zoom, and intensity shift. We also randomly sample foreground/background patches at a ratio of 1:1 during training. The complete training details are available in the appendix. Overall, we evaluate the performance of our method via comparisons with current SOTA volumetric segmentation and fine-tuning approaches. The Dice coefficient (Dice) and Normalized Surface Dice (NSD) are used as evaluation metrics.

### Comparison with State-of-the-Arts

We extensively compare our model with recent 3D medical image segmentation state-of-the-art methods, including the most recent Transformer-based methods Swin-UNETR [54], UNETR++ [42], TransBTS [52], and nnFormer [53], as well as CNN-based methods such as nnU-Net [22] and 3D UX-Net [55]. As reported in Table 1, we observe that the original SAM [4], developed on natural images, obtains suboptimal performance on domain-specific tumor segmentation tasks, even with as many as ten clicks as prompts for small tumors.

Table 1: Comparison with classical medical image segmentation methods on four tumor segmentation datasets. Well-acknowledged and latest state-of-the-art domain segmentation methods are compared. Dice score and normalized surface dice (NSD) are reported.

| Methods | Kidney Tumor Dice ↑ | Kidney Tumor NSD ↑ | Pancreas Tumor Dice ↑ | Pancreas Tumor NSD ↑ | Liver Tumor Dice ↑ | Liver Tumor NSD ↑ | Colon Cancer Dice ↑ | Colon Cancer NSD ↑ | #Tuned Params |
|---|---|---|---|---|---|---|---|---|---|
| nnU-Net (Nat. Methods 2021) [22] | 73.07 | 77.47 | 41.65 | 62.54 | **60.10** | **75.41** | 43.91 | 52.52 | 30.76M |
| TransBTS (MICCAI 2021) [52] | 40.79 | 37.74 | 31.90 | 41.62 | 34.69 | 49.47 | 17.05 | 21.63 | 32.33M |
| nnFormer (arXiv 2021) [53] | 45.14 | 42.28 | 36.53 | 53.97 | 45.54 | 60.67 | 24.28 | 32.19 | 149.40M |
| Swin-UNETR (CVPR 2022) [54] | 65.54 | 72.04 | 40.57 | 60.05 | 50.26 | 64.32 | 35.21 | 42.94 | 62.19M |
| UNETR++ (arXiv 2022) [42] | 56.49 | 60.04 | 37.25 | 53.59 | 37.13 | 51.99 | 25.36 | 30.68 | 55.70M |
| 3D UX-Net (ICLR 2023) [43] | 57.59 | 58.55 | 34.83 | 52.56 | 45.54 | 60.67 | 28.50 | 32.73 | 53.01M |
| SAM-B (1 pt/slice) [4] | 36.30 | 29.86 | 24.01 | 26.74 | 6.71 | 7.63 | 28.83 | 33.63 | – |
| Ours (1 pt/volume) | **73.78** | **83.86** | **54.09** | **76.27** | 54.78 | 69.55 | **48.35** | **63.65** | 25.46M |
| SAM-B (3 pts/slice) [4] | 39.66 | 34.85 | 29.80 | 33.24 | 7.87 | 6.76 | 35.26 | 39.31 | – |
| Ours (3 pts/volume) | **74.91** | **84.35** | **54.92** | **77.57** | 56.30 | 70.02 | **49.43** | **65.02** | 25.46M |
| SAM-B (10 pts/slice) [4] | 40.07 | 34.96 | 30.55 | 32.91 | 8.56 | 5.97 | 39.14 | 42.70 | – |
| Ours (10 pts/volume) | **75.95** | **84.92** | **57.47** | **79.62** | 56.61 | 69.52 | **49.99** | **65.67** | 25.46M |

Figure 5: Qualitative visualizations of the proposed method and baseline approaches on kidney tumor, pancreas tumor, liver tumor and colon cancer segmentation tasks.

With the proposed effective and efficient adaptation, we obtain state-of-the-art accuracy with only a single point in the whole volume, outperforming both the powerful medical image segmentation baselines and nnU-Net [22], which is widely used in volumetric segmentation competitions. Distinct improvements are observed in particular for pancreas tumors and colon cancers, which are notorious for ambiguous boundaries, with gains of 12.44% and 10.11% in Dice over the prior state-of-the-art. Besides, we observe steady performance gains when feeding more points as prompts. Our model achieves Dice of 75.95%, 57.47%, 56.61% and 49.99% for kidney tumor, pancreas tumor, liver tumor and colon cancer segmentation, respectively, with 10 points per volume (which is usually affordable in common clinical usage). Consistent evidence is also observed in Fig. 5, where SAM often fails to distinguish tumor borders and gives false-positive predictions, whereas our adaptation demonstrates visually better tumor identification in most tasks. The performance on liver tumor segmentation is inferior to that of nnU-Net [22], which is reasonable because liver tumors often comprise multiple smaller lesions scattered around the liver. The prompt-based method can only click one of them, leaving the rest unsegmented and therefore resulting in many false-negative pixels. Nevertheless, compared with the original SAM, our model still shows significant improvements.

### Comparison with Existing Adapters

We further compare our adaptation strategy with existing parameter-efficient adaptation methods, including 2D adaptations such as adapter [16] and Pro-tuning [14], as well as 2D-3D adapters such as ST-Adapter [19] and Med-Tuning [20].
For the 2D adaptations, we encode the images in a slice-wise manner and then concatenate them to form a 3D feature map before feeding it into the decoder. We also compare with full fine-tuning, which tunes all the parameters of the original SAM. To conduct a fair comparison and focus on whether the adaptations help the model learn volumetric medical image features, we remove the prompt encoder and only train the model with an image encoder followed by a lightweight decoder (SETR-MLA [49]), so that the model conducts automatic segmentation without prompt guidance. The performance thus reflects the representation learning ability of the encoder. All the pre-trained weights are from SAM-B. The results are given in Table 2, which shows that our adaptation strategy outperforms all existing methods with a comparable number of tuned parameters. Our method exceeds the second-best method by 16.24% in kidney tumor segmentation Dice, 4.22% in pancreas tumor segmentation NSD and 12.53% in colon cancer segmentation Dice. It even outperforms many classical segmentation methods with fewer tunable parameters and a vanilla mask decoder. Our method also outperforms full fine-tuning of SAM by 2.08% \(\sim\) 29.80% with less than 16.96% tunable parameters. These results substantiate our hypothesis that parameters pre-trained on 2D images can be used to learn 3D spatial features with only minor adaptations, and that treating all dimensions equally is a better strategy than interpreting the depth dimension as a group in medical image segmentation, especially when the images have similar spacings in all dimensions.

Table 2: Comparison with existing parameter-efficient and full fine-tuning methods. We discard the prompt encoder and only tune the image encoder and mask decoder for fully automatic segmentation.

| Methods | Kidney Tumor Dice ↑ | Kidney Tumor NSD ↑ | Pancreas Tumor Dice ↑ | Pancreas Tumor NSD ↑ | Liver Tumor Dice ↑ | Liver Tumor NSD ↑ | Colon Cancer Dice ↑ | Colon Cancer NSD ↑ | #Tuned Params |
|---|---|---|---|---|---|---|---|---|---|
| Full fine-tuning | 52.31 | 50.35 | 26.49 | 33.28 | 45.59 | 52.53 | 24.63 | **40.67** | 89.67M |
| Adapter (ICML 2019) [16] | 46.99 | 43.76 | 20.28 | 30.81 | 42.17 | 57.52 | 22.55 | 38.10 | 7.61M |
| Pro-tuning (arXiv 2022) [14] | 50.73 | 50.81 | 18.93 | 30.45 | 47.33 | 55.61 | 21.24 | 25.10 | 7.17M |
| ST-Adapter (NeurIPS 2022) [19] | 47.30 | 45.61 | **30.27** | 43.53 | 51.93 | 59.93 | 28.41 | 34.60 | 7.15M |
| Med-tuning (arXiv 2023) [20] | 44.73 | 40.28 | 22.87 | 30.02 | **52.06** | **68.44** | 21.08 | 37.78 | 11.10M |
| Ours | **61.60** | **70.40** | 30.20 | **45.37** | 49.26 | 59.48 | **31.97** | **40.67** | 15.21M |

### Ablation Studies

We also conduct comprehensive ablation studies regarding the design of the prompt encoder and mask decoder. Without loss of generality, the experiments are conducted on the KiTS21 dataset; for each trial, we give only one point as a prompt, and each result is based on 10 random trials.

**Positional encoding vs. visual sampler.**
To show that the visual sampler is a better way of prompt encoding than the original positional encoding used in SAM for volumetric segmentation, we fix the image encoder and mask decoder while only changing the prompt encoder. The results are shown in Fig. 7. Our proposed visual sampler outperforms the original positional encoding by 40.00% in Dice. This aligns with our claim that the visual sampler works well when the number of tokens is large, as it does not suffer from over-smoothing issues.

**Position of the point prompts.** We analyze how the model performance differs when the point prompt is given at the center of the object, in its adjacent regions, or at the margin. We divide the ground-truth tumor mask into three regions, the center region, the surrounding region, and the marginal region, as illustrated in Fig. 6. For cases with multiple tumors, we sample from the largest one. From the results in Fig. 7, we find that the model is not very sensitive to the position of the point prompts: prompts given at different positions yield almost the same results.

**Effects of multi-layer aggregation.** To verify that our modification of the mask decoder actually brings a performance gain, we compare it with the variant without multi-layer aggregation. Fig. 7 shows that the model with multi-layer aggregation has 15.75% higher Dice than its counterpart. This also aligns with common practice in medical image segmentation, as local texture features are very important, especially for tumors, which usually have tiny shapes and low contrast.

**Effects of deep prompts.** Since it is beneficial to include features from multiple levels of the image encoder, a natural idea is that incorporating prompts at multiple levels might also improve performance. To this end, we conduct experiments plugging the prompt encoder into other feature levels besides the final bottleneck level. The results are shown in Table 3. However, we find that incorporating deep prompts brings no gain or even degrades performance. Making prompts take effect at multiple feature levels is left as a topic for further research.

Figure 6: Segmentation results in terms of different prompt locations. The blue marker denotes the prompt.

Figure 7: Results of our ablation studies. (a) Performance vs. prompt. (b) Visual sampler vs. positional encoding. (c) Effects of multi-layer aggregation.

## 5 Discussion and Conclusion

In this work, we propose a comprehensive scheme to adapt SAM from a 2D natural image generalist to a volumetric medical imaging expert, especially for tumor segmentation. Through parameter-efficient fine-tuning, our method significantly improves SAM's performance in the medical domain, making it outperform the state of the art with only a single coarse click as the prompt. Our proposed method also beats existing adaptation methods for volumetric adaptation.

**Limitations and future work.** In our experiments, the reported metrics are lower than those on the official leaderboards. This is because we do not take advantage of the organ segmentation labels for multi-label prediction. Although multi-task segmentation brings a considerable performance gain, pure tumor segmentation is more clinically meaningful as it bypasses the intensive work of organ labeling.
One observation is that although many transformer-based methods can outperform nnU-Net for multi-class segmentation, for pure tumor segmentation the general trend is that CNN-based methods perform better and are easier to train. This may be because tumors are very small and tumor detection relies more on local texture information, so the global information, which is the strength of the transformer, is not very useful. This poses a great obstacle for our work, as SAM is based on ViT and much of the detailed texture information can be lost during the first downsampling operation. A future direction is to adapt the architecture to recover these texture details so that the performance can reach the state of the art even in a fully automatic manner.

**Social impact.** Our work transforms SAM into a powerful medical image segmentation tool, which has the potential to be put into real clinical use. It can significantly improve segmentation accuracy and therefore benefit related diagnostic tasks. Additionally, our work can serve as a more general guideline on how to adapt foundation models for domain-specific downstream tasks, helping the ever-growing family of foundation models play more important roles across industries.
2306.04119
Improving Survey Inference in Two-phase Designs Using Bayesian Machine Learning
The two-phase sampling design is a cost-effective sampling strategy that has been widely used in public health research. The conventional approach in this design is to create subsample specific weights that adjust for probability of selection and response in the second phase. However, these weights can be highly variable which in turn results in unstable weighted analyses. Alternatively, we can use the rich data collected in the first phase of the study to improve the survey inference of the second phase sample. In this paper, we use a Bayesian tree-based multiple imputation (MI) approach for estimating population means using a two-phase survey design. We demonstrate how to incorporate complex survey design features, such as strata, clusters, and weights, into the imputation procedure. We use a simulation study to evaluate the performance of the tree-based MI approach in comparison to the alternative weighted analyses using the subsample weights. We find the tree-based MI method outperforms weighting methods with smaller bias, reduced root mean squared error, and narrower 95\% confidence intervals that have closer to the nominal level coverage rate. We illustrate the application of the proposed method by estimating the prevalence of diabetes among the United States non-institutionalized adult population using the fasting blood glucose data collected only on a subsample of participants in the 2017-2018 National Health and Nutrition Examination Survey.
Xinru Wang, Lauren Kennedy, Qixuan Chen
2023-06-07T03:25:13Z
http://arxiv.org/abs/2306.04119v1
# Improving Survey Inference in Two-phase Designs Using Bayesian Machine Learning ###### Abstract The two-phase sampling design is a cost-effective sampling strategy that has been widely used in public health research. The conventional approach in this design is to create subsample specific weights that adjust for probability of selection and response in the second phase. However, these weights can be highly variable which in turn results in unstable weighted analyses. Alternatively, we can use the rich data collected in the first phase of the study to improve the survey inference of the second phase sample. In this paper, we use a Bayesian tree-based multiple imputation (MI) approach for estimating population means using a two-phase survey design. We demonstrate how to incorporate complex survey design features, such as strata, clusters, and weights, into the imputation procedure. We use a simulation study to evaluate the performance of the tree-based MI approach in comparison to the alternative weighted analyses using the subsample weights. We find the tree-based MI method outperforms weighting methods with smaller bias, reduced root mean squared error, and narrower 95% confidence intervals that have closer to the nominal level coverage rate. We illustrate the application of the proposed method by estimating the prevalence of diabetes among the United States non-institutionalized adult population using the fasting blood glucose data collected only on a subsample of participants in the 2017-2018 National Health and Nutrition Examination Survey. **Keywords:** Bayesian Additive Regression Trees (BART); High dimensional auxiliary variables; Multiple imputation; NHANES; Weighting. ## 1 Introduction Over the past century, survey sampling has been applied to a wide range of scientific and industrial disciplines, such as healthcare, economic policy, agriculture and environmental applications (Postelnicu et al., 1977; Hassan et al., 2020), for obtaining estimates of characteristics in the target population, e.g. disease prevalence and risk factors. Such estimates could provide useful information for decision-making and policy formulation (LaVange et al., 2001). However, if the desired measures are difficult or expensive to obtain, such as requiring sophisticated tests, it may not be feasible to collect these measures from all survey subjects given practical considerations and budget constraints. A two-phase sampling design can be used to overcome this challenge. The first phase selects a sample from a finite population using probability sampling, during which easy-to-access and inexpensive measures are collected (Hughes et al., 1996). The hard-to-obtain and expensive measures are then collected on a subsample selected from the phase-I sample. To give a better appreciation of the two-phase sampling, take the Mental Health Surveillance Study (MHSS) (Karg et al., 2014) as an example. The National Survey on Drug Use and Health (NSDUH) (Cotto et al., 2010) is a nationwide study that provides up-to-date information on tobacco, alcohol, and drug use, mental health and other health-related issues in the United States. MHSS used data from clinical interviews administered to a sub-sample of respondents of the NSDUH, aiming to make inferences of the prevalence of serious mental illness among people aged 18 years and older. 
Given that it is impractical to conduct clinical interviews with the entire NSDUH sample with approximately 46,000 adults participants, a phase-II subsample was selected to collect the serious mental illness data in the MHSS. Statistical analysis of the phase-II sample could generate biased inference of population quantities if the distribution of the subsample is different from that of the population. A common approach for reducing bias in the subsample is weighting, which accounts for the subsample selection and nonresponse, and results in a new weight variable created by multiplying the weights for the Phase-I sample with the subsample weight adjustments (Kalton, 1986; Kalton and Miller, 1987). However, weighting has a number of limitations in two-phase designs. First, weighting adjustments can lead to highly variable weights for the Phase-II sample by multiplying the already variable weights for the phase-I sample with the additional weighting adjustments. Second, weights are less efficient in their use of phase-I participant data in that often only a subset of variables are used to reduce weight variability, and the adjustment focuses on differences between phase-I and phase-II samples and not the relationship with the outcome. Third, the weighting adjustments may produce biased estimates if the nonresponse propensity model used to create the phase-II weights is misspecified. An alternative approach is to impute the outcome variables measured only in the Phase-II sample for the survey subjects who participate in Phase-I but not Phase-II of the study. This approach utilises the rich information collected for all participants in the Phase-I sample and a multiple imputation (MI) approach. After imputation, the population means are then estimated by applying the survey weights created for the Phase-I sample to the multiply-imputed phase-I data. Well-chosen imputation models can generate efficient estimates of population quantities (Kalton, 1986; Kalton and Miller, 1986; Chen et al., 2015). However, it can perform poorly if the imputation model is misspecified. Many works have been done to address the issue of model misspecification. Robins et al. (1994) proposed the augmented inverse propensity weighted (AIPW) estimator, which is a double robust (DR) estimator that can produce a consistent estimator if either the outcome model or propensity model is correctly specified. Penalized splines of propensity prediction (PSPP) is another DR estimator based on a Bayesian framework (Zhang and Little, 2009; Kim and Haziza, 2014). When both models are misspecified, however, these DR estimators fail to provide reliable estimates. The risk of model misspecification motivates the use of non-parametric imputation models that are less sensitive to model misspecification. Bayesian Additive Regression Trees (BART), which was first proposed by Chipman et al. (2007), is a sum-of-trees machine learning model that has gained widespread popularity in recent years (Chipman et al., 2010). The essential idea of BART is to use the sum of multiple trees developed by Bayesian back-fitting Markov chain Monte Carlo (MCMC) to model the posterior distribution of the missing variables. BART allows the inclusion of a large number of predictors, rather than requiring the analyst to select a few. This method is less sensitive to model misspecification and allows for nonlinear effects and multi-way interactions between auxiliary variables and outcomes of interest. 
It was found that BART outperforms other machine learning methods (such as boosting, neural networks, and random forests) for prediction (Chipman et al., 2007). In addition to binary and continuous outcomes, BART has also been extended into different types of outcomes, such as count and categorical responses (Murray, 2021), semi-continuous outcomes (Linero et al., 2020), and survival outcomes (Sparapani et al., 2016). BART and its extensions have been applied to a variety of research areas (Tan and Roy, 2019) including proteomic data (Hernandez et al., 2015; Lualdi and Fasano, 2019), causal inference (Hill, 2011; Leonti et al., 2010; Hahn et al., 2020), missing data literature (Xu et al., 2016; Kapelner and Bleich, 2015) and survey inference (Liu et al., 2023; Rafei et al., 2022). Several advantageous features of BART were reported in the study of the cardiovascular proteomic high-dimensional data: 1) it can incorporate complex interactions between variables; 2) it can provide variable importance scores; 3) it had a higher AUC value compared with the Bayesian Lasso method (Hernandez et al., 2015). BART was also applied to predict the counterfactual outcome when estimating the causal effect of historical texts on the frequency of medicinal plant use (Leonti et al., 2010). The benefits of BART for the propensity score adjustments have been demonstrated in the work of Kern et al. (2016) and Wendling et al. (2018). Given the superiority of BART, Tan et al. (2019) proposed extensions of the AIPW and PSPP that use BART to improve the robustness of doubly robust methods, and employed a simulation study to compare the performance of different methods. The results show that BARTps, which builds the outcome model with the additional estimated propensity by BART as predictors, performs better than other DR estimators when both models are misspecified. In addition, Liu et al. (2023) showed the efficiency of using BART and its extensions for estimating population means when high-dimensional auxiliary information is available in both the non-random samples and the target population. They demonstrate that BART and soft BART can yield efficient and accurate estimates of population means with coverage rates close to the nominal level. They also demonstrate that including the estimated propensity score as a predictor further improves inference when there is possible model-misspecification in the predictive model. Inspired by and building on their work, we propose a tree-based MI approach to improve the inference of population means in two-phase designs. We apply BART and its extensions in two ways. Firstly we use BART-based models to adjust for difference between the phase-I and phase-II samples by creating subsample weights. Secondly we apply BART-based models to impute the phase-I sample for the variables only measured in the phase-II sample. We investigate the quality of population estimation by comparing these two approaches using a simulation study, with a real data illustration using the National Health and Nutrition Examination Survey (NHANES) data. ## 2 Motivating example: estimate prevalence of diabetes using NHANES data Diabetes is one of the major causes of mortality. Patients with diabetes are more likely to have macrovascular and microvascular disease, increasing the health burden on their life. Over the past century, there has been an increase in the prevalence of diabetes (Harding et al., 2019; Ogurtsova et al., 2017; Wang et al., 2021). 
The increasing trend of diabetes and the profound effect it has on quality of life necessitate early diagnosis and intervention to prevent complications (Yang et al., 2021). An accurate estimate of the prevalence of diabetes is also needed to provide a solid basis for policy-making about the prevention and management of diabetes. NHANES is a series of cross-sectional, population-based surveys conducted by the National Center for Health Statistics to estimate disease prevalence, trends, and the relationship between health and daily behaviour (Mishra et al., 2021). The procedure for collecting diabetes-related information in the NHANES is a two-phase sampling design. The phase-I sample is drawn from the target population of non-institutionalized United States residents through a four-stage stratified sample design. In the first stage, primary sampling units (PSUs) are selected with probability proportional to measures of size (PPS). The majority of these PSUs are single counties, but in a few cases, groups of contiguous counties. In the second stage, segments that include one or more contiguous census blocks are sampled within each selected PSU using PPS sampling. In the third stage, dwelling units (DUs) or households within each selected segment are randomly sampled at rates designed to produce a national, approximately equal probability sample. In the last stage, eligible individuals in each selected DU are invited to participate in the study. Individuals are randomly selected within designated age-sex-race/ethnicity screening groups. In this application, we use data from NHANES 2017-2018. Phase-I respondents are those belonging to the screened households who are selected and agree to participate in the interview and mobile examination. The basic demographic, health, and nutrition information for these phase-I respondents is collected through interviews and physical examinations in a mobile examination center. Individuals in the phase-I sample are randomly assigned to participate in three examination sessions (morning, afternoon, or evening). Physical examination results that require fasting, such as the fasting blood glucose (FBS) test, are collected only among individuals in the morning sessions, which we treat as the phase-II sample. From this subsample, we are interested in estimating the prevalence of diabetes among the non-institutionalized United States population aged 20 years or older. We define diabetes using two criteria: an FBS level higher than 126 mg/dL or an HbA1c value higher than 6.5%, according to the standard diagnosis method given by the American Diabetes Association (American Diabetes Association, 2018). The FBS is only collected in the phase-II sample, but the HbA1c is measured in the phase-I study. Figure 1 shows the diagram of the two-phase sampling procedure for the FBS test data in NHANES.

Figure 1: The two-phase sampling procedure for the fasting blood glucose test data in NHANES 2017-2018.

## 3 Methods

### Notations and background

Let \(\mathcal{P}\) represent a finite population of size \(N\) with strata \(h=1,\ldots,H\), each of which has clusters \(j=1,\ldots,N_{h}\), where \(N_{h}\) is the number of clusters for the \(h\)-th stratum. Let \(y_{i}\) be the survey outcome of interest. The estimand of interest is the population mean \(Q(y)=\sum_{i=1}^{N}y_{i}/N\).
Let \(\mathcal{C}\) denote the phase-I sample comprising \(n_{c}\) individuals selected from \(\mathcal{P}\) using a stratified cluster sampling design; and let \(\mathbf{x}_{i}\) and \(\mathbf{z}_{i}\in\{0,1\}^{L_{2}}\) be vectors of continuous and binary predictors for the \(i\)-th individual with length \(L_{1}\) (number of continuous predictors) and \(L_{2}\) (number of binary predictors), respectively, which are collected at the individual level in the probability sample \(\mathcal{C}\). To make the sample \(\mathcal{C}\) representative of the population \(\mathcal{P}\), a first-phase weight, \(w_{ci}\), is assigned to sample unit \(i\) to adjust for unequal probability of selection and nonresponse, and to calibrate to the population. When \(y_{i}\) is fully observed for all the sample units in \(\mathcal{C}\), the population mean can be estimated using \[\hat{Q}_{c}(y)=\frac{\sum_{i=1}^{n_{c}}w_{ci}\cdot y_{i}}{\sum_{i=1}^{n_{c}}w_{ci}}, \tag{1}\] with variance estimated using Taylor series linearization (Rust and Rao, 1996) or resampling methods, such as jackknife repeated replicates, balanced repeated replicates, or the bootstrap (Rao, 1996). In the two-phase designs considered in this paper, however, \(y_{i}\) is only measured among individuals who also participate in the second phase of the study, which is denoted by \(\mathcal{S}\). In this situation, Formula (1) cannot be applied directly. The conventional approach is to assign a subsample weight \(w_{si}\) to the subsample in the second phase, which is often calculated as the product of the sample weight \(w_{ci}\) and an adjustment reflecting the potential unequal probability of selection and nonresponse for the subsample units. The selection probability can be easily calculated based on the sampling procedure. The nonresponse adjustment can be obtained using either an adjustment cell method or a propensity score adjustment method. In our simulations we use BART to create a propensity score adjustment and compare it to a tree-based adjustment cell method. The population mean using the subsample is estimated as \[\hat{Q}_{S}(y)=\frac{\sum_{i=1}^{n_{s}}w_{si}\cdot y_{i}}{\sum_{i=1}^{n_{s}}w_{si}}, \tag{2}\] where \(n_{s}\) is the sample size of the subsample. For the subsample nonresponse adjustment, the adjustment cell approach assigns units in the sample to different cells \(m=1,\ldots,M\) with size \(n_{m}\) based on the discretized auxiliary information \(\mathbf{x}\) and binary \(\mathbf{z}\). The response status indicator for the \(i\)-th unit is denoted by \(r_{i}\), with \(r_{i}=1\) for respondents and \(r_{i}=0\) for nonrespondents. The estimated response rate in the \(m\)-th cell is \(\hat{\pi}_{m}=\sum_{i=1}^{n_{m}}r_{i}/n_{m}\), and the nonresponse adjustment for all units in the \(m\)-th cell is \(1/\hat{\pi}_{m}\). This approach assumes that respondents and nonrespondents in the same cell have the same distributions of the survey outcomes of interest (Little and Vartivarian, 2005; Kalton and Flores-Cervantes, 2003). However, it requires that all continuous variables \(\mathbf{x}\) be discretized, and as the number of auxiliary variables grows large, the cell sizes and the number of responding units in a given cell decrease quickly, resulting in extreme and unstable weighting adjustments (Kalton and Flores-Cervantes, 2003).
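To make the estimators in (1)-(2) and the adjustment-cell correction concrete, the following Python sketch (our own illustration; the column names and toy data are hypothetical) computes a weighted mean and forms subsample weights by inflating the phase-I weights with the inverse of the within-cell response rate. For brevity it folds only the nonresponse adjustment into the weights; the subsample selection-probability factor would enter multiplicatively in the same way, and design-based variance estimation is omitted.

```python
import numpy as np
import pandas as pd

def weighted_mean(y, w):
    """Hajek-type estimator: sum(w * y) / sum(w)."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    return np.sum(w * y) / np.sum(w)

def adjustment_cell_weights(df, cell_cols, resp_col="r", w_col="w_c"):
    """Subsample weight = phase-I weight / estimated response rate within the
    adjustment cell defined by the (discretized) auxiliary variables."""
    resp_rate = df.groupby(cell_cols)[resp_col].transform("mean")
    w_s = df[w_col] / resp_rate
    return w_s.where(df[resp_col] == 1)   # defined only for phase-II respondents

# Toy phase-I sample (illustrative values only):
df = pd.DataFrame({
    "y":   [1.2, 0.7, np.nan, 2.1, np.nan, 1.5],   # observed only for respondents
    "r":   [1,   1,   0,      1,   0,      1],     # phase-II response indicator
    "z1":  [0,   0,   0,      1,   1,      1],     # cell-defining covariate
    "w_c": [10., 12., 8.,     15., 9.,     11.],   # phase-I weights
})
df["w_s"] = adjustment_cell_weights(df, cell_cols=["z1"])
resp = df[df["r"] == 1]
print(weighted_mean(resp["y"], resp["w_s"]))       # weighted estimate as in (2)
```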
One solution to this is tree-based methods. The chi-square automatic interaction detection (CHAID) algorithm (Kass, 1980) is commonly used for nonresponse adjustment in survey sampling. It selects important auxiliary variables and forms adjustment cells through a merge-and-split process (Kass, 1980). For each variable, a chi-square test is conducted to test the independence between two different categories of the variable. If the chi-square test fails to reject the null at a user-specified alpha level, these categories are merged into a single category. These merging steps are repeated until all of the categories are significantly different for each predictor. When splitting, the predictor with the smallest Bonferroni-adjusted p-value is chosen as the first split, and the splitting continues until: a) there are no significant predictors; b) the child node would have insufficient individuals if more splits were made; or c) the number of individuals in some nodes falls below a pre-specified number (Chen et al., 2015). Another method to adjust for nonresponse is propensity score adjustment. Logistic regression is commonly used to estimate the response propensity (Rizzo et al., 1996): \[\text{logit}(Pr(r_{i}=1|\mathbf{x}_{i},\mathbf{z}_{i}))=\gamma_{0}+\gamma_{1}^{T}\mathbf{x}_{i}+\gamma_{2}^{T}\mathbf{z}_{i}, \tag{3}\] where \(\gamma_{1}\) and \(\gamma_{2}\) are vectors of coefficients for \(\mathbf{x}_{i}\) and \(\mathbf{z}_{i}\). Screening of response-related variables is often conducted before building the propensity model (Rizzo et al., 1996). In contrast to the adjustment cell methods, this approach allows both discrete and continuous variables to be used as predictors of the response propensity. Propensity score adjustment can be divided into two categories. The first is response propensity weighting, where the nonresponse adjustment is the inverse of the probability of responding. However, this approach has limitations. First, it can generate very small response propensities and thus very large weighting adjustments, leading to unstable estimation of population quantities. Second, it relies on the correct specification of the response propensity model: the estimated subsample weights will be biased if the response propensity model is misspecified.

### Imputation methods using Bayesian Additive Regression Trees

Instead of viewing this challenge as an adjustment from the subsample to the finite population, we could consider it first as a missing data problem in the phase-I sample, followed by survey inference of population quantities using the imputed phase-I sample. Weights-based analyses use auxiliary variables in the phase-I sample to calculate nonresponse adjustments in the subsample. This wastes important information about the relationship between these variables and the outcome we wish to estimate. In contrast, an imputation approach can make full use of the associations between the auxiliary information and the outcome. BART (and related extensions) are sum-of-trees Bayesian models that are flexible in model specification and can achieve high prediction accuracy by incorporating interactions and nonlinear associations without overfitting (Tan and Roy, 2019). Liu et al. (2023) proposed using BART and soft BART to improve the robustness of estimators of population means in one-phase survey sampling. They showed that an outcome model built by BART or soft BART can generate efficient estimates of population means by using the rich auxiliary data available in both the population and the survey sample, together with the BART-estimated inclusion propensity scores as predictors. In this paper, we extend this idea to two-phase complex survey designs.
We consider the following BART model for continuous survey outcomes measured only in the subsample of the Phase-II study: \[y_{i}=G(\mathbf{x}_{i},\mathbf{z}_{i},d_{i1},d_{i2},\mathbf{w}_{i})+\epsilon_{ i}=\sum_{b=1}^{B}g\left(\mathbf{x}_{i},\mathbf{z}_{i},d_{i1},d_{i2},\mathbf{w}_{i};T _{b},\mu_{b}\right)+\epsilon_{i},\quad\epsilon_{i}\ \sim N(0,\sigma^{2}), \tag{4}\] where \(d_{i1}\) and \(d_{i2}\) denote strata and cluster indicators for \(i\)-th individual respectively; \(\mathbf{w}_{i}\) represents survey weights, including phase-I sample weights \(w_{ci}\) and subsample selection and nonresponse weighting adjustments \(a_{i}\); \(B\) is the number of trees, \(T_{b}\) is the structure of b-th binary tree, \(\mu_{b}\) is the parameters assigned to each terminal node, and \(g(\cdot)\) is the function that links \(\mu_{b}\) to (\(\mathbf{x}_{i}\), \(\mathbf{z}_{i}\), \(d_{i1}\), \(d_{i2}\), \(\mathbf{w}_{i}\)). One challenge for BART is the prior specification. Chipman et al. (2010) simplified the prior specification by assuming that the components of each tree are independent and not related to the error term \(\epsilon\). Under this assumption, the priors for \(p(T_{b})\), \(p(\mu_{b}|T_{b})\), and \(p(\sigma^{2})\) need to ensure that every tree is a weak learner and prevent the model from overfitting and non-convergence. The prior for \(p(T_{b})\) is related with: i) the probability of being nonterminal for a node at depth \(\rho=0,1,2,\dots\), which is specified by \(\alpha(1+\rho)^{-\beta}\), where \(\alpha\in(0,1)\), \(\beta\in[0,\infty)\); ii) the probability of being selected as a splitting variable; iii) for the selected splitting variable, the probability of splitting rules. A conjugate normal distribution is used for the prior of \(p(\mu_{b}|T_{b})\). For \(p(\sigma^{2})\), an inverse chi-square distribution is used such that \(\sigma^{2}\sim\nu\lambda/\chi_{\nu}^{2}\). Chipman et al. (2010) suggest default values for the parameters of these prior distributions and the number of trees B, but cross-validation can be used to select the optimal parameters for these priors. To account for the correlation between units from the same cluster in the first-phase sampling, we also consider random intercept BART (rBART) (Tan et al., 2018), which is an extension of BART. The outcome model by rBART is then \[y_{ji}=G(\mathbf{x}_{ji},\mathbf{z}_{ji},d_{ji1},\mathbf{w}_{ji})+\delta_{j}+ \epsilon_{ji}=\sum_{b=1}^{B}g(\mathbf{x}_{ji},\mathbf{z}_{ji},d_{ji1},\mathbf{ w}_{ji};T_{b},\mu_{b})+\delta_{j}+\epsilon_{ji},\quad\epsilon_{ji}\sim N(0, \sigma^{2}), \tag{5}\] where \(\delta_{j}\) is the random intercept for \(j\)-th cluster which follows a normal distribution and independent of the individual error term \(\epsilon_{ji}\). Each cluster is seen as a group with the same random intercept in the stratified survey design. For a binary outcome, BART and rBART can be easily extended by the probit model: \[Pr(y_{i}=1|\mathbf{x}_{i},\mathbf{z}_{i},d_{i1},d_{i2},\mathbf{w}_{i})=\Phi(G (\mathbf{x}_{i},\mathbf{z}_{i},d_{i1},d_{i2},\mathbf{w}_{i})) \tag{6}\] where \(\Phi\) is the cumulative density function of a standard normal distribution. In a two-phase complex survey setting, the data in the subsample are used to build the BART imputation model, and the unobserved outcome variable for subjects who participate in the Phase-I survey but not the Phase-II survey is then imputed using posterior draws from the model. 
Before building the imputation model for the outcome, BART and rBART can also be used to estimate the nonresponse adjustment \(a_{i}\), with \(x_{i}\), \(z_{i}\), \(w_{ci}\), \(d_{i1}\) and \(d_{i2}\) as predictors. The imputation model is then fit with the estimated \(a_{i}\) included in the list of predictors. For BART, both \(d_{i1}\) and \(d_{i2}\) are categorical predictors; in rBART, however, \(d_{i2}\) is used as a group indicator. Each set of posterior draws forms one imputation for the missing \(y\) in the phase-I sample. Let \(\hat{y}_{i}^{(d)}\) denote the imputed value of \(y\) for individual \(i\) in the \(d\)th imputation of the phase-I sample, \(d=1,\ldots,D\), where \(D\) is the total number of imputations. We have \(\hat{y}_{i}^{(d)}=I(i\in\mathcal{S})y_{i}+I(i\in\mathcal{C},\ i\notin\mathcal{S})\hat{y}_{i}^{(d)}\). The estimated population mean using the \(d\)th imputation is calculated as \[\hat{Q}(y)^{(d)}=\frac{\sum_{i=1}^{n_{c}}w_{ci}\cdot\hat{y}_{i}^{(d)}}{\sum_{i=1}^{n_{c}}w_{ci}}, \tag{7}\] with the estimated variance \(\widehat{Var}\left(\hat{Q}(y)^{(d)}\right)\) obtained by the Taylor series linearization method to account for the sampling design and nonresponse (Rust and Rao, 1996). The estimates in (7) are then combined across the \(D\) imputations using Rubin's multiple imputation rules, with the variance accounting for both between-imputation and within-imputation variances (Little and Rubin, 2002): \[\hat{Q}(y)=\frac{1}{D}\sum_{d=1}^{D}\hat{Q}(y)^{(d)},\quad\widehat{Var}\left(\hat{Q}(y)\right)=\bar{W}_{D}+(1+\frac{1}{D})B_{D}, \tag{8}\] where \(\bar{W}_{D}=\frac{1}{D}\sum_{d=1}^{D}\widehat{Var}\left(\hat{Q}(y)^{(d)}\right)\) and \(B_{D}=\frac{1}{D-1}\sum_{d=1}^{D}(\hat{Q}(y)^{(d)}-\hat{Q}(y))^{2}\) are the estimated within- and between-imputation variances, respectively. The reference distribution for confidence interval estimates is a \(t\) distribution, \[\frac{Q(y)-\hat{Q}(y)}{\sqrt{\widehat{Var}\left(\hat{Q}(y)\right)}}\sim t_{\nu}, \tag{9}\] with the degrees of freedom based on a Satterthwaite approximation, \[\nu=(D-1)(1+\frac{D}{D+1}\frac{\bar{W}_{D}}{B_{D}})^{2}. \tag{10}\] Based on Liu et al. (2023), three assumptions should hold for the MI approach using BART to perform well in two-phase designs: (A1) outcome-related auxiliary variables are available in the phase-I sample; (A2) the sampling mechanism and response propensity in the subsample are conditionally ignorable, i.e., the outcome variable \(y\) is independent of subsample selection and response given the auxiliary information and design features in the phase-I sample; and (A3) the auxiliary variables in the subsample and in the phase-I sample have the same ranges. Assumption A3 fails in cases where the sampling mechanism or response propensity makes the ranges of the auxiliary variables in the phase-I sample wider than those in the subsample (e.g., if individuals in a particular range of the predictor \(x\) are never included in the subsample).
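As a small illustration of the estimation steps in (7)-(10), the sketch below computes the weighted mean for each completed dataset and combines the \(D\) estimates with Rubin's rules. The design-based variance of each \(\hat{Q}(y)^{(d)}\) is taken as given (in practice it would come from Taylor linearization or replication methods), and the function and array names are ours.

```python
import numpy as np
from scipy import stats

def weighted_mean(y, w):
    return np.sum(w * y) / np.sum(w)

def rubin_combine(estimates, variances, alpha=0.05):
    """Combine D imputation-specific estimates Q_hat^(d) and their design-based
    variances using Rubin's rules, as in equations (8)-(10)."""
    q = np.asarray(estimates, float)
    v = np.asarray(variances, float)
    D = len(q)
    q_bar = q.mean()
    W_bar = v.mean()                       # within-imputation variance
    B = q.var(ddof=1)                      # between-imputation variance
    T = W_bar + (1 + 1 / D) * B            # total variance
    nu = (D - 1) * (1 + D / (D + 1) * W_bar / B) ** 2   # Satterthwaite df
    half = stats.t.ppf(1 - alpha / 2, nu) * np.sqrt(T)
    return q_bar, T, (q_bar - half, q_bar + half)

# Usage sketch: y_imp is an (n_c, D) array of completed outcomes for the
# phase-I sample (observed y for phase-II respondents, posterior-predictive
# draws elsewhere), w_c holds the phase-I weights, and var_d holds the
# design-based variance of each weighted mean.
# q_d = [weighted_mean(y_imp[:, d], w_c) for d in range(D)]
# q_hat, total_var, ci = rubin_combine(q_d, var_d)
```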
## 4 Simulation

We conduct a simulation study to evaluate the performance of the weighting and tree-based MI approaches in a two-phase survey design. We compare 4 different weighting methods for the nonresponse adjustments, including the logistic regression model (LGM), CHAID, BART and rBART, with the corresponding estimators denoted WT-LGM, WT-CHAID, WT-BART and WT-rBART. We also consider the MI approach using 2 tree-based models (MI-BART and MI-rBART). For reference, we include a benchmark estimator, where the outcome \(y\) is assumed to be measured for all subjects in the phase-I sample. We conduct 500 replicates of the simulation for each scenario. Model performance is evaluated by the absolute bias and root mean squared error (RMSE), \[\text{Absolute bias}=\left|\frac{\sum_{s=1}^{500}(\hat{Q}(y)^{(s)}-Q(y))}{500}\right|,\ \ \text{RMSE}=\sqrt{\frac{\sum_{s=1}^{500}(\hat{Q}(y)^{(s)}-Q(y))^{2}}{500}},\] where \(s=1,\ldots,500\) indexes the simulation replicates. We also investigate the coverage rate and average width of the 95% CIs. These measures are multiplied by 100 for ease of reading.

### Sampling design

This subsection gives the detailed steps used to generate the two-phase multistage complex survey data. Our process is inspired by the simulation design in Liu et al. (2023).

1. Population: We first generate a finite population \(\mathcal{P}\) with 4 strata (\(H=1,2,3,4\)), which have \(N_{h}=25,20,15,10\) clusters, respectively. The number of individuals in each cluster, \(N_{hj}\), follows an exponential distribution truncated between 100 and 300, with expected value 200. This leads to a population of size \(N=12,931\). For each individual in the population, we generate data for continuous variables \(x_{l_{1}}\), \(l_{1}=1,\ldots,L_{1}\), and binary variables \(z_{l_{2}}\), \(l_{2}=1,\ldots,L_{2}\). The continuous variables \(x_{l_{1}}\) follow a standard normal distribution \(N(0,1)\). For each binary variable, \(Pr(z_{l_{2}}=1)\) follows a uniform distribution \(U\left(0.4,0.6\right)\).
2. Outcome model: We consider a continuous outcome variable for the \(i\)th subject in the \(j\)th cluster of the \(h\)th stratum by specifying the true outcome model as \[y_{hji}=2.47+q_{hj}-2x_{1hji}+x_{2hji}^{2}+2z_{1hji}-z_{2hji}-2z_{3hji}+x_{1hji}z_{1hji}+\epsilon_{hji},\ \epsilon_{hji}\sim N(0,1),\] where \(q_{hj}\) is the cluster random intercept, which follows a normal distribution \(N(0,1)\). The true population mean of the outcome is \(Q(y)=3\).
3. Phase-I sample: To select the phase-I sample, we draw \(n_{h}=10,8,6,4\) clusters respectively from each stratum \(h=1,2,3,4\) using PPS sampling with size equal to the number of units in each cluster (Rosen, 1997), i.e., the selection probability for each cluster is proportional to its cluster size and equals \(\frac{n_{h}N_{hj}}{\sum_{j}^{N_{h}}N_{hj}}\). The base weight \(w_{0hj}\) is calculated as the inverse of the selection probability, \(w_{0hj}=\frac{\sum_{j}^{N_{h}}N_{hj}}{n_{h}N_{hj}}\). All the units in the selected clusters are invited to participate in the study, but not all respond. The response probability for the \(i\)th subject in the \(j\)th selected cluster of the \(h\)th stratum is \[\pi_{hji,res,1}=\text{logit}^{-1}(-1+2z_{1hji}+2z_{2hji}-z_{3hji}). \tag{11}\] This sampling procedure results in a phase-I sample with approximately \(3,000\) individuals.
4. Phase-II sample: To collect outcome data, we select a subset of the phase-I sample by simple random sampling with a selection probability of \(0.5\). Given that there may be some nonrespondents among the selected individuals for the phase-II sample, we consider four scenarios for the response patterns:
   1. Low dimensional auxiliary variables (\(L_{1}=2\), \(L_{2}=3\)) collected in the phase-I sample.
The true response propensity model is \[\pi_{hji,res,2}=\text{logit}^{-1}(1+2x_{hji1}+1.5x_{hji2}^{2}+2z_{hji1}+z_{hji2}-2z_{hji3}-x_{hji1}z_{hji1}).\] Units in the lower tails of \(x_{hji1}\) and \(x_{hji2}\) have lower probability of response in the phase-II sample. S2: High dimensional auxiliary variables (\(L_{1}=10\), \(L_{2}=10\)) collected in the phase-I sample. The true response propensity model is the same as in S1, the only difference being that the phase-I sample also contains many noise variables \(x_{3},\ldots,x_{10}\), \(z_{4},\ldots,z_{10}\) that are associated with neither the outcome \(Y\) nor the response propensity. S3: High dimensional auxiliary variables (\(L_{1}=10\), \(L_{2}=10\)) collected in the phase-I sample, with assumption A3 violated in the outcome-related variables. The true response propensity model is specified as \[\pi_{hji,res,2}=\text{logit}^{-1}(1+2x_{hji1}-1.5x_{hji2}^{2}+2z_{hji1}+z_{hji2}-2z_{hji3}-x_{hji1}z_{hji1}).\] The only difference between S3 and S2 is that the sign of the coefficient for \(x_{hji2}^{2}\) is changed to negative, so that data are sparse in the higher and lower tails of \(x_{hji2}\). S4: High dimensional auxiliary variables (\(L_{1}=10\), \(L_{2}=10\)) collected in the phase-I sample, with assumption A3 violated in variables that are not related to the outcome. The true response propensity model is specified as \[\pi_{hji,res,2}=\text{logit}^{-1}(1+2x_{hji1}-1.5x_{hji3}^{2}+2z_{hji1}+z_{hji2}-2z_{hji3}-x_{hji1}z_{hji1}).\] In this scenario, the higher and lower tails of \(x_{hji3}\) are under-sampled, and the ranges of \(x_{hji3}\) in the phase-II sample and the phase-I sample are different, but \(x_{hji3}\) is not related to \(y_{hji}\). Note that for all the scenarios mentioned above, the LGM for the prediction of response propensity does not include nonlinear or interaction terms, so WT-LGM faces the issue of model misspecification. We specify the number of trees as \(B=100\) and use the default prior specification in BART and rBART in the simulation and motivating examples. Figure 2 shows scatter plots of the phase-I and phase-II samples in one simulation for scenarios S1-S4. The y-axis is the outcome \(y\), and the x-axis is the continuous variable \(x_{2}\) that is associated with \(y\). The light grey dots represent the phase-I units that are not in the phase-II sample; dark grey triangles denote the phase-I units that are in the phase-II sample. In scenarios S1, S2 and S4 (Figure 2A, Figure 2B, and Figure 2D respectively), the range of \(x_{2}\) is the same in the phase-I and phase-II samples; however, in scenario S3, the phase-I units in the higher and lower tails of \(x_{2}\) are less likely to respond in phase-II. Figure 2: Scatter plots of the outcome \(y\) against the continuous variable \(x_{2}\). Light grey dots represent subjects in the phase-I sample but not in the phase-II sample; dark grey triangles represent subjects in the phase-I sample and also in the phase-II sample. A, B, C, and D correspond to simulation scenarios S1, S2, S3, and S4, respectively. Scenarios S2-S4 are designed to compare the performance of the competing methods in real-world settings where massive auxiliary information is available in the phase-I sample, and researchers do not know in advance which predictors are truly important for the response propensity or the outcome of interest. Under these three scenarios, for implementing the WT-LGM and WT-CHAID methods, we first use Lasso (Tibshirani, 1996) to select "important" variables for the response propensity before fitting an LGM or CHAID model. Scenarios S3 and S4 are set up to investigate the impact of violating assumption A3 (Section 3.2) on model performance. A simplified sketch of the outcome model and the scenario-specific response mechanisms is given below.
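For concreteness, the following is a simplified Python sketch of the data-generating process just described. It is not the authors' code: it ignores the stratified, clustered PPS structure, collapses the cluster intercept into a single draw per unit, and uses illustrative variable names.

```python
import numpy as np

rng = np.random.default_rng(2023)
n = 3000                                     # roughly the phase-I sample size

# Auxiliary variables: continuous x's and binary z's (low-dimensional case, S1).
x = rng.standard_normal((n, 2))
p_z = rng.uniform(0.4, 0.6, size=3)          # success probabilities for the z's
z = (rng.random((n, 3)) < p_z).astype(int)
q = rng.standard_normal(n)                   # stand-in for the cluster intercept q_hj

# True outcome model with N(0, 1) errors.
y = (2.47 + q - 2 * x[:, 0] + x[:, 1] ** 2
     + 2 * z[:, 0] - z[:, 1] - 2 * z[:, 2] + x[:, 0] * z[:, 0]
     + rng.standard_normal(n))

def inv_logit(v):
    return 1.0 / (1.0 + np.exp(-v))

# Phase-II: simple random subsampling followed by nonresponse (scenario S1/S2).
eta = (1 + 2 * x[:, 0] + 1.5 * x[:, 1] ** 2 + 2 * z[:, 0] + z[:, 1]
       - 2 * z[:, 2] - x[:, 0] * z[:, 0])
# For S3, flip the sign of the x2^2 term: eta - 3.0 * x[:, 1] ** 2.
selected = rng.random(n) < 0.5
responded = rng.random(n) < inv_logit(eta)
phase2 = selected & responded                # units whose outcome y is observed
```

The tree-based imputation models are then trained on the phase-II units and used to fill in \(y\) for the remaining phase-I units.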
### Simulation results Figure 3 shows the simulation results using \(D=10\) imputations, where the y-axis is the value of each metric. The simulation results using \(D=20,50,100,500,1000\) are presented in eFigures 1-5 in the supplementary materials. As expected, the benchmark estimator performs the best, with the lowest absolute bias and RMSE, the shortest interval width, and a coverage rate close to the nominal level. The MI-based estimators yield bias and RMSE similar to the benchmark estimator under Scenarios S1 and S2 but larger bias and RMSE under Scenarios S3 and S4. Compared to the weighting-based methods, the MI-based methods have lower bias and RMSE in all scenarios, with comparatively smaller interval widths, especially when compared with the weighting methods using LGM and CHAID. The coverage rates of the 95% CIs for both MI-based estimators are close to the nominal level in all scenarios. In scenario S3 (3rd axis tick), when assumption A3 is violated by the sparse data in the tails of the outcome-related variable \(x_{2}\), both the imputation and weighting methods perform worse than in the other scenarios, especially the LGM weighting method, which has the largest bias and RMSE. This is not surprising, given that the models cannot reliably estimate the relationship between the outcome and the covariates, or between the response propensity and the covariates, outside the range of \(x_{2}\) observed in the sample. However, the MI-based methods still perform better than the weighting methods under this condition, with much smaller bias and RMSE. For scenario S4, with assumption A3 violated due to the sparse data in the tails of \(x_{3}\), a variable not correlated with \(y\), the performance of the tree-based imputation methods is still promising, with slightly wider confidence intervals and larger RMSE than in scenarios S1 and S2, but still better than the BART and rBART weighting-based methods. For the weighting-based methods, the bias and RMSE of the estimators using BART/rBART models to estimate the response propensity are slightly larger than those of the corresponding MI-based estimators in scenarios S1 and S2. However, they are all smaller than those using LGM or CHAID to estimate the response propensity. The interval widths are also smaller for the BART/rBART-based weighting methods compared with LGM in scenarios S1 and S2. For scenarios S3 and S4, the interval widths for the BART/rBART-based weighting methods are wider than those of the other methods. WT-LGM has the largest absolute bias because its model for the nonresponse adjustments ignores possible interactions and nonlinear associations. When comparing the BART- and rBART-based imputation methods, rBART has a slightly smaller bias than BART, with the biggest improvement observed for Scenario S3. MI-rBART also has wider intervals than MI-BART. This may be explained by the additional random intercept term in the rBART model. When we increase the number of imputations from 10 to 1000, the simulation results are almost unchanged. This suggests that 10 imputations are sufficient for the MI-BART and MI-rBART methods. Overall, the MI-based estimators perform better than the weighting methods, generating less biased and more efficient estimates of population quantities, with shorter intervals and no loss of coverage.
By using BART/rBART-based MI methods, we can accurately estimate the relationship between the auxiliary variables and \(y\) in high-dimensional covariate settings with non-linear associations and interactions, and thus make the best use of the rich data collected in the phase-I sample to improve the estimation of population quantities using the phase-II sample. Assumption A3 is critical, especially for the covariates associated with \(y\). When conducting a weighted analysis, BART-based models can be an attractive alternative to the linear logistic regression and CHAID models. ## 5 The National Health and Nutrition Examination Survey We illustrate the application of the proposed tree-based imputation method using the NHANES 2017-2018 data. The goal is to estimate the prevalence of diabetes among the United States non-institutionalised adult population, defined as having a fasting blood sugar (FBS) level higher than 126 mg/dL or an HbA1c value higher than 6.5% (Ghazanfari et al., 2010; Emanepator, 1999). Our phase-I sample consists of the \(5,265\) NHANES 2017-2018 participants who completed the interviews and the MEC examination. All participants in the MEC examination had their HbA1c measured. The MEC examination weights are provided by NHANES to account for unequal probability of selection and unit nonresponse during interviews and examinations. Of these participants, only those in the morning sessions are eligible for an FBS test (which requires an 8-hour fast). The \(2,295\) participants who took part in the morning sessions are our phase-II sample. The subsample weights are also available in the NHANES data. The descriptive statistics of the covariates that are measured in the phase-I sample and included in the BART/rBART models are summarized in Table 1. These variables include demographic information (age, gender, race, citizenship), physical examination results (BMI, HbA1c, HDL), and diabetes-related self-reported data (diagnosed diabetes - ever told by doctors that you had diabetes, DIQ160 - ever told had prediabetes, DIQ170 - ever told had health risk for diabetes, DIQ172: feel could be at risk for diabetes, BPQ020: ever told had high blood pressure, ALQ151: ever had 4/5 or more drinks every day, SMQ040: whether currently smoking cigarettes). The distributions of these variables in the phase-I and phase-II samples are similar. There are small proportions of missing data in some of these variables. We use the Chained Equations imputation algorithm, implemented in the "mice" package in R, to fill in the missing data in these covariates. To infer the population prevalence of diabetes, we compare the traditional weighting method using the phase-II sample weights provided by NHANES to the tree-based imputation methods using BART and rBART. As the FBS level data are only available in the phase-II sample, for the tree-based imputation methods we first impute the FBS level data in the phase-I sample, and then use the imputed FBS and the observed HbA1c to define the presence of diabetes among participants in the phase-I sample. We finally conduct a weighted analysis of the multiply imputed phase-I sample using the phase-I sample weights provided by NHANES. To impute the FBS level in the phase-I sample, the auxiliary variables in Table 1, stratum and cluster indicators, and phase-I sample weights are included in the imputation models. Before fitting the imputation models, we examine assumption A3 by exploring the distributions of the continuous auxiliary variables in the phase-I and phase-II samples.
Figure 6 in the supplementary materials shows that ranges of BMI, HbA1c, and HDL in the phase-I sample are similar to those in the phase-II sample. We also pre-processed the data in NHANES by log-transforming BMI, HbA1c, HDL, and the phase-I weights. For rBART, we included cluster indicators as group indicators in the model. For each imputation-based estimator, 10 imputations of the FBS variable are generated. As is shown in Figure 4, the estimate for the prevalence of diabetes using BART and rBART imputation methods are 14.3% Figure 3: Simulation results with the number of imputations \(D=10\). WT-LGM, WT-CHAID, WT-BART and WT-rBART are weighted analysis based on the Phase-II sample and using the Phase-II sample weights, MI-BART, and MI-rBART are the tree-based imputation methods. S1: low dimensional case where all the assumptions holds; S2: high dimensional case where all the assumptions holds; S3: high dimensional case where assumption (A3) does not hold for a outcome-related variable; S4: high dimensional case where assumption (A3) does not hold for a variable not related with the outcome. \begin{table} \begin{tabular}{l r r} \hline **Characteristics** & **Phase-I sample** & **Phase-II sample** \\ & (N = 5,265) & (N = 2,295) \\ \hline Gender & 2,541 (48\%) & 1,096 (48\%) \\ \multicolumn{1}{c}{Female} & 2,724 (52\%) & 1,199 (52\%) \\ Age & & \\ 20-39 & 1,589 (30\%) & 688 (30\%) \\ 40-59 & 1,658 (31\%) & 742 (32\%) \\ 60+ & 2,018 (38\%) & 865 (38\%) \\ Race & & \\ Mexican American & 698 (13\%) & 336 (15\%) \\ \multicolumn{1}{c}{Other Hispanic} & 496 (9.4\%) & 218 (9.5\%) \\ \multicolumn{1}{c}{Non-Hispanic White} & 1,807 (34\%) & 764 (33\%) \\ \multicolumn{1}{c}{Non-Hispanic Black} & 1,240 (24\%) & 519 (23\%) \\ \multicolumn{1}{c}{Non-Hispanic Asian} & 759 (14\%) & 329 (14\%) \\ \multicolumn{1}{c}{Other Race} & 265 (5.0\%) & 129 (5.6\%) \\ Citizen & & \\ Citizen by birth & 4,516 (86\%) & 1,966 (86\%) \\ \multicolumn{1}{c}{Not a citizen of the US} & 727 (14\%) & 320 (14\%) \\ \multicolumn{1}{c}{Missing} & 22 (0.4\%) & 9 (0.4\%) \\ BMI (kg/m*2) & 30 (25, 34) & 30 (25, 34) \\ HbA1c (\%) & 5.87 (5.30, 6.00) & 5.88 (5.30, 6.00) \\ Direct HDL-Cholesterol (mmol/L) & 1.38 (1.09, 1.60) & 1.38 (1.09, 1.58) \\ DIQ160 & & \\ No & 3,709 (70\%) & 1,585 (69\%) \\ \multicolumn{1}{c}{Yes} & 551 (10\%) & 240 (10\%) \\ \multicolumn{1}{c}{Missing} & 1,005 (19\%) & 470 (20\%) \\ \multicolumn{1}{c}{DIQ170} & & \\ No & 3,648 (69\%) & 1,566 (68\%) \\ \multicolumn{1}{c}{Yes} & 768 (15\%) & 333 (15\%) \\ \multicolumn{1}{c}{Missing} & 849 (16\%) & 396 (17\%) \\ \multicolumn{1}{c}{Do} & 3,018 (57\%) & 1,293 (56\%) \\ \multicolumn{1}{c}{Yes} & 1,351 (26\%) & 581 (25\%) \\ \multicolumn{1}{c}{Missing} & 896 (17\%) & 421 (18\%) \\ \multicolumn{1}{c}{BPQ020} & & \\ No & 3,243 (62\%) & 1,413 (62\%) \\ \multicolumn{1}{c}{Yes} & 2,012 (38\%) & 878 (38\%) \\ \multicolumn{1}{c}{Missing} & 10 (0.2\%) & 4 (0.2\%) \\ \multicolumn{1}{c}{ALQ151} & & \\ No & 3,688 (70\%) & 1,644 (72\%) \\ \multicolumn{1}{c}{Yes} & 679 (13\%) & 304 (13\%) \\ \multicolumn{1}{c}{Missing} & 898 (17\%) & 347 (15\%) \\ \multicolumn{1}{c}{SMQ040} & & \\ Every day & 755 (14\%) & 316 (14\%) \\ \multicolumn{1}{c}{Some days} & 203 (3.9\%) & 85 (3.7\%) \\ \multicolumn{1}{c}{Not at all} & 1,251 (24\%) & 562 (24\%) \\ \multicolumn{1}{c}{Missing} & 3,056 (58\%) & 1,332 (58\%) \\ \multicolumn{1}{c}{diagnosed\_diabetes} & 836 (16\%) & 390 (17\%) \\ \hline \end{tabular} \({}^{1}\) n (%); Mean (IQR) \({}^{2}\) DIQ160: Ever told you have prediabetes; DIQ170: Ever told 
have health risk for diabetes; DIQ172: Feel could be at risk for diabetes; Diagnosed diabetes: Doctor told you have diabetes; BPQ020: Ever told had high blood pressure; ALQ151: Ever have 4/5 or more drinks every day? SMQ040: Do you now smoke cigarettes? \end{table} Table 1: Descriptive statistics of the NHANES 2017-2018 data. (95% CI: 13.0%, 15.5%) and 14.2% (95% CI: 12.9%, 15.4%) respectively, which are slightly higher than the estimate using the NHANES weights for the phase-II sample (13.7%, 95% CI: 11.8%, 15.8%). The weighted estimate using the NHANES weights for the phase-II sample also yields a wider 95% CI than the imputation-based estimates. To get a clearer comparison of the weighting method versus the tree-based imputation methods with the application data, we estimate the population means of two selected variables that are observed in the phase-I sample, namely diagnosed diabetes (ever being told by a doctor that you have diabetes) and the HbA1c level. The benchmark estimator is calculated using the phase-I sample weights and the observed data in the phase-I sample. For the weighting and the two tree-based imputation methods, we act as if the two variables are only measured in the phase-II sample. In Figure 4, we see that when we estimate the prevalence of diagnosed diabetes, there is no obvious difference in the point estimates of the population mean between the three estimators and the benchmark estimator. The 95% CIs of MI-BART and MI-rBART are also similar to that of the benchmark estimator. However, the interval of the weighting method is much wider than those of the other estimators. For the continuous variable HbA1c, the weighting and the two tree-based imputation methods lead to smaller estimates of the population mean than the benchmark estimator. Moreover, the 95% CIs of the two tree-based imputation methods are now wider than that of the benchmark estimator, and the weighting method still has the widest 95% CI among all the estimators. Figure 4: Application results. A: point estimates and 95% confidence intervals for the prevalence of diabetes (FBS higher than 126 mg/dL or HbA1c value higher than 6.5%); B: point estimates and 95% confidence intervals for the prevalence of diagnosed diabetes (ever being told by a doctor to have diabetes); C: point estimates and 95% confidence intervals for the population mean of the HbA1c value. WT denotes the traditional weighting method using the phase-II sample weights provided by NHANES; MI-BART and MI-rBART denote the multiple imputation methods using BART and rBART, respectively. ## 6 Discussion The main objective of this paper is to improve statistical inference for population quantities in a two-phase survey design using the rich data collected in the phase-I sample. In current practice, weighted analyses based on the subsample weights are often used. This approach has three limitations. First, there are often high-dimensional auxiliary variables available in the phase-I sample. It can be challenging to create subsample weights for the phase-II sample based on all the available auxiliary variables collected in the phase-I sample. The weighted estimator of the phase-II sample can be biased if the subsample weights are not properly constructed. Second, the analyses using the subsample weights fail to account for the relationships between the auxiliary variables in the phase-I sample and the outcome of interest, and thus waste useful information.
Finally, the weighted estimates using the subsample weights can be inefficient when the subsample weights are highly variable. This is more of a concern in two-phase designs because the subsample weights are calculated as the product of the sample weights for the phase-I sample and a subsample weighting adjustment. In contrast, by treating the outcome measures that are not observed in the phase-I sample as missing data, imputation can be used to fill in the unobserved data for units that are in the phase-I sample but not in the phase-II sample. Imputation-based approaches can improve the efficiency of the survey estimates if there exist important predictors of the outcome of interest. However, imputation is model dependent. Failing to choose the correct model form could result in poor imputations. Imputation models built on machine learning methods are attractive because they can impute the missing values in high-dimensional settings and are robust to model misspecification. It was shown by Chipman et al. (2007) that BART with default hyper-parameters can achieve performance comparable to other machine learning methods, with relatively fast execution and easy-to-implement steps. Given this, we propose the MI-BART and MI-rBART methods. We first impute the outcome of interest that is only measured in the phase-II sample using a BART or rBART model with the data collected in the phase-I survey as covariates, and then conduct a multiple imputation analysis using the imputed phase-I sample. Simulations show that the proposed MI-BART and MI-rBART methods outperform the weighted analyses using the subsample weights when estimating population means from the phase-II sample in a two-phase design, with lower bias, reduced RMSE, and narrower confidence intervals with close-to-nominal coverage rates. We apply the proposed methods to obtain national estimates of the prevalence of diabetes among non-institutionalised United States adult residents, the prevalence of diagnosed diabetes, and the mean of HbA1c, using the subsample of NHANES 2017-2018 participants who completed the morning session of the physical examination. The results show that the proposed MI-BART and MI-rBART methods yield narrower 95% confidence intervals than the subsample weighted analyses. When the phase-I sample is collected using multistage probability sampling, both the MI-BART and MI-rBART methods allow the design features to be incorporated into the imputation. Specifically, the design variables, such as strata, clusters, the phase-I sample weights, and the phase-II sample weight adjustments, can be included as covariates in the BART and rBART models. The only difference between BART and rBART is that rBART models the cluster effect using a random intercept while BART includes the cluster indicators as covariates. Simulations show that MI-rBART performs similarly to MI-BART, except in Scenario S3, where it has reduced bias and RMSE and coverage closer to the nominal level. Our simulations also show the advantage of BART-based models in subsample weighted analyses compared to the more conventional weighting methods using logistic regression or the CHAID algorithm. Because regular logistic regression and CHAID models cannot handle high-dimensional covariates, a lasso regression is often first used to select a subset of covariates to be included in the weight construction.
The WT-LGM and WT-CHAID methods yield larger bias and RMSE than the WT-BART and WT-rBART methods, with the worst performance associated with WT-LGM due to model misspecification. The key assumption for the validity of the MI-based methods is that the sampling mechanism for the phase-II sample is ignorable given the auxiliary information and the design variables in the phase-I sample. The ignorability assumption is more reasonable when the number of auxiliary variables in the phase-I sample is large, which is usually the case in two-phase designs. Additionally, the auxiliary variables in the subsample and in the phase-I sample need to have the same ranges, especially for the auxiliary variables that are predictive of the outcome of interest. If the subsampling results in a phase-II sample that has narrower ranges for the important predictors of the outcome of interest, the MI-based methods can be biased, although they still perform better than the weighted analyses using the subsample weights, as shown in simulation scenario S3. The tree-based imputation methods in two-phase designs can be used not only in surveys, but also in clinical trials or epidemiology studies, to generalize estimates from samples to a target population. In this paper, we consider BART and rBART because of their strong predictive accuracy and ease of implementation. Other Bayesian machine learning techniques that yield accurate predictions could also be applied. ## 7 Ethics Statements This study involves human participants in NHANES, which was approved by the NCHS Research Ethics Review Board (Protocol Numbers: NHANES Protocol #2011-17 and NHANES Protocol #2018-01). Participants gave informed consent before taking part in the study. ## 8 Competing interests No competing interest is declared. ## 9 Acknowledgments and Funding This work is supported in part by funds from the National Institutes of Health (R01AG067149).
2304.12102
Unlocking Context Constraints of LLMs: Enhancing Context Efficiency of LLMs with Self-Information-Based Content Filtering
Large language models (LLMs) have received significant attention by achieving remarkable performance across various tasks. However, their fixed context length poses challenges when processing long documents or maintaining extended conversations. This paper proposes a method called \textit{Selective Context} that employs self-information to filter out less informative content, thereby enhancing the efficiency of the fixed context length. We demonstrate the effectiveness of our approach on tasks of summarisation and question answering across different data sources, including academic papers, news articles, and conversation transcripts.
Yucheng Li
2023-04-24T13:55:47Z
http://arxiv.org/abs/2304.12102v1
Unlocking Context Constraints of LLMs: Enhancing Context Efficiency of LLMs with Self-Information-Based Content Filtering ###### Abstract Large language models (LLMs) have received significant attention by achieving remarkable performance across various tasks. However, their fixed context length poses challenges when processing long documents or maintaining extended conversations. This paper proposes a method called _Selective Context_ that employs self-information to filter out less informative content, thereby enhancing the efficiency of the fixed context length. We demonstrate the effectiveness of our approach on tasks of summarisation and question answering across different data sources, including academic papers, news articles, and conversation transcripts. ## 1 Introduction Large language models (LLMs) have demonstrated remarkable power and impressive generalisation abilities across a wide range of natural language processing tasks, as well as real-life applications (Brown et al., 2020; Touvron et al., 2023; Bubeck et al., 2023). However, a major limitation of LLMs is their fixed context length. As LLMs have no memory outside their context window, this poses a significant challenge for tasks that involve processing long documents or engaging in extended conversations (Dong et al., 2023). Increasing the context length of LLMs, particularly those based on the Transformer, is very expensive due to the quadratic growth of memory and computation associated with the 2-D attention matrix (Vaswani et al., 2017). These limitations highlight the need for more efficient solutions to utilize the limited context in tasks that require extended context. Fortunately, our experiments reveal that LLMs do not need all content in a document or the entire conversation history to answer users' queries. As shown in Figure 1, LLMs are able to generate the expected answer even with relevant information deleted. This might be because LLMs can infer the missing information based on the contextual clues and prior knowledge acquired from their pre-training. As a result, we argue that optimizing the use of context length by filtering out less informative content is possible without sacrificing performance. Figure 1: LLMs are able to answer correctly with less informative content deleted. In this paper, we propose _Selective Context_, which filters out less informative content to reduce the cost of a given context, thereby making better use of the fixed context length in LLMs. _Selective Context_ employs a base language model to compute the self-information of lexical units (sentences, phrases, or tokens) in a context and uses it to evaluate their informativeness. By selectively retaining content with higher self-information, our method provides a more compact and efficient context representation for LLMs to process without compromising their performance on various tasks. To evaluate the effectiveness of our proposed method, we tested Selective Context on three data sources: arxiv papers, BBC news articles, and conversation transcripts, with four different NLP tasks: summarisation, question answering, original context reconstruction, and conversation. Our results demonstrate that Selective Context significantly enhances the efficiency of LLMs, allowing them to handle long documents and extended conversations with only minor sacrifices in generation quality. The key contributions of our paper are: (1) We introduce Selective Context, a novel approach to context filtering that maximises the utility of fixed context length in LLMs.
(2) We provide extensive evaluations of the proposed method. (3) Our results demonstrate the effectiveness of Selective Context in reducing the cost of context in LLMs. Code and data can be found in [https://github.com/liyucheng09/Selective_Context](https://github.com/liyucheng09/Selective_Context). ## 2 Self-Information Self-information, also known as _surprisal_ or _information content_, is a fundamental concept in information theory that quantifies the amount of information conveyed by an event (Shannon, 1948). In the context of language modelling, the event here is one step of generation (i.e., a token). It is defined as the negative log likelihood of the token: \[I(x_{t})=-\log_{2}P(x_{t}|x_{0},x_{1},...,x_{t-1}) \tag{1}\] where \(I(x_{t})\) represents the self-information of token \(x_{t}\) and \(P(x_{t}|x_{0},x_{1},...,x_{t-1})\) denotes its output probability. In information theory, self-information measures the level of surprise or uncertainty associated with an event; rare events convey more information and thus have higher self-information, while common events convey less information and have lower self-information. In the context of language modelling, self-information can be used to assess the informativeness of lexical units, e.g., words, phrases, or sentences, to see which pieces of information are more likely to be novel or important for understanding the context. Self-information is usually not directly used in NLP. Instead, closely related quantities such as entropy and perplexity are widely used in language model optimisation and evaluation. \[H(S)=\frac{1}{N}\sum_{t}I(x_{t}) \tag{2}\] \[PP(S)=2^{H(S)} \tag{3}\] where the entropy \(H(S)\) of the sentence \(S=(x_{0},...,x_{n})\) is the average self-information of the tokens in the sentence, and the perplexity \(PP(S)\) of the sentence can be calculated from the entropy. The property of self-information that is especially relevant to our method is its additivity: \[I(x_{0},x_{1}) =-\log_{2}P(x_{0},x_{1}) \tag{4}\] \[=-\log_{2}P(x_{0})P(x_{1}|x_{0})\] (5) \[=-\log_{2}P(x_{0})-\log_{2}P(x_{1}|x_{0})\] (6) \[=I(x_{0})+I(x_{1}) \tag{7}\] This means we can calculate the self-information of a lexical unit by simply summing the self-information of the tokens in it. ## 3 Method In this section, we present the details of our proposed method, _Selective Context_, which optimizes the use of context length in LLMs by filtering out less informative content. The main idea is to compute the self-information of lexical units (such as sentences, phrases, or tokens) within a given context and utilise it to evaluate their informativeness. We first compute the self-information for each token in the context, then merge tokens and their self-information into lexical units such as phrases or sentences. The overall approach consists of the following steps: ### Computing Self-Information Given a context \(C=x_{0},x_{1},...,x_{n}\), where \(x_{i}\) denotes a token, we use a base language model \(M\) to compute the self-information of each token \(x_{i}\) as follows: \[I(x_{i})=-\log_{2}P(x_{i}|x_{0},x_{1},...,x_{i-1}) \tag{8}\] The base language model here should be a causal language model, such as GPT-2, OPT, or LLaMA.
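To illustrate equations (1) and (8), here is a minimal sketch (not the authors' released code) that scores every token of a context with an open causal language model; GPT-2 is used purely as a stand-in for the larger Curie model employed later in the paper, and the scores are converted to bits to match the base-2 logarithm in the definitions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 stands in for the (larger) base model used in the paper.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_self_information(text: str):
    """Return (token, self-information in bits) pairs for every token
    except the first, which has no preceding context to condition on."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                          # [1, T, vocab]
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)   # log P(x_i | x_<i)
    targets = ids[:, 1:]
    nll = -log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)  # natural log
    bits = (nll / torch.log(torch.tensor(2.0)))[0]           # convert to base 2
    tokens = tokenizer.convert_ids_to_tokens(ids[0])[1:]
    return list(zip(tokens, bits.tolist()))

# Rare or surprising tokens receive higher self-information scores.
for tok, info in token_self_information("The capital of France is Paris."):
    print(f"{tok!r}: {info:.2f} bits")
```

The per-token scores can then be aggregated over phrases or sentences by summation, exactly as the additivity property above suggests.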
### Merging into Lexical Units If the content filtering of Selective Context is performed directly at the token level, it might lead to a very disjointed context. Therefore, in addition to token-level filtering, we also conduct the filtering procedure at the phrase and sentence levels. We call a basic unit in our filtering a _lexical unit_, which could be a token, a phrase, or a sentence in our setting. To enable Selective Context to work on phrases and sentences, we merge tokens and their self-information into lexical units. For each lexical unit \(u=(x_{t},...,x_{t+\alpha})\), we can calculate its self-information by summing the self-information of its individual tokens, according to the additivity property of self-information: \[I(u)=\sum_{i=t}^{t+\alpha}I(x_{i}) \tag{9}\] A sentence tokenizer is employed to obtain sentence-level lexical units, and we use spacy1 to merge tokens into noun phrases. We do not merge verb phrases as this might produce very long phrases. Footnote 1: [https://spacy.io/api/pipeline-functions#merge_noun_chunks](https://spacy.io/api/pipeline-functions#merge_noun_chunks) ### Selective Retention of Informative Context With the self-information of each lexical unit computed, we can now evaluate their informativeness. Instead of using a fixed threshold or retaining a fixed number of top \(k\) lexical units, we recommend using a percentile-based filtering approach to adaptively select the most informative content. First, we rank the lexical units based on their self-information values in descending order. Then, we compute the \(p\)-th percentile of the self-information values among all lexical units, \[I_{p}=\texttt{np.percentile}([I(u_{1}),..,I(u_{n})],p) \tag{10}\] Next, we selectively retain the lexical units with self-information values greater than or equal to the \(p\)-th percentile, constructing a filtered context \(C^{\prime}\): \[C^{\prime}=\{u_{i}\mid I(u_{i})\geq I_{p},\ 1\leq i\leq n\} \tag{11}\] Percentile-based filtering is a more flexible approach that retains the most informative content depending on the distribution of self-information values in the given context. In Figure 2, we present an example at the phrase level where \(p\) is set to 50, which means that half of the phrases are filtered out. In this case, the context processed by Selective Context retains only 57.2% of the tokens, saving 42.7% of the context length. We will discuss how LLMs perform on the processed context in the next section. Figure 2: A visualisation of self-information based content filtering. The paragraph is from a very recent paper.
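Continuing the sketch above (again illustrative rather than the authors' implementation), the selection step in equations (10)-(11) reduces to a single percentile cut over unit-level scores:

```python
import numpy as np

def selective_context(units, scores, p=50):
    """Keep the lexical units whose self-information is at or above the
    p-th percentile of all unit scores (equations 10-11)."""
    threshold = np.percentile(scores, p)
    kept = [u for u, s in zip(units, scores) if s >= threshold]
    return " ".join(kept)

# Toy example with hypothetical phrase-level scores (in bits).
phrases = ["The Eiffel Tower", "is", "located in", "Paris", ",", "France", "."]
scores = [14.2, 1.1, 3.5, 9.8, 0.4, 6.7, 0.2]
print(selective_context(phrases, scores, p=50))  # drops the low-information units
```

In practice, the unit scores would come from summing the token-level values produced by the previous sketch over each phrase or sentence.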
## 4 Experiments ### Datasets We evaluate Selective Context on three datasets from different domains: **BBC News:** A dataset containing news articles collected from the British Broadcasting Corporation (BBC) published in March 2023. This dataset covers a wide range of topics, including politics, business, sports, and technology. We use the full content of each news article in our experiments. **Arxiv Articles:** A dataset consisting of recent academic papers posted in March 2023 on the arXiv preprint repository. These papers span various scientific disciplines, such as computer science, physics, and mathematics. As Arxiv articles can be quite long, we only process the first two sections of each Arxiv paper in our experiments. **Conversations from ShareGPT.com:** ShareGPT.com is a platform where ChatGPT users share their surprising and interesting conversations with ChatGPT. This dataset consists of conversations in different languages and in various scenarios (e.g., coding, chitchat, writing assistant, etc.). We use the ShareGPT dataset for the conversation task in our experiments. Detailed statistics are presented in Table 1. Note that to avoid data contamination, we only collect recent data to ensure it was created after the knowledge cutoff of ChatGPT. Data samples from the BBC News and Arxiv datasets were all created after March 2023, and conversations on ShareGPT.com are clearly created after the release of ChatGPT (gpt-3.5-turbo). \begin{table} \begin{tabular}{l c c c c} \hline \hline Dataset & \#Doc & \#Sent & \#Phrase & \#Token \\ \hline Arxiv & 408 & 28.20 & 514.55 & 864.85 \\ ShareGPT & 470 & 27.35 & 389.42 & 689.32 \\ BBC & 294 & 25.63 & 523.96 & 732.54 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of the three datasets. #Sent, #Phrase, #Token are averaged per document. ### Tasks and Metrics We evaluate Selective Context on four different tasks: **Original Context Reconstruction:** Given a compressed context produced by Selective Context, this task aims to evaluate whether models are able to reconstruct the original context. This task assesses how well the filtered context retains the essential information from the original context. In our experiments, the compressed contexts are used as input, and the original contexts are used as reference answers. **Summarisation:** Given a context, the task is to generate a concise and informative summary that captures the main points of the document. This task aims to evaluate whether the content filtering affects the models' overall understanding of the compressed contexts. In our experiments, the input and output are the compressed context and the summaries generated based on the compressed contexts. Summaries based on the _original contexts_ are treated as the reference answers. **Question Answering (QA):** Given a document and a set of questions, the task is to generate answers based on the information available in the document. This task aims to evaluate models' fine-grained understanding of a context. We first generate questions and answers based on the original context, where these answers are treated as reference answers, and then ask LLMs to answer these questions with the selective context. **Conversation:** This task is only for the ShareGPT dataset. Given a conversation history and a user query, the task is to generate a response to the query based on the previous conversation. This task aims to evaluate whether Selective Context affects the capability of LLMs in conversation. Specifically, we ask LLMs to answer users' last query in ShareGPT conversation instances with Selective Context applied to the previous conversation history. We employ four metrics to assess the performance of our models on the tasks: BLEU, METEOR, ROUGE, and BERTScore. BLEU (Papineni et al., 2002) calculates n-gram precision, which is the proportion of n-grams in the generated text that are also present in the reference text. METEOR (Banerjee and Lavie, 2005) takes additional features such as synonymy, stemming and word order into consideration, which leads to a more comprehensive evaluation. ROUGE (Lin, 2004) focuses on how much of the important information in the reference text is present in the generated summary. BERTScore (Zhang et al., 2019) is a more recent metric that leverages contextualised embeddings from pre-trained language models like BERT, computing the cosine similarity between the generated text and reference text embeddings to capture semantic similarity more effectively than traditional n-gram-based metrics. ### Models Two main models were used in our experiments: **ChatGPT:** We test Selective Context on ChatGPT, which is based on the GPT-3.5-turbo architecture.
ChatGPT is an instruction-tuned language model with 175 billion parameters, further improved by RLHF. The base language model of ChatGPT seems to be code-davinci-002 and, earlier, davinci, which can be found in (Brown et al., 2020). We compare the performance of ChatGPT with and without applying Selective Context to understand its impact on the efficiency and accuracy of the model. Footnote 2: [https://platform.openai.com/docs/model-index-for-researchers](https://platform.openai.com/docs/model-index-for-researchers) **Curie:** Curie is one of the variants of the GPT-3 family, with 6.7B parameters, a smaller version of the causal language model davinci. We employ Curie as the base model \(M\) in Selective Context to calculate self-information. Ideally, we would use the same base model as ChatGPT for content filtering, but our analysis found that the filtering results of curie and davinci are nearly identical, so for the sake of cost, we choose curie instead. We access the two models via the web API provided on the OpenAI platform. Footnote 3: [https://platform.openai.com/docs/api-reference](https://platform.openai.com/docs/api-reference) ### Experimental Settings We compare different settings to evaluate the effectiveness of Selective Context and analyse its trade-offs. **Baseline Comparison:** We compare Selective Context with the original context (without any content reduction) and Random Context, a baseline approach that filters out the same amount of content but does so randomly. **Reduction Ratios:** We experiment with different content reduction ratios in Selective Context: 0.2, 0.35, 0.5, 0.65, and 0.8. These ratios determine the proportion of content to be filtered out, allowing us to study the trade-off between efficiency and performance as the amount of retained information varies. **Lexical Units:** Lexical units are the basic elements of content reduction in Selective Context. They can be sentences, phrases, or tokens. Due to the usage limitation of the OpenAI web API ($120 per month), we only test the content filtering at the phrase level. This does not mean that self-information based content filtering is not feasible at the sentence and token levels; we will include experiments at these two levels in the next version. **Self-Information Computation:** Here, we also focus only on sentence-wise self-information calculation, again due to the API access limitation. Sentence-wise self-information computation means calculating tokens' self-information sentence by sentence, instead of having the LLM process the entire context in one run. As with lexical units, we will include experiments computing self-information over the entire context in the next version. We use a generation temperature of 0.7 in our experiments. ## 5 Results ### Comparison to Baselines We first compare the performance of Selective Context with different context reduction ratios to the original context on the summarisation and QA tasks, as shown in Table 2. The performance drop due to the context reduction is shown in parentheses. As demonstrated in the table, using Selective Context only leads to a marginal drop when the reduction ratio is set to 0.2 or 0.35, even though it significantly reduces the context cost. The BLEU score drops by only 0.05 when 20% of the content is reduced. The number is even smaller when it comes to ROUGE-1, where the drop is just 0.03. This indicates a high level of consistency between the answers given selective contexts and those given the original contexts when the reduction ratio is 0.2.
Selective Context also yields impressive results when 35% of the content is reduced, with BERT scores around 0.9 and ROUGE-1 scores over 0.5. The results start to lose control as the reduction ratio rises to 0.5, where the average BLEU score drops 0.17 and the average ROUGE-1 drops 0.12. However, the performance of Selective Context on summarisation and conversation tasks are still acceptable, considering the decrease on BLEU and ROUGE-1 is below 0.1. When 65% of the context is reduced, the performance of summarisation decreases to 0.114 on BLEU, 0.447 on ROUGE-1, and 0.886 on BERScore. A reduction ratio of 0.8 tends to be less valuable, as the correctness of answers might not be guaranteed. In summary, the results suggest that Selective Context is very effective in preserving key information during context reduction, and is able to significantly reduce the context cost while preventing big performance loss. We then compare Selective Context to the random filtering baseline, and the results are presented in Figure 3. Our initial observation reveals that LLMs are quite robust to context reduction. With the random filtering approach, LLMs can achieve over 0.25 BLEU score when 20% of content is randomly reduced and over 0.5 ROUGE-1 when 35% of \begin{table} \begin{tabular}{l l r r r r r r r r} \hline \hline & & & & \multicolumn{3}{c}{ROUGE} & \multicolumn{3}{c}{BERTScore} \\ \cline{4-10} Method & Task & BLEU & METEOR & rouge1 & rouge2 & rougeL & Precision & Recall & F1 \\ \hline Original & Summarisation &.274 &.481 &.570 &.321 &.416 &.912 &.911 &.911 \\ & QA &.529 &.664 &.690 &.581 &.664 &.941 &.939 &.940 \\ & Conversation &.238 &.343 &.451 &.249 &.332 &.878 &.878 &.877 \\ & Avg. &.347 &.496 &.571 &.383 &.471 &.910 &.909 &.909 \\ \hline SC-0.2 & Summarisation &.251 (.02) &.475 (.01) &.563 (.01) &.305 (.02) &.402 (.01) &.910 (.002) &.909 (.002) &.909 (.002) \\ & QA &.426 (.10) &.601 (.06) &.638 (.05) &.502 (.08) &.605 (.06) &.933 (.008) &.929 (.010) &.931 (.009) \\ & Conversation &.208 (.03) &.305 (.04) &.419 (.03) &.230 (.02) &.307 (.02) &.873 (.005) &.862 (.015) &.867 (.010) \\ & Avg. &.295 (.05) &.460 (.04) &.540 (.03) &.346 (.04) &.438 (.03) &.905 (.005) &.900 (.009) &.902 (.007) \\ \hline SC-0.35 & Summarisation &.212 (.06) &.442 (.04) &.533 (.04) &.265 (.06) &.363 (.05) &.905 (.007) &.902 (.009) &.903 (.008) \\ & QA &.337 (.19) &.531 (.13) &.578 (.11) &.420 (.16) &.539 (.13) &.925 (.017) &.918 (.021) &.921 (.019) \\ & Conversation &.179 (.06) &.290 (.05) &.400 (.05) &.198 (.05) &.285 (.05) &.871 (.007) &.861 (.016) &.866 (.012) \\ & Avg. &.243 (.10) &.421 (.08) &.504 (.07) &.294 (.09) &.396 (.07) &.900 (.010) &.894 (.015) &.897 (.013) \\ \hline SC-0.5 & Summarisation &.170 (.10) &.397 (.08) &.500 (.07) &.226 (.10) &.331 (.09) &.900 (.012) &.893 (.018) &.896 (.015) \\ & QA &.237 (.29) &.434 (.23) &.487 (.20) &.321 (.26) &.447 (.22) &.912 (.029) &.903 (.036) &.907 (.033) \\ & Conversation &.132 (.11) &.254 (.09) &.360 (.09) &.163 (.09) &.254 (.08) &.867 (.012) &.850 (.028) &.858 (.020) \\ & Avg. &.179 (.17) &.362 (.13) &.449 (.12) &.237 (.15) &.344 (.13) &.893 (.018) &.882 (.027) &.887 (.023) \\ \hline SC-0.65 & Summarisation &.114 (.16) &.335 (.15) &.447 (.12) &.168 (.15) &.281 (.13) &.893 (.019) &.880 (.031) &.886 (.025) \\ & QA &.157 (.37) &.336 (.33) &.394 (.30) &.227 (.35) &.353 (.31) &.899 (.042) &.888 (.051) &.893 (.047) \\ & Conversation &.109 (.13) &.227 (.12) &.331 (.12) &.139 (.11) &.225 (.11) &.864 (.014) &.843 (.034) &.853 (.024) \\ & Avg. 
&.127 (.22) &.299 (.20) &.391 (.18) &.178 (.21) &.287 (.18) &.885 (.025) &.870 (.039) &.877 (.032) \\ \hline SC-0.8 & Summarisation &.063 (.21) &.259 (.22) &.380 (.19) &.114 (.21) &.231 (.19) &.884 (.028) &.863 (.048) &.873 (.038) \\ & QA &.117 (.41) &.272 (.39) &.326 (.36) &.172 (.41) &.289 (.37) &.890 (.051) &.876 (.063) &.883 (.057) \\ & Conversation &.030 (.21) &.142 (.20) &.227 (.22) &.081 (.17) &.154 (.18) &.849 (.029) &.816 (.061) &.832 (.046) \\ & Avg. &.070 (.28) &.224 (.27) &.311 (.26) &.122 (.26) &.225 (.25) &.874 (.036) &.852 (.057) &.863 (.047) \\ \hline \hline \end{tabular} \end{table} Table 2: Comparing Selective Context with different context reduction ratio to the Original context, on Summarisation and QA task. The performance drop are shown in parentheses. content is randomly reduced. Our proposed method, Selective Context, is even more effective, reaching around 0.3 BLEU score and over 0.55 ROUGE-1 score when the reduction ratio is set to 0.35. When Selective Context reduces 50% of content, the performance begins to drop dramatically on BLEU. Nevertheless, the ROUGE-1 and BERT scores remain strong. The rate of performance drop for the random baseline slows between the reduction ratios of 0.5 and 0.65, indicating that the random baseline has already lost a considerable amount of key information after reducing 50% of content. In contrast, Selective Context does not exhibit this tendency. When the reduction ratio is set to 0.8, both approaches show similar results, demonstrating that LLMs struggle to handle context with 80% information loss. Overall, our results show that Selective Context can effectively maximise the utility of fixed context length in LLMs while maintaining strong performance on various tasks. ### Tasks In this part, we examine the performances of Selective Context on the three different NLP tasks: summarisation, question answering, and original context reconstruction. The results are as shown in 4. From the results of the Original Context Reconstruction task (RC), we found that Selective Context allows LLMs to recover most of the key points in the original context when the reduction ratio is set to 0.2 or 0.35, as demonstrated by a rather high ROUGE-1 score of 0.65 and a BERTScore over 0.9. Based on that, it is safe to reduce 35% of content via Selective Context, which will only leads to minor information loss. However, the performance starts to drop as the reduction ratio increases to 0.5, indicating that partial key information is inevitably lost during the context reduction procedure (ROUGE-1: 0.59, BERTScore: 0.88). The performance decreases dramatically when the reduction ratio is set to 0.8, where we only receive a BLEU score of 0.03 and ROUGE-1 of 0.37. By comparing the four curves, we found that the summarisation and conversation task seems to be less affected by context reduction. From reduction ratio of 0.2 to 0.8, the BERTScore of summarisation task only show little decrease. On BLEU and ROUGE-1 metrics, the fluctuation of summarisation are also the smallest. The conversation task show the same tendency. On the contrary, reconstruction and QA tasks are significantly influenced by content reduction. This might be because summarisation and conversation tasks focus on overall context understanding, whereas QA and reconstruction tasks require more fine-grained information. Figure 4: Performance of Selective Context on different NLP tasks Figure 3: Performance of Selective Context compared to the random filtered baselines. 
As a result, we should be careful when applying Selective Context to tasks like QA, as it might remove seemingly trivial details that are required for some queries. In summary, we observe that Selective Context is quite effective and useful in reducing context cost and can ensure decent performance when the reduction ratio is at or below 0.5. ### Data Sources We also compare how Selective Context performs on different data sources, as shown in Figure 5. The performance on ShareGPT is rather lower than on the others, but as they are used for different tasks, we cannot compare their absolute numbers directly. We are, however, able to see that Selective Context works well on arxiv data as long as the reduction ratio is at or below 0.35. A considerable performance decrease is found on arxiv data as the reduction ratio rises to 0.5, which shows that the optimal threshold for arxiv data might be between 0.35 and 0.5. For news data, we find that the steep performance decrease occurs between reduction ratios of 0.5 and 0.65, suggesting that more aggressive context reduction can be used on news data. For conversation tasks, the performance appears stable up to an 80% context reduction, suggesting that we could potentially have much longer conversations using Selective Context beyond the fixed context length of LLMs. ### Case Study To give a more straightforward impression of how Selective Context reduces context cost, we present several cases in the Appendix. ## 6 Conclusion In this paper, we introduced Selective Context to maximise the utility of fixed context length in LLMs. We demonstrated the effectiveness of our method by filtering out less informative content, providing a more compact and efficient context representation for LLMs without sacrificing their performance on various tasks. Our extensive evaluations on arxiv papers, BBC news articles, and conversation transcripts showed that Selective Context can significantly enhance the efficiency of LLMs, enabling them to handle long documents and extended conversations more effectively.
2305.05757
Absolutely continuous Furstenberg measures
In this paper we provide a sufficient condition for a Furstenberg measure generated by a finitely supported measure to be absolutely continuous. Using this, we give a very broad class of examples of absolutely continuous Furstenberg measures including examples generated by measures supported on two points.
Samuel Kittle
2023-05-09T20:29:35Z
http://arxiv.org/abs/2305.05757v3
# Absolutely continuous Furstenberg measures ###### Abstract. In this paper we provide a sufficient condition for a Furstenberg measure generated by a finitely supported measure to be absolutely continuous. Using this we give a very broad class of examples of absolutely continuous Furstenberg measures including examples generated by measures supported on two points. ###### Contents * 1 Introduction * 2 Order k detail * 3 Taylor expansion bound * 4 Disintegration argument * 5 Entropy gap for stopped random walk * 6 More results on regular conditional distributions * 7 Proof of the main theorem * 8 Examples * 9 Appendix ## 1. Introduction In this paper we find a sufficient condition for a Furstenberg measure to be absolutely continuous. Using this we are able to give a very broad class of examples of measures \(\mu\) on \(PSL_{2}(\mathbb{R})\) supported on finitely many points - including examples supported on only two points - such that the Furstenberg measure \(\nu\) on \(P_{1}(\mathbb{R})\) generated by \(\mu\) is absolutely continuous. We will be able to find examples in both the symmetric and the non-symmetric case. Given a measure \(\mu\) on \(PSL_{2}(\mathbb{R})\) we say that a measure \(\nu\) on \(P_{1}(\mathbb{R})\) is a Furstenberg measure generated by \(\mu\) if \(\nu\) is stationary under the action of \(\mu\). In other words we require \[\nu=\mu*\nu\] where \(*\) denotes convolution under the natural action of \(PSL_{2}(\mathbb{R})\) on \(P_{1}(\mathbb{R})\). It is a theorem of Furstenberg in [13] that if \(\mu\) is strongly irreducible and the group generated by the support of \(\mu\) is not compact then there is a unique Furstenberg measure generated by \(\mu\). The main motivation for studying Furstenberg measures is their fundamental role in the theory of random matrix products. See [4], [2]. Throughout this paper we will only be concerned with the case where \(\mu\) is supported on finitely many points. The study of Furstenberg measures fits into the general framework of stationary measures in fractal geometry. Given a finite collection of measurable maps \(S_{1},\ldots,S_{n}\) on a measurable space \(X\) and a probability vector \(p_{1},\ldots,p_{n}\), a probability measure \(\nu\) on \(X\) is stationary if \[\nu=\sum_{i=1}^{n}p_{i}\nu\circ S_{i}^{-1}.\] If \(S_{1},\ldots,S_{n}\) are elements of \(PSL_{2}(\mathbb{R})\) acting on \(X=P_{1}(\mathbb{R})\), we get the notion of Furstenberg measures. If \(S_{1},\ldots,S_{n}\) are similarities, affine maps, or conformal maps then \(\nu\) is called a self-similar, self-affine, or self-conformal measure respectively. See the recent surveys [16], [29], [11], the paper [17] and the references contained therein. Two fundamental questions about Furstenberg measures are what their dimensions are and when they are absolutely continuous. It is a classical result that if \(\mu\) is strongly irreducible with a finite exponential moment and the group generated by the support of \(\mu\) is not compact then there exist \(C,\delta>0\) such that if we let \(\nu\) be the Furstenberg measure generated by \(\mu\), let \(x\in P_{1}(\mathbb{R})\) and let \(r>0\) then \[\nu(B(x,r))\leq Cr^{\delta}\] where \(B(x,r)\) is the open ball in \(P_{1}(\mathbb{R})\) with centre \(x\) and radius \(r\). This means that under these conditions \(\nu\) has positive dimension. In [19] it was conjectured that if \(\mu\) is supported on finitely many points then its Furstenberg measure \(\nu\) is singular.
This conjecture was disproved by Barany, Pollicott, and Simon in [1], which gave a probabilistic construction of measures \(\mu\) on \(PSL_{2}(\mathbb{R})\) supported on finitely many points with absolutely continuous Furstenberg measures. In [5] Bourgain gives examples of discrete measures \(\mu\) on \(PSL_{2}(\mathbb{R})\) such that the Furstenberg measure generated by \(\mu\) is absolutely continuous and examples generating Furstenberg measures with \(n\)-times differentiable density functions. His approach was revisited by several authors to give new examples including Boutonnet, Ioana and Golsefidy [6], Lequen [26], and Kogler [24]. The aim of this paper is to give a more robust construction of such examples. In [18], building on the work of Hochman in [15], Hochman and Solomyak show that provided \(\mu\) satisfies the exponential separation condition, which we will define later, its Furstenberg measure \(\nu\) satisfies \[\dim\nu=\min\left\{1,\frac{h_{RW}}{2\chi}\right\}\] where \(h_{RW}\) is the random walk entropy and \(\chi\) is the Lyapunov exponent. In particular they show that if \(\mu\) satisfies the exponential separation condition and \[\frac{h_{RW}}{\chi}\geq 2\] then \(\nu\) has dimension \(1\). It is reasonable to expect that \(\nu\) is absolutely continuous when \(h_{RW}/\chi>2\). In this paper we make progress towards this aim by showing that if \(\mu\) satisfies the exponential separation condition then there is some \(C\) which depends on, amongst other things, the rate of the exponential separation such that if \[\frac{h_{RW}}{\chi}\geq C\] then \(\nu\) is absolutely continuous. The result we end up with is similar to the result of Varju in [30, Theorem 1] but applies to Furstenberg measures rather than Bernoulli convolutions. Our techniques are somewhat inspired by those of Hochman [15], Hochman and Solomyak [18], and Varju [30] but we introduce several crucial new ingredients including, amongst other things, the concept of "detail" from [22]. ### Main results To give the main result of this paper we will first need some definitions. **Definition 1.1**.: Let \(X\) be a random variable taking discrete values with probabilities \(p_{1},p_{2},\dots\). Then we define the _entropy_ of \(X\) to be \[H(X):=-\sum p_{i}\log p_{i}.\] Here and throughout this paper the log of a positive real number means the natural logarithm with base \(e\). **Definition 1.2**.: Given a measure \(\mu\) on \(PSL_{2}(\mathbb{R})\) we define the _random walk entropy_ of \(\mu\), which we will denote by \(h_{RW}\), by \[h_{RW}:=\lim_{n\to\infty}\frac{1}{n}H(\gamma_{1}\gamma_{2}\dots\gamma_{n})\] where \(\gamma_{1},\gamma_{2},\dots\) are i.i.d. samples from \(\mu\). **Definition 1.3**.: Let \(\mu\) be a probability measure on \(PSL_{2}(\mathbb{R})\). We say that \(\mu\) is strongly irreducible if there is no finite set \(x_{1},x_{2},\dots,x_{n}\in P_{1}(\mathbb{R})\) which is invariant when acted upon by the support of \(\mu\). **Definition 1.4**.: Given a measure \(\mu\) on \(PSL_{2}(\mathbb{R})\) we define the _Lyapunov exponent_ of \(\mu\) to be given by the almost sure limit \[\chi:=\lim_{n\to\infty}\frac{1}{n}\log\|\gamma_{1}\gamma_{2}\dots\gamma_{n}\|\] where \(\gamma_{1},\gamma_{2},\dots\) are i.i.d. samples from \(\mu\). It is a result of Furstenberg and Kesten [12] that if \(\mu\) is strongly irreducible and its support is not contained in a compact subgroup of \(PSL_{2}(\mathbb{R})\) then this limit exists almost surely and is positive.
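As a simple illustration of these definitions (an example added here for orientation, not taken from the paper): suppose \(\mu\) is the uniform measure on two elements \(A,B\in PSL_{2}(\mathbb{R})\) and assume that all \(2^{n}\) products \(\gamma_{1}\gamma_{2}\dots\gamma_{n}\) of length \(n\) are distinct, as happens for instance when \(A\) and \(B\) generate a free semigroup. Then \(\gamma_{1}\gamma_{2}\dots\gamma_{n}\) is uniformly distributed on \(2^{n}\) elements, so \[H(\gamma_{1}\gamma_{2}\dots\gamma_{n})=n\log 2\qquad\text{and}\qquad h_{RW}=\log 2.\] If instead \(A=B\), the product \(\gamma_{1}\gamma_{2}\dots\gamma_{n}=A^{n}\) is deterministic and \(h_{RW}=0\). The Lyapunov exponent \(\chi\), by contrast, depends on the matrix norms themselves and not only on the combinatorics of the products.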
Throughout this paper we will also fix some left invariant Riemann metric and let \(d\) be its distance function. We then have the following definition. **Definition 1.5**.: Let \(\mu\) be a discrete measure on \(PSL_{2}(\mathbb{R})\) supported on finitely many points. Let \[S_{n}:=\bigcup_{i=1}^{n}\operatorname{supp}(\mu^{*i}).\] Then we define the _splitting rate_ of \(\mu\), which we will denote by \(M_{\mu}\), by \[M_{\mu}:=\exp\left(\limsup_{x,y\in S_{n},x\neq y}-\frac{1}{n}\log d(x,y)\right).\] Note that all left invariant Riemann metrics are equivalent and therefore \(M_{\mu}\) does not depend on our choice of Riemann metric. We also need to define the following. **Definition 1.6**.: We define the bijective function \(\phi\) by \[\phi:P_{1}(\mathbb{R}) \to\mathbb{R}/\pi\mathbb{Z}\] \[\left[\binom{\cos x}{\sin x}\right] \mapsto x.\] We now define the following quantitative non-degeneracy condition. **Definition 1.7**.: Given some probability measure \(\mu\) on \(PSL_{2}(\mathbb{R})\) generating a Furstenberg measure \(\nu\) on \(P_{1}(\mathbb{R})\) and given some \(\alpha_{0},t>0\) we say that \(\mu\) is \(\alpha_{0},t\)_-non-degenerate_ if whenever \(a\in\mathbb{R}\) we have \[\nu(\phi^{-1}([a,a+t]+\pi\mathbb{Z}))\leq\alpha_{0}.\] We now have everything needed to state the main result of this paper. **Theorem 1.8**.: _For all \(R>1\), \(\alpha_{0}\in(0,\frac{1}{3})\) and \(t>0\) there is some \(C>0\) such that the following holds. Suppose that \(\mu\) is a probability measure on \(PSL_{2}(\mathbb{R})\) which is strongly irreducible, \(\alpha_{0},t\)- non-degenerate, and is such that on the support of \(\mu\) the operator norm is at most \(R\). Suppose further that the support of \(\mu\) is not contained in any compact subgroup of \(PSL_{2}(\mathbb{R})\). Suppose that \(M_{\mu}<\infty\) and_ \[\frac{h_{RW}}{\chi}>C\left(\max\left\{1,\log\frac{\log M_{\mu}}{h_{RW}} \right\}\right)^{2}. \tag{1}\] _Then the Furstenberg measure \(\nu\) on \(P_{1}(\mathbb{R})\) generated by \(\mu\) is absolutely continuous._ The constant \(C\) may be computed by following the proof. **Remark 1.9**.: The condition \(M_{\mu}<\infty\) is closely related to the exponential separation condition in [18]. Indeed in [18] Hochman and Solomyak require \[\limsup_{x,y\in\operatorname{supp}(\mu^{*n}),x\neq y}-\frac{1}{n}\log d(x,y)<\infty.\] ### Comparison with previous results and examples As we mentioned above, Bourgain [5] gave examples of absolutely continuous Furstenberg measures generated by measures on \(PSL_{2}(\mathbb{R})\) supported on finitely many points. His approach was revisited by several authors including Boutonnet, Ioana and Golsefidy [6], Lequen [26], and Kogler [24]. We quote the following result from [24]. **Theorem 1.10**.: _For every \(c_{1},c_{2}>0\) and \(m\in\mathbb{Z}_{>0}\) there is some positive \(\varepsilon_{0}=\varepsilon_{0}(m,c_{1},c_{2})\) such that the following holds. Suppose that \(\varepsilon\leq\varepsilon_{0}\) and let \(\mu\) be a symmetric probability measure on \(PSL_{2}(\mathbb{R})\) such that_ \[\mu^{*n}\left(B_{\varepsilon^{c_{1}n}}(H)\right)\leq\varepsilon^{c_{2}n} \tag{2}\] _for all proper closed connected subgroups \(H<PSL_{2}(\mathbb{R})\) and all sufficiently large \(n\). Suppose further that_ \[\operatorname{supp}\mu\subset B_{\varepsilon}(\operatorname{Id}). 
\tag{3}\] _Then the Furstenberg measure generated by \(\mu\) is absolutely continuous with \(m\)-times continuously differentiable density function._ Here \(B_{\varepsilon}(\cdot)\) denotes \(\varepsilon\)-neighbourhood of a set with respect to our left invariant Riemann metric. The conditions of this theorem are not directly comparable to ours but they are related. Condition (2) can be verified for \(H=\{\operatorname{Id}\}\) if \(M_{\mu}\leq\varepsilon^{-c_{1}}\) and \(\mu^{*n}(g)\leq\varepsilon^{c_{2}n}\) for all \(g\in PSL_{2}(\mathbb{R})\) for all sufficiently large \(n\). If that is the case then \(h_{RW}\geq c_{2}\log\varepsilon^{-1}\). When condition (3) holds we must have \(\chi\leq O(\varepsilon)\). Informally speaking the conditions (2) and (3) correspond to \(M_{\mu}\leq\varepsilon^{-c_{1}}\), \(h_{RW}\geq c_{2}\log\varepsilon^{-1}\), and \(\chi\leq O(\varepsilon)\). In comparison condition (1) in Theorem 1.8 is satisfied if \(M_{\mu}\leq\exp\left(\exp\left(c\varepsilon^{-1/2}\right)\right)\), \(h_{RW}\geq c\), and \(\chi\leq\varepsilon\) for some suitably small \(c>0\) and all sufficiently small \(\varepsilon>0\). It is important to note however, that Theorem 1.10 gives higher regularity for the Furstenberg measure than our result. To demonstrate the applicability of our result we give several examples of measures satisfying the conditions of Theorem 1.8. We will prove that these examples satisfy the conditions of Theorem 1.8 in Section 8. **Definition 1.11** (Height).: Let \(\alpha_{1}\) be algebraic with algebraic conjugates \(\alpha_{2},\alpha_{3},\dots,\alpha_{d}\). Suppose that the minimal polynomial for \(\alpha_{1}\) over \(\mathbb{Z}[X]\) has positive leading coefficient \(a_{0}\). Then we define the _height_ of \(\alpha_{1}\) by \[\mathcal{H}(\alpha_{1}):=\left(a_{0}\prod_{i=1}^{n}\max\{1,|\alpha_{i}|\} \right)^{1/d}.\] Note that the height of a rational number is the maximum of the absolute values of its numerator and denominator. **Proposition 1.12**.: _For every \(A>0\) there is some \(C>0\) such that the following is true. Let \(r>0\) be sufficiently small (depending on \(A\)) and let \(\mu\) be a finitely supported symmetric probability measure on \(PSL_{2}(\mathbb{R})\). Suppose that all of the entries of the matrices in the support of \(\mu\) are algebraic and that the support of \(\mu\) is not contained in any compact subgroup of \(PSL_{2}(\mathbb{R})\). Let \(M\) be the greatest of the heights of these entries and let \(k\) be the degree of the number field generated by these entries._ _Let \(U\) be a random variable taking values in \(PSL_{2}(\mathbb{R})\) such that \(\|U\|\leq r\) almost surely and the smallest eigenvalue of the covariance matrix of \(U\) is at least \(Ar^{2}\)._ _Suppose that for any virtually solvable group \(H<PSL_{2}(\mathbb{R})\) we have \(\mu(H)\leq 1/2\)._ _Suppose further that_ \[r\leq C\left(\log\left(k(1+\log M)\right)\right)^{-2}.\] _Then the Furstenberg measure generated by \(\mu\) is absolutely continuous._ In the above Proposition we can replace the requirement that \(\mu\) is symmetric with the requirement \(|\mathbb{E}[U]|<cr^{2}\) for any \(c>0\). We can also replace the requirement \(\mu(H)\leq 1/2\) with \(\mu(H)\leq 1-\varepsilon\) for any \(\varepsilon>0\). If we do this then we must allow \(C\) to also depend on \(c\) and \(\varepsilon\). Unlike examples based on the methods of Bourgain we do not require the support of \(\mu\) to be close to the identity. We may prove the following. 
**Proposition 1.13**.: _For all \(r>0\) there exists some finitely supported measure \(\mu\) on \(PSL_{2}(\mathbb{R})\) such that all of the elements in the support of \(\mu\) are conjugate to a diagonal matrix with largest entry at least \(r\) under conjugation by a rotation and the Furstenberg measure generated by \(\mu\) is absolutely continuous._ We also have the following family of examples supported on two elements. **Proposition 1.14**.: _For all sufficiently large \(n\in\mathbb{Z}_{>0}\) the following is true._ _Let \(A\in PSL_{2}(\mathbb{R})\) be defined by_ \[A:=\begin{pmatrix}\frac{n^{2}-1}{n^{2}+1}&-\frac{2n}{n^{2}+1}\\ \frac{2n}{n^{2}+1}&\frac{n^{2}-1}{n^{2}+1}\end{pmatrix}\] _and let \(B\in PSL_{2}(\mathbb{R})\) be defined by_ \[B:=\begin{pmatrix}\frac{n^{3}+1}{n^{3}}&0\\ 0&\frac{n^{3}}{n^{3}+1}\end{pmatrix}.\] _Let \(\mu=\frac{1}{2}\delta_{A}+\frac{1}{2}\delta_{B}\). Then the Furstenberg measure generated by \(\mu\) is absolutely continuous._ ### Outline of the proof We will now give an overview of the proof of Theorem 1.8. We adapt the concept of detail from [22] to work with measures on \(P_{1}(\mathbb{R})\) or equivalently \(\mathbb{R}/\pi\mathbb{Z}\) instead of measures on \(\mathbb{R}\). The detail of a measure \(\lambda\) around scale \(r\), denoted by \(s_{r}(\lambda)\), is a quantitative measure of how smooth a measure is at scale \(r\). We will define this in Definition 2.4. Recall the following result from [22]. **Lemma 1.15**.: _Suppose that \(\lambda\) is a probability measure on \(\mathbb{R}/\pi\mathbb{Z}\) and that there exists some constant \(\beta>1\) such that for all sufficiently small \(r>0\) we have_ \[s_{r}(\lambda)<\left(\log r^{-1}\right)^{-\beta}.\] _Then \(\lambda\) is absolutely continuous._ This is proven in [22] for measures on \(\mathbb{R}\). The same proof works for measures on \(\mathbb{R}/\pi\mathbb{Z}\). In Definition 2.7 we introduce a new quantity for measuring how smooth a measure is at some scale \(r>0\) which we will call order \(k\) detail around scale \(r\) and denote by \(s_{r}^{(k)}(\cdot)\). The definition is chosen such that trivially we have \[s_{r}^{(k)}(\lambda_{1}*\lambda_{2}*\cdots*\lambda_{k})\leq s_{r}(\lambda_{1}) s_{r}(\lambda_{2})\ldots s_{r}(\lambda_{k}). \tag{4}\] We can also bound detail in terms of order \(k\) detail using the following lemma. **Lemma 1.16**.: _Let \(k\) be an integer greater than \(1\) and suppose that \(\lambda\) is a probability measure on \(\mathbb{R}/\pi\mathbb{Z}\). Suppose that \(a,b>0\) and \(\alpha\in(0,1)\). Suppose that \(a<b\) and that for all \(r\in[a,b]\) we have_ \[s_{r}^{(k)}(\lambda)\leq\alpha.\] _Then we have_ \[s_{a\sqrt{k}}(\lambda)\leq\alpha k\left(\frac{2e}{\pi}\right)^{\frac{k-1}{2}} +k!\cdot ka^{2}b^{-2}.\] **Remark 1.17**.: Combining Lemma 1.16 with (4) we get a result that can be stated informally as follows. Let \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) be measures on \(\mathbb{R}/\pi\mathbb{Z}\). Assume that we have some bound on \(s_{r}(\lambda_{i})\) for all \(i\in[n]\) and all \(r\) in a suitably large range of scales around some scale \(r_{0}\). Then we can get a vastly improved bound for \(s_{r_{0}}(\lambda_{1}*\lambda_{2}*\cdots*\lambda_{n})\). This is essentially the same as Theorem 1.18 from our earlier paper [22]. However [22, Theorem 1.18] is not sufficient for the purposes of this paper. In what follows, we decompose the Furstenberg measure \(\nu\) as the convex combination of measures that can be approximated by the convolutions of measures. 
This allows us to estimate \(s_{r}^{(k)}(\nu)\) for arbitrary scales using (4) among other things. Unlike the setting of [22], we cannot estimate the detail of the convolution factors at a sufficiently large range of scales and so cannot apply [22, Theorem 1.18]. In fact, the decomposition we use to estimate \(s_{r}^{(k)}(\nu)\) depends on the exact value of \(r\). For this reason the notion of order \(k\) detail is a key innovation of this paper that is necessary for the proof.

We now need tools for bounding the detail of a measure at a given scale. One of them is the following.

**Lemma 1.18**.: _For every \(\alpha>0\) there exists some \(C>0\) such that the following is true. Let \(X_{1},X_{2},\ldots,X_{n}\) be independent random variables taking values in \(\mathbb{R}/\pi\mathbb{Z}\) such that \(|X_{i}|<\tilde{r}\) almost surely for some \(\tilde{r}>0\). Let \(\hat{r}>0\) be defined by \(\hat{r}^{2}=\sum_{i=1}^{n}\operatorname{Var}X_{i}\). Let \(r\in(\tilde{r},\hat{r})\). Suppose that_

\[\frac{\hat{r}}{r},\frac{r}{\tilde{r}}\geq C.\]

_Then_

\[s_{r}(X_{1}+X_{2}+\cdots+X_{n})\leq\alpha.\]

Here and throughout this paper when \(x\in\mathbb{R}/\pi\mathbb{Z}\) we use \(|x|\) to denote \(\min_{y\in x}|y|\).

The idea of the proof of Theorem 1.8 is to show that \(\nu\circ\phi^{-1}\) can be expressed as a convex combination of measures each of which can be approximated by the law of the sum of many small independent random variables with some control over the variances of these variables. One difficulty with this is that the measures which \(\nu\circ\phi^{-1}\) is a convex combination of are only approximately the laws of sums of small independent random variables of the required form. To deal with this we will need the following.

**Lemma 1.19**.: _There is some constant \(C>0\) such that the following is true. Let \(\lambda_{1}\) and \(\lambda_{2}\) be probability measures on \(\mathbb{R}/\pi\mathbb{Z}\) and let \(r>0\). Let \(k\in\mathbb{Z}_{>0}\). Then_

\[\left|s_{r}^{(k)}(\lambda_{1})-s_{r}^{(k)}(\lambda_{2})\right|\leq Cr^{-1}\mathcal{W}_{1}(\lambda_{1},\lambda_{2}).\]

Here \(\mathcal{W}_{1}(\cdot,\cdot)\) denotes Wasserstein distance.

Now we need to explain how we express \(\nu\circ\phi^{-1}\) as a convex combination of measures each of which is close to the law of a sum of small independent random variables. To do this we will need a chart for some neighbourhood of the identity in \(PSL_{2}(\mathbb{R})\). For this we use the logarithm from \(PSL_{2}(\mathbb{R})\) to its Lie algebra \(\mathfrak{psl}_{2}(\mathbb{R})\) defined in some open neighbourhood of the identity in \(PSL_{2}(\mathbb{R})\). We also fix some basis of \(\mathfrak{psl}_{2}(\mathbb{R})\) and use this to identify \(\mathfrak{psl}_{2}(\mathbb{R})\) with \(\mathbb{R}^{3}\) and fix some Euclidean inner product and corresponding norm on \(\mathfrak{psl}_{2}(\mathbb{R})\). Now we consider the expression

\[x=\gamma_{1}\gamma_{2}\dots\gamma_{T}b\]

where \(T\) is a stopping time, \(\gamma_{1},\gamma_{2},\dots\) are random variables taking values in \(PSL_{2}(\mathbb{R})\) which are i.i.d. samples from \(\mu\), and \(b\) is a sample from \(\nu\) independent of the \(\gamma_{i}\). Clearly \(x\) is a sample from \(\nu\).
We then construct some \(\sigma\)-algebra \(\mathcal{A}\) and write \[x=g_{1}\exp(u_{1})g_{2}\exp(u_{2})\dots g_{n}\exp(u_{n})b \tag{5}\] where all of the \(g_{i}\) are \(\mathcal{A}\) -measurable random variables taking values in \(PSL_{2}(\mathbb{R})\) and \(b\) is an \(\mathcal{A}\)-measurable random variable taking values in \(P_{1}(\mathbb{R})\). Furthermore the \(u_{i}\) are random variables taking values in \(\mathfrak{psl}_{\mathcal{L}}(\mathbb{R})\) in a small ball around the origin such that conditional on \(\mathcal{A}\) we can find a lower bound on their variance. We then Taylor expand to show that \(\phi(x)\) can be approximated in the required way after conditioning on \(\mathcal{A}\). To explain this statement more precisely we need to define the singular value decomposition. **Definition 1.20** (Singular value decomposition).: We can write each element \(g\) of \(PSL_{2}(\mathbb{R})\) with \(\|g\|>1\) in the form \[R_{\theta_{1}}A_{\lambda}R_{-\theta_{2}}\] where \[R_{x}:=\begin{pmatrix}\cos x&-\sin x\\ \sin x&\cos x\end{pmatrix}\] is the rotation by \(x\) and \[A_{\lambda}:=\begin{pmatrix}\lambda&0\\ 0&\lambda^{-1}\end{pmatrix}\] in exactly one way with \(\lambda\geq 1\) and \(\theta_{1},\theta_{2}\in\mathbb{R}/\pi\mathbb{Z}\). We will let \(b^{+}(g)=\phi^{-1}(\theta_{1})\) and \(b^{-}(g)=\phi^{-1}(\theta_{2}+\frac{\pi}{2})\). **Remark 1.21**.: Note that in this notation we have that if \(\|g\|\) is large then providing \(b\in P_{1}(\mathbb{R})\) is not too close to \(b^{-}(g)\) we have that \(gb\) is close to \(b^{+}(g)\). We will make this more precise in Lemma 3.9. We now let \(d\) denote the metric on \(P_{1}(\mathbb{R})\) induced by \(\phi\). In other words if \(x,y\in P_{1}(\mathbb{R})\) then \(d(x,y):=|\phi(x)-\phi(y)|\). Whenever we write \(d(\cdot,\cdot)\) it will be clear as to whether we are applying it to elements of \(PSL_{2}(\mathbb{R})\) or elements of \(P_{1}(\mathbb{R})\) and so clear if we are referring to the distance function of our left invariant Riemann metric on \(PSL_{2}(\mathbb{R})\) or to our metric on \(P_{1}(\mathbb{R})\). By carrying out some calculations about the singular value decomposition and applying Taylor's theorem we can prove the following. **Proposition 1.22**.: _Let \(c,t>0\). Then there exists \(C,\delta>0\) such that the following is true. Let \(n\in\mathbb{Z}_{>0}\) and let \(u^{(1)},u^{(2)},\dots,u^{(n)}\) be independent random variables taking values in \(\mathfrak{psl}_{2}(\mathbb{R})\). Let \(g_{1},\ldots,g_{n}\in PSL_{2}(\mathbb{R})\) and let \(b\in P_{1}(\mathbb{R})\). Suppose that for each \(i\in[n]\) we have_ \[\left\|g_{i}\right\|\geq C\] _and_ \[\left\|u^{(i)}\right\|\leq c\left\|g_{1}g_{2}\ldots g_{i}\right\|^{2}\tilde{r}.\] _Suppose that for each \(i\in[n-1]\) we have_ \[d(b^{+}(g_{i}),b^{-}(g_{i+1}))>t\] _and also that_ \[d(b,b^{-}(g_{n}))>t.\] _Suppose further that_ \[\left\|g_{1}g_{2}\ldots g_{n}\right\|^{2}\tilde{r}<\delta.\] _Let \(x\) be defined by_ \[x=g_{1}\exp(u^{(1)})\ldots g_{n}\exp(u^{(n)})b. 
\tag{6}\] _For \(i\in[n]\) let \(\zeta_{i}\in\mathfrak{psl}_{2}^{\ast}\) be the derivative defined by_ \[\zeta_{i}=D_{u}(\phi(g_{1}g_{2}\ldots g_{i}\exp(u)g_{i+1}g_{i+2}\ldots g_{n}b) )|_{u=0} \tag{7}\] _and let \(S\) be defined by_ \[S=\phi(g_{1}g_{2}\ldots g_{n+1})+\sum_{i=1}^{n}\zeta_{i}(u^{(i)}).\] _Then we have_ \[\mathcal{W}_{1}\left(\phi(x),S\right)\leq C^{n}\left\|g_{1}g_{2}\ldots g_{n} \right\|^{2}\tilde{r}^{2}.\] Informally this proposition states that under some conditions, when \(x\) is of the form (6) then \(\phi(x)\) is close to its first order Taylor expansion in the \(u^{(i)}\). In (7) \(D_{u}\) denotes the derivative of the map with respect to \(u\). We will later use this along with some results about the first derivatives of the exponential at \(0\), Lemma 1.18, and (4) to get a bound on the order \(k\) detail of the expression \(x\). We can then get an upper bound on the order \(k\) detail of some sample \(x\) from \(\nu\) conditional on some \(\sigma\)-algebra \(\mathcal{A}\). Due to the convexity of \(s_{r}^{(k)}(\cdot)\) we can then find an upper bound for \(s_{r}^{(k)}(\nu)\) by taking the expectation of this bound. We make the following abuse of notation. Given some \(b\in P_{1}(\mathbb{R})\) we will also write \(b\) to mean a non-zero vector which is a representative of \(b\). We will now outline some of the tools we will use to decompose \(x\) in the way described in (5). Let \(\gamma_{1},\gamma_{2},\ldots\) be i.i.d. samples from \(\mu\) and given \(n\in\mathbb{Z}_{>0}\) let \(q_{n}=\gamma_{1}\gamma_{2}\ldots\gamma_{n}\). Let \(b\in P_{1}(\mathbb{R})\), let \(t>0\) and define \[\tau_{t,b}:=\min\{n:\left\|\gamma_{n}^{T}\gamma_{n-1}^{T}\ldots\gamma_{1}^{T} b\right\|\geq t\left\|b\right\|\}.\] We will show that we can find some \(\sigma\)-algebra \(\hat{\mathcal{A}}\), some \(\hat{\mathcal{A}}\)-measurable random variable \(a\) taking values in \(PSL_{2}(\mathbb{R})\) and some random variable \(u\) taking values in a small ball around the origin in \(\mathfrak{psl}_{2}(\mathbb{R})\) such that we may write \(q_{\tau_{t,b}}=a\exp(u)\) and such that conditional on \(\hat{\mathcal{A}}\) we know that \(u\) has at least some variance. First we need to define some analogue of variance for random values taking values in \(PSL_{2}(\mathbb{R})\). For this we will make use of log. Specifically given some fixed \(g_{0}\in PSL_{2}(\mathbb{R})\) and some random variable \(g\) taking values in \(PSL_{2}(\mathbb{R})\) such that \(g_{0}^{-1}g\) is always in the domain of log we will define \(\operatorname{VAR}_{g_{0}}[g]\) by \[\operatorname{VAR}_{g_{0}}[g]:=\operatorname{Var}[\log(g_{0}^{-1}g)].\] By the the variance of a random variable taking values in \(\mathfrak{psl}_{2}(\mathbb{R})\), or any other finite dimensional Euclidean vector space, we mean the trace of its covariance matrix. We now define the quantity \(v(g;r)\) as follows. **Definition 1.23**.: Let \(g\) be a random variable taking values in \(PSL_{2}(\mathbb{R})\) and let \(r>0\). We then define \(v(g;r)\) to be the supremum of all \(v\geq 0\) such that we can find some \(\sigma\)-algebra \(\mathcal{A}\) and some \(\mathcal{A}\)- measurable random variable \(a\) taking values in \(PSL_{2}(\mathbb{R})\) such that \(|\log(a^{-1}g)|\leq r\) almost surely and \[\mathbb{E}\left[\operatorname{VAR}_{a}\left[g|\mathcal{A}\right]\right]\geq vr ^{2}.\] **Proposition 1.24**.: _There is some absolute constant \(c>0\) such that the following is true. 
Let \(\mu\) be a strongly irreducible probability measure on \(PSL_{2}(\mathbb{R})\) whose support is not contained in a compact subgroup of \(PSL_{2}(\mathbb{R})\) and let \(\hat{\nu}\) be some probability measure on \(P_{1}(\mathbb{R})\). Suppose that \(M_{\mu}<\infty\) and that \(h_{RW}/\chi\) is sufficiently large. Let \(M>M_{\mu}\) be chosen large enough that \(\log M\geq h_{RW}\). Suppose that \(t\) is sufficiently large (depending on \(\mu\) and \(M\)) and let \(\hat{m}=\left\lfloor\frac{\log M}{100\chi}\right\rfloor\)._

_Let \(\gamma_{1},\gamma_{2},\dots\), \(q_{n}\) and \(\tau_{t,v}\) be as earlier in this section._

_Then there exists some \(\tilde{r}_{1},\tilde{r}_{2},\dots,\tilde{r}_{\hat{m}}>0\) such that for each \(i\in[\hat{m}]\)_

\[\tilde{r}_{i}\in\left(t^{-\frac{\log M}{\chi}},t^{-\frac{h_{RW}}{10\chi}}\right)\]

_and for each \(i\in[\hat{m}-1]\)_

\[\tilde{r}_{i+1}\geq t^{3}\tilde{r}_{i}\]

_and such that_

\[\sum_{i=1}^{\hat{m}}\int_{P_{1}(\mathbb{R})}v(q_{\tau_{t,w}};\tilde{r}_{i})\,\hat{\nu}(dw)\geq c\left(\frac{h_{RW}}{\chi}\right)\left(\max\left\{1,\log\frac{\log M}{h_{RW}}\right\}\right)^{-1}.\]

The measure \(\hat{\nu}\) for which we apply Proposition 1.24 comes from the following result in renewal theory.

**Theorem 1.25**.: _Let \(\mu\) be a probability measure on \(PSL_{2}(\mathbb{R})\) which is strongly irreducible and has positive Lyapunov exponent. Then there is some probability measure \(\hat{\nu}\) on \(P_{1}(\mathbb{R})\) such that the following is true. Let \(\gamma_{1},\gamma_{2},\dots\) be i.i.d. samples from \(\mu\) and let \(q_{n}:=\gamma_{1}\gamma_{2}\dots\gamma_{n}\). Given \(b\in P_{1}(\mathbb{R})\) and \(t>0\) let \(\tau_{t,b}:=\inf\{n:\left\|q_{n}^{T}b\right\|\geq t\left\|b\right\|\}\). Then for all \(b\in P_{1}(\mathbb{R})\) the law of \(q_{\tau_{t,b}}^{T}b\) converges weakly to \(\hat{\nu}\) as \(t\to\infty\). Furthermore this convergence is uniform in \(b\)._

In [21, Theorem 1] it is proven that Theorem 1.25 holds without the condition that it is uniform in \(b\) in a much more general setting providing some conditions are satisfied. In [14, Section 4] it is shown that the conditions of [21, Theorem 1] are satisfied in the setting of Theorem 1.25. In Section 9, we will prove Theorem 1.25 by deducing uniform convergence from (not necessarily uniform) convergence. A formula for \(\hat{\nu}\) is given in [21, Theorem 1] though this will not be needed for the purposes of this paper.

We construct the decomposition (5) of a sample \(x\) from \(\nu\) in Section 7. See Proposition 7.1. The details are very technical so we only discuss in this outline how given a sufficiently small scale \(\tilde{r}\) one can construct a stopping time \(\tau\), and a \(\sigma\)-algebra \(\mathcal{A}\) such that

\[\gamma_{1}\gamma_{2}\dots\gamma_{\tau}=g\exp(u)\]

for some \(\mathcal{A}\)-measurable random variable \(g\) taking values in \(PSL_{2}(\mathbb{R})\) and some random \(u\) taking values in \(\mathfrak{psl}_{2}(\mathbb{R})\) such that \(\left\|u\right\|\leq\left\|g\right\|^{2}\tilde{r}\) almost surely and after conditioning on \(\mathcal{A}\) we have a good lower bound for \(\frac{\operatorname{Var}(u)}{\left\|g\right\|^{4}\tilde{r}^{2}}\).

Fix an arbitrary \(b\in P_{1}(\mathbb{R})\). Let \(s=(\tilde{r}/\tilde{r}_{1})^{1/2}/t\). By Theorem 1.25, there is a random variable \(w\) taking values in \(P_{1}(\mathbb{R})\) such that \(w^{\perp}\) has law \(\hat{\nu}\) and

\[d(b^{-}(\gamma_{1}\gamma_{2}\dots\gamma_{\tau_{s,b}}),w)\]

is small with high probability.
Next we apply Proposition 1.24 and obtain some \(\sigma\)-algebra \(\tilde{\mathcal{A}}\) such that

\[\gamma_{\tau_{s,b}+1}\gamma_{\tau_{s,b}+2}\dots\gamma_{\tau_{t,w^{\perp}}}=a\exp(u)\]

where \(a\) is an \(\tilde{\mathcal{A}}\)-measurable random element of \(PSL_{2}(\mathbb{R})\) and \(u\) is a random element of \(\mathfrak{psl}_{2}(\mathbb{R})\) with \(\left\|u\right\|\leq\tilde{r}_{1}\) and a good lower bound on \(\frac{\operatorname{Var}(u)}{\tilde{r}_{1}^{2}}\). We fix a small \(\tilde{r}\) and some \(t\) that is much smaller than \(\tilde{r}^{-1}\). Let \(\tilde{r}_{1}\) be one of the scales we get when we apply Proposition 1.24 with the measure from Theorem 1.25 in the role of \(\hat{\nu}\). Now we define \(g=\gamma_{1}\dots\gamma_{\tau_{s,b}}a\). Using the definition of \(w\) it is possible to show that \(\left\|g\right\|\) is approximately \(s\cdot t=(\tilde{r}/\tilde{r}_{1})^{1/2}\). Note that the scale \(\tilde{r}_{1}\) depends on the measure \(\hat{\nu}\) so the convergence in Theorem 1.25 is important. On the other hand it does not matter what this limit measure is.

The construction in Section 7 is significantly more elaborate. In particular, we will make use of all the scales \(\tilde{r}_{1},\dots,\tilde{r}_{\hat{m}}\) provided by Proposition 1.24. Moreover, we will need to apply it for a carefully chosen sequence of parameters in the role of \(t\).

Finally we discuss some ingredients of the proof of Proposition 1.24. We take the entropy of an absolutely continuous random variable taking values in \(PSL_{2}(\mathbb{R})\) to be the differential entropy with respect to a certain normalisation of the Haar measure and denote this by \(H(\cdot)\). We will define this in Section 4.3. We will then prove the following theorem.

**Theorem 1.26**.: _Let \(g,s_{1}\) and \(s_{2}\) be independent random variables taking values in \(PSL_{2}(\mathbb{R})\) such that \(s_{1}\) and \(s_{2}\) have finite entropy. Define \(k\) by_

\[k:=H(gs_{1})-H(s_{1})-H(gs_{2})+H(s_{2})\]

_and let \(c:=\frac{3}{2}\log\left(\frac{2}{3}\pi e\operatorname{VAR}_{\operatorname{Id}}[s_{1}]\right)-H(s_{1})\). Suppose that \(k>0\). Suppose further that \(s_{1}\) and \(s_{2}\) are supported on the ball of radius \(\varepsilon\) centred at the identity for some sufficiently small \(\varepsilon>0\). Suppose also that \(\operatorname{VAR}_{\operatorname{Id}}[s_{1}]\geq A\varepsilon^{2}\) for some positive constant \(A\). Then_

\[\mathbb{E}\left[\operatorname{VAR}_{gs_{2}}\left[g|gs_{2}\right]\right]\geq\frac{2}{3}(k-c-C\varepsilon)\operatorname{VAR}_{\operatorname{Id}}[s_{1}]\]

_where \(C\) is some positive constant depending only on \(A\)._

We apply this theorem when \(s_{1}\) and \(s_{2}\) are smoothing functions at appropriate scales with \(s_{2}\) being at a larger scale than \(s_{1}\). We take \(s_{1}\) and \(s_{2}\) to be compactly supported approximations of the image of the spherical normal distribution on \(\mathfrak{psl}_{2}(\mathbb{R})\) under \(\exp\). To do this we will find bounds on the differential entropy of various objects smoothed with these compactly supported approximations to the normal distribution at different scales. We then combine Theorem 1.26 and a bound for the entropy of the stopped random walk along with some calculations about the entropy and variance of the smoothing functions to prove Proposition 1.24.

### Notation

We will use Landau's \(O(\cdot)\) notation.
Given some positive quantity \(X\) we write \(O(X)\) to mean some quantity whose absolute value is bounded above by \(CX\) for some constant \(C\). If \(C\) is allowed to depend on some other parameters then these will be denoted by subscripts. Similarly we write \(o(X)\) to mean some quantity whose absolute value is bounded above by \(c(X)\) where \(c(X)\) is some positive value which tends to \(0\) as \(X\to\infty\). Again if \(c\) is allowed to depend on some other parameters then these will be denoted by subscripts. We also let \(\Theta(X)\) be some quantity which is bounded below by \(CX\) where \(C\) is some positive absolute constant. If \(C\) is allowed to depend on some other parameters then these will be denoted by subscripts. We write \(X\lesssim Y\) to mean that there is some constant \(C>0\) such that \(X\leq CY\). Similarly we write \(X\gtrsim Y\) to mean that there is some constant \(C>0\) such that \(X\geq CY\) and \(X\cong Y\) to mean \(X\lesssim Y\) and \(X\gtrsim Y\). If these constants are allowed to depend on some other parameters then these are denoted in subscripts.

## 2. Order k detail

In this section we will introduce a new method for measuring the smoothness of a measure at a given scale. We will call this the order \(k\) detail around a scale.

### Detail

First we will give the definition of detail given in [22] along with some basic properties. Since in this paper we are only concerned with measures on the one dimensional manifold \(P_{1}(\mathbb{R})\), which we identify with \(\mathbb{R}/\pi\mathbb{Z}\), we will only introduce this in the one dimensional case. In [22] this is extended to higher dimensions. Before defining this quantity we need to define the following.

**Definition 2.1**.: Given some \(y>0\) let \(\eta_{y}\) be the density function of the normal distribution with variance \(y\) and mean \(0\). Specifically let

\[\eta_{y}(x):=\frac{1}{\sqrt{2\pi y}}\exp\left(-\frac{x^{2}}{2y}\right).\]

Unlike in [22] we will primarily be concerned with measures on \(\mathbb{R}/\pi\mathbb{Z}\). For this reason we introduce the following.

**Definition 2.2**.: Given some \(y>0\) let \(\tilde{\eta}_{y}\) be the density of the pushforward of the normal distribution with mean \(0\) and variance \(y\) onto \(\mathbb{R}/\pi\mathbb{Z}\). In other words given \(x\in\mathbb{R}/\pi\mathbb{Z}\) let

\[\tilde{\eta}_{y}(x):=\sum_{u\in x}\eta_{y}(u).\]

We will also use the following notation.

**Definition 2.3**.: Given some \(y>0\) let \(\tilde{\eta}^{\prime}_{y}\) be defined by

\[\tilde{\eta}^{\prime}_{y}:=\frac{\partial}{\partial y}\tilde{\eta}_{y}.\]

Similarly we let \(\eta^{\prime}_{y}=\frac{\partial}{\partial y}\eta_{y}\). Given a signed measure \(\lambda\) on a set \(X\) with \(\sigma\)-algebra \(\mathcal{B}\) we let \(\left\|\lambda\right\|_{1}\) be defined by

\[\left\|\lambda\right\|_{1}:=\sup_{A\in\mathcal{B}}\lambda(A)-\lambda(X\backslash A).\]

We now define the following.

**Definition 2.4**.: Given a probability measure \(\lambda\) on \(\mathbb{R}/\pi\mathbb{Z}\) and some \(r>0\) we define the _detail of \(\lambda\) around scale \(r\)_ by

\[s_{r}(\lambda):=r^{2}\sqrt{\frac{\pi e}{2}}\left\|\lambda\ast\tilde{\eta}_{r^{2}}^{\prime}\right\|_{1}.\]

Similarly we define the detail of a probability measure on \(P_{1}(\mathbb{R})\) to be the detail of the pushforward measure under \(\phi\) and we define the detail of a random variable to be the detail of its law. The factor \(r^{2}\sqrt{\frac{\pi e}{2}}\) exists to ensure that \(s_{r}(\lambda)\in[0,1]\).
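For orientation, here is a quick worked example of this normalisation with the unwrapped Gaussian in place of a measure on \(\mathbb{R}/\pi\mathbb{Z}\); the same computation reappears in the proof of Lemma 1.18. For \(v>0\) we have \(\eta_{v}\ast\eta^{\prime}_{r^{2}}=\eta^{\prime}_{v+r^{2}}\) and a direct computation gives \(\left\|\eta^{\prime}_{y}\right\|_{1}=\sqrt{\frac{2}{\pi e}}\,y^{-1}\), so

\[r^{2}\sqrt{\frac{\pi e}{2}}\left\|\eta_{v}\ast\eta^{\prime}_{r^{2}}\right\|_{1}=\frac{r^{2}}{v+r^{2}},\]

which is close to \(1\) when \(v\ll r^{2}\) and close to \(0\) when \(v\gg r^{2}\), as one would expect of a quantity measuring roughness at scale \(r\).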
The smaller the value of detail around a scale the smoother the measure is at that scale. We will now state some basic facts about detail which are proven in [22].

**Lemma 2.5**.: _Let \(\lambda_{1}\) and \(\lambda_{2}\) be probability measures on \(\mathbb{R}/\pi\mathbb{Z}\). Then we have_

\[\left\|\lambda_{1}\ast\lambda_{2}\ast\tilde{\eta}_{y}^{\prime}\right\|_{1}\leq\left\|\lambda_{1}\ast\tilde{\eta}_{y}^{\prime}\right\|_{1}.\]

_In particular, for any probability measure \(\lambda\) on \(\mathbb{R}/\pi\mathbb{Z}\) and any \(y>0\) we have_

\[\left\|\lambda\ast\tilde{\eta}_{y}^{\prime}\right\|_{1}\leq\left\|\tilde{\eta}_{y}^{\prime}\right\|_{1}\leq\left\|\eta_{y}^{\prime}\right\|_{1}=\sqrt{\frac{2}{\pi e}}\,y^{-1}. \tag{8}\]

The above lemma is proven in [22, Lemma 2.3] for measures on \(\mathbb{R}\). The same proof works for measures on \(\mathbb{R}/\pi\mathbb{Z}\). We also note that (8) shows that \(s_{r}(\mu)\in[0,1]\) whenever \(\mu\) is a probability measure. We will also need the following fact.

**Lemma 2.6**.: _Let \(y>0\). Then we have_

\[\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\tilde{\eta}_{y}=\frac{\partial}{\partial y}\tilde{\eta}_{y}.\]

This is well known and follows from a trivial computation. We can now define the order \(k\) detail around a scale.

**Definition 2.7** (Order \(k\) detail around a scale).: Given a probability measure \(\lambda\) on \(\mathbb{R}/\pi\mathbb{Z}\) and some \(k\in\mathbb{Z}_{>0}\) we define the _order \(k\) detail of \(\lambda\) around scale \(r\)_, which we will denote by \(s_{r}^{(k)}(\lambda)\), by

\[s_{r}^{(k)}(\lambda):=r^{2k}\left(\frac{\pi e}{2}\right)^{k/2}\left\|\lambda\ast\left.\frac{\partial^{k}}{\partial y^{k}}\tilde{\eta}_{y}\right|_{y=kr^{2}}\right\|_{1}.\]

We also define the order \(k\) detail of a measure on \(P_{1}(\mathbb{R})\) to be the order \(k\) detail of the pushforward measure under \(\phi\) and define the order \(k\) detail of a random variable to be the order \(k\) detail of its law. It is worth noting that \(s_{r}^{(1)}(\cdot)=s_{r}(\cdot)\). We will now prove some basic properties of order \(k\) detail.

**Lemma 2.8**.: _Let \(\lambda_{1},\lambda_{2},\ldots,\lambda_{k}\) be probability measures on \(\mathbb{R}/\pi\mathbb{Z}\). Then we have_

\[s_{r}^{(k)}(\lambda_{1}\ast\lambda_{2}\ast\cdots\ast\lambda_{k})\leq s_{r}(\lambda_{1})s_{r}(\lambda_{2})\ldots s_{r}(\lambda_{k}).\]

Proof.: Note that by Lemma 2.6 and standard properties of convolution we have

\[\frac{\partial^{k}}{\partial y^{k}}\tilde{\eta}_{y}\Bigg{|}_{y=kr^{2}} =2^{-k}\frac{\partial^{2k}}{\partial x^{2k}}\tilde{\eta}_{kr^{2}}\]

\[=\underbrace{\left(\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\tilde{\eta}_{r^{2}}\right)*\left(\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\tilde{\eta}_{r^{2}}\right)*\cdots*\left(\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\tilde{\eta}_{r^{2}}\right)}_{k\text{ times}}\]

\[=\underbrace{\tilde{\eta}_{r^{2}}^{\prime}*\tilde{\eta}_{r^{2}}^{\prime}*\cdots*\tilde{\eta}_{r^{2}}^{\prime}}_{k\text{ times}}\]

and therefore

\[\lambda_{1}*\lambda_{2}*\cdots*\lambda_{k}*\left.\frac{\partial^{k}}{\partial y^{k}}\tilde{\eta}_{y}\right|_{y=kr^{2}}=\lambda_{1}*\tilde{\eta}_{r^{2}}^{\prime}*\lambda_{2}*\tilde{\eta}_{r^{2}}^{\prime}*\cdots*\lambda_{k}*\tilde{\eta}_{r^{2}}^{\prime}.\]

This means

\[\left\|\lambda_{1}*\lambda_{2}*\cdots*\lambda_{k}*\left.\frac{\partial^{k}}{\partial y^{k}}\tilde{\eta}_{y}\right|_{y=kr^{2}}\right\|_{1}\leq\left\|\lambda_{1}*\tilde{\eta}_{r^{2}}^{\prime}\right\|_{1}\cdot\left\|\lambda_{2}*\tilde{\eta}_{r^{2}}^{\prime}\right\|_{1}\cdot\cdots\cdot\left\|\lambda_{k}*\tilde{\eta}_{r^{2}}^{\prime}\right\|_{1}.\]

The result follows. 

We also need the following corollary.

**Corollary 2.9**.: _Suppose that \(\lambda\) is a probability measure on \(\mathbb{R}/\pi\mathbb{Z}\).
Then_ \[s_{r}^{(k)}(\lambda)\leq 1.\] Proof.: This is immediate by letting all but one of the measures in Lemma 2.8 be a delta function. There is no reason to assume that the bound in Corollary 2.9 is optimal for any \(k\geq 2\). Indeed it is fairly simple to show that it is not. However the trivial upper bound of \(1\) will still prove useful. ### Bounding detail using order k detail The purpose of this subsection is to prove Lemma 1.16. For this we first need the following result. **Lemma 2.10**.: _Let \(k\) be an integer greater than \(1\) and suppose that \(\lambda\) is a probability measure on \(\mathbb{R}/\pi\mathbb{Z}\). Suppose that \(a,b,c>0\) and \(\alpha\in(0,1)\). Suppose that \(a<b\) and that for all \(r\in[a,b]\) we have_ \[s_{r}^{(k)}(\lambda)\leq\alpha+cr^{2k}. \tag{9}\] _Then for all \(r\in\left[a\sqrt{\frac{k}{k-1}},b\sqrt{\frac{k}{k-1}}\right]\) we have_ \[s_{r}^{(k-1)}(\lambda)\leq\frac{k}{k-1}\sqrt{\frac{2e}{\pi}}\alpha+\left(b^{-2 k+2}+kb^{2}c\right)r^{2(k-1)}.\] Proof.: Recall that \[s_{r}^{(k)}(\lambda)=r^{2k}\left(\frac{\pi e}{2}\right)^{\frac{k}{2}}\left\| \lambda*\left.\frac{\partial^{k}}{\partial y^{k}}\tilde{\eta}_{y}\right|_{y=kr^ {2}}\right\|_{1}.\] This means by (9) that when \(y=kr^{2}\) we have \[\left\|\lambda*\frac{\partial^{k}}{\partial y^{k}}\tilde{\eta}_{y }\right\|_{1} \leq\alpha r^{-2k}\left(\frac{\pi e}{2}\right)^{-\frac{k}{2}}+c \left(\frac{\pi e}{2}\right)^{-\frac{k}{2}}\] \[=\alpha y^{-k}k^{k}\left(\frac{\pi e}{2}\right)^{-\frac{k}{2}}+c \left(\frac{\pi e}{2}\right)^{-\frac{k}{2}}\] for all \(y\in[ka^{2},kb^{2}]\). This means that for \(y\in[ka^{2},kb^{2}]\) we have \[\left\|\lambda*\frac{\partial^{k-1}}{\partial y^{k-1}}\tilde{ \eta}_{y}\right\|_{1}\] \[\leq\left\|\lambda*\left.\frac{\partial^{k-1}}{\partial u^{k-1}} \tilde{\eta}_{u}\right|_{u=kb^{2}}\right\|_{1}+\int_{y}^{kb^{2}}\left\| \lambda*\frac{\partial^{k}}{\partial u^{k}}\tilde{\eta}_{u}\right\|_{1}\,du\] \[\leq\left\|\frac{\partial^{k-1}}{\partial u^{k-1}}\tilde{\eta}_{u }\right|_{u=kb^{2}}\right\|_{1}+\int_{y}^{kb^{2}}\alpha u^{-k}k^{k}\left( \frac{\pi e}{2}\right)^{-\frac{k}{2}}+c\left(\frac{\pi e}{2}\right)^{-\frac{k }{2}}\,du\] \[\leq\left(\frac{kb^{2}}{k-1}\right)^{-k+1}\left(\frac{\pi e}{2} \right)^{-(k-1)/2}+\alpha\frac{y^{-k+1}}{k-1}k^{k}\left(\frac{\pi e}{2}\right) ^{-\frac{k}{2}}+kb^{2}c\left(\frac{\pi e}{2}\right)^{-\frac{k}{2}} \tag{10}\] where in (10) we bound \(\left\|\frac{\partial^{k-1}}{\partial u^{k-1}}\tilde{\eta}_{u}\right|_{u=kb^{ 2}}\right\|_{1}\) using the fact that order \(k-1\) detail is at most one, we bound \(\int_{y}^{kb^{2}}\alpha u^{-k}k^{k}\left(\frac{\pi e}{2}\right)^{-\frac{k}{2}}\,du\) by \(\int_{y}^{\infty}\alpha u^{-k}k^{k}\left(\frac{\pi e}{2}\right)^{-\frac{k}{2} }\,du\) and bound \(\int_{y}^{kb^{2}}c\left(\frac{\pi e}{2}\right)^{-\frac{k}{2}}\,du\) by \(\int_{0}^{kb^{2}}c\left(\frac{\pi e}{2}\right)^{-\frac{k}{2}}\,du\). 
Noting that

\[\left(\frac{k}{k-1}\right)^{-k+1}<1\]

and

\[\left(\frac{\pi e}{2}\right)^{-\frac{1}{2}}<1\]

we get

\[\left\|\lambda*\frac{\partial^{k-1}}{\partial y^{k-1}}\tilde{\eta}_{y}\right\|_{1}\leq\alpha\frac{y^{-k+1}}{k-1}k^{k}\left(\frac{\pi e}{2}\right)^{-\frac{k}{2}}+\left(b^{-2k+2}+kb^{2}c\right)\left(\frac{\pi e}{2}\right)^{-\frac{k-1}{2}}.\]

Substituting in the definition of order \(k\) detail gives

\[s_{r}^{(k-1)}(\lambda) =r^{2(k-1)}\left(\frac{\pi e}{2}\right)^{\frac{k-1}{2}}\left\|\lambda*\frac{\partial^{k-1}}{\partial y^{k-1}}\tilde{\eta}_{y}\right|_{y=(k-1)r^{2}}\right\|_{1}\]

\[\leq r^{2(k-1)}\left(\frac{\pi e}{2}\right)^{-\frac{1}{2}}\alpha\frac{((k-1)r^{2})^{-k+1}}{k-1}k^{k}+\left(b^{-2k+2}+kb^{2}c\right)r^{2(k-1)}\]

and so we have

\[s_{r}^{(k-1)}(\lambda)\leq\alpha\sqrt{\frac{2}{\pi e}}\left(1+\frac{1}{k-1}\right)^{k}+\left(b^{-2k+2}+kb^{2}c\right)r^{2(k-1)}\]

for all \(r\in\left[a\sqrt{\frac{k}{k-1}},b\sqrt{\frac{k}{k-1}}\right]\). Noting that \(\left(1+\frac{1}{k-1}\right)^{k}\leq\frac{k}{k-1}e\) gives the required result. 

We apply this inductively to prove Lemma 1.16.

Proof of Lemma 1.16.: Using Lemma 2.10 we will prove by induction for \(j=k,k-1,\ldots,1\) that for all \(r\in\left[a\sqrt{\frac{k}{j}},b\sqrt{\frac{k}{j}}\right]\) we have

\[s_{r}^{(j)}(\lambda)\leq\alpha\frac{k}{j}\left(\frac{2e}{\pi}\right)^{\frac{k-j}{2}}+\frac{k!}{j!}b^{-2j}r^{2j}.\]

The case \(j=k\) follows by the conditions of the lemma. Suppose that for all \(r\in\left[a\sqrt{\frac{k}{j+1}},b\sqrt{\frac{k}{j+1}}\right]\) we have

\[s_{r}^{(j+1)}(\lambda)\leq\alpha\frac{k}{j+1}\left(\frac{2e}{\pi}\right)^{\frac{k-j-1}{2}}+\frac{k!}{(j+1)!}b^{-2j-2}r^{2(j+1)}.\]

Then by Lemma 2.10 for all \(r>0\) such that \(r\in\left[a\sqrt{\frac{k}{j}},b\sqrt{\frac{k}{j}}\right]\) we have

\[s_{r}^{(j)}(\lambda) \leq\alpha\frac{k}{j}\left(\frac{2e}{\pi}\right)^{\frac{k-j}{2}}+\left(b^{-2j}+jb^{2}\left(\frac{k!}{(j+1)!}b^{-2j-2}\right)\right)r^{2j}\]

\[\leq\alpha\frac{k}{j}\left(\frac{2e}{\pi}\right)^{\frac{k-j}{2}}+\left(\frac{k!}{(j+1)!}b^{-2j}+jb^{2}\left(\frac{k!}{(j+1)!}b^{-2j-2}\right)\right)r^{2j}\]

\[=\alpha\frac{k}{j}\left(\frac{2e}{\pi}\right)^{\frac{k-j}{2}}+(j+1)\frac{k!}{(j+1)!}b^{-2j}r^{2j}\]

\[=\alpha\frac{k}{j}\left(\frac{2e}{\pi}\right)^{\frac{k-j}{2}}+\frac{k!}{j!}b^{-2j}r^{2j}\]

as required. Lemma 1.16 follows easily from the \(j=1\) case. 

### Wasserstein distance bound

In this subsection we will bound the difference in order \(k\) detail between two measures in terms of the Wasserstein distance between those two measures. Specifically we will prove Lemma 1.19. First we need to define Wasserstein distance.

**Definition 2.11** (Coupling).: Given two probability measures \(\lambda_{1}\) and \(\lambda_{2}\) on a set \(X\) we say that a _coupling_ between \(\lambda_{1}\) and \(\lambda_{2}\) is a measure \(\gamma\) on \(X\times X\) such that \(\gamma(\cdot\times X)=\lambda_{1}(\cdot)\) and \(\gamma(X\times\cdot)=\lambda_{2}(\cdot)\).

**Definition 2.12** (Wasserstein distance).: Given two probability measures \(\lambda_{1}\) and \(\lambda_{2}\) on \(\mathbb{R}/\pi\mathbb{Z}\) the Wasserstein distance between \(\lambda_{1}\) and \(\lambda_{2}\), which we will denote by \(\mathcal{W}_{1}(\lambda_{1},\lambda_{2})\), is given by

\[\mathcal{W}_{1}(\lambda_{1},\lambda_{2}):=\inf_{\gamma\in\Gamma}\int_{(\mathbb{R}/\pi\mathbb{Z})^{2}}|x-y|\,\gamma(dx,dy)\]

where \(\Gamma\) is the set of couplings between \(\lambda_{1}\) and \(\lambda_{2}\). We can now prove Lemma 1.19.
Proof of Lemma 1.19.: Let \(X\) and \(Y\) be random variables with laws \(\lambda_{1}\) and \(\lambda_{2}\) respectively. Then we have \[(\lambda_{1}-\lambda_{2})*\left.\frac{\partial^{k}}{\partial y^{k}}\tilde{ \eta}_{y}\right|_{y=kr^{2}}(v)=\mathbb{E}\left[\left.\frac{\partial^{k}}{ \partial y^{k}}\tilde{\eta}_{y}\right|_{y=kr^{2}}(v-X)-\left.\frac{\partial^{ k}}{\partial y^{k}}\tilde{\eta}_{y}\right|_{y=kr^{2}}(v-Y)\right].\] In particular \[\left|(\lambda_{1}-\lambda_{2})*\left.\frac{\partial^{k}}{\partial y^{k}} \tilde{\eta}_{y}\right|_{y=kr^{2}}(v)\right|\leq\mathbb{E}\left|\left.\frac{ \partial^{k}}{\partial y^{k}}\tilde{\eta}_{y}\right|_{y=kr^{2}}(v-X)-\left. \frac{\partial^{k}}{\partial y^{k}}\tilde{\eta}_{y}\right|_{y=kr^{2}}(v-Y) \right|.\] We note that \[\left|\left.\frac{\partial^{k}}{\partial y^{k}}\tilde{\eta}_{y}\right|_{y=kr^ {2}}(v-X)-\left.\frac{\partial}{\partial y}\tilde{\eta}_{y}\right|_{y=kr^{2}} (v-Y)\right|\leq\int_{X}^{Y}\left|\left.\frac{\partial^{k+1}}{\partial x \partial y^{k}}\tilde{\eta}_{y}\right|_{y=kr^{2}}(v-u)\right|\,|du|\] where \[\int_{x}^{y}\cdot|du|\] is understood to be the integral along the shortest path between \(x\) and \(y\). This means that \[\left\|(\lambda_{1}-\lambda_{2})*\left.\frac{\partial^{k}}{ \partial y^{k}}\tilde{\eta}_{y}\right|_{y=kr^{2}}\right\|_{1} \leq\int_{\mathbb{R}/\pi\mathbb{Z}}\mathbb{E}\left[\left.\int_{X} ^{Y}\left|\left.\frac{\partial^{k+1}}{\partial x\partial y^{k}}\tilde{\eta}_ {y}\right|_{y=kr^{2}}(v-u)\right|\,|du|\right]\,dv\] \[=\mathbb{E}\left[\left.\int_{X}^{Y}\int_{\mathbb{R}/\pi\mathbb{Z }}\left|\left.\frac{\partial^{k+1}}{\partial x\partial y^{k}}\tilde{\eta}_{y} \right|_{y=kr^{2}}(v-u)\right|\,dv\,|du|\right]\] \[=\left\|\left.\frac{\partial^{k+1}}{\partial x\partial y^{k}} \tilde{\eta}_{y}\right|_{y=kr^{2}}\right\|_{1}\mathbb{E}|X-Y|.\] We now bound \(\left\|\left.\frac{\partial^{k+1}}{\partial x\partial y^{k}}\tilde{\eta}_{y} \right|_{y=kr^{2}}\right\|_{1}\). To do this note that \[\left\|\left.\frac{\partial^{k+1}}{\partial x\partial y^{k}}\tilde{\eta}_{y} \right|_{y=kr^{2}}\right\|_{1}\leq\left\|\left.\frac{\partial^{k+1}}{\partial x \partial y^{k}}\eta_{y}\right|_{y=kr^{2}}\right\|_{1}.\] By using Lemma 2.6 in the same way as in the proof of Lemma 2.8 we get \[\frac{\partial^{k+1}}{\partial x\partial y^{k}}\eta_{y}\bigg{|}_{y=kr^{2}}=\left. \frac{\partial}{\partial x}\eta_{y}\right|_{y=r^{2}}\ast\underbrace{\left. 
\frac{\partial}{\partial y}\eta_{y}\right|_{y=r^{2}}\ast\left.\frac{\partial }{\partial y}\eta_{y}\right|_{y=r^{2}}\ast\cdots\ast\left.\frac{\partial}{ \partial y}\eta_{y}\right|_{y=r^{2}}}_{k\text{ times}}\]

and so

\[\left\|\frac{\partial^{k+1}}{\partial x\partial y^{k}}\eta_{y}\right|_{y=kr^{2}}\right\|_{1}\leq\left\|\frac{\partial}{\partial x}\eta_{r^{2}}\right\|_{1}\cdot\left\|\eta_{r^{2}}^{\prime}\right\|_{1}^{k}.\]

Note that trivially there is some constant \(C>0\) such that

\[\left\|\frac{\partial}{\partial x}\eta_{r^{2}}\right\|_{1}=Cr^{-1}.\]

From Lemma 2.5 we have

\[\left\|\frac{\partial}{\partial y}\eta_{y}\right|_{y=r^{2}}\right\|_{1}=r^{-2}\sqrt{\frac{2}{\pi e}}\]

meaning

\[\left\|\frac{\partial^{k+1}}{\partial x\partial y^{k}}\eta_{y}\right|_{y=kr^{2}}\right\|_{1}\leq Cr^{-2k-1}\left(\frac{\pi e}{2}\right)^{-\frac{k}{2}}.\]

Therefore

\[r^{2k}\left(\frac{\pi e}{2}\right)^{\frac{k}{2}}\left\|\frac{\partial^{k+1}}{\partial x\partial y^{k}}\eta_{y}\right|_{y=kr^{2}}\right\|_{1}\leq Cr^{-1}.\]

Choosing a coupling for \(X\) and \(Y\) which minimizes \(\mathbb{E}|X-Y|\) gives the required result. 

### Small random variables bound

In this subsection we will prove Lemma 1.18. Recall that this gives a bound for the detail of the sum of many independent random variables each of which is contained in a small interval containing \(0\) and has at least some variance. To prove this we will need the following lemma.

**Lemma 2.13**.: _Let \(X_{1},X_{2},\ldots,X_{n}\) be independent random variables taking values in \(\mathbb{R}\) with mean \(0\) and for each \(i\in[n]\) let \(\mathbb{E}[X_{i}^{2}]=\omega_{i}^{2}\) and \(\mathbb{E}[|X_{i}|^{3}]=\gamma_{i}^{3}<\infty\). Let \(\omega^{2}=\sum_{i=1}^{n}\omega_{i}^{2}\) and let \(S=X_{1}+\cdots+X_{n}\). Then_

\[\mathcal{W}_{1}(S,\eta_{\omega^{2}})\lesssim\frac{\sum_{i=1}^{n}\gamma_{i}^{3}}{\sum_{i=1}^{n}\omega_{i}^{2}}.\]

Proof.: A proof of this result may be found in [10]. 

We are now ready to prove Lemma 1.18.

Proof of Lemma 1.18.: We will prove this in the case where the \(X_{i}\) take values in \(\mathbb{R}\). The case where they take values in \(\mathbb{R}/\pi\mathbb{Z}\) follows trivially from this case. For \(i=1,\ldots,n\) let \(X^{\prime}_{i}=X_{i}-\mathbb{E}[X_{i}]\) and let \(S^{\prime}=\sum_{i=1}^{n}X^{\prime}_{i}\). Note that \(s_{r}(S)=s_{r}(S^{\prime})\). Let \(\mathbb{E}[|X^{\prime}_{i}|^{2}]=\omega_{i}^{2}\) and \(\mathbb{E}[|X^{\prime}_{i}|^{3}]=\gamma_{i}^{3}\). Note that \(\operatorname{Var}X_{i}=\omega_{i}^{2}\) and so \(\hat{r}^{2}=\sum_{i=1}^{n}\omega_{i}^{2}\). Note that almost surely \(|X^{\prime}_{i}|\leq 2\tilde{r}\). This means that \(\gamma_{i}^{3}\leq 2\tilde{r}\omega_{i}^{2}\). Therefore by Lemma 2.13 we have

\[\mathcal{W}_{1}\left(S^{\prime},\eta_{\hat{r}^{2}}\right)\leq O(\tilde{r}).\]

We also compute

\[s_{r}(\eta_{\hat{r}^{2}}) =\frac{\left\|\eta^{\prime}_{r^{2}+\hat{r}^{2}}\right\|_{1}}{\left\|\eta^{\prime}_{r^{2}}\right\|_{1}}\]

\[=\frac{r^{2}}{r^{2}+\hat{r}^{2}}\]

and so noting that \(s_{r}(\cdot)=s_{r}^{(1)}(\cdot)\) we have by Lemma 1.19 that

\[s_{r}(S) =s_{r}(S^{\prime})\]

\[\leq O\left(\frac{\tilde{r}}{r}\right)+\frac{r^{2}}{r^{2}+\hat{r}^{2}}.\]

This gives the required result. 

## 3. Taylor expansion bound

In this section we will prove Proposition 1.22. We also do some computations on the derivatives \(\zeta_{i}\in\mathfrak{psl}_{2}^{\ast}\) from Proposition 1.22 which will later enable us to give bounds on the order \(k\) detail of \(x\) from the proposition. First we will give more detail on our notation.
Given normed vector spaces \(V\) and \(W\), some vector \(v\in V\), and a function \(f:V\to W\) which is differentiable at \(v\) we write \(D_{v}f(v)\) for the linear map \(V\to W\) which is the derivative of \(f\) at \(v\). Similarly if \(f\) is \(n\) times differentiable at \(v\) we write \(D_{v}^{n}f(v)\) for the \(n\)-multi-linear map \(V^{n}\to W\) which is the \(n\)th derivative of \(f\) at \(v\). Now given some normed vector space \(V\), some vector \(v\in V\), and a function \(f:V\to\mathbb{R}/\pi\mathbb{Z}\) which is \(n\) times differentiable at \(v\) we can find some open set \(U\subset V\) containing \(v\) such that there exists some function \(\tilde{f}:U\to\mathbb{R}\) which is \(n\) times differentiable at \(v\) and such that for all \(u\in U\) we have \[f(u)=\tilde{f}(u)+\pi\mathbb{Z}.\] In this case we take \(Df_{v}^{n}(v)\) to be \(D_{v}^{n}\tilde{f}(v)\). Clearly this does not depend on our choice of \(U\) or \(f\). Similarly given a sufficiently regular function \(f:\mathbb{R}/\pi\mathbb{Z}\to V\) we take \(D_{v}f(v)\) to be \(D_{v}\tilde{f}(v)\) where \(\tilde{f}:\mathbb{R}\to V\) is defined by \[\tilde{f}(x)=f(x+\pi\mathbb{Z}).\] As well as proving Proposition 1.22 we also derive some bounds on the size of various first derivatives. **Definition 3.1**.: Given some \(b\in P_{1}(\mathbb{R})\) we let \(\varrho_{b}\in\mathfrak{psl}_{2}^{*}\) be defined by \[\varrho_{b}=D_{u}\phi(\exp(u)b)|_{u=0}\] **Proposition 3.2**.: _For all \(t>0\) there is some \(\delta>0\) such that the following is true. Let \(v\in\mathfrak{psl}_{2}(\mathbb{R})\) be a unit vector. Then there exists some \(a_{1},a_{2}\in\mathbb{R}\) such that if_ \[b\in P_{1}(\mathbb{R})\backslash\phi^{-1}((a_{1},a_{1}+t)\cup(a_{2},a_{2}+t))\] _then_ \[|\varrho_{b}(v)|\geq\delta.\] _Additionally we may assume that the \(a_{1}\) and \(a_{2}\) are measurable functions of \(v\)._ Motivated by this we have the following definition. **Definition 3.3**.: Let \(t\), \(v\), \(a_{1}\), and \(a_{2}\) be as in Proposition 3.2 and let \(\varepsilon>0\). Then we define \(U_{t}(v)\) and \(U_{t,\varepsilon}(v)\) by \[U_{t}(v):=P_{1}(\mathbb{R})\backslash\phi^{-1}((a_{1},a_{1}+t)\cup(a_{2},a_{2 }+t))\] and \[U_{t,\varepsilon}(v):=P_{1}(\mathbb{R})\backslash\phi^{-1}((a_{1}-\varepsilon,a_{1}+t+\varepsilon)\cup(a_{2}-\varepsilon,a_{2}+t+\varepsilon)).\] We also have the following. **Definition 3.4**.: Let \(X\) be a random variable taking values in some vector space \(V\). We say that \(u\in V\) is a _first principal component_ of \(X\) if it is an eigenvector of its covariance matrix with maximal eigenvalue. **Definition 3.5**.: Given a random variable \(X\) taking values in \(\mathfrak{psl}_{2}(\mathbb{R})\), \(t>0\), and \(\varepsilon>0\) we let \[U_{t}(X)=\cup_{v\in P}U_{t}(v)\] and \[U_{t,\varepsilon}(X)=\cup_{v\in P}U_{t,\varepsilon}(v)\] where \(P\) is the set of first principal components of \(X\). Similarly if \(\mu\) is a probability measure which is the law of a random variable \(X\) then we define \(U_{t}(\mu):=U_{t}(X)\) and \(U_{t,\varepsilon}(\mu):=U_{t,\varepsilon}(X)\). From this we may deduce the following. **Proposition 3.6**.: _For all \(t>0\) there is some \(\delta>0\) such that the following is true. Suppose that \(v\) is a random variable taking values in \(\mathfrak{psl}_{2}(\mathbb{R})\) and that \(b\in P_{1}(\mathbb{R})\). 
Suppose that_ \[b\in U_{t}(v).\] _Then_ \[\operatorname{Var}\rho_{b}(v)\geq\delta\operatorname{Var}v.\] Here by the variance of a random variable taking values in \(\mathfrak{psl}_{z}(\mathbb{R})\) we mean the trace of its covariance matrix. We will prove Propositions 3.2 and 3.6 in Section 3.3. ### Singular value decomposition The purpose of this subsection is to prove the following proposition and a simple corollary of it. **Proposition 3.7**.: _Given any \(t>0\) and \(\varepsilon>0\) there exists some constants \(C,\delta>0\) such that the following is true. Suppose that \(n\in\mathbb{Z}_{>0}\), \(g_{1},\ldots,g_{n}\in PSL_{2}(\mathbb{R})\), for \(i=1,\ldots,n\) we have_ \[\|g_{i}\|\geq C\] _and for \(i=1,\ldots,n-1\)_ \[d(b^{-}(g_{i}),b^{+}(g_{i+1}))>t.\] _Suppose also that there are \(u_{1},u_{2},\ldots,u_{n-1}\in\mathfrak{psl}_{z}(\mathbb{R})\) such that for \(i=1,2,\ldots,n-1\) we have_ \[\|u_{i}\|<\delta.\] _Then if we let \(g^{\prime}=g_{1}\exp(u_{1})g_{2}\exp(u_{2})\ldots g_{n}\) we have_ \[\|g^{\prime}\|\geq C^{-(n-1)}\left\|g_{1}\right\|\cdot\|g_{2}\|\cdot\cdots \cdot\|g_{n}\| \tag{11}\] _and_ \[d(b^{+}(g^{\prime}),b^{+}(g_{1}))<\varepsilon \tag{12}\] _and_ \[d(b^{-}(g^{\prime}),b^{-}(g_{n}))<\varepsilon. \tag{13}\] **Corollary 3.8**.: _Given any \(t>0\) and \(\varepsilon>0\) there exists some constants \(C,\delta>0\) such that the following is true. Suppose that \(n\in\mathbb{Z}_{>0}\), \(g_{1},\ldots,g_{n}\in PSL_{2}(\mathbb{R})\), \(b\in P_{1}(\mathbb{R})\), for \(i=1,\ldots,n\) we have_ \[\|g_{i}\|\geq C\] _and for each \(i=1,2,\ldots,n-1\) we have_ \[d(b^{-}(g_{i}),b^{+}(g_{i+1}))>t.\] _Suppose also that_ \[d(b^{-}(g_{n}),b)>t.\] _Suppose also that there are \(u_{1},u_{2},\ldots,u_{n}\in\mathfrak{psl}_{z}(\mathbb{R})\) such that for \(i=1,2,\ldots,n\) we have_ \[\|u_{i}\|<\delta.\] _Then if we let \(g^{\prime}=g_{1}\exp(u_{1})g_{2}\exp(u_{2})\ldots g_{n}\exp(u_{n})b\) we have_ \[d(b^{+}(g^{\prime}),b^{+}(g_{1}))<\varepsilon.\] We will prove Proposition 3.7 by induction and then deduce Corollary 3.8 from it. First we need the following lemmas. **Lemma 3.9**.: _Let \(\varepsilon>0\), \(C>0\), \(g\in PSL_{2}(\mathbb{R})\), and \(b\in P_{1}(\mathbb{R})\). Suppose that_ \[\left\|g\right\|\geq C\] _and_ \[d(b^{-}(g),b)\geq\varepsilon.\] _Then_ \[d(b^{+}(g),gb)\lesssim C^{-2}\varepsilon^{-1}\] _and we have_ \[\left\|gb\right\|\gtrsim\varepsilon\left\|g\right\|\cdot\left\|b\right\|.\] Proof.: Without loss of generality suppose that \[g=\begin{pmatrix}\lambda&0\\ 0&\lambda^{-1}\end{pmatrix}\] and \(b\) is of the form \[b=\begin{pmatrix}\sin x\\ \cos x\end{pmatrix}.\] Our requirement that \(\left\|g\right\|\geq C\) becomes \(\lambda\geq C\) and our requirement that \(d(b^{-}(g),b)\geq\varepsilon\) becomes \(x\geq\varepsilon\). Note that \(b^{+}(g)=(1,0)^{T}\) and \(b^{-}(g)=(0,1)^{T}\). Trivially \[gb=\begin{pmatrix}\lambda\sin x\\ \lambda^{-1}\cos x\end{pmatrix}.\] This means that \[\cot d(b^{+}(g),gb)=\lambda^{2}\tan x.\] In particular \[d(b^{+}(g),gb)\lesssim C^{-2}\varepsilon^{-1}.\] We also know that \[\left\|gb\right\|\geq\lambda\sin x\gtrsim\varepsilon\left\|g\right\|\cdot \left\|b\right\|.\qed\] We also have the following simple corollary. **Corollary 3.10**.: _For every \(\varepsilon>0\) there exists some \(C>0\) such that the following is true. Let \(g\in PSL_{2}(\mathbb{R})\) and \(b\in P_{1}(\mathbb{R})\). 
Suppose that_ \[\left\|g\right\|\geq C\] _and_ \[d(b^{-}(g),b)\geq\varepsilon.\] _Then_ \[d(b^{+}(g),gb)\leq\varepsilon\] _and we have_ \[\left\|gb\right\|\geq C^{-1}\left\|g\right\|\cdot\left\|b\right\|.\] This corollary is trivial and left as an exercise to the reader. **Lemma 3.11**.: _Let \(g_{1},g_{2}\in PSL_{2}(\mathbb{R})\). Then_ \[\left\|g_{1}\right\|\cdot\left\|g_{2}\right\|\sin d(b^{-}(g_{1}),b^{+}(g_{2})) \leq\left\|g_{1}g_{2}\right\|\leq\left\|g_{1}\right\|\cdot\left\|g_{2}\right\|. \tag{14}\] _Furthermore, for every \(A>1\) and \(t>0\) there exists some \(C>0\) with_ \[C\leq O((A-1)^{-1}t^{-1})\] _such that if \(\left\|g_{1}\right\|,\left\|g_{2}\right\|\geq C\) and \(d(b^{-}(g_{1}),b^{+}(g_{2}))\geq t\) then_ \[\left\|g_{1}g_{2}\right\|\leq A\left\|g_{1}\right\|\cdot\left\|g_{2}\right\| \sin d(b^{-}(g_{1}),b^{+}(g_{2})). \tag{15}\] Proof.: The right hand side of (14) is a well known result about the operator norm. Without loss of generality suppose that \[g_{1}=\begin{pmatrix}\lambda_{1}&0\\ 0&\lambda_{1}^{-1}\end{pmatrix}\] and \[g_{2}=\begin{pmatrix}\cos x&-\sin x\\ \sin x&\cos x\end{pmatrix}\begin{pmatrix}\lambda_{2}&0\\ 0&\lambda_{2}^{-1}\end{pmatrix}=\begin{pmatrix}\lambda_{2}\cos x&-\lambda_{2} ^{-1}\sin x\\ \lambda_{2}\sin x&\lambda_{2}^{-1}\cos x\end{pmatrix}.\] Note that \[g_{1}g_{2}\begin{pmatrix}1\\ 0\end{pmatrix}=\begin{pmatrix}\lambda_{1}\lambda_{2}\cos x\\ \lambda_{1}^{-1}\lambda_{2}\sin x\end{pmatrix}.\] This means \(\left\|g_{1}g_{2}\right\|\geq\lambda_{1}\lambda_{2}\cos x=\left\|g_{1}\right\| \cdot\left\|g_{2}\right\|\sin\left|\phi(b^{-}(g_{1}))-\phi(b^{+}(g_{2}))\right|\) which proves (14). For (15) note that \[g_{1}g_{2}=\begin{pmatrix}\lambda_{1}\lambda_{2}\cos x&-\lambda_{1}\lambda_{2 }^{-1}\sin x\\ \lambda_{1}^{-1}\lambda_{2}\sin x&\lambda_{1}\lambda_{2}^{-1}\cos x\end{pmatrix}.\] This means that \[\left\|g_{1}g_{2}\right\|\leq\left\|g_{1}g_{2}\right\|_{2}\leq\left(1+3C^{-2} \left(\cos x\right)^{-1}\right)\lambda_{1}\lambda_{2}\cos x.\] This gives the required result. **Lemma 3.12**.: _Given any \(\varepsilon>0\) and any \(t>0\) there is some constant \(C>0\) such that the following holds. Let \(g_{1},g_{2}\in PSL_{2}(\mathbb{R})\) such that \(\left\|g_{1}\right\|,\left\|g_{2}\right\|\geq C\) and \(d(b^{-}(g_{1}),b^{+}(g_{2}))\geq t\). Then_ \[d(b^{+}(g_{1}),b^{+}(g_{1}g_{2}))<\varepsilon \tag{16}\] _and_ \[d(b^{-}(g_{2}),b^{-}(g_{1}g_{2}))<\varepsilon. \tag{17}\] _Furthermore we have \(C\leq O\left(\left(\min\{\varepsilon,t\}\right)^{-1}\right)\)._ Proof.: Without loss of generality we assume that \(\varepsilon<t\). Choose \(C\) large enough to work with \(\frac{1}{10}\varepsilon\) in the role of \(\varepsilon\) in Corollary 3.10. Note that by Lemma 3.9 we may assume that \(C\leq O\left(\left(\min\{\varepsilon,t\}\right)^{-1}\right)\). Now choose any \(b\in P_{1}(\mathbb{R})\) such that \[d(b,b^{-}(g_{2}))>\varepsilon\] \[d(b,b^{-}(g_{1}g_{2}))>\varepsilon.\] By Corollary 3.10 we know that \[d(g_{2}b,b^{+}(g_{2}))<\frac{1}{10}\varepsilon\] and so in particular \[d(g_{2}b,b^{-}(g_{1}))>\varepsilon.\] By Corollary 3.10 this means that \[d(g_{1}g_{2}b,b^{+}(g_{1}))<\frac{1}{10}\varepsilon.\] We also have that \[d(g_{1}g_{2}b,b^{+}(g_{1}g_{2}))<\frac{1}{10}\varepsilon.\] In particular this means that \[d(b^{+}(g_{1}),b^{+}(g_{1}g_{2}))<\varepsilon.\] This proves (16). (17) follows by taking the transpose. **Lemma 3.13**.: _Given any \(\varepsilon>0\) there exists \(C,\delta>0\) such that the following is true. 
Suppose that \(g\in PSL_{2}(\mathbb{R})\), \(b\in P_{1}(\mathbb{R})\), and \(u\in\mathfrak{psl}_{2}(\mathbb{R})\). Suppose further that \(\|g\|\geq C\) and \(\|u\|<\delta\). Then we have_ \[C^{-1}\left\|g\right\|\leq\left\|\exp(u)g\right\|\leq C\left\|g\right\|, \tag{18}\] \[d(b,\exp(u)b)<\varepsilon, \tag{19}\] _and_ \[d(b^{+}(g),b^{+}(\exp(u)g))<\varepsilon. \tag{20}\] Proof.: First note that (18) and (19) both follow from the fact that \(\exp(\cdot)\) is smooth and \(P_{1}(\mathbb{R})\) is compact. (20) follows from (18), (19) and applying Lemma 3.9 with some element of \(P_{1}(\mathbb{R})\) which is not close to \(b^{-}(g)\) or \(b^{-}(\exp(u)g)\) in the role of \(b\). This is enough to prove Proposition 3.7 and Corollary 3.8. Proof of Proposition 3.7.: Without loss of generality assume that \(\varepsilon<t\). Let \(C_{1}\) be as in Corollary 3.10 with \(\frac{1}{10}\varepsilon\) in the role of \(\varepsilon\). Let \(C_{2}\) and \(\delta_{2}\) be \(C\) and \(\delta\) from Lemma 3.13 with \(\frac{1}{10}\varepsilon\) in the role of \(\varepsilon\). We now take \(C=\max\{C_{1}C_{2},\left(\sin\frac{1}{10}t\right)^{-1}\}\) and \(\delta=\delta_{2}\). First we will deal with (12). Choose \(b\) such that \[d(b,b^{-}(g_{n}))>\frac{1}{10}\varepsilon\] and \[d(b,b^{-}(g^{\prime}))>\frac{1}{10}\varepsilon.\] Note that by Corollary 3.10 we know that \[d(g_{n}b,b^{+}(g_{n}))<\frac{1}{10}\varepsilon.\] By Lemma 3.13 we know that \[d(\exp(u_{n-1})g_{n}b,g_{n}b)<\frac{1}{10}\varepsilon\] and so \[d(\exp(u_{n-1})g_{n}b,b^{-}(g_{n-1}))>\frac{1}{10}\varepsilon.\] Repeating this process we are able to show that \[d(g^{\prime}b,b^{+}(g_{1}))<\frac{1}{10}\varepsilon.\] We also know that \[d(g^{\prime}b,b^{+}(g^{\prime}))<\frac{1}{10}\varepsilon.\] Hence \[d(b^{+}(g^{\prime}),b^{+}(g_{1}))<\varepsilon.\] To prove (13) simply take the transpose of everything. Now to prove (11). Let \(b\) be chosen as before and let \(u\in b\) be a unit vector. Note that by Corollary 3.10 \[\left\|g_{n}u\right\|\geq C_{1}^{-1}\left\|g_{n}\right\|\cdot\left\|u\right\|\] and by Lemma 3.13 we know that \[\left\|\exp(u_{n-1})g_{n}u\right\|\geq C_{1}^{-1}C_{2}^{-1}\left\|g_{n} \right\|\cdot\left\|u\right\|.\] Repeating this gives the required result. I will also prove Corollary 3.8. Proof of Corollary 3.8.: This follows from applying Proposition 3.7 to \(g_{1}\exp(u_{1})g_{2}\exp(u_{2})\dots g_{n-1}\exp(u_{n-1})g_{n}\), applying Lemma 3.13 to \(\exp(u_{n})b\) and then applying Lemma 3.9. ### Proof of Proposition 1.22 In this subsection we will prove Proposition 1.22. To do this we will need to find an upper bound on the size of various second derivatives and apply Taylor's theorem. We will use the following version of Taylor's theorem. **Theorem 3.14**.: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}/\pi\mathbb{Z}\) be twice differentiable and let \(R_{1},R_{2},\dots,R_{n}>0\). Let \(U=[-R_{1},R_{1}]\times[-R_{2},R_{2}]\times\dots\times[-R_{n},R_{n}]\). For \(i,j\in[n]\) let \(K_{i,j}=\sup\limits_{U}\left|\frac{\partial^{2}f}{\partial x_{i}\partial x_{ j}}\right|\) and let \(\mathbf{x}\in U\). Then we have_ \[\left|f(\mathbf{x})-f(0)-\sum_{i=1}^{n}x_{i}\left.\frac{\partial f}{\partial x _{i}}\right|_{\mathbf{x}=0}\right|\leq\frac{1}{2}\sum_{i,j=1}^{n}x_{i}K_{i,j}x _{j}.\] In order to prove Proposition 1.22 we need the following proposition. **Proposition 3.15**.: _Let \(t>0\). Then there exists some constants \(C,\delta>0\) such that the following holds. 
Suppose that \(n\in\mathbb{Z}_{>0}\), \(g_{1},g_{2},\ldots,g_{n}\in PSL_{2}(\mathbb{R})\), \(b\in P_{1}(\mathbb{R})\) and let \(u^{(1)},u^{(2)},\ldots,u^{(n)}\in\mathfrak{psl}_{2}(\mathbb{R})\) be such that \(\left\|u^{(i)}\right\|\leq\delta\). Suppose that for \(i\in[n]\) we have_ \[\left\|g_{i}\right\|\geq C\] _and for \(i\in[n-1]\) we have_ \[d(b^{-}(g_{i}),b^{+}(g_{i+1}))>t\] _and_ \[d(b^{-}(g_{n}),b)>t.\] _Let \(x\) be defined by_ \[x=g_{1}\exp(u^{(1)})g_{2}\exp(u^{(2)})\ldots g_{n}\exp(u^{(n)})b.\] _Then for any \(i,j\in\{1,2,3\}\) and any \(k,\ell\in[n]\) with \(k\leq\ell\) we have_ \[\left|\frac{\partial^{2}}{\partial u^{(k)}_{i}\partial u^{(\ell)}_{j}}\phi(x)\right|<C^{n}\left\|g_{1}g_{2}\ldots g_{\ell}\right\|^{-2}.\] We will prove this later in this subsection. Note that given some \(u\in\mathfrak{psl}_{2}(\mathbb{R})\) and some \(i\in\{1,2,3\}\) by \(u_{i}\) we mean the \(i\)th component of \(u\) with respect to our choice of basis for \(\mathfrak{psl}_{2}(\mathbb{R})\) which we will fix throughout this paper. To prove this we need to understand the size of the second derivatives. For this we will need the following lemmas. **Lemma 3.16**.: _Let \(t>0\), let \(x\in\mathbb{R}/\pi\mathbb{Z}\), and let \(g\in PSL_{2}(\mathbb{R})\). Suppose that_ \[d(b^{-}(g),\phi^{-1}(x))>t. \tag{21}\] _Let \(y=\phi(g\phi^{-1}(x))\). Then_ \[\left\|g\right\|^{-2}\leq\frac{\partial y}{\partial x}\leq O_{t}\left(\left\|g\right\|^{-2}\right)\] _and_ \[\left|\frac{\partial^{2}y}{\partial x^{2}}\right|\leq O_{t}\left(\left\|g\right\|^{-2}\right).\] Proof.: Let \(g=R_{\phi}A_{\lambda}R_{-\theta}\). First note that \[y=\tan^{-1}\left(\lambda^{-2}\tan(x-\theta)\right)+\phi. \tag{22}\] Recall that if \(v=\tan^{-1}u\) then \(\frac{dv}{du}=\frac{1}{u^{2}+1}\). This means that by the chain rule we have \[\frac{\partial y}{\partial x} =\left(\frac{1}{\lambda^{-4}\tan^{2}(x-\theta)+1}\right)\cdot\lambda^{-2}\cdot\left(\frac{1}{\cos^{2}(x-\theta)}\right)\] \[=\frac{1}{\lambda^{2}\cos^{2}(x-\theta)+\lambda^{-2}\sin^{2}(x-\theta)}.\] Differentiating this again gives \[\frac{\partial^{2}y}{\partial x^{2}}=\frac{2(\lambda^{2}-\lambda^{-2})\cos(x-\theta)\sin(x-\theta)}{\left(\lambda^{2}\cos^{2}(x-\theta)+\lambda^{-2}\sin^{2}(x-\theta)\right)^{2}}.\] Noting that (21) forces \(\cos(x-\theta)\geq\sin t\) gives the required result. We also need to bound the second derivatives of various expressions involving exp. **Lemma 3.17**.: _There exists some constant \(C>0\) such that the following is true. Let \(b\in P_{1}(\mathbb{R})\) and define \(w\) by_ \[w:\mathfrak{psl}_{2}(\mathbb{R}) \to\mathbb{R}/\pi\mathbb{Z}\] \[u \mapsto\phi\left(\exp(u)b\right).\] _Then whenever \(\|u\|\leq 1\) we have_ \[\|D_{u}(w)\|\leq C\] _and_ \[\left\|D_{u}^{2}(w)\right\|\leq C.\] Proof.: This follows immediately from the fact that \(\|D(w)(u)\|\) and \(\|D^{2}(w)(u)\|\) are continuous in \(b\) and \(u\) and compactness. We will also need the following bound. Unfortunately this lemma doesn't follow easily from a compactness argument and needs to be done explicitly. **Lemma 3.18**.: _For every \(t>0\) there exist some constants \(C,\delta>0\) such that the following holds. Let \(g\in PSL_{2}(\mathbb{R})\), let \(b\in P_{1}(\mathbb{R})\) and let \(w\) be defined by_ \[w:\mathfrak{psl}_{2}(\mathbb{R}) \times\mathfrak{psl}_{2}(\mathbb{R}) \to\mathbb{R}/\pi\mathbb{Z}\] \[(x,y) \mapsto\phi\left(\exp(x)g\exp(y)b\right).\] _Suppose that_ \[d(b^{-}(g),b)>t\] _and that \(\left\|x\right\|,\left\|y\right\|\leq\delta\). 
Then_ \[\left|\frac{\partial^{2}w(x,y)}{\partial x_{i}\partial y_{j}}\right|\leq C\left\|g\right\|^{-2}.\] Proof.: Let \(\hat{v}=\phi(\exp(y)b)\). First note that by compactness we have \[\left|\frac{\partial\hat{v}}{\partial y_{j}}\right|\leq O(1).\] Now let \(\tilde{v}:=\phi(g\exp(y)b)\). By Lemma 3.16 we have \[\left|\frac{\partial\tilde{v}}{\partial\hat{v}}\right|\leq O_{t}\left(\left\|g\right\|^{-2}\right).\] Also note that by compactness \[\left|\frac{\partial^{2}w}{\partial\tilde{v}\partial x_{i}}\right|\leq O(1).\] Hence \[\left|\frac{\partial^{2}w}{\partial x_{i}\partial y_{j}}\right|=\left|\frac{\partial^{2}w}{\partial\tilde{v}\partial x_{i}}\right|\cdot\left|\frac{\partial\tilde{v}}{\partial\hat{v}}\right|\cdot\left|\frac{\partial\hat{v}}{\partial y_{j}}\right|\leq O_{t}\left(\left\|g\right\|^{-2}\right).\] We are now done by Lemma 3.13. This is enough to prove Proposition 3.15. Proof of Proposition 3.15.: Write \(y=\phi(x)\). First we will deal with the case where \(k=\ell\). Let \[a=g_{1}\exp(u^{(1)})g_{2}\exp(u^{(2)})\dots g_{k-1}\exp(u^{(k-1)})g_{k}\] and \[b=g_{k+1}\exp(u^{(k+1)})g_{k+2}\exp(u^{(k+2)})\dots g_{n}\exp(u^{(n)})g_{n+1}\] and let \(\tilde{b}=\phi(\exp(u^{(k)})b)\). We have \[\frac{\partial y}{\partial u_{i}^{(k)}}=\frac{\partial y}{\partial\tilde{b}}\frac{\partial\tilde{b}}{\partial u_{i}^{(k)}}\] and so \[\frac{\partial^{2}y}{\partial u_{i}^{(k)}\partial u_{j}^{(k)}}=\frac{\partial^{2}y}{\partial\tilde{b}^{2}}\frac{\partial\tilde{b}}{\partial u_{i}^{(k)}}\frac{\partial\tilde{b}}{\partial u_{j}^{(k)}}+\frac{\partial y}{\partial\tilde{b}}\frac{\partial^{2}\tilde{b}}{\partial u_{i}^{(k)}\partial u_{j}^{(k)}}.\] By Proposition 3.7 we know that, providing \(C\) is sufficiently large and \(\delta\) is sufficiently small, we have \[d(b^{-}(a),b)>\frac{1}{2}t.\] By Lemmas 3.16 and 3.17 this means that \[\left|\frac{\partial^{2}y}{\partial u_{i}^{(k)}\partial u_{j}^{(k)}}\right|\leq O_{t}\left(\left\|a\right\|^{-2}\right).\] In particular by Proposition 3.7 there is some constant \(C\) depending only on \(t\) such that \[\left|\frac{\partial^{2}y}{\partial u_{i}^{(k)}\partial u_{j}^{(k)}}\right|<C^{n}\left\|g_{1}g_{2}\dots g_{k}\right\|^{-2}\] as required. Now we will deal with the case where \(k<\ell\). Let \[a_{1}=g_{1}\exp(u^{(1)})g_{2}\exp(u^{(2)})\dots g_{k-1}\exp(u^{(k-1)})g_{k}\] and \[a_{2}=g_{k+1}\exp(u^{(k+1)})g_{k+2}\exp(u^{(k+2)})\dots g_{\ell-1}\exp(u^{(\ell-1)})g_{\ell}\] and \[b=g_{\ell+1}\exp(u^{(\ell+1)})g_{\ell+2}\exp(u^{(\ell+2)})\dots g_{n}\exp(u^{(n)})g_{n+1}.\] Let \(\tilde{b}=\phi(\exp(u^{(k)})a_{2}\exp(u^{(\ell)})b)\). Again we have \[\frac{\partial^{2}y}{\partial u_{i}^{(k)}\partial u_{j}^{(\ell)}}=\frac{\partial^{2}y}{\partial\tilde{b}^{2}}\frac{\partial\tilde{b}}{\partial u_{i}^{(k)}}\frac{\partial\tilde{b}}{\partial u_{j}^{(\ell)}}+\frac{\partial y}{\partial\tilde{b}}\frac{\partial^{2}\tilde{b}}{\partial u_{i}^{(k)}\partial u_{j}^{(\ell)}}.\] In a similar way to the case \(k=\ell\) but using Lemma 3.18 instead of Lemma 3.17 we get \[\left|\frac{\partial^{2}y}{\partial u_{i}^{(k)}\partial u_{j}^{(\ell)}}\right|<C^{n}\left\|g_{1}g_{2}\dots g_{\ell}\right\|^{-2}\] as required. From this we can now prove Proposition 1.22. 
Proof of Proposition 1.22.: By Theorem 3.14 and Proposition 3.15 we know that \[\left|\phi(x)-\phi(g_{1}g_{2}\dots g_{n+1})-\sum_{i=1}^{n}\zeta_{i}(u^{(i)})\right|\] \[\leq n^{2}C^{n}\min\left\{\left\|g_{1}g_{2}\dots g_{i}\right\|^{-2}:i\in[n] \right\}\tilde{r}^{2}.\] The result follows by replacing \(C\) with a slightly larger constant and noting that by Proposition 3.7 \[\min\left\{\left\|g_{1}g_{2}\dots g_{i}\right\|^{-2}:i\in[n]\right\}=\left\|g_ {1}g_{2}\dots g_{n}\right\|^{-2}.\qed\] ### Bounds on first derivatives The purpose of this subsection is to prove Propositions 3.2 and 3.6. This bounds the size of various first derivatives. First we need the following lemma. **Lemma 3.19**.: _Let \(u\in\mathfrak{psl}_{\mathbb{z}}(\mathbb{R})\backslash\{0\}\) and given \(b\in P_{1}(\mathbb{R})\) define \(\varrho_{b}\) as in Proposition 3.2. Then there are at most two points \(b\in P_{1}(\mathbb{R})\) such that_ \[\varrho_{b}(u)=0.\] Proof.: Let \(\tilde{\phi}\) be defined by \[\tilde{\phi}:\mathbb{R}^{2}\backslash\{0\} \to\mathbb{R}/\pi\mathbb{Z}\] \[\tilde{b} \mapsto\phi([\tilde{b}])\] where \([\tilde{b}]\) denotes the equivalent class of \(\tilde{b}\) in \(P_{1}(\mathbb{R})\). Given \(b\in P_{1}(\mathbb{R})\) let \(\tilde{b}\in b\) be some choice of element in \(\mathbb{R}^{2}\backslash\{0\}\). Note that this means \[\phi(\exp(v)b)=\tilde{\phi}(\exp(v)\tilde{b}).\] This means that \(\varrho_{b}(v)=0\) if and only if \(D(\exp(u)\tilde{b})|_{u=0}(v)\) is in the kernel of \(D_{\tilde{b}}(\tilde{\phi}(\tilde{b}))\). Trivially the kernel of \(D_{\tilde{b}}(\tilde{\phi}(\tilde{b}))\) is just the space spanned by \(\tilde{b}\). It also follows by the definition of the matrix exponential that for any \(v\in\mathfrak{psl}_{\mathbb{z}}(\mathbb{R})\) we have \[D(\exp(u)\tilde{b})|_{u=0}(v)=v\tilde{b}.\] Hence \(\varrho_{b}(v)=0\) if and only if \(\tilde{b}\) is an eigenvector of \(v\). Clearly for each \(v\in\mathfrak{psl}_{\mathrm{z}}(\mathbb{R})\backslash\{0\}\) there are at most two \(b\in P_{1}(\mathbb{R})\) with this property. The result follows. Proof of Proposition 3.2.: Given \(a_{1},a_{2}\in\mathbb{R}\) let \(U(a_{1},a_{2})\) be defined by \[U(a_{1},a_{2})=P_{1}(\mathbb{R})\backslash\phi^{-1}(((a_{1},a_{1}+t)\cup(a_{2 },a_{2}+t))).\] In other words \(U(a_{1},a_{2})\) is all of \(P_{1}(\mathbb{R})\) except for two arcs of length \(t\) starting at \(a_{1}\) and \(a_{2}\) respectively. Given some \(v\in\mathfrak{psl}_{\mathrm{z}}(\mathbb{R})\) let \(f(v)\) be given by \[f(v):=\max_{a_{1},a_{2}\in\mathbb{R}}\min_{b\in U(a_{1},a_{2})}|\varrho_{b}(v)|.\] Both the \(\min\) and the \(\max\) are achieved due to a trivial compactness argument. By Lemma 3.19 we know that \(f(v)>0\) whenever \(\|v\|=1\). Note that \(\{\varrho_{b}(\cdot):b\in P_{1}(\mathbb{R})\}\) is a bounded set of linear maps and so is uniformly equicontinuous. This means that \(f\) is continuous. Since the set of all \(v\in\mathfrak{psl}_{\mathrm{z}}(\mathbb{R})\) with \(\|v\|=1\) is compact this means that there is some \(\delta>0\) such that \(f(v)\geq\delta\). Finally note that trivially we can choose the \(a_{1}\) and \(a_{2}\) using this construction in such a way that they are measurable as functions of \(v\). We will now prove Proposition 3.6. 
Proof of Proposition 3.6.: By elementary linear algebra we can write \(X\) as \[X=X_{1}v_{1}+X_{2}v_{2}+X_{3}v_{3}\] where \(X_{1}\), \(X_{2}\) and \(X_{3}\) are uncorrelated random variables taking values in \(\mathbb{R}\) and \(v_{1}\), \(v_{2}\), and \(v_{3}\) are the eigenvectors of the covariance matrix of \(X\) with corresponding eigenvalues \(\operatorname{Var}X_{1}\), \(\operatorname{Var}X_{2}\), and \(\operatorname{Var}X_{3}\). Furthermore we may assume that \(\operatorname{Var}X_{1}\geq\operatorname{Var}X_{2}\geq\operatorname{Var}X_{3}\) and so in particular \(\operatorname{Var}X_{1}\geq\frac{1}{3}\operatorname{Var}X\). Without loss of generality we may assume that \(X_{1}\), \(X_{2}\), \(X_{3}\), and \(X\) have mean \(0\). We also note that since \(v_{1}\) is a principal component of \(X\) by Proposition 3.2 we have \(|\rho_{b}(v_{1})|\geq\delta\). We then compute \[\operatorname{Var}\rho_{b}(X) =\mathbb{E}\left[|\rho_{b}(X)|^{2}\right]\] \[=\mathbb{E}\left[X_{1}^{2}|\rho_{b}(v_{1})|^{2}+X_{2}^{2}|\rho_{b }(v_{2})|^{2}+X_{3}^{2}|\rho_{b}(v_{3})|^{2}\right]\] \[\geq\frac{1}{3}\delta\operatorname{Var}X.\] This gives the required result. ## 4. Disintegration argument The purpose of this section is to prove Theorem 1.26. We define rigorously some notions which we used informally in the introduction including regular conditional distribution, the variance of random elements in \(PSL_{2}(\mathbb{R})\) and various notions of entropy. We also discuss basic properties of these notions. After these preparations, which occupy most of the section, the proof of Theorem 1.26 will be short. Before we begin we outline the main steps of the proof of Theorem 1.26. The first step is the following simple lemma. **Lemma 4.1**.: _Let \(g\), \(s_{1}\) and \(s_{2}\) be random variables taking values in \(PSL_{2}(\mathbb{R})\). Suppose that \(s_{1}\) and \(s_{2}\) are absolutely continuous with finite differential entropy and that \(gs_{1}\) and \(gs_{2}\) have finite differential entropy. Define \(k\) by_ \[k:=H(gs_{1})-H(s_{1})-H(gs_{2})+H(s_{2}).\] _Then_ \[\mathbb{E}[H((gs_{1}|gs_{2}))]\geq k+H(s_{1}).\] Here \((gs_{1}|gs_{2})\) denotes the regular conditional distribution which we will define in Section 4.1. We prove this lemma in Section 4.3. Recall that \(s_{1}\) and \(s_{2}\) are smoothing random variables, and \(s_{2}\) corresponds to a larger scale than \(s_{1}\). The quantity \(k\) can be thought of as the difference between the information of \(g\) discretized at the scales corresponding to \(s_{1}\) and \(s_{2}\). It is well known that among all random vectors of a given variance, the spherical normal distribution has the largest (differential) entropy. This allows us to estimate the variance of a random vector in terms of its entropy from below. Once the definitions are in place, we can translate this to random elements of \(PSL_{2}(\mathbb{R})\). **Lemma 4.2**.: _Let \(\varepsilon>0\) and suppose that \(g\) is a random variable taking values in \(PSL_{2}(\mathbb{R})\) such that \(g_{0}^{-1}g\) takes values in the ball of radius \(\varepsilon\) and centre \(\mathrm{Id}\) for some \(g_{0}\in PSL_{2}(\mathbb{R})\). Then providing \(\varepsilon\) is sufficiently small we have_ \[H(g)\leq\frac{3}{2}\log\frac{2\pi e}{3}\operatorname{VAR}_{g_{0}}[g]+O( \varepsilon).\] We will prove this in Section 4.3. Combining the above two lemmas, we can get a lower bound on \(\operatorname{VAR}_{gs_{2}}[gs_{1}|gs_{2}]\). The last part of the proof of Theorem 1.26 is the following. 
**Lemma 4.3**.: _Let \(\varepsilon>0\) be sufficiently small and let \(a\) and \(b\) be random variables and let \(\mathcal{A}\) be a \(\sigma\)-algebra. Suppose that \(b\) is independent from \(a\) and \(\mathcal{A}\). Let \(g_{0}\) be an \(\mathcal{A}\)-measurable random variable. Suppose that \(g_{0}^{-1}a\) and \(b\) are almost surely contained in a ball of radius \(\varepsilon\) around \(\mathrm{Id}\). Then_ \[\operatorname{VAR}_{g_{0}}[ab|\mathcal{A}]=\operatorname{VAR}_{g_{0}}[a| \mathcal{A}]+\operatorname{VAR}_{\mathrm{Id}}[b]+O(\varepsilon^{3}).\] We prove this in Section 4.2. ### Regular conditional distribution In this section we will discuss some basic properties of regular conditional distributions. For a more comprehensive text on regular conditional distributions see for example [23]. Some readers may be more familiar with the use of conditional measures as described in for example [9, Chapter 5]. These two concepts are equivalent. **Definition 4.4** (Markov Kernel).: Let \((\Omega_{1},\mathcal{A}_{1})\) and \((\Omega_{2},\mathcal{A}_{2})\) be measurable spaces. We say that a function \(\kappa:\Omega_{1}\times\mathcal{A}_{2}:\to[0,1]\) is a _Markov Kernel_ on \((\Omega_{1},\mathcal{A}_{1})\) and \((\Omega_{2},\mathcal{A}_{2})\) if; * For any \(\omega_{1}\in\Omega_{1}\) the function \(A_{2}\mapsto\kappa(\omega_{1},A_{2})\) is a probability measure. **Definition 4.5**.: Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space, let \((E,\xi)\) be a measurable space, and let \(Y:(\Omega,\mathcal{F})\to(E,\xi)\) be a random variable. Let \(\mathcal{A}\subset\mathcal{F}\) be a \(\sigma\)-algebra. Then we say that a Markov kernel \[\kappa_{Y,\mathcal{A}}:\Omega\times\xi\to[0,1]\] on \((\Omega,\mathcal{A})\) and \((E,\xi)\) is a _regular conditional distribution_ for \(Y\) given \(\mathcal{A}\) if \[\kappa_{Y,\mathcal{A}}(\omega,B)=\mathbb{P}[Y\in B|\mathcal{A}]\] for all \(B\in\xi\) and almost all \(\omega\in\Omega\). In other words we require \[\mathbb{P}\left[A\cap\{Y\in B\}\right]=\mathbb{E}\left[\mathbb{I}_{A}\kappa_{ Y,\mathcal{A}}(\cdot,B)\right]\text{ for all }A\in\mathcal{A},B\in\xi.\] In the case where \(Y\) is as above and \(X\) is another random variable taking values in some measurable space \((E^{\prime},\xi^{\prime})\) then we let the regular conditional distribution of \(Y\) given \(X\) refer to the regular conditional distribution of \(Y\) given \(\sigma(X)\). For this definition to be useful we need the following theorem. **Theorem 4.6**.: _Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space, let \((E,\xi)\) be a standard Borel space, and let \(Y:(\Omega,\mathcal{F})\to(E,\xi)\) be a random variable. Then given any \(\sigma\)-algebra \(\mathcal{A}\subset\mathcal{F}\) there exists a regular conditional distribution for \(Y\) given \(\mathcal{A}\)._ Proof.: This is [23, Theorem 8.37]. **Definition 4.7**.: Given some random variable \(Y\) and some \(\sigma\)- algebra \(\mathcal{A}\subset\mathcal{F}\) (or random variable \(X\)) we will write \((Y|\mathcal{A})\) (or \((Y|X)\)) to mean the regular conditional distribution of \(Y\) given \(\mathcal{A}\) (or given \(X\)). We also let \([Y|\mathcal{A}]\) (or \([Y|X]\)) denote random variables defined on a different probability space to \(Y\) which have law \((Y|\mathcal{A})\) (or \((Y|X)\)). One can easily check that if the regular conditional distribution exists then it is unique up to equality almost everywhere. 
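For intuition it may help to note that in the purely discrete setting a regular conditional distribution is nothing more than the table of conditional probabilities. The following short numerical sketch (purely illustrative, with an arbitrary made-up joint law, and not used anywhere in the argument) checks the defining identity \(\mathbb{P}[A\cap\{Y\in B\}]=\mathbb{E}[\mathbb{I}_{A}\kappa_{Y,\mathcal{A}}(\cdot,B)]\) of Definition 4.5 in that setting.

```python
import numpy as np

# Discrete sanity check of Definition 4.5: for a finite joint law of (X, Y),
# the regular conditional distribution of Y given sigma(X) is the conditional
# probability table kappa[x, :] = P[Y = . | X = x].
rng = np.random.default_rng(0)
joint = rng.random((4, 5))
joint /= joint.sum()                    # joint law of (X, Y) on {0,...,3} x {0,...,4}

p_x = joint.sum(axis=1)                 # marginal law of X
kappa = joint / p_x[:, None]            # kappa[x, y] = P[Y = y | X = x]

A = np.array([True, False, True, False])          # an event in sigma(X)
B = np.array([False, True, True, False, True])    # a set of Y-values

lhs = joint[A][:, B].sum()                         # P[A and {Y in B}]
rhs = (p_x * A * kappa[:, B].sum(axis=1)).sum()    # E[1_A * kappa(., B)]
print(lhs, rhs)                                    # the two numbers agree
```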
### Variance on \(PSL_{2}(\mathbb{R})\) We wish to define some analogue of variance for random variables taking values in \(PSL_{2}(\mathbb{R})\). We will do this using \(\log\). **Definition 4.8**.: Given some random variable \(X\) taking values in \(\mathbb{R}^{d}\) we define the variance of \(X\), which we denote by \(\operatorname{Var}X\), to be the trace of its covariance matrix. If \(X\) takes values in \(\mathfrak{psl}_{2}(\mathbb{R})\) we do this via our identification of \(\mathfrak{psl}_{2}(\mathbb{R})\) with \(\mathbb{R}^{3}\). **Definition 4.9**.: Let \(g\) be a random variable taking values in \(PSL_{2}(\mathbb{R})\) and let \(g_{0}\in PSL_{2}(\mathbb{R})\). Suppose that \(g_{0}^{-1}g\) is always in the domain of \(\log\). Then define the _variance of \(g\) with respect to \(g_{0}\)_ by \[\operatorname{VAR}_{g_{0}}[g]:=\operatorname{Var}\log(g_{0}^{-1}g).\] We need the following lemma. **Lemma 4.10**.: _Let \(\varepsilon>0\) be sufficiently small and let \(g\) and \(h\) be independent random variables taking values in \(PSL_{2}(\mathbb{R})\). Suppose that the image of \(g\) is contained in a ball of radius \(\varepsilon\) around \(\operatorname{Id}\) and the image of \(h\) is contained in a ball of radius \(\varepsilon\) around some \(h_{0}\in PSL_{2}(\mathbb{R})\). Then_ \[\operatorname{VAR}_{h_{0}}[hg]=\operatorname{VAR}_{h_{0}}[h]+\operatorname{ VAR}_{\operatorname{Id}}[g]+O(\varepsilon^{3}).\] Proof.: Let \(X=\log(h_{0}^{-1}h)\) and let \(Y=\log(g)\). Then by Taylor's theorem \[\log(\exp(X)\exp(Y))=X+Y+E\] where \(E\) is some random variable with \(|E|\leq O(\varepsilon^{2})\) almost surely. Note that we also have \(|X|,|Y|\leq O(\varepsilon)\). Therefore \[\operatorname{VAR}_{h_{0}}[hg] =\mathbb{E}[|X+Y+E|^{2}]-|\mathbb{E}[X+Y+E]|^{2}\] \[=\mathbb{E}[|X+Y|^{2}]-|\mathbb{E}[X+Y]|^{2}+2\mathbb{E}[(X+Y) \cdot E]+\mathbb{E}[|E|^{2}]\] \[\quad-2\mathbb{E}[X+Y]\cdot\mathbb{E}[E]-|\mathbb{E}[E]|^{2}\] \[=\operatorname{Var}[X+Y]+O(\varepsilon^{3})\] as required. We also need to describe the variance of a regular conditional distribution. **Definition 4.11**.: Given some random variable \(g\) taking values in \(PSL_{2}(\mathbb{R})\), some \(\sigma\)-algebra \(\mathcal{A}\) and some \(\mathcal{A}\)-measurable random variable \(g_{0}\) taking values in \(PSL_{2}(\mathbb{R})\) we let \(\operatorname{VAR}_{g_{0}}[g|\mathcal{A}]\) to be the \(\mathcal{A}\)-measurable random variable given by \[\operatorname{VAR}_{g_{0}}[g|\mathcal{A}](\omega)=\operatorname{VAR}_{g_{0}( \omega)}[(g|\mathcal{A})(\omega)].\] Similarly given a random variable \(h\) we let \(\operatorname{VAR}_{g_{0}}[g|h]=\operatorname{VAR}_{g_{0}}[g|\sigma(h)]\). Proof of Lemma 4.3.: First note that we have \([ab|\mathcal{A}]=[a|\mathcal{A}][b|\mathcal{A}]=[a|\mathcal{A}]b\). We are now done by Lemma 4.10. ### Entropy In this subsection we will describe some of the properties of entropy used in this paper. We will describe entropy for both absolutely continuous and discrete measures on \(\mathbb{R}^{d}\) and \(PSL_{2}(\mathbb{R})\). **Definition 4.12** (KL-divergence).: Let \(\lambda_{1}\) be a probability measure on a measurable space \((E,\xi)\) and let \(\lambda_{2}\) be a measure on \((E,\xi)\). Then we define the _KL-divergence_ of \(\lambda_{1}\) given \(\lambda_{2}\) by, \[\mathcal{KL}(\lambda_{1},\lambda_{2}):=\int_{E}\log\frac{d\lambda_{1}}{d \lambda_{2}}\,d\lambda_{1}\] We use this to define the entropy of continuous random variables on \(\mathbb{R}^{d}\). 
**Definition 4.13**.: Let \(\lambda\) be an absolutely continuous probability measure on \(\mathbb{R}^{d}\) and let \(m\) denote the Lebesgue measure on \(\mathbb{R}^{d}\). Then we define the _entropy_ of \(\lambda\) to be \[H(\lambda):=-\mathcal{KL}(\lambda,m).\] Similarly if \(X\) is a random variable then we define the entropy of \(X\) to be the entropy of its law. The conflict of notation with the entropy of a discrete measure doesn't matter because it will always be clear from context whether a probability measure is discrete or continuous. It is worth noting that in all of the cases we have discussed so far the entropy of a probability measure \(\lambda\) can be expressed as \(-\mathcal{KL}(\lambda,\alpha)\) where \(\alpha\) is some measure such that \(\lambda\ll\alpha\). In the case of a discrete probability measure \(\alpha\) is just the counting measure, and if \(\lambda\) is an absolutely continuous probability measure on \(\mathbb{R}^{d}\) then we take \(\alpha\) to be the Lebesgue measure. This will be the case for all measurable spaces on which we define some concept of entropy. **Definition 4.14** (Haar measure).: Given a Lie group \(\mathbf{G}\) with Borel \(\sigma\)-algebra \(\mathcal{B}(\mathbf{G})\) we say that a measure \(\lambda\) on \((\mathbf{G},\mathcal{B}(\mathbf{G}))\) is a _left invariant measure_ if for all \(g\in\mathbf{G}\) and all \(S\in\mathcal{B}(\mathbf{G})\) we have \[\lambda(gS)=\lambda(S).\] Similarly we call it a _right invariant measure_ if for all \(g\in\mathbf{G}\) and all \(S\in\mathcal{B}(\mathbf{G})\) we have \[\lambda(Sg)=\lambda(S).\] If \(\lambda\) is Radon and left invariant then it is called a _left Haar measure_. Similarly if \(\lambda\) is Radon and right invariant then it is called a _right Haar measure_. If \(\lambda\) is both a left Haar measure and a right Haar measure then we call it a _Haar measure_. It is well known that every Lie group has a non-zero left and right Haar measure and that these are unique up to multiplication by a positive constant. In the special case of \(\mathbf{G}=PSL_{2}(\mathbb{R})\) these coincide, which makes our proof easier. To describe the Haar measure of \(PSL_{2}(\mathbb{R})\) we will use the NAK decomposition. **Definition 4.15** (NAK decomposition).: Each element of \(PSL_{2}(\mathbb{R})\) can be written uniquely in the form \[\begin{pmatrix}1&x\\ 0&1\end{pmatrix}\begin{pmatrix}y^{\frac{1}{2}}&0\\ 0&y^{-\frac{1}{2}}\end{pmatrix}\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}\] with \(x\in\mathbb{R}\), \(y\in\mathbb{R}_{>0}\) and \(\theta\in\mathbb{R}/\pi\mathbb{Z}\). This is called the \(NAK\) decomposition. **Lemma 4.16**.: _There is a Haar measure for \(PSL_{2}(\mathbb{R})\) which is given in the NAK decomposition by_ \[\frac{1}{y^{2}}\,dx\,dy\,d\theta.\] Proof.: This is proven in for example [25, Chapter III]. **Definition 4.17**.: Let \(\tilde{m}\) denote the Haar measure on \(PSL_{2}(\mathbb{R})\) normalized such that \[\frac{d\tilde{m}}{d(m\circ\log)}(\mathrm{Id})=1,\] where \(m\) denotes the Lebesgue measure on \(\mathfrak{psl}_{2}(\mathbb{R})\) with respect to our fixed basis and \(m\circ\log\) denotes the measure \(S\mapsto m(\log S)\), defined on a sufficiently small neighbourhood of \(\mathrm{Id}\). **Definition 4.18**.: Let \(\lambda\) be an absolutely continuous probability measure on \(PSL_{2}(\mathbb{R})\). Then we define the _entropy_ of \(\lambda\) to be \[H(\lambda):=-\mathcal{KL}(\lambda,\tilde{m}).\] As before, if \(X\) is a random variable taking values in \(PSL_{2}(\mathbb{R})\) then we define the entropy of \(X\) to be the entropy of its law. **Lemma 4.19**.: _Let \(X\) be an absolutely continuous random variable taking values in \(\mathbb{R}^{3}\) with finite variance. Then_ \[H(X)\leq\frac{3}{2}\log\frac{2\pi e}{3}\operatorname{Var}X.\] Proof.: This is the well-known fact that among all random vectors with a given variance the spherical normal distribution has the largest differential entropy. **Lemma 4.20**.: _Let \(\lambda_{1}\) be a probability measure on some measurable space \(E\) and let \(\lambda_{2}\) and \(\lambda_{3}\) be measures on \(E\) and let \(U\subset E\). Suppose that the support of \(\lambda_{1}\) is contained in \(U\). 
Then,_ \[|\mathcal{KL}(\lambda_{1},\lambda_{2})-\mathcal{KL}(\lambda_{1},\lambda_{3})|\leq\sup_{x\in U}\left|\log\frac{d\lambda_{2}}{d\lambda_{3}}\right|.\] Proof.: We have \[|\mathcal{KL}(\lambda_{1},\lambda_{2})-\mathcal{KL}(\lambda_{1},\lambda_{3})| =\left|\int_{U}\log\frac{d\lambda_{1}}{d\lambda_{2}}\,d\lambda_{1}-\int_{U}\log\frac{d\lambda_{1}}{d\lambda_{3}}\,d\lambda_{1}\right|\] \[\leq\int_{U}\left|\log\frac{d\lambda_{1}}{d\lambda_{2}}-\log\frac{d\lambda_{1}}{d\lambda_{3}}\right|\,d\lambda_{1}\] \[=\int_{U}\left|\log\frac{d\lambda_{2}}{d\lambda_{3}}\right|\,d\lambda_{1}\] \[\leq\sup_{x\in U}\left|\log\frac{d\lambda_{2}}{d\lambda_{3}}\right|.\] We can now prove Lemma 4.2. Proof of Lemma 4.2.: This follows easily from Lemma 4.19 and Lemma 4.20. Let \(U\) be the ball in \(PSL_{2}(\mathbb{R})\) of centre Id and radius \(\varepsilon\). Due to properties of the Haar measure we have \(H(g)=H(g_{0}^{-1}g)\) and by definition \(\operatorname{VAR}_{g_{0}}[g]=\operatorname{VAR}_{\operatorname{Id}}[g_{0}^{-1}g]\). This means that it is sufficient to show that \[H(g_{0}^{-1}g)\leq\frac{3}{2}\log\frac{2\pi e}{3}\operatorname{VAR}_{\operatorname{Id}}[g_{0}^{-1}g]+O(\varepsilon).\] Recall that \(\frac{d\tilde{m}}{d(m\circ\log)}\) is smooth and equal to \(1\) at Id. This means that, providing \(\varepsilon<1\), on \(U\) we have \[\frac{d\tilde{m}}{d(m\circ\log)}=1+O(\varepsilon).\] In particular providing \(\varepsilon\) is sufficiently small we have \[\sup_{U}\left|\log\frac{d\tilde{m}}{d(m\circ\log)}\right|<O(\varepsilon).\] Clearly \[\mathcal{KL}(g_{0}^{-1}g,m\circ\log)=\mathcal{KL}(\log(g_{0}^{-1}g),m).\] We have by definition that \(H(g_{0}^{-1}g)=-\mathcal{KL}(g_{0}^{-1}g,\tilde{m})\) and by Lemma 4.20 we have \(\left|\mathcal{KL}(g_{0}^{-1}g,m\circ\log)-\mathcal{KL}(g_{0}^{-1}g,\tilde{m})\right|\leq O(\varepsilon)\). By Lemma 4.19, applied to \(\log(g_{0}^{-1}g)\), we know that \[-\mathcal{KL}(\log(g_{0}^{-1}g),m)\leq\frac{3}{2}\log\frac{2\pi e}{3}\operatorname{VAR}_{\operatorname{Id}}[g_{0}^{-1}g].\] Therefore, \[H(g_{0}^{-1}g)\leq\frac{3}{2}\log\frac{2\pi e}{3}\operatorname{VAR}_{\operatorname{Id}}[g_{0}^{-1}g]+O(\varepsilon)\] as required. We will also adopt the following convention for defining the entropy on a product space. Let \((E_{1},\xi_{1})\) and \((E_{2},\xi_{2})\) be measurable spaces endowed with reference measures \(m_{1}\) and \(m_{2}\) such that if \(\lambda\) is a measure on \((E_{i},\xi_{i})\) then we define the entropy of \(\lambda\) by \(H(\lambda):=-\mathcal{KL}(\lambda,m_{i})\). Then we take \(m_{1}\times m_{2}\) to be the corresponding reference measure for \(E_{1}\times E_{2}\). That is, given some measure \(\lambda\) on \(E_{1}\times E_{2}\) we take the entropy of \(\lambda\) to be defined by \(H(\lambda)=-\mathcal{KL}(\lambda,m_{1}\times m_{2})\). With this we can give the following definition. **Definition 4.21** (Conditional Entropy).: Let \(X_{1}\) and \(X_{2}\) be two random variables with finite entropy. Then we define the _entropy of \(X_{1}\) given \(X_{2}\)_ by \[H(X_{1}|X_{2})=H(X_{1},X_{2})-H(X_{2}).\] Next we will need the following simple facts about conditional entropy. **Definition 4.22**.: Given some random variable \(Y\) and a \(\sigma\)-algebra \(\mathcal{A}\subset\mathcal{F}\) we define \(H((Y|\mathcal{A}))\) to be the random variable \[H((Y|\mathcal{A})):\omega\mapsto H((Y|\mathcal{A})(\omega,\cdot))\] where \((Y|\mathcal{A})(\omega,\cdot)\) is the regular conditional distribution for \(Y\) given \(\mathcal{A}\). Similarly given some random variable \(X\) we let \(H((Y|X)):=H((Y|\sigma(X)))\). 
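As a quick sanity check of Definition 4.21 and Definition 4.22 (and of the chain rule recorded in Lemma 4.23 below), one can compute both quantities for a small discrete joint law; the following sketch is purely illustrative and uses an arbitrary made-up distribution.

```python
import numpy as np

# Check that H(X1|X2) = H(X1, X2) - H(X2) agrees with E[H((X1|X2))],
# the expected entropy of the conditional distributions (Lemma 4.23 below).
rng = np.random.default_rng(1)
joint = rng.random((3, 4))
joint /= joint.sum()                      # joint law of (X1, X2)

def H(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()         # Shannon entropy, natural logarithm

p2 = joint.sum(axis=0)                    # law of X2
cond = joint / p2[None, :]                # cond[:, j] = law of X1 given X2 = j

lhs = H(joint.ravel()) - H(p2)                              # Definition 4.21
rhs = sum(p2[j] * H(cond[:, j]) for j in range(len(p2)))    # E[H((X1|X2))]
print(lhs, rhs)                                             # the two numbers agree
```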
**Lemma 4.23**.: _Let \(X_{1}\) and \(X_{2}\) be two random variables with finite entropy and finite joint entropy. Then_ \[H(X_{1}|X_{2})=\mathbb{E}[H((X_{1}|X_{2}))].\] Proof.: This is just the chain rule for conditional distributions. It follows from a simple computation and a proof may be found in [31, Proposition 3]. **Lemma 4.24**.: _Let \(g\) be a random variable taking values in \(PSL_{2}(\mathbb{R})\), let \(\mathcal{A}\) be a \(\sigma\)-algebra, and let \(a\) be an \(\mathcal{A}\)-measurable random variable taking values in \(PSL_{2}(\mathbb{R})\). Then_ \[H((ag|\mathcal{A}))=H((g|\mathcal{A})).\] _In particular if \(h\in PSL_{2}(\mathbb{R})\) is fixed then_ \[H(hg)=H(g).\] Proof.: For the first part note that \([ag|\mathcal{A}]=a[g|\mathcal{A}]\) almost surely. Also note that by the left invariance of the Haar measure \[H(a[g|\mathcal{A}])=H([g|\mathcal{A}]).\] The last part follows trivially from the first part. We now have all the tools required to prove Lemma 4.1. Proof of Lemma 4.1.: First note that we have \[H(gs_{2}|gs_{1})\geq H(gs_{2}|g,s_{1})=H(s_{2})\] and so \[H(gs_{2},gs_{1})\geq H(gs_{1})+H(s_{2}).\] This means that \[H(gs_{1}|gs_{2}) =H(gs_{2},gs_{1})-H(gs_{2})\] \[\geq k+H(s_{1}).\] Recalling that by Lemma 4.23 \(H(gs_{1}|gs_{2})=\mathbb{E}[H((gs_{1}|gs_{2}))]\) we get \[\mathbb{E}[H((gs_{1}|gs_{2}))]\geq k+H(s_{1})\] as required. ### Proof of Theorem 1.26 We now have everything needed to prove Theorem 1.26. Proof of Theorem 1.26.: Note that by Lemma 4.1 we have \[\mathbb{E}[H((gs_{1}|gs_{2}))]\geq k+H(s_{1})\] and so by Lemma 4.2 we have \[\mathbb{E}[\frac{3}{2}\log\frac{2}{3}\pi e\operatorname{VAR}_{gs_{2}}[gs_{1}|gs_{2}]]+O(\varepsilon)\geq k+H(s_{1}). \tag{23}\] Note that \((gs_{2})^{-1}g=s_{2}^{-1}\) which is contained in a ball of radius \(\varepsilon\) centred on the identity. Therefore by Lemma 4.3 we have \[\operatorname{VAR}_{gs_{2}}[gs_{1}|gs_{2}]\leq\operatorname{VAR}_{gs_{2}}[g|gs_{2}]+\operatorname{VAR}_{\operatorname{Id}}[s_{1}]+O(\varepsilon^{3}).\] Putting this into (23) gives \[\mathbb{E}[\frac{3}{2}\log\frac{2}{3}\pi e(\operatorname{VAR}_{gs_{2}}[g|gs_{2}]+\operatorname{VAR}_{\operatorname{Id}}[s_{1}]+O(\varepsilon^{3}))]+O(\varepsilon)\geq k+H(s_{1})\] which becomes \[\mathbb{E}[\log\,(1+\frac{\operatorname{VAR}_{gs_{2}}[g|gs_{2}]}{\operatorname{VAR}_{\operatorname{Id}}[s_{1}]}+O_{A}(\varepsilon))]+O(\varepsilon)\geq\frac{2}{3}(k+H(s_{1})-\frac{3}{2}\log\frac{2}{3}\pi e\operatorname{VAR}_{\operatorname{Id}}[s_{1}]).\] Noting that for \(x\geq 0\) we have \(x\geq\log(1+x)\) we get \[\mathbb{E}[\operatorname{VAR}_{gs_{2}}[g|gs_{2}]]\geq\frac{2}{3}(k-c-O_{A}(\varepsilon))\operatorname{VAR}_{\operatorname{Id}}[s_{1}]\] as required. ## 5. Entropy gap for stopped random walk The purpose of this section is to prove Proposition 1.24. This shows that for a stopped random walk \(q_{\tau}\) there are many choices of \(\tilde{r}\) such that \(v(q_{\tau};\tilde{r})\) is large. Recall that \(v(q_{\tau};\tilde{r})\) is defined to be the supremum of all \(v\geq 0\) such that we can find some \(\sigma\)-algebra \(\mathcal{A}\) and some \(\mathcal{A}\)-measurable random variable \(a\) taking values in \(PSL_{2}(\mathbb{R})\) such that \(\left\|\log(a^{-1}q_{\tau})\right\|\leq\tilde{r}\) and \[\mathbb{E}\left[\operatorname{VAR}_{a}\left[q_{\tau}|\mathcal{A}\right]\right]\geq v\tilde{r}^{2}.\] We apply Theorem 1.26 with a careful choice of \(s_{1}\) and \(s_{2}\). 
We will take these to be compactly supported approximations to the image of spherical normal random variables on \(\mathfrak{psl}_{\mathbb{z}}(\mathbb{R})\) under \(\exp\). More precisely we have the following. **Definition 5.1**.: Given \(r>0\) and \(a\geq 1\) let \(\eta_{r,a}\) be the random variable on \(\mathbb{R}^{3}\) with density function \(f:\mathbb{R}^{3}\to\mathbb{R}\) given by \[f(x)=\begin{cases}Ce^{-\frac{\|x\|^{2}}{2r^{2}}}&\text{ if }\left\|x\right\|\leq ar \\ 0&\text{ otherwise}\end{cases}\] where \(C\) is a normalizing constant chosen to ensure that \(f\) integrates to \(1\). We can then define the following family of smoothing functions. **Definition 5.2**.: Given \(r>0\) and \(a\geq 1\) let \(s_{r,a}\) be the random variable on \(PSL_{2}(\mathbb{R})\) given by \[s_{r,a}=\exp(\eta_{r,a}).\] After doing some computations on the entropy and variance of the \(\eta_{r,a}\) we can prove the following proposition by putting these estimates into Theorem 1.26. **Proposition 5.3**.: _There is some constant \(c>0\) such that the following holds. Let \(g\) be a random variable taking values in \(PSL_{2}(\mathbb{R})\), let \(a\geq 1\) and let \(r>0\). Define \(k\) by_ \[k=H(gs_{r,a})-H(s_{r,a})-H(gs_{2r,a})+H(s_{2r,a}).\] _Then_ \[v(g;2ar)\geq ca^{-2}(k-O(e^{-\frac{a^{2}}{4}})-O_{a}(r))).\] This will be proven in Section 5.1. To make this useful we will need a way to bound \(k\) from Proposition 5.3 from below for appropriately chosen scales. We will do this by bounding \[H(gs_{r,a})-H(s_{r,a})-H(gs_{2^{n}r,a})+H(s_{2^{n}r,a}).\] for some carefully chosen \(n\) and \(r\) and then noting the identity \[H(gs_{r,a})-H(s_{r,a})-H(gs_{2^{n}r,a})+H(s_{2^{n}r,a})\] \[\qquad=\sum_{i=1}^{n}H(gs_{2^{i-1}r,a})-H(s_{2^{i-1}r,a})-H(gs_{2^{ i}r,a})+H(s_{2^{i}r,a}).\] We use this to find scales where we can apply Proposition 5.3. Specifically we will prove the following. **Proposition 5.4**.: _Let \(\mu\) be a discrete probability measure on \(PSL_{2}(\mathbb{R})\) which is strongly irreducible and such that its support is not contained in any compact subgroup of \(PSL_{2}(\mathbb{R})\). Suppose that \(M_{\mu}<\infty\) and \(h_{RW}/\chi\) is sufficiently large. Let \(\gamma_{1},\gamma_{2},\dots\) be i.i.d. samples from \(\mu\). Given \(n\in\mathbb{Z}_{>0}\) let \(q_{n}:=\gamma_{1}\gamma_{2}\dots\gamma_{n}\). Let \(t>1\) and \(w\in P_{1}(\mathbb{R})\) and define \(\tau=\tau_{t,w}\) by_ \[\tau=\inf\{n\in\mathbb{Z}_{>0}:\left\|q_{n}^{T}w\right\|\geq t\}.\] _Let \(M>M_{\mu}\). Suppose that \(0<r_{1}<r_{2}<1\). Suppose that \(r_{1}<M^{-\log t/\chi}\). Let \(a\geq 1\). Then_ \[H(q_{\tau}s_{r_{1},a})\geq\frac{h_{RW}}{\chi}\log t+H(s_{a,r_{1}})-o_{M,\mu,a, w}(\log t) \tag{24}\] _and_ \[H(q_{\tau}s_{r_{2},a})\leq 2\log t+o_{M,\mu,a,w}(\log t). \tag{25}\] _In particular_ \[H(q_{\tau}s_{r_{1},a})-H(s_{r_{1},a})-H(q_{\tau}s_{r_{2},a})+H(s_{r_{2},a}) \geq\left(\frac{h_{RW}}{\chi}-2\right)\log t+3\log r_{2}-o_{M,\mu,a,w}(\log t). \tag{26}\] This is proven in Section 5.2. This proposition is unsurprising. To motivate (24) note that it is well known that with high probability \(\tau\approx\log t/\chi\). We also know by the definition of \(h_{RW}\) that \[H(q_{\lfloor\log t/\chi\rfloor})\geq h_{RW}\left\lfloor\log t/\chi\right\rfloor.\] Providing \(t\) is sufficiently large \(s_{r_{1},a}\) is contained in a ball of centre \(\operatorname{Id}\) and of radius \(O_{M,\mu,a}(M^{-\log t/\chi})\). 
In particular providing \(t\) is sufficiently large this radius is less than half the minimum distance between points in the image of \(q_{\lfloor\log t/\chi\rfloor}\) and so \(H(q_{\lfloor\log t/\chi\rfloor}s_{r_{1},a})=H(q_{\lfloor\log t/\chi\rfloor})+H(s_{r_{1},a})\). It turns out we can prove something similar when \(\lfloor\log t/\chi\rfloor\) is replaced by \(\tau\). The bound (25) follows easily from the fact that the Haar measure of the image of \(q_{\tau}s_{r_{2},a}\) is at most \(O_{\mu,a}(t^{2})\). Finally (26) follows from combining (24) and (25) and noting that \(H(s_{r_{2},a})=3\log r_{2}+O(1)\). We then combine Propositions 5.3 and 5.4 to get the following. **Proposition 5.5**.: _There is some constant \(c>0\) such that the following is true. Suppose that \(\mu\) is a strongly irreducible probability measure supported on finitely many points whose support is not contained in any compact subgroup of \(PSL_{2}(\mathbb{R})\). Suppose that \(M_{\mu}<\infty\) and that \(h_{RW}/\chi\) is sufficiently large. Let \(M>M_{\mu}\). Suppose that \(M\) is chosen large enough that \(h_{RW}\leq\log M\). Let \(b\in P_{1}(\mathbb{R})\). Then for all sufficiently large (depending on \(M\), \(\mu\) and \(b\)) \(t\) we have_ \[\int_{t^{-\frac{\log M}{\chi}}}^{t^{-\frac{h_{RW}}{10\chi}}}\frac{1}{u}v(q_{\tau_{t,b}};u)\,du\geq c\left(\frac{h_{RW}}{\chi}\right)\left(\max\left\{1,\log\frac{\log M}{\chi}\right\}\right)^{-1}\log t.\] We prove this in Section 5.3. Proposition 1.24 follows easily from this. ### Smoothing random variables In this subsection we give bounds on the variance and entropy of the \(s_{r,a}\) and use this to prove Proposition 5.3. Recall the definition of \(\eta_{r,a}\) from Definition 5.1. First we have the following. **Lemma 5.6**.: _Let \(r>0\) and \(a\geq 1\). Then_ \[\Theta(r^{2})\leq\operatorname{Var}\eta_{r,a}\leq 3r^{2}.\] The proof of this lemma is trivial and is left to the reader. **Lemma 5.7**.: _There is some constant \(c>0\) such that the following is true. Let \(r>0\) and \(a\geq 1\). Then_ \[H(\eta_{r,a})=\frac{3}{2}\log 2\pi er^{2}+O(e^{-\frac{a^{2}}{4}}).\] The proof of Lemma 5.7 is a simple computation which we will do later. Recall that given some \(g_{0}\in PSL_{2}(\mathbb{R})\) and a random variable \(g\) taking values in \(PSL_{2}(\mathbb{R})\) such that \(g_{0}^{-1}g\) is in the domain of \(\log\) we define \[\operatorname{VAR}_{g_{0}}[g]:=\operatorname{Var}[\log g_{0}^{-1}g]\] and that we define the entropy of an absolutely continuous random variable taking values in \(PSL_{2}(\mathbb{R})\) to be minus its KL-divergence with respect to \(\tilde{m}\), where \(\tilde{m}\) is the Haar measure normalized so that \[\frac{d\tilde{m}}{d(m\circ\log)}(\operatorname{Id})=1.\] We deduce the following about \(s_{r,a}\). **Lemma 5.8**.: _Let \(r>0\) and \(a\geq 1\). Suppose that \(ar\) is sufficiently small. Then_ \[\Theta(r^{2})\leq\operatorname{VAR}_{\operatorname{Id}}s_{r,a}\leq 3r^{2}.\] Proof.: This follows immediately from substituting Lemma 5.6 into the definition of VAR. **Lemma 5.9**.: _There is some constant \(c>0\) such that the following is true. Let \(r>0\) and \(a\geq 1\). Then_ \[H(s_{r,a})=\frac{3}{2}\log 2\pi er^{2}+O(e^{-\frac{a^{2}}{4}})+O_{a}(r).\] Proof.: This follows immediately from Lemma 5.7 and Lemma 4.20. We also have the following fact. **Lemma 5.10**.: _Let \(r>0\) and \(a\geq 1\). Suppose that \(ar\) is sufficiently small. Then_ \[\left\|\log(s_{r,a})\right\|\leq ar\] _almost surely._ Proof.: This is trivial from the definition of \(s_{r,a}\). 
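Before turning to the proofs, here is a quick numerical sanity check of Lemmas 5.6 and 5.7 (purely illustrative and not used anywhere in the argument): since the density of \(\eta_{r,a}\) is radial, its normalizing constant, variance and entropy all reduce to one-dimensional integrals, which can be evaluated numerically for sample values of \(r\) and \(a\).

```python
import numpy as np

# Numerical check of Lemmas 5.6 and 5.7 for eta_{r,a}, the spherical Gaussian of
# scale r on R^3 truncated to the ball of radius a*r.  The values r = 0.3, a = 3
# are arbitrary sample parameters.
r, a = 0.3, 3.0
n = 200_000
du = a * r / n
u = (np.arange(n) + 0.5) * du                          # radial midpoint grid on [0, a*r]
w = 4.0 * np.pi * u**2 * np.exp(-u**2 / (2 * r**2))    # unnormalized radial density

Z = (w * du).sum()                                     # 1/C, the normalizing constant
var = (u**2 * w * du).sum() / Z                        # Var eta_{r,a} = E|X|^2 (mean zero)
H = np.log(Z) + var / (2 * r**2)                       # -E[log f(X)] since f = exp(-|x|^2/(2r^2))/Z

print(var, 3 * r**2)                                   # Lemma 5.6: Theta(r^2) <= var <= 3 r^2
print(H, 1.5 * np.log(2 * np.pi * np.e * r**2))        # Lemma 5.7: H = (3/2) log(2 pi e r^2) + O(e^{-a^2/4})
```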
We now have enough to prove Proposition 5.3. Proof of Proposition 5.3.: We apply Theorem 1.26 with \(s_{1}=s_{r,a}\) and \(s_{2}=s_{2r,a}\). We also take \(\varepsilon=3ar\). By Lemma 5.8 we know that \[\operatorname{VAR}_{\operatorname{Id}}[s_{1}]\geq\Theta(r^{2})\geq\Theta_{a} (\varepsilon^{2})\] and by Lemmas 5.9 and 5.8 we know that \[c=\frac{3}{2}\log\frac{2}{3}\pi e\operatorname{VAR}[s_{1}]-H(s_{1})\leq O(e^{ -\frac{a^{2}}{4}}).\] This means that \[\mathbb{E}[\operatorname{VAR}_{gs_{2}}[g|gs_{2}]]\geq\frac{2}{3}(k-O(e^{- \frac{a^{2}}{4}})-O_{a}(r))(cr^{2})\] for some absolute constant \(c\). We know that \[\left\|\log\left((gs_{2})^{-1}g\right)\right\|=\left\|\log s_{2}\right\|\leq 2ar\] and so by the definition of \(v(\cdot;\cdot)\) we have \[v(g;2ar) \geq(2ar)^{-2}\mathbb{E}[\operatorname{VAR}_{gs_{2}}[g|gs_{2}]]\] \[\geq c^{\prime}a^{-2}(k-O(e^{-\frac{a^{2}}{4}})-O_{a}(r))\] for some absolute constant \(c^{\prime}\). To finish the subsection we just need to prove Lemma 5.7. Proof of Lemma 5.7.: Recall that \(\eta_{a,r}\) has density function \(f:\mathbb{R}^{3}\to\mathbb{R}\) given by \[f(x)=\begin{cases}Ce^{-\frac{\left\|x\right\|^{2}}{2r^{2}}}&\text{ if }\left\|x\right\|\leq ar\\ 0&\text{ otherwise}\end{cases}\] where \(C\) is a normalizing constant chosen to ensure that \(f\) integrates to \(1\). First we will deal with the case where \(r=1\). Note that \[\int_{x\in\mathbb{R}^{3}:\left\|x\right\|\leq a}e^{-\frac{x^{2}}{2}}\,dx\leq\int_ {\mathbb{R}^{3}}e^{-\frac{x^{2}}{2}}\,dx=(2\pi)^{\frac{3}{2}}\] and \[\int_{x\in\mathbb{R}^{3}:\left\|x\right\|\geq a}e^{-\frac{x^{2}}{2 }}\,dx =\int_{u=a}^{\infty}4\pi u^{2}e^{-\frac{u^{2}}{2}}\,du\] \[\leq O\left(\int_{u=a}^{\infty}4\pi a^{2}e^{-\frac{au}{3}}\,du\right)\] \[\leq O\left(e^{-\frac{a^{2}}{4}}\right).\] This means \[\int_{x\in\mathbb{R}^{3}:\left\|x\right\|\leq a}e^{-\frac{x^{2}}{2}}\,dx=(2\pi )^{\frac{3}{2}}-\int_{x\in\mathbb{R}^{3}:\left\|x\right\|\geq a}e^{-\frac{x^ {2}}{2}}\,dx\geq(2\pi)^{\frac{3}{2}}-O\left(e^{-\frac{a^{2}}{4}}\right).\] Therefore \[C=(2\pi)^{-3/2}+O\left(e^{-\frac{a^{2}}{4}}\right).\] We now note that \[H(\eta_{1,a}) =\int_{\left\|x\right\|\leq a}-Ce^{-\left\|x\right\|^{2}/2}\log \left(Ce^{-\left\|x\right\|^{2}/2}\right)\,dx\] \[=\int_{\left\|x\right\|\leq a}C\left(\frac{\left\|x\right\|^{2}}{ 2}-\log C\right)e^{-\left\|x\right\|^{2}/2}\,dx.\] We have \[\int_{x\in\mathbb{R}^{3}}C\left(\frac{\left\|x\right\|^{2}}{2}- \log C\right)e^{-\left\|x\right\|^{2}/2}\,dx =(2\pi)^{3/2}\,C\left(\frac{3}{2}-\log C\right)\] \[=\left(1+O\left(e^{-\frac{a^{2}}{4}}\right)\right)\left(\frac{3} {2}\log e+\frac{3}{2}\log 2\pi+O\left(e^{-\frac{a^{2}}{4}}\right)\right)\] \[=\frac{3}{2}\log 2\pi e+O\left(e^{-\frac{a^{2}}{4}}\right).\] We also have \[\int_{x\in\mathbb{R}^{3}:\left\|x\right\|\geq a}C\left(\frac{ \left\|x\right\|^{2}}{2}-\log C\right)e^{-\left\|x\right\|^{2}/2}\,dx\] \[\qquad=\int_{u=a}^{\infty}4\pi u^{2}C\left(\frac{u^{2}}{2}-\log C \right)e^{-u^{2}/2}\,du\] \[\qquad\leq O\left(\int_{u=a}^{\infty}a^{4}e^{-au/3}\,du\right)\] \[\qquad\leq O\left(e^{-a^{2}/4}\right).\] This gives \[H(\eta_{1,a})\geq\frac{3}{2}\log 2\pi e-O(e^{-a^{2}/4}).\] From this we may immediately deduce that \[H(\eta_{r,a})\geq\frac{3}{2}\log 2\pi er^{2}-O(e^{-a^{2}/4})\] as required. The fact that \(H(\eta_{r,a})\leq\frac{3}{2}\log 2\pi er^{2}\) follows immediately from Lemmas 4.19 and 5.6. ### Entropy gap We now prove Proposition 5.4. This Proposition bounds the difference in entropy of \(q_{\tau}\) smoothed at two different scales. 
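The stopping time \(\tau_{t,b}\) concentrates around \(\log t/\chi\); this is the content of Lemma 5.11 below, and it can also be seen in a quick simulation. The following sketch is purely illustrative: the two generating matrices and the uniform measure on them are an arbitrary made-up example of a measure of the kind considered here, not anything taken from this paper.

```python
import numpy as np

# Illustrative simulation: tau = min{ n : ||q_n b|| >= t ||b|| } versus (log t)/chi,
# for the random walk q_n = gamma_1 ... gamma_n with gamma_i uniform on two matrices.
rng = np.random.default_rng(2)
gens = [np.array([[2.0, 1.0], [1.0, 1.0]]), np.array([[1.0, 0.0], [1.0, 1.0]])]
b = np.array([1.0, 0.0])                   # a unit vector, so ||b|| = 1
t = 1e8

# Estimate chi = lim (1/n) log ||q_n||.  Renormalizing at every step keeps the
# product bounded, while the accumulated log-factors give log ||q_n|| exactly.
Q, log_norm, N = np.eye(2), 0.0, 20_000
for _ in range(N):
    Q = Q @ gens[rng.integers(2)]
    s = np.linalg.norm(Q, 2)
    Q /= s
    log_norm += np.log(s)
chi = log_norm / N

def tau(threshold):
    q, n = np.eye(2), 0
    while np.linalg.norm(q @ b) < threshold:
        q = q @ gens[rng.integers(2)]
        n += 1
    return n

samples = [tau(t) for _ in range(500)]
print(np.mean(samples), np.log(t) / chi)   # the two numbers are close
```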
Before proving this we will need the following estimate. **Lemma 5.11**.: _Let \(\mu\) be a probability measure on \(PSL_{2}(\mathbb{R})\). Suppose that \(\mu\) is strongly irreducible and that everything in its support has operator norm at most \(R\) for some \(R>1\). Suppose that the support of \(\mu\) is not contained in any compact subgroup of \(PSL_{2}(\mathbb{R})\). Let \(\gamma_{1},\gamma_{2},\dots\) be i.i.d. samples from \(\mu\) and let \(q_{n}:=\gamma_{1}\gamma_{2}\dots\gamma_{n}\). Let \(\varepsilon>0\). Then there is some \(\alpha>0\) such that the following is true. Let \(b\in P_{1}(\mathbb{R})\) and let \(t>0\) be sufficiently large depending on \(\mu\), \(\varepsilon\) and \(b\). Let_ \[\tau_{t,b}:=\min\{n:\left\|\gamma_{1}\gamma_{2}\dots\gamma_{n}b\right\|\geq t\left\|b\right\|\}.\] _Then_ \[\mathbb{P}\left[\left|\tau_{t,b}-\frac{\log t}{\chi}\right|>\varepsilon\log t\right]<t^{-\alpha}.\] We will prove this later in this section. We also need the following results about entropy. **Lemma 5.12** (Entropy is concave).: _Let \(\lambda_{1},\lambda_{2},\dots,\lambda_{n}\) be probability measures either all on \(\mathbb{R}^{d}\) or all on \(PSL_{2}(\mathbb{R})\) which are either all absolutely continuous or all discrete. Suppose that all of the probability measures have finite entropy. Let \(\mathbf{p}=(p_{1},p_{2},\dots,p_{n})\) be a probability vector. Then_ \[H(\sum_{i=1}^{n}p_{i}\lambda_{i})\geq\sum_{i=1}^{n}p_{i}H(\lambda_{i}).\] Proof.: The \(\mathbb{R}^{d}\) case is proven in [22, Lemma 4.6]. The same proof works for measures on \(PSL_{2}(\mathbb{R})\). **Lemma 5.13** (Entropy is almost convex).: _Let \(\lambda_{1},\lambda_{2},\dots,\lambda_{n}\) be probability measures either all on \(\mathbb{R}^{d}\) or all on \(PSL_{2}(\mathbb{R})\) which are either all absolutely continuous or all discrete. Suppose that all of the probability measures have finite entropy. Let \(\mathbf{p}=(p_{1},p_{2},\dots,p_{n})\) be a probability vector. Then_ \[H(\sum_{i=1}^{n}p_{i}\lambda_{i})\leq\sum_{i=1}^{n}p_{i}H(\lambda_{i})+H(\mathbf{p}).\] Proof.: The \(\mathbb{R}^{d}\) case is proven in [22, Lemma 4.7]. The same proof works for measures on \(PSL_{2}(\mathbb{R})\). We also use the following convention. **Definition 5.14**.: Suppose that \(\lambda\) is a (not necessarily probability) measure defined on a space for which we have some concept of entropy and which is either absolutely continuous or discrete. Then we define \[H(\lambda)=\left\|\lambda\right\|_{1}H(\lambda/\left\|\lambda\right\|_{1}).\] Note that with this definition if \(\lambda_{1}\) and \(\lambda_{2}\) are measures then by Lemma 5.12 we have \(H(\lambda_{1}+\lambda_{2})\geq H(\lambda_{1})+H(\lambda_{2})\). We also need the following lemmas. **Lemma 5.15**.: _Let \(d\) be the distance function of a left invariant metric and let \(r>0\). Suppose that \(g\) is a discrete random variable taking values in \(PSL_{2}(\mathbb{R})\) and that there are \(x_{1},x_{2},\ldots,x_{n}\in PSL_{2}(\mathbb{R})\) and a probability vector \(\mathbf{p}=(p_{1},p_{2},\ldots,p_{n})\) such that_ \[\mathbb{P}\left[g=x_{i}\right]=p_{i}.\] _Suppose further that for every \(i\neq j\) we have \(d(x_{i},x_{j})>2r\). Let \(h\) be an absolutely continuous random variable taking values in \(PSL_{2}(\mathbb{R})\). Suppose that \(d(\mathrm{Id},h)\leq r\) almost surely. Suppose further that \(g\) and \(h\) both have finite entropy. 
Then_ \[H(gh)=H(g)+H(h)\] Proof.: In [22, Lemma 4.8] this result is proven for random taking values in \(\mathbb{R}^{d}\). The same proof works for random variables taking values in \(PSL_{2}(\mathbb{R})\). **Lemma 5.16**.: _Let \(X\) and \(Y\) be discrete random variables defined on the same probability space each having finitely many possible values. Suppose that \(K\) is an integer such that for each \(y\) in the image of \(Y\) there are at most \(K\) elements \(x\) in the image of \(X\) such that_ \[\mathbb{P}\left[X=x\cap Y=y\right]>0.\] _Then_ \[H(X|Y)\leq\log K.\] Proof.: Note that \((X|Y)\) is almost surely supported on at most \(K\) points. This means that \[H((X|Y))\leq\log K\] almost surely. The result now follows by Lemma 4.23. **Lemma 5.17**.: _Given \(u>0\) let \(K_{u}\) denote the set_ \[K_{u}:=\{g\in PSL_{2}(\mathbb{R}):\|g\|\leq u\}.\] _Then_ \[\tilde{m}(K_{u})\leq O(u^{2}).\] _Here \(\tilde{m}\) is the Haar measure on \(PSL_{2}(\mathbb{R})\) defined in 4.17._ The proof of Lemma 5.17 is a simple computation involving the Haar measure which we will carry out later in this section. We now have everything we need to prove Proposition 5.4. Proof of Proposition 5.4.: First we will deal with (24). Fix some \(\varepsilon>0\) which is sufficiently small depending on \(M\) and \(\mu\). Let \(m=\left\lfloor\frac{\log t}{\chi}\right\rfloor\) and define \(\tilde{\tau}\) by \[\tilde{\tau}=\begin{cases}\lceil(1+\varepsilon)m\rceil&\text{if }\tau>\lceil(1+ \varepsilon)m\rceil\\ \lfloor(1-\varepsilon)m\rfloor&\text{if }\tau<\lfloor(1-\varepsilon)m\rfloor\\ \tau&\text{otherwise.}\end{cases}\] Given some random variable \(X\) let \(\mathcal{L}(X)\) denote its law. If we are also given some event \(A\) we will let \(\mathcal{L}(X)|_{A}\) denote the (not necessarily probability) measure given by the push forward of the restriction of \(\mathbb{P}\) to \(A\) under the random variable \(X\). Note that \(\left\lVert\mathcal{L}(X)|_{A}\right\rVert_{1}=\mathbb{P}[A]\). We have the following inequality. \[H(q_{\tau}s_{r_{1},a}) =H(\mathcal{L}(q_{\tau})*\mathcal{L}(s_{r_{1},a})) \tag{27}\] \[\geq H(\mathcal{L}(q_{\tau})|_{\tau=\tilde{\tau}}*\mathcal{L}(s_ {r_{1},a}))+H(\mathcal{L}(q_{\tau})|_{\tau\neq\tilde{\tau}}*\mathcal{L}(s_{r_{ 1},a}))\] (28) \[\geq H(\mathcal{L}(q_{\tau})|_{\tau=\tilde{\tau}}*\mathcal{L}(s_ {r_{1},a}))+\mathbb{P}[\tau\neq\tilde{\tau}]H(\mathcal{L}(s_{r_{1},a}))\] Here (27) follows from Lemma 5.12 and (28) follows from Lemmas 4.24 and 5.12. First we will bound \(H(\mathcal{L}(q_{\tau})|_{\tau=\tilde{\tau}})\). To do this we introduce the random variable \(\tilde{X}\) which is defined by \[\tilde{X}=\left(q_{\lfloor(1-\varepsilon)m\rfloor},\gamma_{\lfloor(1- \varepsilon)m\rfloor+1},\gamma_{\lfloor(1-\varepsilon)m\rfloor+2},\ldots, \gamma_{\lceil(1+\varepsilon)m\rceil}\right).\] We know that \(q_{\tilde{\tau}}\) is completely determined by \(\tilde{X}\) so \[H(\tilde{X}|q_{\tilde{\tau}})=H(\tilde{X})-H(q_{\tilde{\tau}}). \tag{29}\] Let \(K\) be the number of points in the support of \(\mu\). Clearly if \(\gamma_{\lfloor(1-\varepsilon)m\rfloor+1},\gamma_{\lfloor(1-\varepsilon)m \rfloor+2},\ldots,\gamma_{\lceil(1+\varepsilon)m\rceil}\) and \(\tilde{\tau}\) are fixed then for any possible value of \(q_{\tilde{\tau}}\) there is at most one choice of \(q_{\lfloor(1-\varepsilon)m\rfloor}\) which would lead to this value of \(q_{\tilde{\tau}}\). 
Therefore for each \(y\) in the image of \(q_{\tilde{\tau}}\) there are at most \[(2\varepsilon m+2)K^{(2\varepsilon m+2)}\] elements \(x\) in the image of \(\tilde{X}\) such that \(\mathbb{P}[\tilde{X}=x\cap q_{\tilde{\tau}}=y]>0\). By Lemma 5.16 this gives \[H(\tilde{X}|q_{\tilde{\tau}})\leq\log\left((2\varepsilon m+2)K^{(2\varepsilon m +2)}\right)\leq\frac{2\varepsilon\log K}{\chi}\log t+o_{\mu}(\log t). \tag{30}\] We also know that \[H(\tilde{X})\geq H(q_{m})\geq h_{RW}\cdot m\geq\frac{h_{RW}}{\chi}\log t-o_{ \mu}(\log t). \tag{31}\] Combining equations (29), (30) and (31) gives \[H(q_{\tilde{\tau}})\geq\frac{h_{RW}-2\varepsilon\log K}{\chi}\log t-o_{\mu}(\log t).\] We note by Lemma 5.13 that \[H(\mathcal{L}(q_{\tilde{\tau}}))\leq H(\mathcal{L}(q_{\tilde{\tau}})|_{\tau= \tilde{\tau}})+H(\mathcal{L}(q_{\tilde{\tau}})|_{\tau\neq\tilde{\tau}})+H( \mathbb{I}_{\tau=\tilde{\tau}}).\] We wish to use this to bound \(H(\mathcal{L}(q_{\tilde{\tau}})|_{\tau=\tilde{\tau}})\) from below. First note that trivially \(H(\mathbb{I}_{\tau=\tilde{\tau}})\leq\log 2\leq o(\log t)\). Note that by Lemma 5.11 we have that providing \(t\) is sufficiently large depending on \(\varepsilon\) and \(\mu\) \[\mathbb{P}\left[\tau\neq\tilde{\tau}\right]\leq\alpha^{m}\] for some \(\alpha\in(0,1)\) which depends only on \(\varepsilon\) and \(\mu\). We also know that conditional on \(\tau\neq\tilde{\tau}\) there are at most \(K^{\lceil(1+\varepsilon)m\rceil}+K^{\lfloor(1-\varepsilon)m\rfloor}\) possible values for \(q_{\tilde{\tau}}\). This means that \[H(\mathcal{L}(q_{\tilde{\tau}})|_{\tau\neq\tilde{\tau}})\leq\alpha^{m}\log \left(K^{\lceil(1+\varepsilon)m\rceil}+K^{\lfloor(1-\varepsilon)m\rfloor} \right)\leq o_{\mu,\varepsilon}(\log t).\] Therefore \[H(\mathcal{L}(q_{\tilde{\tau}})|_{\tau=\tilde{\tau}})\geq\frac{h_{RW}-2 \varepsilon\log K}{\chi}\log t-o_{\mu,\varepsilon}(\log t).\] Recall that \(d\) is the distance function of some left invariant Riemann metric and that by the definition of \(M_{\mu}\) given any \(N\in\mathbb{Z}_{>0}\) and any two distinct \(x,y\in PSL_{2}(\mathbb{R})\) such that for each of them there is some \(n\leq N\) such that they are in the support of \(\mu^{*n}\) we have \[d(x,y)\geq M_{\mu}^{-N+o_{\mu}(N)}\] In particular this means that if \(x\) and \(y\) are both in the image of \(q_{\tilde{\tau}}\) then \[d(x,y)\geq M_{\mu}^{-m(1+\varepsilon)+o_{\mu}(N)}.\] Note also that trivially for all sufficiently small \(r\) we have \(d(\exp(u),\mathrm{Id})\leq O(r)\) whenever \(u\in\mathfrak{psl}_{z}(\mathbb{R})\) satisfies \(\|u\|\leq r\). In particular since \(r_{1}<M^{-m}\) this means that providing \(t\) is sufficiently large depending on \(M\) and \(a\) we have \[d(s_{r_{1},a},\mathrm{Id})\leq O(aM^{-m})\] almost surely. 
Therefore, providing \(\varepsilon\) is small enough that \(M_{\mu}^{(1+\varepsilon)}<M\) and \(t\) is sufficiently large depending on \(\mu\), \(a\), \(\varepsilon\) and \(M\) we have \[d(s_{r_{1},a},\mathrm{Id})<\frac{1}{2}\min_{x,y\in\mathrm{Im}\,q_{\tilde{\tau}},x\neq y}d(x,y).\] In particular by Lemma 5.15 and Definition 5.14 we have \[H(\mathcal{L}(q_{\tau})|_{\tau=\tilde{\tau}}*\mathcal{L}(s_{r_{1},a}))=H(\mathcal{L}(q_{\tau})|_{\tau=\tilde{\tau}})+\mathbb{P}[\tau=\tilde{\tau}]H(\mathcal{L}(s_{r_{1},a})).\] Putting this into the estimate (28) for \(H(q_{\tau}s_{r_{1},a})\) we get \[H(q_{\tau}s_{r_{1},a})\geq\frac{h_{RW}-2\varepsilon\log K}{\chi}\log t+H(s_{r_{1},a})-o_{\mu,M,a,\varepsilon}(\log t).\] Since \(\varepsilon\) can be made arbitrarily small this becomes \[H(q_{\tau}s_{r_{1},a})\geq\frac{h_{RW}}{\chi}\log t+H(s_{r_{1},a})-o_{\mu,M,a}(\log t)\] as required. Now to prove (25). Note that \(\|q_{\tau}s_{r_{2},a}\|\leq Rtar_{2}\). Therefore by Lemma 5.17 the image of \(q_{\tau}s_{r_{2},a}\) is contained in a set of \(\tilde{m}\)-measure at most \(O_{\mu,a}(t^{2})\) where \(\tilde{m}\) is our normalized Haar measure. Trivially by Jensen's inequality this gives \[H(q_{\tau}s_{r_{2},a})\leq 2\log t+o_{\mu,M,a}(\log t)\] as required. Subtracting (25) from (24) gives \[H(q_{\tau}s_{r_{1},a})-H(q_{\tau}s_{r_{2},a})\geq\left(\frac{h_{RW}}{\chi}-2\right)\log t+H(s_{r_{1},a})-o_{M,\mu,a}(\log t).\] Noting that \(|H(s_{r_{2},a})-3\log r_{2}|\leq O_{a}(1)\leq o_{M,\mu,a}(\log t)\) gives (26) as required. We now prove Lemma 5.11. We need the following result. **Theorem 5.18** (Theorem V.6.1 in [4]).: _Let \(\mu\) be a probability measure on \(PSL_{2}(\mathbb{R})\). Suppose that \(\mu\) is strongly irreducible. Let \(\chi\) be the Lyapunov exponent of \(\mu\). Suppose that \(\chi>0\) and that there exists some \(u>0\) such that_ \[\int e^{u\log\|g\|}\,\mu(dg)<\infty. \tag{32}\] _Let \(\gamma_{1},\gamma_{2},\dots\) be i.i.d. samples from \(\mu\) and let \(q_{n}=\gamma_{1}\gamma_{2}\dots\gamma_{n}\). Let \(\varepsilon>0\). Then there exists some \(\alpha\in(0,1)\) such that for all unit vectors \(w\in\mathbb{R}^{2}\) and all sufficiently large \(n\) we have_ \[\mathbb{P}\left[\left|\log\left\|q_{n}^{T}w\right\|-n\chi\right|>\varepsilon n\right]<\alpha^{n}.\] Proof.: This is [4, Theorem V.6.1]. Note that in [4] the author uses a definition of the Lyapunov exponent which is the exponential of the definition used in this paper. Lemma 5.11 follows from this as follows. Proof of Lemma 5.11.: First note that (32) is clearly satisfied as \(\mu\) is compactly supported. Note that in order to have \[\left|\tau-\frac{\log t}{\chi}\right|>\varepsilon\log t\] there must be some \(n\geq\frac{\log t}{\log R}\) such that \[\left|\log\left\|q_{n}\right\|-n\chi\right|>\tilde{\varepsilon}n\] for some \(\tilde{\varepsilon}>0\) depending on \(\varepsilon\). We are now done by Theorem 5.18 and the sum of a geometric series. Finally we prove Lemma 5.17. 
Proof of Lemma 5.17.: First let \[M_{x,y,\theta}:=\begin{pmatrix}1&x\\ 0&1\end{pmatrix}\begin{pmatrix}y^{\frac{1}{2}}&0\\ 0&y^{-\frac{1}{2}}\end{pmatrix}\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}.\] Note that we have \[M_{x,y,\theta}\begin{pmatrix}\cos\theta\\ -\sin\theta\end{pmatrix}=\begin{pmatrix}y^{\frac{1}{2}}\\ 0\end{pmatrix}\] and \[M_{x,y,\theta}\begin{pmatrix}\sin\theta\\ \cos\theta\end{pmatrix}=\begin{pmatrix}xy^{-\frac{1}{2}}\\ y^{-\frac{1}{2}}\end{pmatrix}\] meaning that \[\|M_{x,y,\theta}\|\geq\max\{y^{\frac{1}{2}},|x|y^{-\frac{1}{2}},y^{-\frac{1}{2}}\}.\] This means that we have \[\tilde{m}(K_{t})\leq O\left(\int_{t^{-2}}^{t^{2}}\int_{-ty^{\frac{1}{2}}}^{ty^{\frac{1}{2}}}\int_{0}^{2\pi}\frac{1}{y^{2}}\,d\theta\,dx\,dy\right)=O\left(t\int_{t^{-2}}^{t^{2}}y^{-\frac{3}{2}}\,dy\right)\leq O\left(t\int_{t^{-2}}^{\infty}y^{-\frac{3}{2}}\,dy\right)=O(t^{2})\] as required.

### Variance of a disintegration of a stopped random walk

In this subsection we will prove Proposition 5.5 and then use this to prove Proposition 1.24.

Proof of Proposition 5.5.: Let \(\tau=\tau_{t,b}\) and let \(a\geq 1\) be a number we will choose later. Let \(r_{1}=a^{-1}M^{-\frac{\log t}{\chi}}\) and let \[N=\left\lfloor\left(1-\frac{h_{RW}}{10\log M}\right)\frac{\log M\log t}{\chi\log 2}\right\rfloor-1.\] Note that \[\frac{1}{4}t^{\frac{\log M}{\chi}}/t^{\frac{h_{RW}}{10\chi}}\leq 2^{N}\leq\frac{1}{2}t^{\frac{\log M}{\chi}}/t^{\frac{h_{RW}}{10\chi}}.\] Given \(u\in[1,2)\) and \(i\in[N]\) let \[k_{i}(u):=H(q_{\tau}m_{2^{i-1}ur_{1},a})-H(m_{2^{i-1}ur_{1},a})-H(q_{\tau}m_{2^{i}ur_{1},a})+H(m_{2^{i}ur_{1},a}).\] Note that by Proposition 5.3 there is some absolute constant \(c>0\) such that we have \[v(q_{\tau};a2^{i}ur_{1})\geq ca^{-2}(k_{i}(u)-O(e^{-\frac{a^{2}}{4}})-O_{a}(2^{i}r_{1})). \tag{33}\] This means that \[\sum_{i=1}^{N}v(q_{\tau};a2^{i}ur_{1})\geq ca^{-2}\sum_{i=1}^{N}k_{i}(u)-O(Ne^{-\frac{a^{2}}{4}}a^{-2})-O_{a}(N2^{N}r_{1}).\] Note that for \(u\in[1,2)\) we have \[a2^{N}ur_{1}\leq t^{-\frac{h_{RW}}{10\chi}}\] and \[a2^{1}ur_{1}\geq t^{-\frac{\log M}{\chi}}.\] This means that \[\int_{t^{-\frac{\log M}{\chi}}}^{t^{-\frac{h_{RW}}{10\chi}}}\frac{1}{u}v(q_{\tau};u)\,du\geq ca^{-2}\int_{1}^{2}\frac{1}{u}\sum_{i=1}^{N}k_{i}(u)\,du-O(Ne^{-\frac{a^{2}}{4}}a^{-2})-O_{a}(N2^{N}r_{1}). \tag{34}\] Clearly for any fixed \(u\in[1,2)\) we have \[\sum_{i=1}^{N}k_{i}(u)=H(q_{\tau}m_{ur_{1},a})-H(m_{ur_{1},a})-H(q_{\tau}m_{2^{N}ur_{1},a})+H(m_{2^{N}ur_{1},a}).\] This means that by Proposition 5.4 we have \[\sum_{i=1}^{N}k_{i}(u)\geq\left(\frac{h_{RW}}{\chi}-2\right)\log t+3\log 2^{N}ur_{1}+o_{M,\mu,a,w}(\log t)\geq\left(\frac{h_{RW}}{\chi}-2-\frac{3h_{RW}}{10\chi}\right)\log t+o_{M,\mu,a,w}(\log t). \tag{35}\] Let \(C\) be chosen such that the error term \(O(Ne^{-\frac{a^{2}}{4}}a^{-2})\) in (34) can be bounded above by \(CNe^{-\frac{a^{2}}{4}}a^{-2}\). Note that this is at most \(C\frac{\log M}{\chi\log 2}e^{-\frac{a^{2}}{4}}a^{-2}\log t\). Let \(c\) be as in (33). We take our value of \(a\) to be \[a=2\sqrt{\log\left(\frac{100C}{c\log 2}\frac{\log M}{h_{RW}}\right)}.\] Note that \(a\) depends only on \(\mu\) and \(M\). This means \[CNe^{-\frac{a^{2}}{4}}a^{-2}\leq a^{-2}\frac{h_{RW}}{100\chi}c\log t.\] Note also that \(N2^{N}r_{1}\leq o_{\mu,M}(\log t)\).
Therefore putting (35) into (34) we get \[\int_{t^{-\frac{\log M}{\chi}}}^{t^{-\frac{h_{RW}}{10\chi}}}\frac{1}{u}v(q_{\tau};u)\,du\geq ca^{-2}\left(\frac{h_{RW}}{\chi}-2-\frac{3h_{RW}}{10\chi}-\frac{h_{RW}}{100\chi}\right)\log t+o_{M,\mu,w}(\log t).\] In particular providing \(\frac{h_{RW}}{\chi}>10\) we have \[\int_{t^{-\frac{\log M}{\chi}}}^{t^{-\frac{h_{RW}}{10\chi}}}\frac{1}{u}v(q_{\tau};u)\,du\gtrsim a^{-2}\left(\frac{h_{RW}}{\chi}\right)\log t+o_{M,\mu,w}(\log t).\] Noting that \(a^{2}\leq O(\max\left\{1,\log\frac{\log M}{h_{RW}}\right\})\) we have that for all \(t\) sufficiently large (depending on \(\mu\), \(M\), and \(w\)) \[\int_{t^{-\frac{\log M}{\chi}}}^{t^{-\frac{h_{RW}}{10\chi}}}\frac{1}{u}v(q_{\tau};u)\,du\gtrsim\left(\frac{h_{RW}}{\chi}\right)\left(\max\left\{1,\log\frac{\log M}{h_{RW}}\right\}\right)^{-1}\log t\] as required.

We wish to prove Proposition 1.24. First we need the following corollary of Proposition 5.5.

**Corollary 5.19**.: _Suppose that \(\mu\) is a strongly irreducible measure on \(PSL_{2}(\mathbb{R})\) with finite support and that the support of \(\mu\) is not contained in any compact subgroup of \(PSL_{2}(\mathbb{R})\). Suppose further that \(M_{\mu}<\infty\) and let \(M>M_{\mu}\). Suppose that \(M\) is chosen large enough that \(h_{RW}\leq\log M\). Then for all sufficiently large (depending on \(\mu\) and \(M\)) \(t\) we have_ \[\int_{P_{1}(\mathbb{R})}\int_{t^{-\frac{\log M}{\chi}}}^{t^{-\frac{h_{RW}}{10\chi}}}\frac{1}{u}v(q_{\tau_{t,b}};u)\,du\,\hat{\nu}(db)\gtrsim\left(\frac{h_{RW}}{\chi}\right)\left(\max\left\{1,\log\frac{\log M}{h_{RW}}\right\}\right)^{-1}\log t.\] Proof.: Given \(\mu\) and \(M\) let \[S(t):=\{b\in P_{1}(\mathbb{R}):t\text{ is large enough to satisfy Proposition 5.5 for this }b,\ \mu\text{ and }M\}.\] By Proposition 5.5 we know that \(S(t)\nearrow P_{1}(\mathbb{R})\). Therefore \(\hat{\nu}(S(t))\nearrow 1\). In particular providing \(t\) is sufficiently large (depending on \(\mu\) and \(M\)) we have \(\hat{\nu}(S(t))\geq\frac{1}{2}\). This, along with the fact that \(v(\cdot;\cdot)\) is always non-negative, is enough to prove Corollary 5.19.

This is enough to prove Proposition 1.24.

Proof of Proposition 1.24.: Recall that \(\hat{m}=\left\lfloor\frac{\log M}{100\chi}\right\rfloor\). Let \[A:=t^{\frac{\log M}{2\hat{m}\chi}-\frac{h_{RW}}{20\hat{m}\chi}}.\] Define \(a_{1},a_{2},\ldots,a_{2\hat{m}+1}\) by \[a_{i}:=t^{-\frac{\log M}{\chi}}A^{i-1}.\] Note that this means \(a_{1}=t^{-\frac{\log M}{\chi}}\) and \(a_{2\hat{m}+1}=t^{-\frac{h_{RW}}{10\chi}}\). Furthermore, providing \(h_{RW}/\chi\) is sufficiently large we have \[t^{3}\leq A\leq t^{50}.\] In particular \(a_{i+1}\geq t^{3}a_{i}\). Let \(U,V\) be defined by \[U:=\bigcup_{i=1}^{\hat{m}}[a_{2i-1},a_{2i})\] and \[V:=\bigcup_{i=1}^{\hat{m}}[a_{2i},a_{2i+1}).\] Note that \(U\) and \(V\) partition \(\left[t^{-\frac{\log M}{\chi}},t^{-\frac{h_{RW}}{10\chi}}\right]\). Let \(c>0\) be the absolute constant in Corollary 5.19. By Corollary 5.19 providing \(t\) is sufficiently large depending on \(\mu\) and \(M\) we have \[\int_{U\cup V}\int_{P_{1}(\mathbb{R})}\frac{1}{u}v(q_{\tau_{t,b}};u)\,\hat{\nu}(db)\,du\geq c\left(\frac{h_{RW}}{\chi}\right)\left(\max\left\{1,\log\frac{\log M}{h_{RW}}\right\}\right)^{-1}\log t.\] In particular either \[\int_{U}\int_{P_{1}(\mathbb{R})}\frac{1}{u}v(q_{\tau_{t,b}};u)\,\hat{\nu}(db)\,du\geq\frac{1}{2}c\left(\frac{h_{RW}}{\chi}\right)\left(\max\left\{1,\log\frac{\log M}{h_{RW}}\right\}\right)^{-1}\log t.
\tag{36}\] or \[\int_{V}\int_{P_{1}(\mathbb{R})}\frac{1}{u}v(q_{\tau_{t,b}};u)\,\hat{\nu}(db) \,du\geq\frac{1}{2}c\left(\frac{h_{RW}}{\chi}\right)\left(\max\left\{1,\log \frac{\log M}{h_{RW}}\right\}\right)^{-1}\log t.\] Without loss of generality assume that (36) holds. For \(i=1,2,\ldots,\hat{m}\) let \(\tilde{r}_{i}\in(a_{2i-1},a_{2i})\) be chosen such that \[\int_{P_{1}(\mathbb{R})}v(q_{\tau_{t,b}};\tilde{r}_{i})\,\hat{\nu}(db)\geq \frac{1}{2}\sup_{u\in(a_{2i-1},a_{2i})}\int_{P_{1}(\mathbb{R})}v(q_{\tau_{t,b }};u)\,\hat{\nu}(db).\] In particular this means that \[\int_{P_{1}(\mathbb{R})}v(q_{\tau_{t,b}};\tilde{r}_{i})\,\hat{\nu}(db)\geq \frac{1}{2\log A}\int_{a_{2i-1}}^{a_{2i}}\int_{P_{1}(\mathbb{R})}\frac{1}{u}v (q_{\tau_{t,b}};u)\,\hat{\nu}(db)\,du.\] Summing over \(i\) gives \[\sum_{i=1}^{\hat{m}}\int_{P_{1}(\mathbb{R})}v(q_{\tau_{t,b}};\tilde {r}_{i})\,\hat{\nu}(db) \geq\frac{1}{2\log A}\int_{U}\int_{P_{1}(\mathbb{R})}\frac{1}{u}v (q_{\tau_{t,b}};u)\,\hat{\nu}(db)\,du\] \[\geq\frac{1}{4\log A}c\left(\frac{h_{RW}}{\chi}\right)\left(\max \left\{1,\log\frac{\log M}{h_{RW}}\right\}\right)^{-1}\log t.\] Noting that \(\log A\leq O(\log t)\) we get that providing \(t\) is sufficiently large depending on \(\mu\) and \(M\) that \[\sum_{i=1}^{\hat{m}}\int_{P_{1}(\mathbb{R})}v(q_{\tau_{t,b}};\tilde{r}_{i})\, \hat{\nu}(db)\geq c^{\prime}\left(\frac{h_{RW}}{\chi}\right)\left(\max\left\{1, \log\frac{\log M}{h_{RW}}\right\}\right)^{-1}\] for some absolute constant \(c^{\prime}>0\). Finally note that \(A\geq t^{3}\) means that \(\tilde{r}_{i+1}\geq t^{3}\tilde{r}_{i}\). ## 6. More results on regular conditional distributions Before proving Theorem 1.8 we first need a few more results on regular conditional distributions. First we need the following definition. **Definition 6.1**.: Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space and let \(\mathcal{A}\subset\mathcal{F}\) be a \(\sigma\)-algebra. We say that two \(\sigma\)- algebras \(\mathcal{G}_{1},\mathcal{G}_{2}\subset\mathcal{F}\) are conditionally independent given \(\mathcal{A}\) if for any \(U\in\mathcal{G}_{1}\) and \(V\in\mathcal{G}_{2}\) we have \[\mathbb{P}[U\cap V|\mathcal{A}]=\mathbb{P}[U|\mathcal{A}]\mathbb{P}[V| \mathcal{A}]\] almost surely. Similarly we say that two random variables or a random variable and a \(\sigma\)-algebra are conditionally independent given \(\mathcal{A}\) if the \(\sigma\)-algebras generated by them are conditionally independent given \(\mathcal{A}\). Now we have these three lemmas. **Lemma 6.2**.: _Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space and let \(\mathcal{A}\subset\mathcal{F}\) be a \(\sigma\)-algebra. Let \(g\) and \(x\) be random variables on \((\Omega,\mathcal{F},\mathbb{P})\) with \(g\) taking values in \(PSL_{2}(\mathbb{R})\) and with \(x\) taking values in \(X\) where \(X\) is either \(PSL_{2}(\mathbb{R})\) or \(P_{1}(\mathbb{R})\). Suppose that \(g\) and \(x\) are conditionally independent given \(\mathcal{A}\). Then_ \[(gx|\mathcal{A})=(g|\mathcal{A})*(x|\mathcal{A})\] _almost surely._ Proof.: This follows by essentially the same proof as the proof that the law of \(gx\) is the convolution of the laws of \(g\) and of \(x\) and is left to the reader. **Lemma 6.3**.: _Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space and let \(\mathcal{A}\subset\mathcal{F}\) be a \(\sigma\)-algebra. Let \(g\) be a random variable taking values in some measurable space \((X,\xi)\). 
Let \(\mathcal{G}\) be a \(\sigma\)-algebra such that_ \[\mathcal{A}\subset\mathcal{G}\subset\mathcal{F}\] _and \(g\) is independent of \(\mathcal{G}\) conditional on \(\mathcal{A}\). Then_ \[(g|\mathcal{G})=(g|\mathcal{A})\] Proof.: This is immediate from the definitions of the objects involved.

**Lemma 6.4**.: _Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space and let \(\mathcal{A}\subset\mathcal{F}\) be a \(\sigma\)-algebra. Let \(g\) be a random variable taking values in some measurable space \((X,\xi)\). Suppose that \(g\) is \(\mathcal{A}\)-measurable. Then_ \[(g|\mathcal{A})=\delta_{g}\] _almost surely._ Proof.: This is immediate from the definitions of the objects involved.

**Lemma 6.5**.: _Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space and let \(\mathcal{A}\subset\mathcal{F}\) be a \(\sigma\)-algebra. Let \(g\) be a random variable taking values in some measurable space \((X,\xi)\). Let \(\mathcal{G}\) be a \(\sigma\)-algebra such that \(\mathcal{A}\subset\mathcal{G}\subset\mathcal{F}\) and \(g\) is \(\mathcal{G}\)-measurable. Let \(A\in\mathcal{A}\) and construct the \(\sigma\)-algebra \(\hat{\mathcal{A}}\) by_ \[\hat{\mathcal{A}}=\sigma(\mathcal{A},\{G\in\mathcal{G}:G\subset A\}).\] _Then for almost all \(\omega\in\Omega\) we have_ \[(g|\hat{\mathcal{A}})(\omega,\cdot)=\begin{cases}\delta_{g}&\text{if }\omega\in A\\ (g|\mathcal{A})(\omega,\cdot)&\text{otherwise.}\end{cases}\] Proof.: Let \[Q(\omega,\cdot):=\begin{cases}\delta_{g}&\text{if }\omega\in A\\ (g|\mathcal{A})(\omega,\cdot)&\text{otherwise.}\end{cases}\] We will show that \(Q\) satisfies the conditions of being a regular conditional distribution for \(g\) given \(\hat{\mathcal{A}}\). Clearly \(Q\) is a Markov kernel. Now let \(D\in\hat{\mathcal{A}}\) and let \(B\in\xi\). We simply need to show that \[\mathbb{P}[D\cap\{g\in B\}]=\mathbb{E}[\mathbb{I}_{D}Q(\cdot,B)]. \tag{37}\] First suppose that \(D\subset A\). In this case the left hand side of (37) becomes \(\mathbb{E}[\mathbb{I}_{D}\mathbb{I}_{g\in B}]\), which is trivially equal to the right hand side. Now suppose that \(D\subset A^{C}\). This means that \(D\in\mathcal{A}\). In this case by the definition of \((g|\mathcal{A})(\omega,\cdot)\) we know that (37) is satisfied. The general case follows by summing.

## 7. Proof of the main theorem

In this section we will prove Theorem 1.8. Throughout this section we will let \(\mu\) be a strongly irreducible finitely supported probability measure on \(PSL_{2}(\mathbb{R})\) with the operator norm being at most \(R\) on the support of \(\mu\). We will also assume that the support of \(\mu\) is not contained in any compact subgroup of \(PSL_{2}(\mathbb{R})\). Furthermore \(\mu\) will be \(\alpha_{0},t\)-non-degenerate. We also adopt the convention of allowing the constants in \(O\), \(o\), \(\Theta\), \(\lesssim\), \(\gtrsim\), and \(\cong\) to depend on \(\alpha_{0}\), \(t\), and \(R\) without explicitly listing these in subscripts. We first construct a sample from the Furstenberg measure \(\nu\) using Proposition 1.22 and Proposition 1.24 in such a way that we can bound its order \(k\) detail using Lemma 1.18, Lemma 2.8, and Lemma 1.19.

**Proposition 7.1**.: _Let \(\mu\) be a finitely supported strongly irreducible probability measure on \(PSL_{2}(\mathbb{R})\) whose support is not contained in any compact subgroup of \(PSL_{2}(\mathbb{R})\). Suppose that \(M_{\mu}<\infty\) and let \(\chi\) be the Lyapunov exponent.
Let \(R>0\) be chosen such that the operator norm is at most \(R\) on the support of \(\mu\). Let \(\nu\) be the Furstenberg measure generated by \(\mu\). Suppose that \(\alpha_{0}\in(0,1/3)\) and \(t>0\) are such that \(\mu\) is \(\alpha_{0},t\)-non-degenerate._

_Suppose that_ \[\frac{h_{RW}}{\chi}\left(\max\left\{1,\log\frac{\log M_{\mu}}{h_{RW}}\right\}\right)^{-2} \tag{38}\] _is sufficiently large (depending on \(R\), \(t\) and \(\alpha_{0}\)). Suppose that \(C>0\)._

_Then for all sufficiently small (depending on \(\mu\), \(R\), \(C\), \(t\) and \(\alpha_{0}\)) \(\tilde{r}>0\) there exists \(n\in\mathbb{Z}_{>0}\), an increasing sequence of scales \(s_{1},s_{2},\ldots,s_{n}>0\), random variables \(g_{1},g_{2},\ldots,g_{n}\) taking values in \(PSL_{2}(\mathbb{R})\), random variables \(u^{(1)},u^{(2)},\ldots,u^{(n)}\) taking values in \(\mathfrak{psl}_{2}(\mathbb{R})\) and a random variable \(b\) taking values in \(P_{1}(\mathbb{R})\) such that_ \[g_{1}\exp(u^{(1)})g_{2}\exp(u^{(2)})\ldots g_{n}\exp(u^{(n)})b \tag{39}\] _has law \(\nu\) and the following holds._

_There is a \(\sigma\)-algebra \(\mathcal{A}\) on the probability space where the \(g_{i}\), \(u^{(i)}\), and \(b\) are defined, an \(\mathcal{A}\)-measurable event \(A\), and an \(\mathcal{A}\)-measurable random index set \(I\subset[n]\) such that_

_A1. \((g_{1}\exp(u^{(1)})\ldots g_{n}\exp(u^{(n)})b|\mathcal{A})=\delta_{g_{1}}*(\exp(u^{(1)})|\mathcal{A})*\cdots*\delta_{g_{n}}*(\exp(u^{(n)})|\mathcal{A})*\delta_{b}\)._

_A2. We have \(C^{n}s_{n}\leq(\log\tilde{r}^{-1})^{-10}\)._

_A3. \(\mathbb{P}[A]\geq 1-(\log\tilde{r}^{-1})^{-10}\)._

_Furthermore for all \(\omega\in A\) the following holds. For all \(i\in I\), we have_

_A4. \(\left\|g_{1}g_{2}\ldots g_{i}\right\|^{2}\cong s_{i}/\tilde{r}\)._

_A5. \(\left\|u^{(i)}\right\|\leq s_{i}\)._

_A6. \(g_{i+1}g_{i+2}\ldots g_{n}b\in U_{t/4}(u^{(i)}|\mathcal{A})\)._

_For \(i\notin I\), we have \(u^{(i)}=0\) almost surely. If \(\omega\in A\) and we enumerate \(I\) as \(i_{1}<i_{2}<\cdots<i_{\tilde{n}}\) then_

_A7. \(\left\|g_{1}g_{2}\ldots g_{i_{1}}\right\|\geq C\) and for all \(j\in[\tilde{n}-1]\) we have \(\left\|g_{i_{j}}g_{i_{j}+1}\ldots g_{i_{j+1}}\right\|\geq C\)._

_A8. For all \(j\in[\tilde{n}]\) we have_ \[d(b^{-}(g_{i_{j-1}+1}g_{i_{j-1}+2}\ldots g_{i_{j}}),b^{+}(g_{i_{j}+1}g_{i_{j}+2}\ldots g_{i_{j+1}}))>t/8\] _with \(i_{j-1}+1\) replaced by \(1\) in the case \(j=1\) and \(b^{+}(g_{i_{j}+1}g_{i_{j}+2}\ldots g_{i_{j+1}})\) replaced by \(g_{i_{\tilde{n}}+1}\ldots g_{n}b\) in the case \(j=\tilde{n}\)._

_Furthermore for all \(\omega\in A\) we have_

_A9._ \(\sum_{i\in I}\frac{\operatorname{VAR}[u^{(i)}|\mathcal{A}](\omega)}{s_{i}^{2}}\gtrsim\frac{h_{RW}}{\chi}\left(\max\left\{1,\log\frac{\log M_{\mu}}{h_{RW}}\right\}\right)^{-2}\log\log\tilde{r}^{-1}\).

Here \(U_{t/4}\) from Condition A6 is as in Definition 3.5. We now briefly discuss the role of each of the conditions in the proof of Theorem 1.8. We let \(x\) denote the random element of \(P_{1}(\mathbb{R})\) given by (39). We prove Theorem 1.8 by applying Proposition 1.22 in the case \(\omega\in A\) and then using Lemmas 1.18, 1.19, and 2.8 to get an upper bound on the order \(k\) detail of \((x|\mathcal{A})\) for an appropriate choice of \(k\). In the case \(\omega\notin A\) we use the trivial bound \(s_{r}^{(k)}(x|\mathcal{A})\leq 1\). Using the convexity of \(s_{r}^{(k)}(\cdot)\) we bound \(s_{r}^{(k)}(x)\) by taking the expectation of this. After this we complete the proof using Lemmas 1.15 and 1.16.
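Schematically, and suppressing the precise constants, the splitting just described takes the form \[s_{r}^{(k)}(x)\leq\mathbb{E}\left[s_{r}^{(k)}(x|\mathcal{A})\right]\leq\mathbb{E}\left[s_{r}^{(k)}(x|\mathcal{A})\,\mathbb{I}_{A}\right]+\mathbb{P}[A^{C}],\] where the first inequality is the convexity step, the first term on the right is controlled on \(A\) by Proposition 1.22 together with Lemmas 1.18, 2.8, and 1.19, and the second term is controlled by Condition A3. This is only an informal summary of the steps described above.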
We need conditions A1, A4, A5, A7, and A8 in order to be able to apply Proposition 1.22 in the case \(\omega\in A\). We need condition A2 to show that the contribution to the order \(k\) detail introduced by the Wasserstein distance in Proposition 1.22 is small. We need condition A3 to show that the contribution to \(s_{r}^{(k)}(x)\) from the case where \(\omega\notin A\) is small. We need Condition A6 in order to apply Proposition 3.2 which will enable us to control the variance of the \(\zeta_{i}(u^{(i)})\) in Proposition 1.22. Condition A9 is needed to ensure that we can apply Lemma 1.18 enough times. The details of how we deduce Theorem 1.8 from Proposition 7.1 will be given in Section 7.5. To show that our random variable (39) is a sample from \(\nu\) we will require the following Lemma. **Lemma 7.2**.: _Let \(\gamma_{1},\gamma_{2},\dots\) be i.i.d. samples from \(\mu\) and let \(\left(\mathcal{F}_{i}\right)_{i=1}^{\infty}\) be a filtration for \(\gamma_{1},\gamma_{2},\dots\). This means that the \(\mathcal{F}_{i}\) are \(\sigma\)-algebras such that \(\mathcal{F}_{1}\subset\mathcal{F}_{2}\subset\dots\) and \(\gamma_{i}\) is \(\mathcal{F}_{i}\)-measurable. Suppose further that \(\gamma_{i+1}\) is independent from \(\mathcal{F}_{i}\). Let \(T\) be a stopping time for the filtration \(\left(\mathcal{F}_{i}\right)_{i=1}^{\infty}\). Suppose that \(\nu\) is a \(\mu\) invariant probability measure on \(P_{1}(\mathbb{R})\). Let \(b\) be a sample from \(\nu\) which is independent from \(\left(\mathcal{F}_{i}\right)_{i=1}^{\infty}\). Then_ \[\gamma_{1}\gamma_{2}\dots\gamma_{T}b\] _has law \(\nu\)._ This lemma is trivial and the proof is left to the reader. In the proof of Proposition 7.1, we construct a sample of \(\nu\) in the form \[x=b_{0}f_{1}h_{1}b_{1}f_{2}h_{2}b_{2}\dots f_{n}h_{n}b_{n}\hat{b} \tag{40}\] where \(b_{0},f_{1},h_{1},\dots,b_{n}\) are products of consecutive elements of the sequence \(\gamma_{1},\gamma_{2},\dots\) of i.i.d. sample from \(\mu\) defined using suitable stopping times, and \(\hat{b}\) is a sample of \(\nu\) independent of \(\gamma_{1},\gamma_{2},\dots\). By Lemma 7.2\(x\) is indeed a sample from \(\nu\). In addition, we will also define a \(\sigma\)-algebra \(\mathcal{A}\) and \(\mathcal{A}\)-measurable random variables \(a_{1},a_{2},\dots,a_{n}\) taking values in \(PSL_{2}(\mathbb{R})\) such that, amongst other things that we will discuss later, the following holds. The random elements \(b_{i}\), \(f_{i}\) and \(b\) are \(\mathcal{A}\)-measurable for all values of \(i\). In addition, \(h_{1},\ldots,h_{n}\) are conditionally independent given \(\mathcal{A}\). By Lemmas 6.2 and 6.3, these imply that \[(x|\mathcal{A})=\delta_{b_{0}}*\delta_{f_{1}}*\delta_{a_{1}}*(a_{1}^{-1}h_{1}| \mathcal{A})*\cdots*(a_{n}^{-1}h_{n}|\mathcal{A})*\delta_{b_{n}}*\delta_{b}.\] We take our values in Proposition 7.1 to be \(g_{1}:=b_{0}f_{1}a_{1}\), \(g_{2}:=b_{1}f_{2}a_{2}\) and so on, \(u^{(i)}:=\log(a_{i}^{-1}h_{i})\) and \(b:=b_{n}\check{b}\). The rest of the section is organised as follows. We give the details of the construction (40) in Section 7.1 and give some results about the construction. Sections 7.2, 7.3, and 7.4 contain the proofs of some of the properties claimed in Proposition 7.1. Conditions A1 and A7 will follow immediately from the construction of our sample and the results of Section 6. Condition A2 will follow easily from our results on the construction. 
We prove Condition A3 by showing that each of Conditions A4, A5, A6, and A8 occurs on an \(\mathcal{A}\)-measurable event with probability at least \(1-o((\log\tilde{r}^{-1})^{-10})\). Condition A9 will be checked in Section 7.3. Before we go on, we make a few remarks on the role of the elements \(b_{i}\), \(f_{i}\), and \(h_{i}\) in our construction. The \(h_{i}\) will be defined in such a way that Proposition 1.24 can be applied to them with appropriate choices of the parameter \(t\). Using the scales \(\tilde{r}_{j}\) in that proposition we define a sequence of scales \(s_{i}\) such that \(v(h_{i};s_{i})\) is large on average by the proposition. Using the definition of \(v(h_{i};s_{i})\), we can find a \(\sigma\)-algebra \(\mathcal{A}_{i}\) and an \(\mathcal{A}_{i}\)-measurable random variable \(a_{i}\) taking values in \(PSL_{2}(\mathbb{R})\) such that \(\big{\|}\log(a_{i}^{-1}h_{i})\big{\|}\leq s_{i}\) and \[\mathbb{E}\left[\operatorname{Var}\left[\log(a_{i}^{-1}h_{i})|\mathcal{A}_{i}\right]\right]\geq v(h_{i};s_{i})/2.\] The role of \(f_{i}\) will be to set the norm of \(g_{1}g_{2}\ldots g_{i}\) to the correct size so that Condition A4 from Proposition 7.1 holds. The role of \(b_{i}\) is less intuitive. For technical reasons, before we define \(f_{i}\), we need to know whether \(i-1\) belongs to the set of nice indices \(I\) in Proposition 7.1. By defining \(b_{i-1}\) first, we will be able to decide whether or not Conditions A8 and A6 in Proposition 7.1 are likely to hold for \(i-1\) and this will allow us to make a decision on whether or not to put \(i-1\) in \(I\).

### Construction at a scale

In this section we give the detail of the construction outlined above. Fix a sufficiently small \(\tilde{r}>0\). The construction depends on a number of parameters which we fix now. We choose \(M\) such that \(M>M_{\mu}\) and \(h_{RW}\leq\log M\). To do this, we set \(M=\max\{\exp h_{RW},2M_{\mu}\}\). We set \[K:=\left\lfloor\exp(\sqrt{\log\log\tilde{r}^{-1}})\right\rfloor.\] This value of \(K\) is chosen to ensure that for small \(\tilde{r}\) we have that \(R^{K}\) is smaller than any polynomial in \(\tilde{r}^{-1}\) and larger than any polynomial in \(\log(\tilde{r}^{-1})\), where \(R\) is the constant in Proposition 7.1. We set \(n=m\hat{m}\) where \(\hat{m}=\left\lfloor\frac{\log M}{100\chi}\right\rfloor\) is the number of scales that appear in Proposition 1.24 and \(m\) is a number depending on \(\tilde{r}\) to be chosen below. We also let \(\varepsilon>0\) be some number depending only on \(\mu\), \(R\), \(t\), and \(\alpha_{0}\) which we will fix later. We set \[\hat{t}:=\tilde{r}^{-\frac{\chi}{10\log M}}. \tag{41}\] We will apply Proposition 1.24 for each of the values \[\hat{t}^{\left(\frac{h_{RW}}{100\log M}\right)^{j-1}} \tag{42}\] in the role of \(t\) for \(j=1,2,\dots,m\). We choose \(m\) to be the largest possible value such that \[\hat{t}^{\left(\frac{h_{RW}}{100\log M}\right)^{m-1}}\geq R^{100K}. \tag{43}\] We define the sequence \(t_{1},t_{2},\dots,t_{n}\) by repeating each of the values in (42) \(\hat{m}\) times. Recall that \(h_{RW}\leq\log M\) and so \(\frac{h_{RW}}{100\log M}\leq\frac{1}{100}\). This means that \(t_{i}\geq t_{i+1}\). When we apply Proposition 1.24 with \(\hat{t}^{\left(\frac{h_{RW}}{100\log M}\right)^{j-1}}\) in the role of \(t\), for each \(j\) we get a sequence of scales \(\tilde{r}_{1},\tilde{r}_{2},\dots,\tilde{r}_{\hat{m}}\).
We define the sequence \(s_{1},s_{2},\dots,s_{n}\) in such a way that for each \(j\in[m]\) the elements \(s_{(j-1)\hat{m}+1},\dots,s_{j\hat{m}}\) are these scales in increasing order. Now let \(\gamma_{1},\gamma_{2},\dots\) be i.i.d. samples from \(\mu\) and let \(\hat{b}\) be a sample from \(\nu\) which is independent of the \(\gamma_{i}\). In what follows we define a sequence of stopping times \(T_{0}<S_{1}<T_{1}<S_{2}<T_{2}<\dots<S_{n}<T_{n}\), random variables \(f_{1},f_{2},\dots,f_{n}\), \(h_{1},h_{2},\dots,h_{n}\), \(b_{0},b_{1},b_{2},\dots,b_{n}\), \(a_{1},a_{2},\dots,a_{n}\) taking values in \(PSL_{2}(\mathbb{R})\) and random variables \(y_{1},y_{2},\dots,y_{n}\) taking values in \(P_{1}(\mathbb{R})\). We also construct a filtration \(\mathcal{F}_{0}\subset\mathcal{F}_{1}\subset\dots\subset\mathcal{F}_{n}\). Let \[T_{0}:=\min\{n:\|\gamma_{1}\dots\gamma_{n}\|\geq R^{K}\}\] and let \(b_{0}=\gamma_{1}\gamma_{2}\dots\gamma_{T_{0}}\). Let \[S_{1}=\min\left\{n\geq T_{0}+1:\left\|\gamma_{n}^{T}\gamma_{n-1}^{T}\dots\gamma_{T_{0}+1}^{T}b^{-}(b_{0})^{\perp}\right\|\geq\max\left\{R^{K},\frac{\sqrt{s_{1}}}{t_{1}\sqrt{\tilde{r}}\,\|b_{0}\|}\right\}\right\}\] and let \(f_{1}=\gamma_{T_{0}+1}\dots\gamma_{S_{1}}\). Note that this definition is chosen so that we can control \(\|b_{0}f_{1}\|\). Let \(\mathcal{F}_{0}=\sigma(b_{0})\). Let \(k\in[n]\). Suppose that \(y_{i}\), \(T_{i}\), \(h_{i}\), \(a_{i}\), \(b_{i}\), and \(\mathcal{F}_{i}\) are all defined for \(i<k\) and that \(S_{i}\) and \(f_{i}\) are defined for \(i\leq k\). We define \(y_{k}\), \(T_{k}\), \(h_{k}\), \(b_{k}\), \(\mathcal{F}_{k}\), \(a_{k}\) and, if \(k\leq n-1\), \(S_{k+1}\) and \(f_{k+1}\) as follows. We let \(\hat{\nu}\) denote the measure from Theorem 1.25 with our choice of \(\mu\). We now define the random variable \(y_{k}\).

**Lemma 7.3**.: _Providing \(\tilde{r}\) is sufficiently small (in terms of \(\mu\), \(R\), \(\alpha_{0}\) and \(t\)) for each \(k\in[n]\) we can choose a random variable \(y_{k}\) taking values in \(P_{1}(\mathbb{R})\) such that it is independent of \(\mathcal{F}_{k-1}\) and is such that \(y_{k}^{\perp}\) has law \(\hat{\nu}\). Moreover, we may ensure that_ \[\mathbb{P}[d(y_{k},b^{-}(f_{k}))<\varepsilon|\mathcal{F}_{k-1}]>1-\varepsilon. \tag{44}\]

We will prove this lemma later in the subsection. We choose \(y_{k}\) such that it satisfies the requirements of the lemma. Next we define \[T_{k}=\min\left\{n\geq S_{k}+1:\left\|\gamma_{n}^{T}\gamma_{n-1}^{T}\ldots\gamma_{S_{k}+1}^{T}y_{k}^{\perp}\right\|\geq t_{k}\right\}\] and we set \(h_{k}=\gamma_{S_{k}+1}\ldots\gamma_{T_{k}}\). We choose this definition so that we can apply Proposition 1.24. Note that by Lemma 3.11 \[\|b_{0}f_{1}h_{1}\ldots b_{k-1}f_{k}h_{k}\|\approx\|b_{0}f_{1}h_{1}\ldots b_{k-1}f_{k}\|\cdot\|h_{k}\|\sin d(b^{+}(h_{k}),b^{-}(b_{0}f_{1}h_{1}\ldots b_{k-1}f_{k}))\approx\|b_{0}f_{1}h_{1}\ldots b_{k-1}f_{k}\|\cdot\|h_{k}\|\sin d(b^{+}(h_{k}),y_{k})=\|b_{0}f_{1}h_{1}\ldots b_{k-1}f_{k}\|\left\|h_{k}^{T}y_{k}\right\|.\] This means that we can also control the size of the product. We now choose a \(\sigma\)-algebra \(\hat{\mathcal{A}}_{k}\) and an \(\hat{\mathcal{A}}_{k}\)-measurable random variable \(\hat{a}_{k}\) taking values in \(PSL_{2}(\mathbb{R})\) such that \(\left\|\log\hat{a}_{k}^{-1}h_{k}\right\|\leq s_{k}\) almost surely and \[\mathbb{E}\left[\operatorname{VAR}_{\hat{a}_{k}}\left[h_{k}|\hat{\mathcal{A}}_{k},y_{k}\right]|y_{k}\right]\geq\frac{1}{2}s_{k}^{2}v([h_{k}|y_{k}];s_{k}).
\tag{45}\] This is possible by the definition of \(v(\cdot;\cdot)\). See Definition 1.23. Furthermore we require \(\hat{\mathcal{A}}_{k}\) to be independent of \(\mathcal{F}_{k-1}\) and of \(\gamma_{T_{k}+1},\gamma_{T_{k}+2},\ldots.\) Since \(h_{k}\) is independent of these this is trivially possible providing we take our underlying probability space to be sufficiently large. We now let \(b_{k}=\gamma_{T_{k}+1}\gamma_{T_{k}+2}\ldots\gamma_{T_{k}+K}\). Now we need to decide if \(k\) is one of our "nice" indices. We let \(k\in I\) if and only if the following hold 1. \(d(b^{-}(f_{k}),y_{k})<\varepsilon\). 2. \(d(y_{k},b^{+}(\hat{a}_{k}))>100\varepsilon\). 3. \(b^{+}(b_{k})\in U_{t/4,t/8}(\log\hat{a}_{k}^{-1}h_{k}|\hat{\mathcal{A}}_{k})\). 4. \(d(b^{-}(\hat{a}_{k}),b^{+}(b_{k}))>t/4\). Conditions (1) and (2) will be used to ensure that Condition A4 occurs with high probability. Condition (3) will be used to show that Condition A6 occurs with high probability and Condition (4) will be used to ensure that A8 occurs with high probability. If \(k\in I\) then we let \(a_{k}=\hat{a}_{k}\) and \(\mathcal{A}_{k}=\hat{\mathcal{A}}_{k}\). Otherwise we let \(a_{k}=h_{k}\) and \(\mathcal{A}_{k}=\sigma(h_{k})\). We now let \[\mathcal{F}_{k}=\sigma(\mathcal{F}_{k-1},f_{k},y_{k},a_{k},\mathcal{A}_{k},b_ {k}).\] Finally if \(k<n\) we let \[S_{k+1}=\min\left\{\ \ n\geq T_{k}+K+1:\left\|\gamma_{n}^{T}\gamma_{n-1}^ {T}\ldots\gamma_{T_{k}+K+1}^{T}b^{-}(b_{0}f_{1}a_{1}b_{1}\ldots f_{k}a_{k}b_{k}) ^{\perp}\right\|\geq\right.\] \[\left.\max\left\{R^{K},\frac{\sqrt{s_{k}}}{t_{k}\sqrt{\tilde{r}} \left\|b_{0}f_{1}a_{1}b_{1}\ldots f_{k}a_{k}b_{k}\right\|}\right\}\right\}\] and let \(f_{k+1}=\gamma_{T_{k}+K+1}\ldots\gamma_{S_{k+1}}\). We need the following result. **Lemma 7.4**.: _We have_ \[m\cong\left(\max\left\{1,\log\frac{\log M}{h_{RW}}\right\}\right)^{-1}\log \log\tilde{r}^{-1}\] _and_ \[n\cong\frac{\log M}{\chi}\left(\max\left\{1,\log\frac{\log M}{h_{RW}}\right\} \right)^{-1}\log\log\tilde{r}^{-1}.\] Proof.: Note that by our definition of \(m\) we have \[m=\left\lfloor\frac{\log\frac{\chi\log\tilde{r}^{-1}}{1000K\log M\log R}}{ \log\frac{100\log M}{h_{RW}}}\right\rfloor+1.\] Our estimate for \(m\) now follows by a simple computation which is left to the reader. The estimate for \(n\) follows by combining our estimate for \(m\) with the definition of \(\hat{m}\). **Lemma 7.5**.: _We have_ \[\sum_{i=1}^{n}\frac{\mathbb{E}[\mathrm{VAR}_{\hat{a}_{i}}[h_{i}|\hat{\mathcal{ A}}_{i}]]}{s_{i}^{2}}\gtrsim\frac{h_{RW}}{\chi}\left(\max\left\{1,\log\frac{\log M }{h_{RW}}\right\}\right)^{-2}\log\log\tilde{r}^{-1}\] Proof.: This follows easily from Lemma 7.4 and the use of Proposition 1.24 in our construction. **Lemma 7.6**.: _For all \(i\in[n-1]\) we have_ \[s_{i+1}\geq t_{i+1}^{3}s_{i}. \tag{46}\] _Furthermore providing \(\tilde{r}\) is sufficiently small we have_ \[s_{1}\geq R^{20K}t_{i}^{2}\tilde{r} \tag{47}\] _and_ \[s_{n}\leq R^{-\frac{10h_{RW}}{\chi}K}. \tag{48}\] Proof.: First we will deal with (46). Recall from Proposition 1.24 that \[s_{i}\in\left(t_{i}^{-\frac{\log M}{\chi}},t_{i}^{-\frac{h_{RW}}{10\chi}}\right)\] and that when \(\hat{m}\nmid i\) we have \(s_{i+1}\geq t_{i+1}^{3}s_{i}\). In particular this means that we have dealt with the case \(\hat{m}\nmid i\). 
In the case \(\hat{m}\mid i\) by Proposition 1.24 we have \[s_{i}\leq t_{i}^{-\frac{h_{RW}}{10\chi}}\] and \[s_{i+1}\geq t_{i+1}^{\frac{\log M}{\chi}}.\] We also have by (42) that \[t_{i}=t_{i+1}^{\frac{100\log M}{h_{RW}}}.\] This means that \[t_{i+1}^{3}s_{i}\leq t_{i+1}^{3-\frac{h_{RW}}{10\chi}\cdot\frac{100\log M}{h_{RW}}}=t_{i+1}^{3-\frac{10\log M}{\chi}}.\] Note that by the requirements of Proposition 7.1 we may assume that the quantity in (38) is at least \(2\). In particular this means that \(h_{RW}\geq 2\chi\) and so noting that \(\log M\geq h_{RW}\) we get \[t_{i+1}^{3-\frac{10\log M}{\chi}}\leq t_{i+1}^{-\frac{\log M}{\chi}}\leq s_{i+1}\] as required. We will now deal with (47). Note that by Proposition 1.24 \[s_{1}\geq t_{1}^{-\frac{\log M}{\chi}}.\] Substituting in our value for \(t_{1}\) from (41) and (42) we get \[s_{1}\geq\tilde{r}^{\frac{1}{10}}.\] We also have by the fact that \(\log M\geq h_{RW}\geq 2\chi\) \[R^{20K}t_{1}^{2}\tilde{r}\leq R^{20K}\tilde{r}^{\frac{8}{10}}.\] Since \(R^{K}\) grows slower than any polynomial in \(\tilde{r}^{-1}\) this is less than \(s_{1}\) for all sufficiently small \(\tilde{r}\). Finally (48) follows from the fact that by (43) we have \[t_{n}\geq R^{100K}\] and by Proposition 1.24 we have \[s_{n}\leq t_{n}^{-\frac{h_{RW}}{10\chi}}.\]

To prove Lemma 7.3 we recall some results on the speed of convergence to the Furstenberg measure which will also be useful later.

**Lemma 7.7**.: _Let \(\mu\) be a probability measure on \(PSL_{2}(\mathbb{R})\) which is strongly irreducible and whose support is not contained in any compact subgroup of \(PSL_{2}(\mathbb{R})\). Let \(\gamma_{1},\gamma_{2},\dots\) be i.i.d. samples from \(\mu\). If for some \(\tau>0\)_ \[\int\exp(\tau\log\|g\|)\,d\mu(g)<\infty\] _then there exists \(\delta>0\) such that for each \(a\in(0,\delta]\) we have_ \[\lim_{n\to\infty}\left(\sup_{x,y\in P_{1}(\mathbb{R}),x\neq y}\mathbb{E}\left[\left(\frac{\tilde{d}(\gamma_{1}\gamma_{2}\dots\gamma_{n}x,\gamma_{1}\gamma_{2}\dots\gamma_{n}y)}{\tilde{d}(x,y)}\right)^{a}\right]\right)^{1/n}<1\] _where \(\tilde{d}\) is the metric on \(P_{1}(\mathbb{R})\) given by_ \[\tilde{d}(x,y)=\frac{\|x\times y\|}{\|x\|\cdot\|y\|}.\] Proof.: This is [4, Section VII Proposition 2.1].

From this we get the following corollaries.

**Corollary 7.8**.: _Let \(\mu\) be a probability measure on \(PSL_{2}(\mathbb{R})\) which is strongly irreducible, finitely supported, and whose support is not contained in any compact subgroup of \(PSL_{2}(\mathbb{R})\). Let \(\gamma_{1},\gamma_{2},\dots\) be i.i.d. samples from \(\mu\). Then there exist some \(C,\delta>0\) such that for all \(n,m\in\mathbb{Z}\) with \(m\geq n\) we have_ \[\mathbb{P}\left[d(b^{+}(\gamma_{1}\gamma_{2}\dots\gamma_{n}),b^{+}(\gamma_{1}\gamma_{2}\dots\gamma_{m}))>C\exp(-\delta n)\right]<C\exp(-\delta n).\] Proof.: First note that \(d\) and \(\tilde{d}\) are equivalent metrics. Note that since \(\mu\) is finitely supported it has an exponential moment. By Lemma 7.7 we know that there is some \(a>0\) and \(\lambda_{1}\in(0,1)\) such that for all sufficiently large \(n\in\mathbb{Z}_{>0}\) and all \(x,y\in P_{1}(\mathbb{R})\) we have \[\mathbb{E}\left[\left(\frac{\tilde{d}(\gamma_{1}\dots\gamma_{n}x,\gamma_{1}\dots\gamma_{n}y)}{\tilde{d}(x,y)}\right)^{a}\right]<\lambda_{1}^{n}.\] We know that \(\tilde{d}(x,y)\leq 1\).
This means that for all \(x,y\in P_{1}(\mathbb{R})\) \[\mathbb{E}\left[\left(\tilde{d}(\gamma_{1}\dots\gamma_{n}x,\gamma_{1}\dots\gamma_{n}y)\right)^{a}\right]<\lambda_{1}^{n}.\] By Markov's inequality and the fact that \(d\) and \(\tilde{d}\) are equivalent we may deduce that there is some \(\lambda_{2}\in(0,1)\) such that for all sufficiently large \(n\in\mathbb{Z}_{>0}\) and all \(x,y\in P_{1}(\mathbb{R})\) we have \[\mathbb{P}\left[d(\gamma_{1}\dots\gamma_{n}x,\gamma_{1}\dots\gamma_{n}y)>\lambda_{2}^{n}\right]<\lambda_{2}^{n}.\] Let \(u\) be a uniform random variable on \(P_{1}(\mathbb{R})\). We now apply the above equation with \(u\) in the role of \(x\) and \(\gamma_{n+1}\dots\gamma_{m}u\) in the role of \(y\). This gives \[\mathbb{P}\left[d(\gamma_{1}\dots\gamma_{n}u,\gamma_{1}\dots\gamma_{m}u)>\lambda_{2}^{n}\right]<\lambda_{2}^{n}. \tag{49}\] By Theorem 5.18 we know that there is some \(\lambda_{3}\in(0,1)\) such that for all sufficiently large \(n\) \[\mathbb{P}[\|\gamma_{1}\gamma_{2}\dots\gamma_{n}\|<\exp(n\chi/2)]<\lambda_{3}^{n}.\] By Lemma 3.9 this means that there is some \(\lambda_{4}\in(0,1)\) such that for all sufficiently large \(n\) we have \[\mathbb{P}[d(\gamma_{1}\dots\gamma_{n}u,b^{+}(\gamma_{1}\dots\gamma_{n}))>\lambda_{4}^{n}]<\lambda_{4}^{n}.\] The result now follows by applying this to (49).

**Corollary 7.9**.: _Let \(\mu\) be a probability measure on \(PSL_{2}(\mathbb{R})\) which is strongly irreducible, finitely supported, and whose support is not contained in any compact subgroup of \(PSL_{2}(\mathbb{R})\). Let \(\gamma_{1},\gamma_{2},\dots\) be i.i.d. samples from \(\mu\) and let \(b\) be a sample from \(\nu\) independent of the \(\gamma_{i}\). Then there exist some \(C,\delta>0\) such that for all \(N\in\mathbb{Z}_{>0}\) the probability that there exist \(m,n\in\mathbb{Z}_{>0}\) with \(n,m\geq N\) such that either_ \[d(b^{+}(\gamma_{1}\gamma_{2}\dots\gamma_{n}),b^{+}(\gamma_{1}\gamma_{2}\dots\gamma_{m}))>C\exp(-\delta N)\] _or_ \[d(b^{+}(\gamma_{1}\gamma_{2}\dots\gamma_{n}),\gamma_{1}\gamma_{2}\dots\gamma_{m}b)>C\exp(-\delta N)\] _is at most \(C\exp(-\delta N)\)._ Proof.: This follows immediately from Corollary 7.8 and the fact that a geometric series converges.

**Corollary 7.10**.: _Let \(\mu\) be a probability measure on \(PSL_{2}(\mathbb{R})\) which is strongly irreducible, finitely supported, and whose support is not contained in any compact subgroup of \(PSL_{2}(\mathbb{R})\). Suppose further that \(\mu\) is \(\alpha_{0},t\)-non-degenerate. Let \(s\in(0,t)\) and let \(\beta_{0}>\alpha_{0}\). Let \(\gamma_{1},\gamma_{2},\dots\) be i.i.d. samples from \(\mu\) and let \(q_{n}=\gamma_{1}\gamma_{2}\dots\gamma_{n}\). Then there exists some \(N\in\mathbb{Z}_{>0}\) such that for all \(a\in\mathbb{R}\) we have_ \[\mathbb{P}[\forall n\geq N:\phi(b^{+}(q_{n}))\notin(a,a+s)+\pi\mathbb{Z}]>1-\beta_{0}.\] Proof.: This follows easily from the definition of \(\alpha_{0},t\)-non-degenerate and Corollary 7.9.

We also need the following result from [4].

**Lemma 7.11**.: _Let \(\mu\) be a probability measure on \(PSL_{2}(\mathbb{R})\) which is strongly irreducible, finitely supported, and whose support is not contained in any compact subgroup of \(PSL_{2}(\mathbb{R})\). Let \(\nu\) be the corresponding Furstenberg measure. Given \(x\in P_{1}(\mathbb{R})\) and \(r>0\) let \(B(x,r)\) denote the (open) ball with centre \(x\) and radius \(r\) in \(P_{1}(\mathbb{R})\). Then there exist constants \(C,\delta>0\) such that_ \[\nu(B(x,r))\leq Cr^{\delta}.
\tag{50}\] Proof.: This is [4, Chapter VI, Corollary 4.2].

We are now ready to prove Lemma 7.3.

Proof of Lemma 7.3.: First note that by Theorem 1.25 and the fact that \(R^{K}\to\infty\) as \(\tilde{r}\to 0\), providing \(\tilde{r}\) is sufficiently small (in terms of \(\mu\) and \(R\)) for each \(k\in[n]\) we can choose a random variable \(y_{k}\) taking values in \(P_{1}(\mathbb{R})\) such that it is independent of \(\mathcal{F}_{k-1}\), such that \(y_{k}^{\perp}\) has law \(\hat{\nu}\) and such that \[\mathbb{P}[d(y_{k}^{\perp},f_{k}^{T}b^{-}(b_{0})^{\perp})>\varepsilon/2]<\varepsilon/2.\] Now choose \(\delta>0\) and \(N\in\mathbb{Z}_{>0}\) such that for all \(a\in P_{1}(\mathbb{R})\) we have \[\mathbb{P}[\exists n\geq N:d(b^{+}(\gamma_{1}\gamma_{2}\ldots\gamma_{n}),a)<\delta]<\varepsilon/2.\] Note that this is possible by Corollary 7.9 and Lemma 7.11. From this it follows that providing \(\tilde{r}\) is sufficiently small (in terms of \(\mu\) and \(R\)) we have \[\mathbb{P}[d(b^{-}(f_{k}^{T}),b^{-}(b_{0})^{\perp})<\delta]<\varepsilon/2.\] Now apply Corollary 3.10 with \(\min(\delta,\varepsilon/2)\) in the role of \(\varepsilon\). Noting that \(\|f_{k}\|\geq R^{K}\to\infty\) means that providing \(\tilde{r}\) is sufficiently small (in terms of \(\mu\) and \(R\)) we have \[\mathbb{P}[d(f_{k}^{T}b^{-}(b_{0})^{\perp},b^{-}(f_{k})^{\perp})>\varepsilon/2]<\varepsilon/2.\] The result follows.

### Checking the size of products

In this subsection we will check that Condition A4 from Proposition 7.1 holds.

**Definition 7.12**.: Let \(B\) be the \(\hat{\mathcal{F}}\)-measurable event that for all \(i\in[n]\) we have \[d(b^{+}(f_{i}),b^{-}(b_{0}f_{1}a_{1}b_{1}\ldots f_{i-1}a_{i-1}b_{i-1}))>R^{-K/2} \tag{51}\] and \[d(b^{+}(a_{i}),y_{i}^{\perp})>R^{-K/2} \tag{52}\] and \[d(b^{+}(a_{i}),b^{-}(b_{0}f_{1}a_{1}b_{1}\ldots f_{i-1}a_{i-1}b_{i-1}f_{i}))>R^{-K/2} \tag{53}\] and \[d(b^{-}(b_{0}f_{1}a_{1}b_{1}\ldots f_{i}),b^{-}(f_{i}))<\varepsilon. \tag{54}\]

**Lemma 7.13**.: _Let \(g_{1},g_{2}\in PSL_{2}(\mathbb{R})\). Then_ \[d(b^{+}(g_{1}g_{2}),b^{+}(g_{1}))\leq O(\|g_{1}\|^{-2}\,\|g_{2}\|^{2}) \tag{55}\] _and_ \[d(b^{-}(g_{1}g_{2}),b^{-}(g_{2}))\leq O(\|g_{1}\|^{2}\,\|g_{2}\|^{-2}). \tag{56}\] Proof.: First we will deal with (55). Given \(h>0\) let \[W(h):=\left\{b\in P_{1}(\mathbb{R}):d(g_{2}b,b^{-}(g_{1}))<h\right\}.\] Note that by Lemma 3.16 we know that \(m(W(h))<O(\left\|g_{2}\right\|^{2}h)\) where \(m\) denotes the pushforward of the Lebesgue measure under \(\phi\). Choose \(c_{1}>0\) to be some absolute constant small enough such that if we let \(h=c_{1}\left\|g_{2}\right\|^{-2}\) then we have \(m(W(h))<\frac{1}{10}\). Now choose \(b\in P_{1}(\mathbb{R})\) such that \(b\notin W(h)\) and \(d(b,b^{-}(g_{1}g_{2}))>\frac{1}{10}\). Note that by Lemma 3.9 \[d(g_{1}g_{2}b,b^{+}(g_{1}g_{2}))\leq O(\left\|g_{1}g_{2}\right\|^{-2})\leq O(\left\|g_{1}\right\|^{-2}\left\|g_{2}\right\|^{2})\] and \[d(g_{1}g_{2}b,b^{+}(g_{1}))\leq O(\left\|g_{1}\right\|^{-2}h^{-1})\leq O(\left\|g_{1}\right\|^{-2}\left\|g_{2}\right\|^{2}).\] This gives the required result. (56) follows from taking the transpose of everything.

We also need to show that under \(B\) everything is of approximately the correct size. Specifically we will prove the following.
**Lemma 7.14**.: _If \(B\) occurs and \(\tilde{r}\) is sufficiently small depending on \(\mu\), \(R\), \(t\) and \(\alpha_{0}\) then for every \(i\in[n]\) we have_ \[\max\left\{R^{K},\frac{\sqrt{s_{i}}}{t_{i}\sqrt{\tilde{r}}\left\|b_{0}f_{1}a_{ 1}b_{1}\ldots f_{i-1}a_{i-1}b_{i-1}\right\|}\right\}=\frac{\sqrt{s_{i}}}{t_{i} \sqrt{\tilde{r}}\left\|b_{0}f_{1}a_{1}b_{1}\ldots f_{i-1}a_{i-1}b_{i-1}\right\|}, \tag{57}\] \[\left\|b_{0}f_{1}a_{1}b_{1}\ldots f_{i-1}a_{i-1}b_{i-1}f_{i}\right\|\cong \sqrt{\frac{s_{i}}{t_{i}^{2}\tilde{r}}}, \tag{58}\] \[R^{-K}\sqrt{\frac{s_{i}}{\tilde{r}}}\lesssim\left\|b_{0}f_{1}a_{1}b_{1}\ldots f _{i-1}a_{i-1}b_{i-1}f_{i}a_{i}\right\|\lesssim R^{K}\sqrt{\frac{s_{i}}{\tilde {r}}} \tag{59}\] _and_ \[R^{-2K}\sqrt{\frac{s_{i}}{\tilde{r}}}\lesssim\left\|b_{0}f_{1}a_{1}b_{1}\ldots f _{i-1}a_{i-1}b_{i-1}f_{i}a_{i}b_{i}\right\|\lesssim R^{2K}\sqrt{\frac{s_{i}}{ \tilde{r}}}. \tag{60}\] Proof.: We will prove this by induction. For \(i=1\) we know that (57) is satisfied by Lemma 7.6 and the fact that \(\left\|b_{0}\right\|\leq R^{K+1}\). Now suppose that (57) is satisfied for some given \(i\). We will show that (58) also holds for this \(i\). Trivially from the definition of \(f_{i}\) we have that \[\frac{\sqrt{s_{i}}}{t_{i}\sqrt{\tilde{r}}\left\|b_{0}f_{1}a_{1}b_{1}\ldots f_{ i-1}a_{i-1}b_{i-1}\right\|}\cong\left\|f_{i}\right\|\sin d(b^{-}(b_{0}f_{1}a_{1}b_{1} \ldots f_{i-1}a_{i-1}b_{i-1}),b^{+}(f_{i})) \tag{61}\] We also know by (51) that \[d(b^{-}(b_{0}f_{1}a_{1}b_{1}\ldots f_{i-1}a_{i-1}b_{i-1}),b^{+}(f_{i}))>R^{-K/2}.\] Combining this with (61) and applying Lemma 3.11 with \(A=2\) and \(t=R^{-K/2}\) gives (58). Now assume (58) holds for some given \(i\in[n]\). We show that (59) holds for this \(i\) too. We know by the construction of \(h_{i}\) that \[t_{i}\cong\|h_{i}\|\sin d(b^{+}(h_{i}),y_{i}^{\perp}). \tag{62}\] Note that \(\left\|\log a_{i}^{-1}h_{i}\right\|\to 0\) as \(\tilde{r}\to 0\). In particular this means that providing \(\tilde{r}\) is sufficiently small we can guarantee that \(\left\|a_{i}^{-1}h_{i}\right\|\leq 2\). We also know \(\|h_{i}\|\geq t_{i}\geq R^{100K}\). By Lemma 7.13 this means that \[d(b^{+}(h_{i}),b^{+}(a_{i}))\leq O(R^{-200K}).\] In particular by (52) and (53) this means that \[d(b^{+}(h_{i}),y_{i}^{\perp})\gtrsim R^{-K}\] and \[d(b^{+}(h_{i}),b^{-}(b_{0}f_{1}a_{1}b_{1}\dots f_{i-1}a_{i-1}b_{i-1}f_{i})) \gtrsim R^{-K}.\] Putting these as well as (62) into Lemma 3.11 gives \[R^{-K}\sqrt{\frac{s_{i}}{\tilde{r}}}\lesssim\left\|b_{0}f_{1}a_{1}b_{1}\dots f _{i-1}a_{i-1}b_{i-1}f_{i}h_{i}\right\|\lesssim R^{K}\sqrt{\frac{s_{i}}{\tilde{ r}}}.\] (59) now follows from the fact that \(\left\|a_{i}^{-1}h_{i}\right\|\leq 2\). Assuming that (59) holds for a given \(i\in[n]\) we have that (60) follows trivially for that \(i\) by the definition of \(b_{i}\). Now suppose that (60) holds for some given \(i\in[n]\). We show that (57) is satisfied for \(i+1\). This is immediate from Lemma 7.6. We are therefore done by induction. Finally we show that Condition A4 occurs. **Proposition 7.15**.: _Suppose that \(B\) occurs. Then for all \(i\in I\) we have_ \[\|b_{0}f_{1}a_{1}b_{1}\dots f_{i}a_{i}\|\cong\sqrt{\frac{s_{i}}{\tilde{r}}}.\] Proof.: Suppose that \(i\in I\) and \(B\) occurs. Note that by Lemma 7.14 \[\|b_{0}f_{1}a_{1}b_{1}\dots f_{i-1}a_{i-1}b_{i-1}f_{i}\|\cong\sqrt{\frac{s_{ i}}{t_{i}^{2}\tilde{r}}}.\] Note that by the construction of \(h_{i}\) \[t_{i}\cong\|h_{i}\|\sin d(b^{+}(h_{i}),y_{i}^{\perp}). 
\tag{63}\] Note that by (54) and condition (1) of the definition of \(I\) we have \[d(y_{i},b^{-}(b_{0}f_{1}a_{1}b_{1}\dots f_{i-1}a_{i-1}b_{i-1}f_{i}))<2\varepsilon. \tag{64}\] Note that by Lemma 7.13 we know that \[d(b^{+}(a_{i}),b^{+}(h_{i}))<O(R^{-200K}). \tag{65}\] In particular providing \(\tilde{r}\) is sufficiently small we have \[d(b^{+}(a_{i}),b^{+}(h_{i}))<\varepsilon.\] Combining this with condition (2) of the definition of \(I\) and (64) gives \[d(b^{+}(h_{i}),b^{-}(b_{0}f_{1}a_{1}b_{1}\ldots f_{i-1}a_{i-1}b_{i-1}f_{i}))>50\varepsilon.\] In particular \[\sin d(b^{+}(h_{i}),b^{-}(b_{0}f_{1}a_{1}b_{1}\ldots f_{i-1}a_{i-1}b_{i-1}f_{i} ))\cong\sin d(b^{+}(h_{i}),y_{i}^{\perp}).\] Note that by (53) and (65) providing \(\tilde{r}\) is sufficiently small we have \[d(b^{+}(h_{i}),b^{-}(b_{0}f_{1}a_{1}b_{1}\ldots f_{i-1}a_{i-1}b_{i-1}f_{i}))>2 R^{-K/2}.\] By applying Lemma 3.11 with \(A=2\) and \(t=2R^{-K/2}\) we get \[\|b_{0}f_{1}a_{1}b_{1}\ldots f_{i}h_{i}\|\cong\sqrt{\frac{s_{i}}{\tilde{r}}}.\] The result now follows from the fact that \(\left\|a_{i}^{-1}h_{i}\right\|\leq 2\). Note that Proposition 7.15 is enough to prove that Condition A4 holds as long as we ensure that \(B\subset A\). This means that we just need to show that \(\mathbb{P}[B]\) is high. **Lemma 7.16**.: _The probability that \(B\) occurs is at least \(1-o_{\mu}((\log\tilde{r}^{-1})^{-10}\)._ Proof.: Note that for the conditions (51), (52), and (53) in the definition of \(B\) using Lemma 7.13 and Corollary 7.9 we can find some \(C,\delta>0\) such that for any fixed \(i\in[n]\) the probability of the condition not occurring is at most \(C\exp(-\delta K)\). By Lemma 3.12, (51), and the fact that \(\|f_{i}\|\geq R^{K}\) we may do the same with (54). This means we can write then \(B^{C}\) as the union of \(O(n)\) events each with probability at most \(C\exp(-\delta K)\). This means that \[\mathbb{P}[B^{C}]\leq O(n\exp(-\delta K)).\] We know by Lemma 7.4 that \[n\leq O_{\mu}(\log\log\tilde{r}^{-1}).\] Combining this with the definition of \(K\) gives the required result. ### Sum of variances In this subsection we show that with high probability Condition A9 is satisfied. We do this by showing that the sum is nearly a sum of independent random variables. To make this work we need the following modified version of Cramer's Theorem. **Lemma 7.17**.: _Let \(a,b,c>0\) with \(c\leq a\) and let \(n\in\mathbb{Z}_{>0}\). Let \(X_{1},\ldots,X_{n}\) be random variables taking values in \(\mathbb{R}\) and let \(m_{1},\ldots,m_{n}\geq 0\) be such that we have almost surely_ \[\mathbb{E}\left[X_{i}|X_{1},\ldots,X_{i-1}\right]\geq m_{i}.\] _Suppose that \(\sum_{i=1}^{n}m_{i}=an\). Suppose also that we have almost surely \(X_{i}\in[0,b]\) for all \(i\in[n]\). Then we have_ \[\mathbb{P}[X_{1}+\cdots+X_{n}\leq nc]\leq\left(\left(\frac{a}{c}\right)^{\frac{ c}{b}}\left(\frac{b-a}{b-c}\right)^{1-\frac{c}{b}}\right)^{n}.\] Proof.: First note that by Jensen's inequality for any \(\lambda\geq 0\) we have \[\mathbb{E}[e^{-\lambda X_{i}}|X_{1},\ldots,X_{i-1}]\leq\left(1-\frac{m_{i}}{b} \right)+\frac{m_{i}}{b}e^{-\lambda b}. \tag{66}\] Therefore we have \[\mathbb{E}[e^{-\lambda(X_{1}+\cdots+X_{n})}] \leq\prod_{i=1}^{n}\left(\left(1-\frac{m_{i}}{b}\right)+\frac{m_ {i}}{b}e^{-\lambda b}\right) \tag{67}\] \[\leq\left(\left(1-\frac{a}{b}\right)+\frac{a}{b}e^{-\lambda b} \right)^{n}.\] with (67) following from the AM-GM inequality. 
Applying Markov's inequality for any \(\lambda\geq 0\) we have \[\mathbb{P}(X_{1}+\cdots+X_{n}\leq nc) \leq e^{\lambda nc}\mathbb{E}[e^{-\lambda(X_{1}+\cdots+X_{n})}] \tag{68}\] \[\leq\left(e^{\lambda c}\left(\left(1-\frac{a}{b}\right)+\frac{a} {b}e^{-\lambda b}\right)\right)^{n}.\] We wish to substitute in the value of \(\lambda\) which minimizes the right hand side of (68). It is easy to check by differentiation that this is \(\lambda=-\frac{1}{b}\log\frac{c(b-a)}{a(b-c)}\). It is easy to see that this value of \(\lambda\) is at least \(0\) because \(c\leq a\). Note that with this value of \(\lambda\) we get \(e^{-\lambda b}=\frac{c(b-a)}{a(b-c)}\) and \(e^{\lambda c}=\left(\frac{c(b-a)}{a(b-c)}\right)^{-c/b}\). Hence \[\left(1-\frac{a}{b}\right)+\frac{a}{b}e^{-\lambda b} =\left(1-\frac{a}{b}\right)+\frac{a}{b}\frac{c(b-a)}{a(b-c)}\] \[=\frac{(b-a)(b-c)}{b(b-c)}+\frac{c(b-a)}{b(b-c)}\] \[=\frac{b-a}{b-c}.\] The result follows. **Remark 7.18**.: We could deduce a result similar to Lemma 7.17 from the Azuma-Hoeffding inequality. In our application of this result \(a\) will be very small compared to \(b\). In this regime the Azuma-Hoeffding inequality is inefficient for several reasons the most important of which is the inefficiency of Hoeffding's Lemma in this regime. Indeed using Hoeffding's Lemma to bound the left hand side of (66) would lead to a bound of \[\exp\left(-\lambda m_{i}+\frac{\lambda^{2}b^{2}}{8}\right).\] When we apply the lemma we end up with \(m_{i}\) being very small, \(b=1\), and \(\lambda\approx\log 2\). Clearly this bound is weak when this occurs. It turns out that the bound from Azuma-Hoeffding is not strong enough to prove Theorem 1.8 in its current form but we could prove a similar result with the left hand side of (1) replaced by \[\left(\frac{h_{RW}}{\log M}\right)\left(\frac{h_{RW}}{\chi}\right)\left(\max \left\{1,\log\frac{\log M_{\mu}}{h_{RW}}\right\}\right)^{-3}.\] We wish to apply Lemma 7.17 with \[X_{i}=s_{i}^{-2}\operatorname{VAR}_{\hat{a}_{i}}[h_{i}|\hat{\mathcal{A}}_{i}, y_{i}]\mathbb{I}_{i\in I}.\] Trivially the expression on the left of Condition A9 is \(X_{1}+X_{2}+\cdots+X_{n}\). By Lemma 7.5 we know that \[\sum_{i=1}^{n}s_{i}^{-2}\mathbb{E}\left[\operatorname{VAR}_{\hat{a}_{i}}[h_{i }|\hat{\mathcal{A}}_{i},y_{i}]\right]\gtrsim\left(\frac{h_{RW}}{\chi}\right) \left(\max\left\{1,\log\frac{\log M_{\mu}}{h_{RW}}\right\}\right)^{-2}\log \log\tilde{r}^{-1}.\] Also we have \(X_{i}\in[0,1]\) because \(\log(\hat{a}_{i}^{-1}h_{i})\) is contained in a ball of radius \(s_{i}\) around \(0\). This means that in order to apply Lemma 7.17 we just need to get a lower bound on \(\mathbb{E}[X_{i}|\mathcal{F}_{i-1}]\) in terms of \(\mathbb{E}\left[\operatorname{VAR}_{\hat{a}_{i}}[h_{i}|\mathcal{A}_{i},y_{i}]\right]\). Specifically we will prove the following. 
**Lemma 7.19**.: _Given any \(\delta>0\) providing \(\varepsilon\) is sufficiently small (depending on \(\delta\), \(\alpha_{0}\), and \(\mu\)) and \(\tilde{r}\) is sufficiently small (depending on \(\delta\), \(\alpha_{0}\), \(\mu\), and \(\varepsilon\)) we have_ \[\mathbb{E}[X_{i}|\mathcal{F}_{i-1}]\geq\frac{1}{2}(1-3\alpha_{0})s_{i}^{-2} \mathbb{E}[\operatorname{VAR}_{\hat{a}_{i}}[h_{i}|\hat{\mathcal{A}}_{i},y_{i} ]]-\delta.\] Proof.: Given \(i\in[n]\) let \(K_{i}\) be the event that * \(d(b^{-}(f_{i}),y_{i})<\varepsilon\) * \(d(y_{i},b^{+}(\hat{a}_{i}))>100\varepsilon\) and let \(L_{i}\) be the event that * \(d(b^{-}(\hat{a}_{i}),b^{+}(b_{i}))>t/2\) * \(b^{+}(b_{i})\in U_{t/4,t/8}(\log\hat{a}_{i}^{-1}h_{i}|\mathcal{A}_{i})\). Note that the event \(i\in I\) is \(K_{i}\cap L_{i}\). We will prove the lemma by showing that \(\mathbb{P}[K_{i}^{C}]\) can be made arbitrarily small and bounding \(\mathbb{P}[L_{i}|\mathcal{F}_{i-1},\hat{\mathcal{A}}_{i},y_{i}]\) from below. First we wish to find an upper bound on \(\mathbb{P}[K_{i}^{C}]\). By the construction of \(y_{i}\) we know that \[\mathbb{P}[d(b^{-}(f_{i}),y_{i})<\varepsilon|\mathcal{F}_{i-1}]>1-\varepsilon.\] By definition we know that \[h_{i}=\gamma_{S_{k}+1}\gamma_{S_{k}+2}\ldots\gamma_{T_{k}}.\] Let \[\tilde{h}_{i}:=\lim_{n\to\infty}b^{+}(\gamma_{S_{k}+1}\gamma_{S_{k}+2}\ldots \gamma_{n}).\] We know that \(T_{k}-S_{k}\geq K\). Therefore by 7.9 there exist some \(C_{1},\delta_{1}>0\) such that providing \(\tilde{r}\) is sufficiently small (depending on \(\varepsilon\)) we have \[\mathbb{P}[d(b^{+}(h_{i}),\tilde{h}_{i})>\varepsilon|\mathcal{F}_{i-1}]<C_{1} \exp(-K\delta_{1}).\] In particular providing \(\tilde{r}\) is sufficiently small (depending on \(\varepsilon\)) this is at most \(\varepsilon\). Next note that by Lemma 7.11 and the fact that \(\tilde{h}_{i}\) is independent of \(y_{i}\) we have \[\mathbb{P}[d(\tilde{h}_{i},y_{i})<200\varepsilon|\mathcal{F}_{i-1}]<C_{2} \varepsilon^{\delta_{2}}\] for some \(C_{2},\delta_{2}>0\). Finally by Lemma 7.13 we know that providing \(\tilde{r}\) is sufficiently small \(d(b^{+}(h_{i}),b^{+}(\hat{a}_{i}))<\varepsilon\). Combining these estimates gives that providing \(\tilde{r}\) is sufficiently small (depending on \(\varepsilon\)) we have. \[\mathbb{P}[K_{i}^{C}|\mathcal{F}_{i-1}]<2\varepsilon+C_{2}\varepsilon^{\delta _{2}}.\] In particular providing \(\varepsilon\) is sufficiently small and \(\tilde{r}\) is sufficiently small (depending on \(\varepsilon\)) we have \[\mathbb{P}[K_{i}^{C}|\mathcal{F}_{i-1}]\leq\delta. \tag{69}\] We also know by Corollary 7.10 that for any \(\beta_{0}>\alpha_{0}\) providing \(\tilde{r}\) is sufficiently small \[\mathbb{P}[L_{i}^{C}|\mathcal{F}_{i-1},\hat{\mathcal{A}}_{i},y_{i}]\leq 3\beta _{0}.\] In particular this means that if we choose \(\beta_{0}\) sufficiently close to \(\alpha_{0}\) we may guarantee that \[\mathbb{P}[L_{i}|\mathcal{F}_{i-1},\hat{\mathcal{A}}_{i},y_{i}]\geq\frac{1}{ 2}(1-3\alpha_{0}). \tag{70}\] Let \(\tilde{X}_{i}=s_{i}^{-2}\operatorname{VAR}_{a_{i}}[h_{i}|\hat{\mathcal{A}}_{i},y_{i}]\mathbb{I}_{L_{i}}\) and let \(\hat{X}_{i}=s_{i}^{-2}\operatorname{VAR}_{\hat{a}_{i}}[h_{i}|\hat{\mathcal{A} }_{i},y_{i}]\mathbb{I}_{K_{i}^{C}}\). Note that \(X_{i}\geq\tilde{X}_{i}-\hat{X}_{i}\). Also note that since \(\log(\hat{a}_{i}^{-1}h_{i})\) is contained in a ball of radius \(s_{i}\) around \(0\) we have \(s_{i}^{-2}\operatorname{VAR}_{\hat{a}_{i}}[h_{i}|\hat{\mathcal{A}}_{i},y_{i}]\leq 1\). 
This means that by (69) we have \[\mathbb{E}[\hat{X}_{i}|\mathcal{F}_{i-1}]\leq\delta.\] We also have by (70) that \[\mathbb{E}[\tilde{X}_{i}|\mathcal{F}_{i-1}]\geq\frac{1}{2}(1-3\alpha_{0})s_{i }^{-2}\mathbb{E}[\operatorname{VAR}_{\hat{a}_{i}}[g_{i}|\mathcal{A}_{i}]].\] This gives the required result. We are now ready to prove that Condition A9 holds with high probability. **Proposition 7.20**.: _Providing_ \[\frac{h_{RW}}{\chi}\left(\max\left\{1,\log\frac{\log M_{\mu}}{h_{RW}}\right\} \right)^{-2}\] _is sufficiently large (depending on \(\alpha_{0}\), \(t\), and \(R\)) and \(\tilde{r}\) is sufficiently small (depending on \(\alpha_{0}\), \(t\), \(R\), and \(\mu\)) then Condition A9 is satisfied with probability at least \(1-o_{\mu}((\log\tilde{r}^{-1})^{-10})\)._ Proof.: We let \[T=\sum_{i\in I}\frac{\operatorname{Var}[u^{(i)}|\mathcal{A}]}{s_{i}^{2}}.\] We will apply Lemma 7.17. As mentioned previously \[\frac{\operatorname{Var}[u^{(i)}|\mathcal{A}]}{s_{i}^{2}}=\frac{\operatorname {VAR}_{\hat{a}_{i}}[h_{i}|\hat{\mathcal{A}}_{i},y_{i}]}{s_{i}^{2}}\mathbbm{1}_ {i\in I}.\] We will call this quantity \(X_{i}\) and apply Lemma 7.17 to \(X_{1}+X_{2}+\cdots+X_{n}\). Let \(\delta>0\) be as in Lemma 7.19. Note that by Lemma 7.19 we may take \[m_{i}=\max\left\{\frac{1}{2}(1-3\alpha_{0})s_{i}^{-2}\mathbb{E}[\operatorname {VAR}_{\hat{a}_{i}}[h_{i}|\hat{\mathcal{A}}_{i},y_{i}]]-\delta,0\right\}.\] By Lemma 7.5 we have \[\sum_{i=1}^{n}m_{i} \geq\frac{1}{2}(1-3\alpha_{0})s_{i}^{-2}\sum_{i=1}^{n}\frac{ \mathbb{E}[\operatorname{VAR}_{\hat{a}_{i}}[h_{i}|\hat{\mathcal{A}}_{i},y_{i} ]]}{s_{i}^{2}}\] \[\gtrsim\left(\frac{h_{RW}}{\chi}\right)\left(\max\left\{1,\log \frac{\log M_{\mu}}{h_{RW}}\right\}\right)^{-2}\log\log\tilde{r}^{-1}.\] Combining this with our estimate for \(n\) form Lemma 7.4 we see that we can take \[a\gtrsim\left(\frac{h_{RW}}{\log M}\right)\left(\max\left\{1,\log\frac{\log M _{\mu}}{h_{RW}}\right\}\right)^{-1}-\delta.\] In particular providing we choose \(\delta\) sufficiently small (in terms of \(\mu\)) when \(\tilde{r}\) is sufficiently small (depending on \(\mu\), \(\alpha_{0}\), and \(t\)) we may take \[a\gtrsim\left(\frac{h_{RW}}{\log M}\right)\left(\max\left\{1,\log\frac{\log M _{\mu}}{h_{RW}}\right\}\right)^{-1}.\] We have \(b=1\) and we take \(c=\frac{1}{2}a\). By Lemma 7.17 we get \[\mathbb{P}[T\leq nc]\leq\left(2^{a/2}\left(\frac{1-a}{1-\frac{a}{2}}\right)^{ 1-a/2}\right)^{n}. \tag{71}\] Let \(f(x):=\log\left(2^{x/2}\left(\frac{1-x}{1-\frac{x}{2}}\right)^{1-x/2}\right)\). Note that (71) can be written as \[\log\mathbb{P}[T\leq nc]\leq nf(a).\] Also note that \[f(x)=\frac{x}{2}\log 2+(1-\frac{x}{2})\log(1-x)-(1-\frac{x}{2})\log(1-x/2)\] meaning \[f^{\prime}(0)=\frac{1}{2}\log 2-1+\frac{1}{2}<0.\] Note that we may also assume that \(a\) is small enough that \(f^{\prime}(x)<\frac{1}{2}f^{\prime}(0)\) for all \(x\in[0,a]\). This means \[nf(a) \lesssim-na\] \[\lesssim-\left(\frac{h_{RW}}{\chi}\right)\left(\max\left\{1,\log \frac{\log M_{\mu}}{h_{RW}}\right\}\right)^{-2}.\] In particular this means that there is some constant \(c_{1}\) depending only on \(R,\alpha_{0}\) and \(t\) such that \[\log\mathbb{P} \left[T\leq c_{1}\left(\frac{h_{RW}}{\chi}\right)\left(\max\left\{ 1,\log\frac{\log M}{h_{RW}}\right\}\right)^{-2}\log\log\tilde{r}^{-1}\right]\] \[\lesssim-\left(\frac{h_{RW}}{\chi}\right)\left(\max\left\{1,\log \frac{\log M}{h_{RW}}\right\}\right)^{-2}\log\log\tilde{r}^{-1}.\] The result follows. 
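As a quick numerical sanity check on the tail bound of Lemma 7.17 used above (purely illustrative, and not part of the argument), the following Python sketch compares the bound with an empirical tail probability for independent \([0,1]\)-valued variables; the Bernoulli distribution and the numerical values of \(n\) and \(a\) are arbitrary choices, not quantities taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Independent X_i in [0, 1] with E[X_i] = a (take X_i ~ Bernoulli(a), so b = 1);
# an arbitrary distribution satisfying the hypotheses of Lemma 7.17.
n, a, b = 400, 0.05, 1.0
c = a / 2                                   # the choice c = a/2 made in Proposition 7.20

bound = ((a / c) ** (c / b) * ((b - a) / (b - c)) ** (1 - c / b)) ** n

trials = 200_000
sums = rng.binomial(n, a, size=trials)      # each sample is X_1 + ... + X_n
empirical = np.mean(sums <= n * c)

print(f"Lemma 7.17 bound : {bound:.3e}")    # roughly 4e-2 with these numbers
print(f"empirical tail   : {empirical:.3e}")# should be at most the bound
```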
### Proof of Proposition 7.1 In this sub-section we will prove Proposition 7.1 by checking that our construction satisfies the remaining conditions. Proof of Proposition 7.1.: First note that Condition A1 holds by the construction and the results of Section 6. Condition A2 follows from Lemma 7.4 and Lemma 7.6. We will prove Condition A3 by showing that each of the Conditions A4, A5, A6, A7,A8, and A9 hold on \(\mathcal{A}\)-measurable events with probability at least \(1-o_{\mu}((\log\tilde{r}^{-1})^{-10})\). We checked that this applies to Condition A4 in Section 7.2. Condition A5 follows immediately from construction. Condition A7 follows from Condition A4 and Lemma 7.6. Note that by Conditions (4) and (3) from the definition of \(I\) for Conditions A6 and A8 to hold it is sufficient that for each \(i\in[n]\) we have \[d(b^{-}(g_{i}),b^{-}(g_{1}g_{2}\dots g_{i}))<\frac{1}{10}t\] and \[d(b^{+}(g_{i}),g_{i}g_{i+1}\dots g_{n}b)<\frac{1}{10}t.\] By Lemma 7.13 and Corollary 7.9 there is some \(\delta>0\) depending on \(\mu\) such that for each fixed \(i\) these have probability at least \[1-O_{\mu}(\exp(-\delta K)).\] Putting in our estimates for \(K\) and \(n\) in terms of \(\tilde{r}\) gives the required result. Finally note that we checked Condition A9 in Section 7.3. ### Proof of the main theorem To prove Theorem 1.8 we will first prove the following proposition. **Proposition 7.21**.: _Let \(\mu\) be a finitely supported strongly irreducible probability measure on \(PSL_{2}(\mathbb{R})\) whose support is not contained in any compact subgroup of \(PSL_{2}(\mathbb{R})\). Suppose \(M_{\mu}<\infty\). Let \(\chi\) denote the Lypanov exponent of \(\mu\). Let \(R>0\) be chosen such that the operator norm is at most \(R\) on the support of \(\mu\). Let \(\nu\) be the Furstenberg measure generated by \(\mu\). Suppose that \(\alpha_{0}\in(0,1/3)\), \(t>0\) are such that \(\mu\) is \(\alpha_{0},t\)- non-degenerate. Suppose that_ \[\frac{h_{RW}}{\chi}\left(\max\left\{1,\log\frac{\log M_{\mu}}{h_{RW}}\right\} \right)^{-2}\] _is sufficiently large (depending on \(R\), \(t\) and \(\alpha_{0}\)). Then there exists some constant \(C\) (depending only on \(R\), \(t\) and \(\alpha_{0}\)) such that_ \[s_{C\tilde{r}}^{(k)}(\nu)<\left(\log\tilde{r}^{-1}\right)^{-5}\] _for all sufficiently small (depending only on \(\mu\), \(R\), \(t\) and \(\alpha_{0}\)) \(\tilde{r}>0\) and all_ \[k\in\left[\frac{1}{2}\log\log\tilde{r}^{-1},\log\log\tilde{r}^{-1}\right]\cap \mathbb{Z}. \tag{72}\] Proof.: Let \(C_{1}\) and \(\delta_{1}\) be the \(C\) and \(\delta\) from Proposition 1.22 with \(\frac{1}{10}t\) in the role of \(t\) and the implied constant (which depends only on \(R\), \(t\) and \(\alpha_{0}\)) in the \(\cong\) from Condition A4 of Proposition 7.1 in the role of \(c\). We now apply Proposition 7.1 with \(C_{1}\) in the role of \(C\). Suppose that \(\tilde{r}>0\) is chosen to be small enough to apply this and also so that \(\tilde{r}<\delta_{1}\). Let \(g_{1},g_{2},\ldots,g_{n}\), \(u^{(1)},u^{(2)},\ldots,u^{(n)}\), \(b\) and \(I\) be as in Proposition 7.1 and let \(\zeta_{i}\in\mathfrak{psl}_{*}^{*}\) be the derivative given by \[\zeta_{i}=D_{u}(\phi(g_{1}\ldots g_{i}\exp(u)g_{i+1}\ldots g_{n}b))|_{u=0}.\] We enumerate \(I\) as \(i_{1}<i_{2}<\cdots<i_{\tilde{n}}\). 
We now define \(\tilde{g}_{1},\tilde{g}_{2},\ldots,\tilde{g}_{\tilde{n}}\) and \(\tilde{b}\) by letting \(\tilde{g}_{1}:=g_{1}\ldots g_{i_{\tilde{1}}}\), \(\tilde{g}_{2}:=g_{i_{1}+1}\ldots g_{i_{2}}\) and so on with \(\tilde{g}_{n}:=g_{i_{\tilde{n}-1}+1}\ldots g_{i_{\tilde{n}}}\). We also define \(\tilde{b}:=g_{i_{\tilde{n}+1}}\ldots g_{n}b\). We apply Proposition 1.22 with our previous choices for \(t\) and \(c\) and with \(\tilde{n}\) in the role of \(n\), \(\tilde{b}\) in the role of \(b\) and \(\tilde{g}_{1},\tilde{g}_{2},\ldots,\tilde{g}_{\tilde{n}}\) in the role of \(g_{1},g_{2}\ldots g_{\tilde{n}}\). From this, noting that \(\tilde{n}\leq n\), we get that if \(\omega\in A\) then \[\mathcal{W}_{1}\left(\phi([x|\mathcal{A}]),\phi(g_{1}g_{2}\ldots g_{n}b)+\sum_ {i=1}^{n}\zeta_{i}([u^{(i)}|\mathcal{A}])\right)<C_{1}^{n}\left\|g_{1}g_{2} \ldots g_{i_{\tilde{n}}}\right\|^{2}\tilde{r}^{2}\] where \(x=g_{1}\exp(u^{(1)})\ldots g_{n}\exp(u^{(n)})b\). By Conditions A2 and A4 this means that \[\mathcal{W}_{1}\left(\phi([x|\mathcal{A}]),\phi(g_{1}g_{2}\ldots g_{n}b)+\sum_ {i=1}^{n}\zeta_{i}([u^{(i)}|\mathcal{A}])\right)\lesssim\tilde{r}\left(\log \tilde{r}^{-1}\right)^{-10}. \tag{73}\] We now let \[S=\sum_{i=1}^{n}\zeta_{i}([u^{(i)}|\mathcal{A}]).\] We bound \(s_{r}^{(k)}(S)\) for appropriate choices of \(r\) and \(k\). Suppose that \(A\) occurs and let \(V_{i}=\zeta([u^{(i)}|\mathcal{A}])\). We know by Condition A8, Lemma 3.16 and Lemma 3.13 that whenever \(i\in I\) \[\|\zeta_{i}\|\lesssim\left\|g_{1}g_{2}\ldots g_{i}\right\|^{-2}.\] Combining this with Conditions A4 and A5 and the fact that if \(i\notin I\) then \(u^{(i)}=0\) gives \[\left|V_{i}\right|\lesssim\tilde{r} \tag{74}\] almost surely. We also know by Conditions A4, A6, and A8, Proposition 3.6, Lemma 3.16, and the chain rule that whenever \(i\in I\) \[\operatorname{Var}V_{i}\gtrsim\frac{\operatorname{Var}u^{(i)}}{s_{i}^{2}}.\] In particular, combining this with Condition A9, we have that \[\sum_{i=1}^{n}\operatorname{Var}V_{i}\gtrsim\frac{h_{RW}}{\chi}\left(\max \left\{1,\log\frac{\log M_{\mu}}{h_{RW}}\right\}\right)^{-2}\tilde{r}^{2}\log \log\tilde{r}^{-1}.\] Let \(c_{1}\) be the implied constant from the \(\lesssim\) in (74). Suppose that \[k\in\left[\frac{1}{2}\log\log\tilde{r}^{-1},\log\log\tilde{r}^{-1}\right]\cap \mathbb{Z}.\] Partition \([n]\) into \(k\) sets \(J_{1},J_{2},\ldots,J_{k}\) such that for each \(j\in[k]\) \[\sum_{i\in J_{j}}\operatorname{Var}V_{i}\geq\frac{1}{k}\sum_{i=1}^{n} \operatorname{Var}V_{i}-c_{1}^{2}\tilde{r}^{2}.\] Trivially this is possible because \(\operatorname{Var}V_{i}\leq c_{1}^{2}\tilde{r}^{2}\) for all \(i\). In particular this means that providing \[\frac{h_{RW}}{\chi}\left(\max\left\{1,\log\frac{\log M_{\mu}}{h_{RW}}\right\} \right)^{-2}\] is sufficiently large (in terms of \(R\), \(\alpha_{0}\), and \(t\)) we have \[\sum_{i\in J_{j}}\operatorname{Var}V_{i}\gtrsim\frac{h_{RW}}{\chi}\left(\max \left\{1,\log\frac{\log M_{\mu}}{h_{RW}}\right\}\right)^{-2}\tilde{r}^{2}.\] Now let \(C_{2}\) be the \(C\) from Lemma 1.18 with \(10^{-5}\) in the role of \(\alpha\). 
By Lemma 1.18 we know that providing \[\frac{h_{RW}}{\chi}\left(\max\left\{1,\log\frac{\log M_{\mu}}{h_{RW}}\right\} \right)^{-2}\] is sufficiently large (in terms of \(R\), \(\alpha_{0}\), and \(t\)) we have \[s_{c_{1}C_{2}\tilde{r}}\left(\sum_{i\in J_{j}}V_{i}\right)\leq 10^{-5}\] and so by Lemma 2.8 we have \[s_{c_{1}C_{2}\tilde{r}}^{(k)}(S)\leq 10^{-5k}.\] Combining this with (73) and Lemma 1.19 we get that whenever \(\omega\in A\) we have \[s_{c_{1}C_{2}\tilde{r}}^{(k)}(x|\mathcal{A})\leq 10^{-5k}+O((\log\tilde{r})^{-10 }).\] Combining this with Condition A3 and the fact that \(5\log(10)>10\) we deduce that \[s_{c_{1}C_{2}\tilde{r}}^{(k)}(\nu)<o\left((\log\tilde{r})^{-5}\right)\] as required. We can now prove the main theorem. Proof of Theorem 1.8.: We use Proposition 7.21 along with Lemma 1.16 to show that for all sufficiently small \(r\) we have \[s_{r}(\nu)<(\log r^{-1})^{-2}.\] We will then complete the proof using Lemma 1.15. Let \(C\) be as in Proposition 7.21 and given some sufficiently small \(r>0\) let \(k=\left\lfloor\frac{3}{4}\log\log r^{-1}\right\rfloor\), let \(a=r/\sqrt{k}\), let \(b=r\exp\left(k\log k\right)\) and let \(\alpha=(\log r^{-1})^{-2}\). We apply Lemma 1.16 with this choice of \(a\), \(b\) and \(k\). Suppose that \(s\in[a,b]\) and let \(\tilde{r}=s/C\). To apply Proposition 7.21 we just need to check that \[k\in\left[\frac{1}{2}\log\log\tilde{r}^{-1},\log\log\tilde{r}^{-1}\right]\] providing \(r>0\) is sufficiently small. This is a trivial computation and is left to the reader. From Proposition 7.21 we may deduce that \[s_{s}^{(k)}(\nu)\leq\left(\log\tilde{r}^{-1}\right)^{-5}.\] In particular providing \(r\) is sufficiently small we have \[s_{s}^{(k)}(\nu)\leq\left(\log r\right)^{-4}.\] This means that by Lemma 1.16 we have \[s_{r}(\nu)\leq(\log r^{-1})^{-4}k\left(\frac{2e}{\pi}\right)^{\frac{k-1}{2}}+ k!\cdot ka^{2}b^{-2}\] Note that \(\log\frac{2e}{\pi}<\frac{2}{3}\) and so \[k\left(\frac{2e}{\pi}\right)^{\frac{k-1}{2}} \leq k\exp\left(\frac{3}{4}\log\frac{2e}{\pi}\log\log r^{-1}\right)\] \[\leq o((\log r^{-1})^{2}).\] Also \[k!\cdot ka^{2}b^{-2} <\exp(k\log k)a^{2}b^{-2}\] \[<\exp(-k\log k)\] \[<o((\log r^{-1})^{-2}).\] Putting this together gives \(s_{r}(\nu)\leq o((\log r^{-1})^{-2})\). This is sufficient to apply Lemma 1.15 which completes the proof. ## 8. Examples In this section we will give examples of measures \(\mu\) on \(PSL_{2}(\mathbb{R})\) which satisfy the conditions of Theorem 1.8. ### Heights and separation In this subsection we will review some techniques for bounding \(M_{\mu}\) using heights. First we need the following definition. **Definition 8.1** (Height).: Let \(\alpha_{1}\) be algebraic with algebraic conjugates \(\alpha_{2},\alpha_{3},\dots,\alpha_{d}\). Suppose that the minimal polynomial for \(\alpha_{1}\) over \(\mathbb{Z}[X]\) has positive leading coefficient \(a_{0}\). Then we define the _height_ of \(\alpha_{1}\) by \[\mathcal{H}(\alpha_{1}):=\left(a_{0}\prod_{i=1}^{n}\max\{1,|\alpha_{i}|\} \right)^{1/d}.\] We wish to use this to bound the size of polynomials of algebraic numbers. To do this we need the following way of measuring the complexity of a polynomial. **Definition 8.2**.: Given some polynomial \(P\in\mathbb{Z}[X_{1},X_{2},\dots,X_{n}]\) we define the _length_ of \(P\), which we denote by \(\mathcal{L}(P)\), to be the sum of the absolute values of the coefficients of \(P\). We also need the following basic fact about heights. **Lemma 8.3**.: _Let \(\alpha\neq 0\) be an algebraic number. 
Then_ \[\mathcal{H}(\alpha^{-1})=\mathcal{H}(\alpha).\] Proof.: This follows easily from the definition and is proven in [28, Section 14]. **Lemma 8.4**.: _Given \(P\in\mathbb{Z}[X_{1},X_{2},\dots,X_{n}]\) of degree at most \(L_{1}\geq 0\) in \(X_{1}\),..., \(L_{n}\geq 0\) in \(X_{n}\) and algebraic numbers \(\xi_{1},\xi_{2},\dots,\xi_{n}\) we have_ \[\mathcal{H}(P(\xi_{1},\xi_{2},\dots,\xi_{n}))\leq\mathcal{L}(P)\mathcal{H}( \xi_{1})^{L_{1}}\dots\mathcal{H}(\xi_{n})^{L_{n}}\] Proof.: This is [28, Proposition 14.7]. To make the above lemma useful for bounding the absolute value of expressions we need the following. **Lemma 8.5**.: _Suppose that \(\alpha\in\mathbb{C}\backslash\{0\}\) is algebraic and that its minimal polynomial has degree \(d\). Then_ \[\mathcal{H}(\alpha)^{-d}\leq|\alpha|\leq\mathcal{H}(\alpha)^{d}.\] Proof.: The fact that \(|\alpha|\leq\mathcal{H}(\alpha)^{d}\) is immediate from the definition of height. The other side of the inequality follows from Lemma 8.3. **Proposition 8.6**.: _Suppose that \(\mu\) is a measure on \(PSL_{2}(\mathbb{R})\) supported on a finite set of points. For each element in the support of \(\mu\) choose a representative in \(SL_{2}(\mathbb{R})\). Let \(S\subset SL_{2}(\mathbb{R})\) be the set of these representatives._ _Suppose that all of the entries of the elements of \(S\) are algebraic. Let \((\xi_{1},\xi_{2},\ldots,\xi_{k})\) be the set of these entries. Let \(K=\mathbb{Q}[\xi_{1},\xi_{2},\ldots,\xi_{k}]\) be the number field generated by the \(\xi_{i}\) and let_ \[C=\max\{\mathcal{H}(\xi_{i}):i\in[k]\}.\] _Then_ \[M_{\mu}\leq 4^{[K:\mathbb{Q}]}C^{8[K:\mathbb{Q}]}.\] Proof.: Let \(a\in S^{m}\) and \(b\in S^{n}\). We find an upper bound for \(d(a,b)\) where \(d\) is the distance function of our left-invariant Riemann metric introduced in the introduction. We have that \[d(a,b)=d(\operatorname{Id},a^{-1}b)\geq\Theta\left(\min\left\{\left\|I-a^{-1 }b\right\|_{2},\left\|I+a^{-1}b\right\|_{2}\right\}\right).\] For \(i\in[|S|]\) and \(j,k\in\{1,2\}\) let \(\zeta_{i,j,k}\) be the \((j,k)\)-th entry of the \(i\)-th element of \(S\). Let \(L_{i}\) be the sum of the number of times the \(i\)-th element of \(S\) appears in our word for \(a\) and the number of times it appears in our word for \(b\). Note that the components of \(a^{-1}\) are components of \(a\) possibly with a sign change. We know that each each component of \(I\pm a^{-1}b\) is of the form \(P(\zeta_{1,1,1},\ldots,\zeta_{|S|,2,2})\) where \(P\) is some polynomial of degree at most \(L_{i}\) in \(\zeta_{i,j,k}\). We also know that the \(L_{i}\) sum to \(m+n\). It is easy to see by induction that \(\mathcal{L}(P)\leq 2^{m+n}+1\). In particular \(\mathcal{L}(P)\leq 2^{m+n+1}\). By Lemma 8.4 this means that if \(\alpha\) is a coefficient of \(I\pm a^{-1}b\) then \[\mathcal{H}(\alpha)\leq 2^{m+n+1}C^{4(m+n)}.\] We know that \(\alpha\in K\) and so in particular the degree of its minimal polynomial is at most \([K:\mathbb{Q}]\). This means that if \(\alpha\neq 0\) then \[|\alpha|\geq 2^{-(m+n+1)[K:\mathbb{Q}]}C^{-4(m+n)[K:\mathbb{Q}]}.\] In particular this means that if \(a\neq b\) then \[d(a,b)\geq\Theta\left(2^{-(m+n+1)[K:\mathbb{Q}]}C^{-4(m+n)[K:\mathbb{Q}]}\right)\] and so \[M_{\mu}\leq 4^{[K:\mathbb{Q}]}C^{8[K:\mathbb{Q}]}.\qed\] ### Bounding the random walk entropy using the Strong Tits alternative In this subsection we will combine Breulliard's strong Tits alternative [7] with the results of Kesten [20] in order to obtain an estimate on the random walk entropy. 
The main result of this section will be the following. **Proposition 8.7**.: _There is some \(c>0\) such that the following is true. Let \(\mu\) be a finitely supported probability measure on \(PSL_{2}(\mathbb{R})\) and let \(h_{RW}\) be it's random walk entropy. Let \(K>0\) and suppose that for every virtually solvable subgroup \(H<PSL_{2}(\mathbb{R})\) we have_ \[\mu(H)<1-K.\] _Suppose further that \(\mu(\mathrm{Id})>K\). Then_ \[h_{RW}>cK.\] \(PSL_{2}(\mathbb{R})\) acts on the closed complex half plane \(\overline{\mathbb{H}}=\{z\in\mathbb{C}:\mathrm{Im}\,z\geq 0\}\) by Mobius transformations. It is well known that the virtually solvable subgroups of \(PSL_{2}(\mathbb{R})\) are precisely those which either have a common fixed point in \(\overline{\mathbb{H}}\) or for which there exists a pair of points in \(\overline{\mathbb{H}}\) such that each element in the subgroup either fixes both points or maps them both to each other. To prove Proposition 8.7 we introduce the following. We let \(G\) be a countable group and let \(\mu\) be a finite measure on \(G\). We let \(T_{\mu,G}:l^{2}(G)\to l^{2}(G)\) be the operator defined by \(T_{\mu,G}(f)(g)=\int_{G}f(gh)d\mu(h)\). It is clear that \(T_{\mu,G}\) is a bounded linear operator and that when \(\mu\) is symmetric \(T_{\mu,G}\) is self-adjoint. To prove Proposition 8.7 we need the following results. **Lemma 8.8**.: _The operator \(T_{\mu}\) is linear in \(\mu\). In other words_ \[T_{\lambda_{1}\mu_{1}+\lambda_{2}\mu_{2}}=\lambda_{1}T_{\mu_{1}}+\lambda_{2}T _{\mu_{2}}.\] **Lemma 8.9**.: _Let \(\mu\) be a finitely supported probability measure on some group \(G\). Let \(h_{RW}\) be the random walk entropy of \(\mu\). Then_ \[h_{RW}\geq-2\log\left\|T_{\mu,G}\right\|.\] **Lemma 8.10**.: _There is some \(\varepsilon>0\) such that the following is true. Suppose that \(a,b,c\in PSL_{2}(\mathbb{R})\) generate a non-virtually solvable subgroup. Let \(G\) be the group generated by \(a\), \(b\), and \(c\). Let_ \[\mu=\frac{1}{4}\delta_{a}+\frac{1}{4}\delta_{b}+\frac{1}{4}\delta_{c}+\frac{1 }{4}\delta_{\mathrm{Id}}.\] _Then_ \[\left\|T_{\mu,G}\right\|<1-\varepsilon.\] **Lemma 8.11**.: _Let \(\lambda\) be a finite non-negative measure on \(PSL_{2}(\mathbb{R})\) with finite support. Let \(T\) be the total mass of \(\lambda\). Let \(K\geq 0\) and suppose that for every virtually solvable subgroup \(H<PSL_{2}(\mathbb{R})\) we have_ \[\lambda(H)<T-K. \tag{75}\] _Then there exists some \(n\in\mathbb{Z}_{\geq 0}\) such that for each \(i\in[n]\) there exists \(a_{i},b_{i},c_{i}\in PSL_{2}(\mathbb{R})\) and \(k_{i}>0\) such that_ \[\lambda=\lambda^{\prime}+\sum_{i=1}^{n}k_{i}\left(\frac{1}{3}\delta_{a_{i}}+ \frac{1}{3}\delta_{b_{i}}+\frac{1}{3}\delta_{c_{i}}\right)\] _for some non-negative measure \(\lambda^{\prime}\) and for each \(i\in[n]\) the set \(\{a_{i},b_{i},c_{i}\}\) generates a non-virtually solvable group._ Proposition 8.7 follows immediately by combining these lemmas. The rest of this subsection will be concerned with proving these lemmas. Lemma 8.8 is trivial and its proof is left to the reader. We now prove Lemma 8.9. Proof of Lemma 8.9.: Let \(\gamma_{1},\gamma_{2},\dots\) be i.i.d. samples from \(\mu\) and let \(n\in\mathbb{Z}\). 
It is clear that for all \(g\in G\) \[\mathbb{P}[\gamma_{1}\gamma_{2}\dots\gamma_{n}=g]=T^{n}_{\mu,G}(\delta_{ \operatorname{Id}})(g).\] Now let \(p_{1},p_{2},\dots,p_{m}\) be a probability vector and let \(g_{1},g_{2},\dots,g_{m}\in G\) be distinct points such that \[\mu^{*n}=\sum_{i=1}^{m}p_{i}\delta_{g_{i}}.\] By definition we know that \[H(\gamma_{1}\gamma_{2}\dots\gamma_{n})=-\sum p_{i}\log p_{i}.\] By applying Jensen's inequality to the convex function \(-\log\) we can see that \[-\sum p_{i}\log p_{i}\leq-\log\sum p_{i}^{2}.\] Clearly \[-\log\sum p_{i}^{2} =-2\log\left\|T^{n}_{\mu,G}(\delta_{\operatorname{Id}})\right\|\] \[\geq-2n\log\left\|T_{\mu,G}\right\|.\] Hence for all \(n\in\mathbb{Z}_{>0}\) we have \[\frac{1}{n}H(\gamma_{1}\gamma_{2}\dots\gamma_{n})\geq-\log\left\|T_{\mu,G} \right\|.\] The result now follows by taking \(n\to\infty\). The proof of Lemma 8.10 is more involved. The key ingredient is the following result of Breuillard. **Theorem 8.12**.: _There exists some \(N\in\mathbb{Z}_{>0}\) such that if \(F\) is a finite symmetric subset of \(PSL_{2}(\mathbb{R})\) containing \(\operatorname{Id}\), either \(F^{d}\) contains two elements which freely generate a non-abelian free group, or the group generated by \(F\) is virtually solvable (i.e. contains a finite index solvable subgroup)._ Proof.: This is a special case of [7, Theorem 1.1]. We also need the following result of Kesten and a corollary of it. **Theorem 8.13**.: _Let \(G\) be a countable group. Suppose that \(a,b\in G\) freely generate a free group. Let \(A<G\) be the subgroup generated by \(a\) and \(b\). Let \(\mu\) be the measure on \(A\) given by_ \[\mu=\frac{1}{4}\left(\delta_{a}+\delta_{a^{-1}}+\delta_{b}+\delta_{b^{-1}} \right).\] _Then \(\left\|T_{\mu,A}\right\|=\frac{\sqrt{3}}{2}\)._ Proof.: This follows from [20, Theorem 3] and the fact that the spectral radius of a self-adjoint operator is its norm. **Corollary 8.14**.: _Let \(G\) be a countable group. Suppose that \(a,b\in G\) freely generate a free group. Let \(A<G\) be the subgroup generated by \(a\) and \(b\). Let \(\mu\) be the measure on \(G\) given by_ \[\mu=\frac{1}{4}\left(\delta_{a}+\delta_{a^{-1}}+\delta_{b}+\delta_{b^{-1}} \right).\] _Then \(\left\|T_{\mu,G}\right\|=\frac{\sqrt{3}}{2}\)._ Proof.: Let \(H\subset G\) be chosen such that each left coset of \(A\) in \(G\) can be written uniquely as \(hA\) for some \(h\in H\). This means that \[l^{2}(G)\cong\bigoplus_{h\in H}l^{2}(hA).\] We also note that for any \(h\in H\) the map \(T_{\mu,G}\) maps \(l^{2}(hA)\) to \(l^{2}(hA)\) and its action on \(l^{2}(hA)\) is isomorphic to the action of \(T_{\mu|_{A},A}\) on \(l^{2}(A)\). This means that \(\left\|T_{\mu,G}\right\|=\left\|T_{\mu|_{A},A}\right\|\). The result now follows by Theorem 8.13. One difficulty we need to overcome is that Theorems 8.12 and 8.13 require symmetric sets and measures but symmetry is not a requirement of Proposition 8.7. We will do this by bounding \(\left\|T_{\mu,G}T_{\mu,G}^{\dagger}\right\|\). First we need the following two simple lemmas. **Lemma 8.15**.: _Let \(G\) be a countable group and let \(\mu_{1},\mu_{2}\) be measures on \(G\). Then_ \[T_{\mu_{1},G}T_{\mu_{2},G}=T_{\mu_{1}*\mu_{2},G}. \tag{76}\] **Lemma 8.16**.: _Let \(G\) be a group, let \(n\in\mathbb{Z}_{>0}\), and let \(\left(p_{i}\right)_{i=1}^{n}\) be a probability vector. 
Let \(g_{1},g_{2},\ldots,g_{n}\in G\) and let \(\mu\) be defined by_ \[\mu=\sum_{i=1}^{n}p_{i}g_{i}\] _and let \(\hat{\mu}\) be defined by_ \[\hat{\mu}=\sum_{i=1}^{n}p_{i}g_{i}^{-1}.\] _Then_ \[T_{\mu,G}^{\dagger}=T_{\hat{\mu},G}.\] These lemmas are trivial and their proofs are left to the reader. We are now ready to prove Lemma 8.10. Proof of Lemma 8.10.: We will prove this by bounding \(\left\|(T_{\mu,G}T_{\mu,G}^{\dagger})^{N}\right\|\) where \(N\) is as in Theorem 8.12. Note that this is equal to \(\left\|T_{\mu,G}\right\|^{2N}\). Let \(\hat{\mu}\) be as in Lemma 8.16. Note that we may write \[\mu*\hat{\mu}=\eta+\frac{1}{16}(\delta_{\mathrm{Id}}+\delta_{a}+\delta_{a^{-1 }}+\delta_{b}+\delta_{b^{-1}}+\delta_{c}+\delta_{c^{-1}})\] where \(\eta\) is some positive measure of total mass \(\frac{9}{16}\). By applying Theorem 8.12 with \(F=\{\mathrm{Id},a,a^{-1},b,b^{-1},c,c^{-1}\}\) we know that there is some \(f,g\in F^{N}\) which freely generate a free group. We write \[(\mu*\hat{\mu})^{*N}=\eta^{\prime}+\frac{1}{16^{N}}(\delta_{f}+\delta_{f^{-1} }+\delta_{g}+\delta_{g^{-1}})\] where \(\eta^{\prime}\) is some positive measure with total mass \(1-\frac{4}{16^{N}}\). By Theorem 8.13 and Lemma 8.8 we know that \[\left\|T_{\frac{1}{16^{N}}(\delta_{c}+\delta_{c^{-1}}+\delta_{d}+\delta_{d^{- 1}}),G}\right\|\leq\frac{2\sqrt{3}}{16^{N}}.\] This means that \[\left\|T_{(\mu*\hat{\mu})^{*N},G}\right\|\leq 1-\frac{4}{16^{N}}(1-\frac{ \sqrt{3}}{2})\] and therefore \[\left\|T_{\mu,G}\right\|\leq\left(1-\frac{4}{16^{N}}(1-\frac{\sqrt{3}}{2}) \right)^{1/2N}<1.\qed\] Finally we need to prove Lemma 8.11. Proof of Lemma 8.11.: We prove this by induction on the number of elements in the support of \(\lambda\). If \(\lambda\) is the zero measure then the statement is trivial so we have our base case. If \(K=0\) then the statement is trivial so assume \(K>0\). Let \(a\in\operatorname{supp}\lambda\) be chosen such that \(\lambda(a)\) is minimal amongst all non-identity elements in the support of \(\lambda\). Now choose some \(b\in\operatorname{supp}\lambda\) such that \(a\) and \(b\) do not share a common fixed point. This is possible by (75) and the fact that \(K>0\). If \(a\) and \(b\) generate a non virtually solvable group then we may write \[\lambda=\lambda^{\prime}+\lambda(a)\left(\frac{1}{3}\delta_{a}+\frac{1}{3} \delta_{a}+\frac{1}{3}\delta_{b}\right)+\lambda(a)\left(\frac{1}{3}\delta_{a} +\frac{1}{3}\delta_{b}+\frac{1}{3}\delta_{b}\right)\] where \(\lambda^{\prime}\) is a non-negative measure with smaller support that \(\lambda\). We then apply the inductive hypothesis to \(\lambda^{\prime}\) with \(\max\{K-2\lambda(a),0\}\) in the role of \(K\) and \(T-2\lambda(a)\) in the role of \(T\). If \(a\) and \(b\) generate a virtually solvable group then there must be two distinct points \(g_{1},g_{2}\in PSL_{2}(\mathbb{R})\) such that the set \(\{g_{1},g_{2}\}\) is stationary under both \(a\) and \(b\). If this is the case then choose some \(c\in\operatorname{supp}\lambda\) such that \(\{g_{1},g_{2}\}\) is not stationary under \(c\). This is possible by (75). Note that \(a,b\) and \(c\) generate a non virtually solvable group. Write \[\lambda=\lambda^{\prime}+3\lambda(a)\left(\frac{1}{3}\delta_{a}+\frac{1}{3} \delta_{b}+\frac{1}{3}\delta_{c}\right).\] We then apply the inductive hypothesis to \(\lambda^{\prime}\) with \(\max\{K-3\lambda(a),0\}\) in the role of \(K\) and \(T-2\lambda(a)\) in the role of \(T\). 
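As an illustrative aside (not used in any proof), the bound of Lemma 8.9 together with Kesten's theorem can be checked numerically. The sketch below uses the standard distance-from-identity description of the simple random walk on the free group \(F_{2}\) with uniform steps from \(\{a,a^{-1},b,b^{-1}\}\): from the identity every step increases the word length, while from length \(d\geq 1\) the length increases with probability \(3/4\) and decreases with probability \(1/4\). For this symmetric measure the return probabilities satisfy \(\mathbb{P}[\text{return at }n]^{1/n}\to\left\|T_{\mu,F_{2}}\right\|=\sqrt{3}/2\), up to polynomial corrections. This is an editorial illustration with arbitrarily chosen step counts.

```python
import math

N = 2000                       # number of steps in the dynamic program
q = [0.0] * (N + 2)            # q[d] = P[word length = d at the current step]
q[0] = 1.0
returns = {}
for n in range(1, N + 1):
    new = [0.0] * (N + 2)
    new[1] += q[0]             # from the identity the length always increases
    for d in range(1, N + 1):
        new[d + 1] += 0.75 * q[d]
        new[d - 1] += 0.25 * q[d]
    q = new
    if n in (250, 500, 1000, 2000):
        returns[n] = q[0]

rho = math.sqrt(3) / 2         # Kesten's value for the norm of T_{mu, F_2}
for n, p in returns.items():
    print(f"n = {n:5d}   P[return]^(1/n) = {p ** (1.0 / n):.4f}   (rho = {rho:.4f})")
print("entropy lower bound from Lemma 8.9: -2 log rho =", -2.0 * math.log(rho))
```

For the symmetric measure of Theorem 8.13 this gives the lower bound \(h_{RW}\geq-2\log(\sqrt{3}/2)=\log(4/3)\), which is the role Kesten's theorem plays in the proof of Lemma 8.10.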
### Symmetric and nearly symmetric examples The purpose of this subsection is to prove Proposition 1.12. We will do this using Theorem 1.8. First we need the following proposition. **Proposition 8.17**.: _For all \(\alpha_{0},c,A>0\) there exists \(t>0\) such that for all sufficiently small (depending on \(\alpha_{0}\), \(c\), and A) \(r>0\) the following is true._ _Suppose that \(\mu\) is a compactly supported probability measure on \(PSL_{2}(\mathbb{R})\) and that \(U\) is a random variable taking values in \(\mathfrak{psl}_{2}(\mathbb{R})\) such that \(\exp(U)\) has law \(\mu\). Suppose that \(\|U\|\leq r\) almost surely and that \(\|\mathbb{E}[U]\|\leq cr^{2}\). Suppose that the smallest eigenvalue of the covariance matrix of \(U\) is at least \(Ar^{2}\). Then \(\mu\) is \(\alpha_{0}\), \(t\) - non-degenerate._ This is enough to prove Proposition 1.12. Proof of Proposition 1.12.: Note that by Proposition 8.17 there is some \(t>0\) such that providing \(r\) is sufficiently small \(\mu\) is \(\frac{1}{4}\), \(t\) - non-degenerate. Note that we can make \(r\) arbitrarily small be choosing our \(C\) to be arbitrarily large. Note that by Proposition 8.7 \[h_{RW}\geq\Theta(T).\] Note that by Proposition 8.6 \[M_{\mu}\leq 4^{k}M^{8k}.\] Note that trivially \[\chi\leq O(r).\] The result now follows from Theorem 1.8. In order to prove Proposition 8.17 we first need the following result and a corollary of it. **Theorem 8.18**.: _For all \(\gamma\in(1,\infty)\) there is some \(L>0\) such that the follow is true. Suppose that \(X_{1},X_{2},\ldots,X_{n}\) are random variables taking values in \(\mathbb{R}\) and suppose that for each \(i\in[n]\)_ \[\mathbb{E}[X_{i}|X_{1},X_{2},\ldots,X_{i-1}]=0,\] \[\mathbb{E}[X_{i}^{2}|X_{1},X_{2},\ldots,X_{i-1}]=1,\] _and_ \[|X_{i}|\leq\gamma\] _almost surely. Then_ \[\sup_{t}\left|\Phi(t)-\mathbb{P}\left[\frac{X_{1}+X_{2}+\dots+X_{n}}{\sqrt{n}}<t \right]\right|\leq Ln^{-1/2}\log n\] _where_ \[\Phi(t):=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{t}\exp(-x^{2}/2)dx\] _is the c.d.f. of the standard normal distribution._ Proof.: This is a special case of [3, Theorem 2]. **Corollary 8.19**.: _For all \(\varepsilon,\gamma>0\) there exists \(\delta>0\) and \(N\in\mathbb{Z}_{>0}\) such that the following is true. Let \(n\geq N\) and let \(X_{1},\dots,X_{n}\) be as in Theorem 8.18 with this values of \(\gamma\). Then for all \(a\in\mathbb{R}\) we have_ \[\mathbb{P}\left[\frac{X_{1}+X_{2}+\dots+X_{n}}{\sqrt{n}}\in[a,a+\delta]\right] \leq\varepsilon.\] Proof.: This follows immediately from Theorem 8.18. We will now prove Proposition 8.17. Proof of Proposition 8.17.: To prove Proposition 8.17 we will show that there is some \(n\) such that for all \(b_{0}\in P_{1}(\mathbb{R})\) the measure \(\mu^{*n}*\delta_{b_{0}}\) has mass at most \(\alpha_{0}\) on any interval of length at most \(t\). To do this, given an \(n\)-step random walk on \(P_{1}(\mathbb{R})\) generated by \(\mu\) we will construct an \(n\)-step random walk on \(\mathbb{R}\). Specifically we have the following. We let \(n\in\mathbb{Z}_{>0}\) be some value we will choose later. Let \(b_{0}\in P_{1}(\mathbb{R})\) and let \(\gamma_{1},\gamma_{2},\dots,\gamma_{n}\) be i.i.d. samples from \(\mu\). Let \(b_{i}:=\gamma_{i}\gamma_{i-1}\dots\gamma_{1}b_{0}\). 
Let \(U_{i}:=\log\gamma_{i}\) and define the real valued random variables \(X_{1},X_{2},\dots,X_{n}\) by \[X_{i}:=\left(\operatorname{Var}\left[\varrho_{b_{i-1}}(U)\right]\right)^{-1/2 }\varrho_{b_{i-1}}(U_{i})\] where \(\varrho_{b}\in\mathfrak{psl}_{*}^{*}\) is defined to be \(D_{u}(\exp(u)b)|_{u=0}\) as in Definition 3.1. We let \(Y_{1},Y_{2},\dots,Y_{n}\) be defined by \[Y_{i}=X_{i}-\mathbb{E}[X_{i}]\] and let \(S=Y_{1}+Y_{2}+\dots+Y_{n}\). Clearly \(\mathbb{E}[Y_{i}|Y_{1},Y_{2},\dots,Y_{i-1}]=0\) and \(\mathbb{E}[Y_{i}^{2}|Y_{1},Y_{2},\dots,Y_{i-1}]=1\). This enables us to apply Theorem 8.18. We now need to show that understanding \(S\) gives us some information about the distribution of \(b_{n}\). Now let \(c_{1},c_{2},\dots\) denote positive constants which depend only on \(\alpha_{0}\), \(c\), and \(A\). We define \(f:\mathbb{R}\to\mathbb{R}\) by \[f:x\mapsto\int_{0}^{x}\left(\operatorname{Var}\left[\varrho_{b_{i-1}}(U) \right]\right)^{-1/2}du.\] This definition is chosen such that \(f(\phi(b_{i}))-f(\phi(b_{i-1}))\) is approximated \(X_{i}\). In-fact we have \[D_{u}f(\phi(\exp(u)b_{i-1}))|_{u=0}=\left(\operatorname{Var}\left[\varrho_{b_{i- 1}}(U)\right]\right)^{-1/2}\varrho_{b_{i-1}}(U_{i})\] and so \(X_{i}=D_{u}f(\phi(\exp(u)b_{i-1}))|_{u=0}(U_{i})\). This means that to bound \(|f(\phi(b_{i}))-f(\phi(b_{i-1}))-X_{i}|\) it is sufficient to bound \(\|D_{u}^{2}f(\phi(\exp(u)b_{i-1}))\|\) for \(\|u\|\leq 1\). By compactness the norms of the first and second derivatives of the exponential function are bounded on the unit ball. Note that for all \(u\in\mathbb{R}\) \[c_{1}^{-1}r^{2}\leq\operatorname{Var}\varrho_{\phi^{-1}(u)}(U)\leq c_{1}r^{2} \tag{77}\] and so \[c_{2}^{-1}r^{-1}\leq f^{\prime}\leq c_{2}r^{-1}. \tag{78}\] Also note that \(\operatorname{Var}\varrho_{\phi^{-1}(u)}(U)\) can be written as \[\operatorname{Var}\varrho_{\phi^{-1}(u)}(U)=v^{T}\Sigma v\] where \(\Sigma\) is the covariance matrix of \(U\) and \(v\in\mathbb{R}^{3}\) depends smoothly on \(u\) and depends on nothing else. In particular \[\left|\frac{d}{du}\operatorname{Var}\varrho_{\phi^{-1}(u)}(U)\right| =\left|v^{\prime}(u)^{T}\Sigma v(u)+v(u)^{T}\Sigma v^{\prime}(u)\right|\] \[\leq c_{3}r^{2}.\] Note that \[f^{\prime\prime}(x) =\frac{d}{dx}\left(\operatorname{Var}\varrho_{\phi^{-1}(x)}(U) \right)^{-1/2}\] \[=\left(\operatorname{Var}\rho_{\phi^{-1}(x)}(U)\right)^{-3/2} \left(\frac{d}{du}\operatorname{Var}\rho_{\phi^{-1}(u)}(U)\right)\] and so in particular \[|f^{\prime\prime}(x)|\leq c_{4}r^{-1}. \tag{79}\] In particular this means that whenever \(\|u\|\leq 1\) we have \[\left\|D_{u}^{2}f(\phi(\exp(u)b_{i-1}))\right\|\leq c_{5}r^{-1}.\] Also note that there is some \(M\) with \(c_{6}^{-1}r^{-1}\leq M\leq c_{6}r^{-1}\) such that for all \(x\in\mathbb{R}\) \[f(x+\pi)=f(x)+M.\] Note that by (79) and Taylor's Theorem \[|f(\phi(b_{i}))-f(\phi(b_{i-1}))-X_{i}|\leq c_{7}r.\] Note that by (77) and the conditions of the proposition \[|X_{i}-Y_{i}|=|\mathbb{E}[X_{i}]|\leq c_{8}r.\] Therefore \[|f(\phi(b_{i}))-f(\phi(b_{i-1}))-Y_{i}|\leq c_{9}r.\] In particular \[|f(\phi(b_{n}))-f(\phi(b_{0}))-S|\leq c_{10}nr. \tag{80}\] We now let \(n=\lceil Kr^{-2}\rceil\) where \(K\) is some positive constant depending on \(\alpha_{0}\), \(A\), and \(c\) which we will choose later. 
Choose \(N\in\mathbb{Z}_{>0}\) and \(T>0\) such that by applying Theorem 8.18 we may ensure that whenever \(n\geq N\) and \(a\in\mathbb{R}\) we have \[\mathbb{P}\left[\frac{S}{\sqrt{n}}\in[a,a+T]\right]\leq\frac{\alpha_{0}}{2}.\] Note that \[\mathbb{E}[S^{2}]=n\] and so \[\mathbb{P}\left[|S|\geq\frac{M}{2}\right]\leq c_{10}K.\] Therefore whenever \(n\geq N\) and \(a\in\mathbb{R}\) \[\mathbb{P}\left[S\in[a,a+T\sqrt{n}]+M\mathbb{Z}\right]\leq\frac{\alpha_{0}}{2 }+c_{10}K.\] Substituting in our value for \(n\) gives \[\mathbb{P}\left[S\in[a,a+c_{11}\sqrt{K}r^{-1}]+M\mathbb{Z}\right]\leq\frac{ \alpha_{0}}{2}+c_{10}K.\] From (80) we may deduce that \[\mathbb{P}\left[f(\phi(b_{n}))\in[a,a+(c_{11}\sqrt{K}-c_{12}K)r^{-1}]+M \mathbb{Z}\right]\leq\frac{\alpha_{0}}{2}+c_{10}K.\] By taking \(K=\min\left\{\frac{\alpha_{0}}{2\alpha_{0}},\frac{c_{11}^{2}}{2c_{12}^{2}}\right\}\) we get \[\mathbb{P}\left[f(\phi(b_{n}))\in[a,a+c_{13}r^{-1}]+M\mathbb{Z}\right]\leq \alpha_{0}.\] By (78) this means that \[\mathbb{P}\left[\phi(b_{n})\in[a,a+c_{14}]+\pi\mathbb{Z}\right]\leq\alpha_{0}\] providing \(n\geq N\). Noting that \(n\to\infty\) as \(r\to 0\) completes the proof. ### Examples with rotational symmetry One way in which we can ensure that the Furstenberg measure satisfies our \(\alpha_{0},t\)- non-degeneracy condition is to ensure that it has some kind of rotational symmetry. In particular we can prove the following. **Proposition 8.20**.: _For every \(a,b\in\mathbb{Z}_{>0}\) with \(a\geq 4\) and \(K>0\) there exists some \(N\in\mathbb{Z}_{>0}\) and \(\varepsilon>0\) such that the following is true._ _Suppose that \(n\) is an integer with \(n\geq N\). Suppose that \(A_{1},A_{2},\ldots,A_{b}\in PSL_{2}(\mathbb{R})\) have operator norms at most \(1+1/n\) and have entries whose Mahler measures are at most \(\exp(\exp(\varepsilon\sqrt{n}))\). Suppose further that the degree of the number field generated by the entries of the \(A_{i}\) is at most \(\exp(\varepsilon\sqrt{n})\)._ _Let \(R\in PSL_{2}(\mathbb{R})\) be a rotation by \(\pi/a\) and let \(\mu\) be defined by_ \[\mu:=\frac{1}{ab}\sum_{i=0}^{a-1}\sum_{j=1}^{b}\delta_{R^{i}A_{j}R^{-i}}.\] _Suppose further that for every virtually solvable \(H<PSL_{2}(\mathbb{R})\) we have \(\mu(H)\leq 1-K\)._ _Then the Furstenberg measure generated by \(\mu\) is absolutely continuous._ Proof.: We wish to apply Theorem 1.8 to \(\frac{2}{3}\mu+\frac{1}{3}\delta_{\mathrm{Id}}\). Note that this measure is clearly \(\frac{1}{a}\), \(\frac{\pi}{a}\)- non-degenerate. Also note that we may take \(R=2\) in Theorem 1.8. Clearly \(\chi<\frac{1}{n}\). Note that by Proposition 8.7 we have \(h_{RW}\geq\Theta(K)\). Note that by Proposition 8.6 we know that \(M_{\mu}\leq\exp(A\exp(\varepsilon n))\) where \(A\) is some constant depending only on \(a\), \(b\) and \(c\). The result now follows by Theorem 1.8. ### Examples supported on large elements The purpose of this subsection is to prove Proposition 1.13. First we will need the following lemma. **Lemma 8.21** (The Ping-Pong Lemma).: _Suppose that \(G\) is a group which acts on a set \(X\). Let \(n\in\mathbb{Z}\) and suppose that we can find \(g_{1},g_{2},\dots,g_{n}\in G\) and pairwise disjoint non-empty sets_ \[A_{1}^{+},A_{2}^{+},\dots,A_{n}^{+},A_{1}^{-},A_{2}^{-}\dots,A_{n}^{-}\subset X\] _such that for all \(i\in[n]\) and all \(x\in X\backslash A_{i}^{-}\) we have \(g_{i}x\in A_{i}^{+}\). Then \(g_{1},g_{2},\dots,g_{n}\) freely generate a free semi-group._ This lemma is well known and we will not prove it. 
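As a concrete illustration of the ping-pong mechanism (an editorial numerical aside, not used in the proofs), the sketch below checks the hypothesis of Lemma 8.21 for the action on \(P_{1}(\mathbb{R})\) of two matrices of the form \(R_{\theta}\operatorname{diag}(\lambda,\lambda^{-1})R_{-\theta}\), the form appearing in the next lemma. The values \(\varepsilon=0.3\), \(\lambda=10\), and the two angles are arbitrary illustrative choices.

```python
import numpy as np

def act(M, phi):
    """Action of M in SL_2(R) on a point of P^1(R), represented by its angle in [0, pi)."""
    w = M @ np.array([np.cos(phi), np.sin(phi)])
    return np.arctan2(w[1], w[0]) % np.pi

def pdist(a, b):
    """Distance on P^1(R), identified with R/(pi Z)."""
    d = abs(a - b) % np.pi
    return min(d, np.pi - d)

eps, lam = 0.3, 10.0
thetas = [0.0, np.pi / 4]      # angularly separated, as required in the next lemma
def R(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
mats = [R(t) @ np.diag([lam, 1.0 / lam]) @ R(-t) for t in thetas]

grid = np.linspace(0.0, np.pi, 2000, endpoint=False)
for theta, M in zip(thetas, mats):
    repelling = (theta + np.pi / 2) % np.pi
    # points outside the eps/2-neighbourhood of the repelling direction (the set X \ A^-)
    outside = [phi for phi in grid if pdist(phi, repelling) > eps / 2]
    worst = max(pdist(act(M, phi), theta) for phi in outside)
    print(f"theta = {theta:.3f}: images lie within {worst:.4f} of the attracting "
          f"direction (eps/2 = {eps / 2})")
```

With \(\lambda^{2}\) large compared to \(1/\varepsilon\), every point outside the \(\varepsilon/2\)-neighbourhood of the repelling direction lands well inside the \(\varepsilon/2\)-arc around the attracting direction, which is exactly the disjoint attracting/repelling structure the lemma requires.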
From this we may deduce the following. **Lemma 8.22**.: _For every \(\varepsilon>0\) there is some \(C\leq O(\varepsilon^{-1})\) such that the following is true. Let \(n\in\mathbb{Z}_{>0}\). Suppose that \(\theta_{1},\theta_{2},\dots,\theta_{n}\in\mathbb{R}/\pi\mathbb{Z}\) and that for every \(i\neq j\) we have \(|\theta_{i}-\theta_{j}|\geq\varepsilon\) and \(|\theta_{i}-\theta_{j}+\pi/2|\geq\varepsilon\). Let \(\lambda_{1},\lambda_{2},\dots\lambda_{n}\) be real numbers which are at least \(C\). Then the set_ \[\left\{R_{\theta_{i}}\begin{pmatrix}\lambda_{i}&0\\ 0&\lambda_{i}^{-1}\end{pmatrix}R_{-\theta_{i}}:i\in[n]\right\}\] _freely generates a free semi-group._ Proof.: This follows immediately by applying Lemma 8.21 with \(G=PSL_{2}(\mathbb{R})\), \(X=P_{1}(\mathbb{R})\), \(A_{i}^{+}=\phi^{-1}((\theta_{i}-\varepsilon/2,\theta_{i}+\varepsilon/2))\), and \(A_{i}^{-}=\phi^{-1}((\theta_{i}-\varepsilon/2,\theta_{i}+\varepsilon/2))^{\perp}\) along with Lemma 3.9. **Lemma 8.23**.: _For all \(n\in\mathbb{Z}\) there exists some \(\theta_{n}\in\left(\frac{1}{2n},\frac{2}{n}\right)\) such that \(\sin\theta_{n}\) and \(\cos\theta_{n}\) are rational and have height at most \(4n^{2}+1\)._ Proof.: Choose \(\theta_{n}\) such that \[\sin\theta_{n}=\frac{4n}{4n^{2}+1}\] and \[\cos\theta_{n}=\frac{4n^{2}-1}{4n^{2}+1}.\] We are now ready to prove Proposition 1.13. Proof of Proposition 1.13.: Given some \(r>0\) and some \(n\in\mathbb{Z}\) define \(\beta_{0},\ldots,\beta_{n-1}>0\) by letting \(\beta_{k}=\theta_{8^{n+1-k}}\) where \(\theta.\) is as in Lemma 8.23. We then define \(\alpha_{0},\alpha_{1},\ldots,\alpha_{2^{n}-1}\geq 0\) by letting \[\alpha_{k}=\sum_{i=0}^{n-1}\xi_{i}^{(k)}\beta_{i}\] where the \(\xi_{i}^{(k)}\) are the binary expansion of \(k\). In other words \(k=\sum_{i=0}^{n-1}\xi_{i}^{(k)}2^{i}\) with \(\xi_{i}^{(k)}\in\{0,1\}\). Clearly \[0=\alpha_{0}<\alpha_{1}<\cdots<\alpha_{2^{n}-1}.\] Furthermore \(\alpha_{i+1}>\alpha_{i}+\varepsilon\) where \(\varepsilon=\frac{1}{2\cdot 8^{n+1}}\). We also have that \[\alpha_{2^{n}-1} <\frac{2}{8^{2}}+\frac{2}{8^{3}}+\frac{2}{8^{4}}+\ldots\] \[=\frac{1}{32}\cdot\frac{8}{7}\] \[<\frac{\pi}{10}-\varepsilon.\] We now let \(C\) be the \(C\) from Lemma 8.22 with this value of \(\varepsilon\) and we choose some prime number \(p\) such that \(p\geq C^{2}\), \(p\leq O(8^{2n})\), and \(X^{2}-p\) is irreducible in the field \(\mathbb{Q}[\sin\frac{\pi}{5},\cos\frac{\pi}{5}]\). Now for \(i=0,1,\ldots,2^{n}-1\) and \(j=0,1,\ldots,4\) we let \(g_{i,j}\) be defined by \[g_{i,j}:=R_{\frac{j\pi}{5}+\alpha_{i}}\begin{pmatrix}\left\lceil r+\sqrt{p} \right\rceil+\sqrt{p}&0\\ 0&\left(\left\lceil r+\sqrt{p}\right\rceil+\sqrt{p}\right)^{-1}\end{pmatrix} R_{-\frac{j\pi}{5}-\alpha_{i}}.\] By Lemma 8.22 we know that the \(g_{i,j}\) freely generate a free semi-group. Now for \(i=0,1,\ldots,2^{n}-1\) and \(j=0,1,\ldots,4\) we let \(\hat{g}_{i,j}\) be defined by \[\hat{g}_{i,j}:=R_{\frac{j\pi}{5}+\alpha_{i}}\begin{pmatrix}\left\lceil r+ \sqrt{p}\right\rceil-\sqrt{p}&0\\ 0&\left(\left\lceil r+\sqrt{p}\right\rceil-\sqrt{p}\right)^{-1}\end{pmatrix} R_{-\frac{j\pi}{5}-\alpha_{i}}.\] Clearly the \(\hat{g}_{i,j}\) are Galois conjugates of the \(g_{i,j}\) and so also freely generate a free semi-group. We now let \(\mu\) be defined by \[\mu=\sum_{i=0}^{2^{n}-1}\sum_{j=0}^{4}\frac{1}{5\cdot 2^{n}}\delta_{\hat{g}_{i,j}}.\] We wish to use Theorem 1.8 to show that the Furstenberg measure generated by \(\mu\) is absolutely continuous providing \(n\) is sufficiently large in terms of \(r\). 
Let \(\nu\) be the Furstenberg measure generated by \(\mu\). By the construction of \(\mu\) we know that \(\nu\) is invariant under rotation by \(\pi/5\). In particular this means that it is \(\frac{1}{5}\), \(\frac{\pi}{5}\) - non-degenerate. We also know that for each \(i,j\) we have \(\|\hat{g}_{i,j}\|=\left\lceil r+\sqrt{p}\right\rceil-\sqrt{p}\leq r+1\). This means that \(\chi\leq r+1\) and that we may take \(R=r+1\). Since the \(\hat{g}_{i,j}\) freely generate a free semi-group we know that \(h_{RW}=\log\left(5\cdot 2^{n}\right)\geq\Theta(n)\). Finally we need to bound \(M_{\mu}\). To bound the \(M_{\mu}\) we will apply Proposition 8.6. We know by Lemma 8.23 that the heights of the entries in the \(\beta_{i}\) are at most \(O(8^{2n})\). We also know that the height of \(\left\lceil r+\sqrt{p}\right\rceil-\sqrt{p}\) is at most \(O_{r}(\sqrt{p})\) which is at most \(O_{r}(8^{n})\). By Lemma 8.4 this means that the height of entries in the \(\hat{g_{i,j}}\) is at most \(O_{r}(2^{2n}\cdot 8^{4n^{2}+n})\) which is at most \(O_{r}(8^{5n^{2}})\). It is easy to show that \(\left[\mathbb{Q}[\sin\frac{\pi}{5},\cos\frac{\pi}{5}]:\mathbb{Q}\right]=4\). This means that by Proposition 8.6 we have \[M_{\mu}\leq O_{r}\left(8^{8\cdot 4\cdot 5n^{2}}\right)\leq\exp(O_{r}(n^{2})).\] Therefore \[\frac{h_{RW}}{\chi}\left(\max\left\{1,\log\log\frac{M_{\mu}}{h_{ RW}}\right\}\right)^{-2} \gtrsim\frac{n}{r+1}\left(\log\log\exp(O_{r}(n^{2}))\right)^{-2}\] \[\geq\frac{n}{O_{r}((\log n)^{2})}\] \[\to\infty.\] This means that by Theorem 1.8 the Furstenberg measure is absolutely continuous providing \(n\) is sufficiently large in terms of \(r\). ### Examples with two generators In this subsection we will prove Proposition 1.14. Proof of Proposition 1.14.: First we will show that there is some \(\alpha_{0}\in\left(0,\frac{1}{3}\right)\) and \(t>0\) such that \(\mu\) is \(\alpha_{0}\), \(t\) - non-degenerate for all sufficiently large \(n\). First note that \(A\) is a rotation by \(\theta_{n}\) where \(\theta_{n}=\frac{1}{n}+O(\frac{1}{n^{2}})\). Also note that for all \(x\in P_{1}(\mathbb{R})\) we have \(d(x,Bx)\leq O(n^{-3})\). We now let \(\tilde{A}:\mathbb{R}\to\mathbb{R},x\mapsto x+\theta_{n}\) and choose \(\tilde{B}:\mathbb{R}\to\mathbb{R}\) such that \(\tilde{B}(x)\in\phi(B\phi^{-1}(x))\) and for all \(x\in\mathbb{R}\) we have \(|x-\tilde{B}(x)|\leq O(n^{-3})\). We then let \(\tilde{\mu}=\frac{1}{2}\delta_{\tilde{A}}+\frac{1}{2}\delta_{\tilde{B}}\). By Lemma 2.13 (a simple bound on the Wasserstein distance between a sum of independent random variables and a normal distribution) we know that for any \(x\in\mathbb{R}\) we have \[\mathcal{W}_{1}\left(\tilde{\mu}^{*n^{2}}*\delta_{x},N(x+\frac{1}{2}n^{2} \theta_{n},n^{2}\theta_{n}^{2})\right)<O(n^{-1}).\] Noting that \(n^{2}\theta_{n}^{2}\to 1\) we can see that there is some \(\alpha_{0}\in\left(0,\frac{1}{3}\right)\) and \(t>0\) such that \(\mu\) is \(\alpha_{0}\), \(t\) - non-degenerate for all sufficiently large \(n\). We will apply Theorem 1.8 to \(\tilde{\mu}:=\frac{2}{3}\mu+\frac{1}{3}\delta_{\mathrm{Id}}\). Note that this generates the same Furstenberg measure as \(\mu\) and so in particular it is \(\alpha_{0}\), \(t\) - non-degenerate. Note that by Proposition 8.7 there is some \(\varepsilon>0\) such that for all \(n\) we have \(h_{RW}\geq\varepsilon\). Note that by Proposition 8.6 we have \(M_{\tilde{\mu}}\leq 4(n^{3}+1)^{8}\). Clearly we may take \(R=2\). Also note that \(\chi\leq n^{-3}\). 
This means that to prove the proposition it is sufficient to prove that \[\varepsilon n^{3}\left(\log\log\frac{4(n^{3}+1)^{8}}{\varepsilon}\right)^{-2}\] tends to \(\infty\) as \(n\to\infty\). This is trivially true. ## 9. Appendix ### Proof of Theorem 1.25 We extend the result of Kesten [21, Theorem 1] to show that the convergence is uniform in the vector \(v\). **Theorem 9.1**.: _Suppose that \(\mu\) is a strongly irreducible measure on \(PSL_{2}(\mathbb{R})\) with compact support. Suppose that the support of \(\mu\) is not contained within any compact subgroup of \(PSL_{2}(\mathbb{R})\). Then there exists some probability measure measure \(\hat{\nu}\) on \(P_{1}(\mathbb{R})\) such that the following is true. Let \(\gamma_{1},\gamma_{2},\dots\) be i.d.d. samples from \(\mu\) and let \(q_{n}:=\gamma_{1}\gamma_{2}\dots\gamma_{n}\). Then given any \(\varepsilon>0\) and \(v\in P_{1}(\mathbb{R})\) there exists some \(T>0\) such that given any \(t>T\) we can find some random variable \(x\) with law \(\hat{\nu}\) such that_ \[\mathbb{P}[d(q_{\tau_{t,v}}^{T}v,x)>\varepsilon]<\varepsilon.\] Recall that \(\tau_{t,v}\) is the stopping time given by \[\tau_{t,v}=\min\{n:\left\|q_{n}^{T}v\right\|\geq t\left\|v\right\|\}.\] Proof.: In [21, Theorem 1] it is proven that this holds in a much more general setting providing some conditions are satisfied. In [14, Section 4] it is shown that the conditions of [21, Theorem 1] are satisfied in this setting. We deduce uniform convergence from this fact. To do this we show that if \(v,w\in P_{1}(\mathbb{R})\) are close then with high probability \(\tau_{t,v}=\tau_{t,w}\) and \(q_{\tau_{t,v}}^{T}v\) is close to \(q_{\tau_{t,v}}^{T}w\). **Lemma 9.2**.: _Suppose that \(\mu\) is a strongly irreducible measure on \(PSL_{2}(\mathbb{R})\) with compact support. Suppose that \(\chi>0\). Then given any \(c_{1},c_{2}>0\) there exists \(T\) such that for any \(t>T\) and any unit vector \(b\in\mathbb{R}^{2}\)_ \[\mathbb{P}[\exists n:\log t\leq\log\left\|q_{n}^{T}b\right\|\leq\log t+c_{1}] \lesssim c_{1}/\chi+c_{2}.\] Proof.: This follows immediately from [27, Proposition 4.8]. **Lemma 9.3**.: _Let \(\mu\) be a finitely supported measure on \(PSL_{2}(\mathbb{R})\) which is strongly irreducible and such that \(\chi>0\). Let \(\tau_{t,v}\) be as in Theorem 1.25. Then there exists some \(\delta>0\) depending on \(\mu\) such that given any \(r>0\) for all sufficiently large (depending on \(r\) and \(\mu\)) \(t\) the following is true. Suppose that \(v,w\in P_{1}(\mathbb{R})\) and \(d(v,w)<r\). Then_ \[\mathbb{P}[\tau_{t,v}=\tau_{t,w}]\geq 1-O_{\mu}(r^{\delta}).\] Proof.: Let \(A\) be the event that \[d(v,b^{-}(q_{n}^{T}))>\sqrt{r}\] and \[d(w,b^{-}(q_{n}^{T}))>\sqrt{r}\] for all \(n\geq\log t/\log R\). By Corollary 7.9 and Lemma 7.11 we know that providing \(t\) is sufficiently large in terms of \(\mu\) and \(r\) there is some \(\delta>0\) such that \[\mathbb{P}[A]\geq 1-O_{\mu}(r^{\delta}).\] By Lemma 3.11 we know that there is some constant \(C>0\) such that on the event \(A\) \[|\log\left\|q_{n}^{T}v\right\|-\log\left\|q_{n}^{T}w\right\||<Cr^{1/2}\] for all \(n\geq\log t/\log R\). Now let \(B\) be the event that there exists \(n\) such that \[|\log\left\|q_{n}^{T}v\right\|-t|<10Cr^{1/2}.\] By Lemma 9.2 we know that providing \(t\) is sufficiently large in terms of \(\mu\) and \(r\)\(\mathbb{P}[B]\leq O_{\mu}(r^{1/2})\). We also know that \(\{\tau_{t,v}=\tau_{t,w}\}\supset A\backslash B\). Therefore \[\mathbb{P}[\tau_{t,v}=\tau_{t,w}]\geq 1-O_{\mu}(r^{\delta})\] as required. 
Proof of Theorem 1.25.: Given \(\varepsilon>0\) we wish to show that we can find some \(T\) (depending on \(\mu\) and \(\varepsilon\)) such that whenever \(t>T\) and \(v\in P_{1}(\mathbb{R})\) we can find some random variable \(x\) with law \(\hat{\nu}\) such that \[\mathbb{P}[d(x,q_{\tau_{t,v}}^{T}v)>\varepsilon]<\varepsilon.\] Choose \(k\in\mathbb{Z}_{>0}\) and let \(v_{1},v_{2},\ldots,v_{k}\in P_{1}(\mathbb{R})\) be equally spaced. Let \(T_{1}\) be the greatest of the \(T\) from Theorem 9.1 with \(\frac{1}{10}\varepsilon\) in the role of \(\varepsilon\) and \(v_{1},v_{2},\ldots,v_{k}\) in the role of \(v\), and let \(x_{1},x_{2},\ldots,x_{k}\) be the corresponding \(x\). Let \(T_{2}\) be the \(T\) from Lemma 9.3 with \(r=\frac{\pi}{k}\). Let \(T=\max\{T_{1},T_{2}\}\). Thus whenever \(t>T\) and \(i\in[k]\) \[\mathbb{P}\left[d(x_{i},q_{\tau_{t,v_{i}}}^{T}v_{i})>\frac{\varepsilon}{10}\right]<\frac{\varepsilon}{10}.\] Now let \(t>T\) and let \(v\in P_{1}(\mathbb{R})\). Suppose without loss of generality that \(v_{1}\) is the closest of the \(v_{i}\) to \(v\). In particular \(d(v_{1},v)<\frac{\pi}{k}\). By Lemma 9.3 this means that \[\mathbb{P}[\tau_{t,v_{1}}=\tau_{t,v}]\geq 1-O(k^{-\delta}) \tag{81}\] for some \(\delta>0\) depending only on \(\mu\). We know, by for example Lemma 3.16, that providing \[d(b^{-}(q_{n}^{T}),v_{1})>100k^{-1}\] we have \[d(q_{n}^{T}v_{1},q_{n}^{T}v)<O_{k}(\left\|q_{n}^{T}\right\|^{-2}).\] In particular by Corollary 7.9 and Lemma 7.11 we know that \[\mathbb{P}\left[d(q_{\tau_{t,v_{1}}}^{T}v_{1},q_{\tau_{t,v_{1}}}^{T}v)<O_{k}(t^{-2})\right]\geq 1-O(k^{-\delta}).\] Combining this with (81) we know that providing \(t\) is sufficiently large depending on \(k\) and \(\mu\) \[\mathbb{P}\left[d(q_{\tau_{t,v_{1}}}^{T}v_{1},q_{\tau_{t,v}}^{T}v)>O_{k}(t^{-2})\right]<O(k^{-\delta}).\] In particular this means that providing \(t\) is sufficiently large depending on \(k\) and \(\mu\) \[\mathbb{P}\left[d(x_{1},q_{\tau_{t,v}}^{T}v)>\frac{1}{10}\varepsilon+O_{k}(t^{-2})\right]<\frac{1}{10}\varepsilon+O(k^{-\delta})\] and so if we choose \(k\) large enough (depending on \(\mu\) and \(\varepsilon\)) and then choose \(t\) large enough (depending on \(\mu\), \(k\), and \(\varepsilon\)) then \[\mathbb{P}\left[d(x_{1},q_{\tau_{t,v}}^{T}v)>\varepsilon\right]<\varepsilon\] as required.
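To illustrate Theorem 9.1 numerically (this is only a sanity check, not part of the proof), one can simulate the stopped process directly: the law of the direction of \(q_{\tau_{t,v}}^{T}v\) should be essentially independent of the starting point \(v\) once \(t\) is large. The step distribution below is a toy choice of two \(SL_{2}(\mathbb{Z})\) matrices, assumed (plausibly, but not verified here) to satisfy the strong irreducibility hypothesis; the value of \(t\) and the sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [1.0, 1.0]])   # toy step matrices, uniform measure on {A, B}
B = np.array([[1.0, 1.0], [1.0, 2.0]])

def stopped_direction(v, t):
    """Direction (angle in [0, pi)) of q_tau^T v, where tau = min{n : ||q_n^T v|| >= t ||v||}."""
    qT = np.eye(2)                        # q_n^T = gamma_n^T ... gamma_1^T
    while np.linalg.norm(qT @ v) < t:
        g = A if rng.random() < 0.5 else B
        qT = g.T @ qT
    w = qT @ v
    return np.arctan2(w[1], w[0]) % np.pi

t, samples, bins = 1e6, 5000, 20
v = np.array([1.0, 0.0])
w = np.array([np.cos(1.0), np.sin(1.0)])
hv = np.histogram([stopped_direction(v, t) for _ in range(samples)],
                  bins=bins, range=(0, np.pi))[0] / samples
hw = np.histogram([stopped_direction(w, t) for _ in range(samples)],
                  bins=bins, range=(0, np.pi))[0] / samples
print("empirical distance between the two stopped-direction laws:",
      0.5 * np.abs(hv - hw).sum())
```

The empirical distance should shrink as \(t\) grows and as more samples are used, in line with the uniformity over \(v\) asserted in the theorem.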
2307.02899
Experimental realization of quantum non-Markovianity through the convex mixing of Pauli semigroups on an NMR quantum processor
This experimental study aims to investigate the convex combinations of Pauli semigroups with arbitrary mixing parameters to determine whether the resulting dynamical map exhibits Markovian or non-Markovian behavior. Specifically, we consider the cases of equal as well as unequal mixing of two Pauli semigroups, and demonstrate that the resulting map is always non-Markovian. Additionally, we study three cases of three-way mixing of the three Pauli semigroups and determine the Markovianity or non-Markovianity of the resulting maps by experimentally determining the decay rates. To simulate the non-unitary dynamics of a single qubit system with different mixing combinations of Pauli semigroups on an NMR quantum processor, we use an algorithm involving two ancillary qubits. The experimental results align with the theoretical predictions.
Vaishali Gulati, Vinayak Jagadish, R. Srikanth, Kavita Dorai
2023-07-06T10:14:17Z
http://arxiv.org/abs/2307.02899v2
Experimental realization of quantum non-Markovianity through the convex mixing of Pauli semigroups on an NMR quantum processor ###### Abstract This experimental study aims to investigate the convex combinations of Pauli semigroups with arbitrary mixing parameters to determine whether the resulting dynamical map exhibits Markovian or non-Markovian behavior. Specifically, we consider the cases of equal as well as unequal mixing of two Pauli semigroups, and demonstrate that the resulting map is always non-Markovian. Additionally, we study three cases of three-way mixing of the three Pauli semigroups and determine the Markovianity or non-Markovianity of the resulting maps by experimentally determining the decay rates. To simulate the non-unitary dynamics of a single qubit system with different mixing combinations of Pauli semigroups on an NMR quantum processor, we use an algorithm involving two ancillary qubits. The experimental results align with the theoretical predictions. ## I Introduction The field of quantum computing is rapidly developing, and there is a crucial need to develop reliable methods to characterize and control quantum systems. Quantum systems can interact with their environment in various ways, leading to decoherence and dissipation, which could have a deleterious effect on the computational protocols. The study of open quantum systems [1; 2] therefore has significant implications for applications in quantum information processing, quantum computing, and quantum communication. Recent research has focused on the effect of decoherence on the performance of quantum computers [3] and the use of error correction codes to address this issue [4]. A critical aspect of open quantum systems is characterizing their dynamical behavior, with a particular focus on the distinction between Markovian and non-Markovian dynamics [5; 6; 7]. The theory of non-Markovian dynamics has become an important area of research, with a focus on characterization, quantification, and detection of non-Markovian behavior [8; 9; 10]. The reduced dynamics of the quantum system of interest undergoing open evolution is described by a time-continuous family of completely positive (CP) and trace-preserving (TP) linear maps \(\{\Lambda(t):t\geq 0,\Lambda(0)=1\}\) known as the quantum dynamical map, acting on the bounded operators of the Hilbert space of the system of interest [11; 12]. The dynamical map is also related to the time-local generator \(\mathcal{L}(t)\)[13] in the time-local master equation, \(\dot{\Lambda}(t)=\mathcal{L}(t)\Lambda(t)\), with \[\mathcal{L}(t)[\rho]= -i[H(t),\rho]\] \[+\sum_{i}\gamma_{i}(t)\left(L_{i}(t)\rho L_{i}(t)^{\dagger}- \frac{1}{2}\{L_{i}(t)^{\dagger}L_{i}(t),\rho\}\right), \tag{1}\] were \(H(t)\) is the effective Hamiltonian, \(L_{i}(t)\)'s are the noise operators, and \(\gamma_{i}(t)\) the decoherence rates. The divisibility of the dynamical map is expressed as follows. \[\Lambda(t_{f},t_{i})=V(t_{f},t)\Lambda(t,t_{i}),\quad\forall t_{f}\geq t\geq t _{i}\geq 0. \tag{2}\] The map is CP-divisible if for all \(t\), the propagator \(V(t_{f},t)\) is CP and the corresponding decay rates \(\gamma_{i}(t)\) are positive at all times. Otherwise, the map is said to be CP indivisible. In contrast with classical non-Markovianity, quantum non-Markovianity does not have a unique definition [5; 6; 14]. Two major proposals to address quantum non-Markovianity, are based on the CP-indivisibility criterion (RHP) [15; 16] and on the distinguishability of states (BLP) [17; 18]. 
According to the RHP divisibility criterion [15], a quantum dynamical map is non-Markovian if it is CP-indivisible. A Markovian evolution, therefore is CP-divisible, with all the decay rates \(\gamma_{i}(t)\) in the time-local master equation Eq. (1) are positive at all times. A temporarily negative decay rate is therefore a signature of CP-indivisibility of the map and therefore non-Markovianity. According to the BLP definition [19], a quantum dynamical map \(\Lambda(t)\) is said to be Markovian if it does not increase the distinguishability of two initial states \(\rho_{1}\) and \(\rho_{2}\), i.e., if \(\|\Lambda(t)(\rho_{1})-\Lambda(t)(\rho_{2})\|\leq\|\Lambda(0)(\rho_{1})- \Lambda(0)(\rho_{2})\|\), where \(\|\cdot\|\) denotes the trace distance. In this work, we stick to the CP-indivisibility criterion of non-Markovianity. Convex combinations of Pauli semigroups and time-dependent Markovian Pauli dynamical maps was studied in [20; 21] discussing the geometrical aspects and non-Markovianity. These results showed the non-convexity of the sets of CP-divisible and CP-indivisible Pauli dynamical maps. Convex combination of semigroups of generalized Pauli dynamical maps has been addressed in [22]. Convex combinations of noninvertible dynamical maps has also been studied recently [23; 24; 25; 26]. For the case of generalized Pauli dynamical maps, it was shown that mixing invertible maps can never result in noninvertible maps [23]. Subsequently, it was also shown that noninvertibility of the generalized Pauli input maps is necessary for getting a semigroup [24]. The fraction of (non)invertible maps obtained by mixing noninvertible generalized Pauli maps was quantified in [25]. The measure of the set of non-Markovian maps obtained by mixing noninvertible Pauli maps was studied in [26]. In recent years, there has been a growing interest in the experimental implementation of non-Markovian dynamics in various physical systems, including quantum dots [27; 28; 29], superconducting qubits [30], trapped ions [31; 32], and nuclear magnetic resonance (NMR) systems [33; 34]. NMR systems, in particular, are a useful platform to investigate non-Markovian dynamics due to their excellent ability to control and manipulate system-environment interactions. Various studies in NMR investigate different quantum correlations present in the system [35; 36] and their dynamics under various environments [37; 38]. In this work, we aim to experimentally study the behavior of a single qubit system under the effect of different mixing combinations of Pauli semigroups on an NMR quantum processor. We demonstrate that the mixing of any two Markovian Pauli semigroups produces a map which is CP-indivisible and therefore RHP non-Markovian. One of the decay rate always turns out to be negative in this scenario. We also verify our experimental results for arbitrary choices of the mixing parameters for the dynamical semigroup realizations of the three Pauli semigroups which are in agreement with the notion of Pauli Simplex as defined in [20]. We note that the non-Markovian nature of the map becomes apparent when one or more of the decay rates becomes negative. We consider the case of a single qubit with two ancilla qubits to simulate non-unitary dynamics and make use of the algorithm for the circuit design as in [39]. The rest of this paper is organized as follows. Sec. II briefly describes the theory of the convex combinations of Pauli semigroups. The experimental details and results are presented in Sec. III. 
We then conclude in Sec. IV.

## II Convex combination of Pauli semigroups

Consider the three Pauli dynamical semigroups, \[\Lambda_{i}(t)[\rho] = [1-p(t)]\rho+p(t)\sigma_{i}\rho\sigma_{i},\,i=1,2,3,\mbox{with}\] \[p(t) = \frac{1-e^{-ct}}{2},\,c>0. \tag{3}\] Here \(p(t)\) is the decoherence function and \(\sigma_{i}\) are the Pauli matrices. The convex combination of the three Pauli semigroups of Eq. (3), each mixed in proportion \(x_{i}\), is \[\tilde{\Lambda}(t)=\sum_{i=1}^{3}x_{i}\Lambda_{i}(t),\quad(x_{i}>0,\sum_{i}x_{i}=1). \tag{4}\] Let us call the three \(\Lambda_{i}(t)\)'s input maps and \(\tilde{\Lambda}(t)\) the output map. The associated time-local master equation for \(\tilde{\Lambda}(t)\) is \[\mathcal{L}(t)[\rho]=\sum_{i=1}^{3}\gamma_{i}(t)(\sigma_{i}\rho\sigma_{i}-\rho), \tag{5}\] with the decay rates \[\gamma_{1}(t) = \left(\frac{1-x_{2}}{1-2(1-x_{2})p(t)}+\frac{1-x_{3}}{1-2(1-x_{3})p(t)}-\frac{1-x_{1}}{1-2(1-x_{1})p(t)}\right)\frac{\dot{p}(t)}{2}\] \[\gamma_{2}(t) = \left(\frac{1-x_{1}}{1-2(1-x_{1})p(t)}+\frac{1-x_{3}}{1-2(1-x_{3})p(t)}-\frac{1-x_{2}}{1-2(1-x_{2})p(t)}\right)\frac{\dot{p}(t)}{2}\] \[\gamma_{3}(t) = \left(\frac{1-x_{1}}{1-2(1-x_{1})p(t)}+\frac{1-x_{2}}{1-2(1-x_{2})p(t)}-\frac{1-x_{3}}{1-2(1-x_{3})p(t)}\right)\frac{\dot{p}(t)}{2}. \tag{6}\] The CP-divisibility, and therefore the Markovianity, of the output map \(\tilde{\Lambda}(t)\) depends on the mixing coefficients \(x_{i}\). For instance, an equal mixing of the three Pauli semigroups results in a Markovian output. The fraction of non-Markovian (CP-indivisible) maps obtained by mixing Pauli semigroups was reported in [20]. As opposed to three-way mixing, any mixing of two Pauli semigroups is always non-Markovian. To this end, let \(x_{1}=0\). The decay rate \(\gamma_{1}(t)\) then turns out to be \[\gamma_{1}(t)=-\left[\frac{(1-x_{2})x_{2}[1-p(t)]p(t)}{[1-2p(t)][1-2(1-x_{2})p(t)][1-2x_{2}p(t)]}\right]\dot{p}(t), \tag{7}\] which remains negative for all values of \(x_{2}\). (Note that \(x_{3}=1-x_{2}\).)

## III Experimental analysis of Markovianity and non-Markovianity

### NMR Simulation of Pauli semigroups

A dynamical map acting on a system with a \(d\)-dimensional Hilbert space can be simulated with a \(d^{2}\)-dimensional ancilla if one allows the most general unitary evolution of the total system, under the assumption that the ancilla is initialized in a pure state [40]. Therefore, to simulate maps on a qubit, a two-qubit ancilla is sufficient. The finite-time map \(\tilde{\Lambda}(t)\) of Eq. (4), being CPTP, admits an operator-sum representation, \(\tilde{\Lambda}(t)(\rho)=\sum_{k}E_{k}(t)\rho E_{k}^{\dagger}(t)\), where the operators \(E_{k}(t)\) satisfy the trace-preservation condition, \(\sum_{k}E_{k}^{\dagger}(t)E_{k}(t)=\mathbb{1}\). The non-unitary operators \(E_{k}(t)\) associated with the dynamical map can be decomposed into a linear combination of four unitary operators (the Pauli matrices \(\sigma_{i}\) in this case) and are experimentally implemented using two ancillary qubits added to the working system. Efficient implementation of the non-unitary transformation represented by \(\tilde{\Lambda}(t)\) is achievable when suitable unitary operations \(U\), \(V\), and \(W\) are found such that \(E_{k}=\sum_{i}W_{ki}V_{i0}U_{i}\). By applying the overall unitary operation \((I\otimes W)U(I\otimes V)\) to the initial state of the working system and ancillary system, followed by tracing out the ancillary qubits, the simulation of the map is obtained.
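Before describing the circuit-level implementation in detail, it may help to see the decay rates of Eqs. (6) and (7) evaluated numerically. The short Python sketch below is an editorial illustration rather than part of the experimental protocol; the mixing fractions are arbitrary example choices, and a temporarily negative rate signals CP-indivisibility and hence RHP non-Markovianity.

```python
import numpy as np

def decay_rates(t, x, c=1.0):
    """Decay rates gamma_i(t) of Eq. (6) for the mixture sum_i x_i Lambda_i(t),
    with decoherence function p(t) = (1 - exp(-c t))/2."""
    p = 0.5 * (1.0 - np.exp(-c * t))
    pdot = 0.5 * c * np.exp(-c * t)
    x = np.asarray(x, dtype=float)
    f = (1.0 - x) / (1.0 - 2.0 * (1.0 - x) * p)   # f_i = (1 - x_i) / (1 - 2(1 - x_i) p)
    return 0.5 * pdot * np.array([f[1] + f[2] - f[0],
                                  f[0] + f[2] - f[1],
                                  f[0] + f[1] - f[2]])

ts = np.linspace(1e-3, 8.0, 4000)
for x in [(1/3, 1/3, 1/3),      # equal three-way mixing
          (0.0, 0.5, 0.5),      # equal two-way mixing
          (0.0, 0.3, 0.7),      # unequal two-way mixing
          (0.2, 0.3, 0.5)]:     # a generic three-way mixing
    g = np.array([decay_rates(t, x) for t in ts])
    verdict = "non-Markovian (some rate < 0)" if (g < -1e-12).any() else "Markovian (all rates >= 0)"
    print(f"x = {x}: min gamma over the time grid = {g.min():+.4f}  ->  {verdict}")
```

Equal three-way mixing should come out CP-divisible, while any two-way mixing should show a negative rate, in line with Eq. (7).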
The algorithm involving three unitaries offers the advantage of implementing maps built from convex mixtures of Pauli semigroups in a general manner. This approach eliminates the need to design separate circuits for each specific mixing combination. By incorporating three unitaries into the algorithm, it becomes possible to dynamically adjust and experiment with different mixing parameters and Pauli operators, allowing for greater flexibility and versatility in simulating the desired non-unitary dynamics. The algorithm is as follows. * Transforming the state of the ancilla qubits: After initializing the three-qubit system in the state \(|0\rangle_{s}|00\rangle\), where \(|0\rangle_{s}\) is the state of the system qubit and \(|00\rangle\) that of the ancillary qubits, a unitary operation \(V\) is performed on the ancillary qubits. The composite state evolves to \(V_{00}|0\rangle_{s}|00\rangle+V_{10}|0\rangle_{s}|01\rangle+V_{20}|0\rangle_{s}|10\rangle+V_{30}|0\rangle_{s}|11\rangle\). The mixing parameters and the decoherence function associated with the Kraus operators determine the values in the first column of the unitary matrix \(V\). * Transforming the state of the system: The unitary operations \(\sigma_{i}\) are applied to the system qubit, with the ancilla qubits acting as control qubits. \[U=\sigma_{0}\otimes|00\rangle\langle 00|+\sigma_{1}\otimes|01\rangle\langle 01|+\sigma_{2}\otimes|10\rangle\langle 10|+\sigma_{3}\otimes|11\rangle\langle 11|,\] (8) where \(\sigma_{0}\) is the identity matrix. The system now evolves to the state \(V_{00}\sigma_{0}|0\rangle_{s}|00\rangle+V_{10}\sigma_{1}|0\rangle_{s}|01\rangle+V_{20}\sigma_{2}|0\rangle_{s}|10\rangle+V_{30}\sigma_{3}|0\rangle_{s}|11\rangle.\) * Finally, the unitary operation \(W\) is performed on the ancillary system, which transforms the state into \(\sum_{i,k=0}^{3}W_{ki}V_{i0}\sigma_{i}|0\rangle_{s}|k\rangle\), where \(E_{k}=\sum_{i=0}^{3}W_{ki}V_{i0}\sigma_{i}\). The elements of the matrix \(W\) are uniquely determined by the choice of matrix elements of \(V\); in our cases, \(W\) turns out to be the identity matrix. * On measuring the final state of the working system with the ancillary system in the state \(|k\rangle\langle k|\), we obtain \(E_{k}|0\rangle_{s}\langle 0|_{s}E_{k}^{\dagger}\). By tracing out the ancillary qubits, i.e., summing over each state \(|k\rangle\langle k|\), the result is \(\sum_{k}E_{k}(t)|0\rangle_{s}\langle 0|_{s}E_{k}^{\dagger}(t)\), which corresponds to simulating the map \(\tilde{\Lambda}(\rho)\) for the initial system state \(\rho=|0\rangle\langle 0|\). The specific forms of the matrices \(V\) used in the experiments depend on the dynamical map under consideration, and the specific forms used in our experiments are given in the following section.
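A minimal numerical sketch of this three-step construction (our own illustration; \(W\) is taken as the identity as stated above, and the mixing weights and decoherence value are arbitrary examples, not the experimental settings) is given below. Only the first column of \(V\) matters for the simulated channel, so it is completed to a unitary with a Householder reflection; the controlled-Pauli operation \(U\) is then applied and the ancilla traced out, and the result is compared with the convex combination of Eq. (4).

```python
import numpy as np

sig = [np.eye(2, dtype=complex),
       np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def mixed_map(rho, p, x):
    """Target channel: convex combination of the three Pauli semigroups, Eq. (4)."""
    return (1 - p) * rho + p * sum(xi * sig[i + 1] @ rho @ sig[i + 1] for i, xi in enumerate(x))

p, x = 0.3, (0.2, 0.4, 0.4)                    # illustrative parameters

# Step 1: V acts on the ancilla; its first column is fixed by the Kraus weights.
col0 = np.array([np.sqrt(1 - p)] + [np.sqrt(xi * p) for xi in x])
u = np.eye(4)[:, 0] - col0
V = np.eye(4) - 2 * np.outer(u, u) / (u @ u)   # Householder completion: V @ e0 = col0

# Step 2: controlled-Pauli operation U = sum_k sigma_k (x) |k><k| on system (x) ancilla.
U = sum(np.kron(sig[k], np.outer(np.eye(4)[k], np.eye(4)[k])) for k in range(4))

psi = np.kron(np.array([1, 0], dtype=complex), np.eye(4)[0])   # |0>_s |00>_anc
psi = U @ np.kron(np.eye(2), V) @ psi                          # (I (x) W) U (I (x) V), with W = identity

# Step 3: trace out the two ancilla qubits.
rho_full = np.outer(psi, psi.conj()).reshape(2, 4, 2, 4)
rho_sys = np.einsum('iaja->ij', rho_full)

print(np.allclose(rho_sys, mixed_map(np.diag([1, 0]).astype(complex), p, x)))   # True
```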
The resulting Hamiltonian, assuming weak scalar coupling \(J_{ij}\) between spins \(i\) and \(j\), is given by [41] \[\mathcal{H}=-\sum_{i=1}^{3}\omega_{i}I_{iz}+2\pi\sum_{i<j}^{3}J_{ij}I_{iz}I_{jz}, \tag{9}\] where \(\omega_{i}\) is the chemical shift of the \(i\)th spin, and \(I_{iz}\) represents the \(z\)-component of the spin-\(\frac{1}{2}\) operator for the \(i\)th spin. Nuclear spins at thermal equilibrium are represented by the density operator, \[\rho=\frac{\exp(-H/k_{B}T)}{Z}, \tag{10}\] where \(H\) is the Hamiltonian of the system, \(k_{B}\) is the Boltzmann constant, \(T\) is the temperature, and \(Z\) is the partition function. Starting from thermal equilibrium, the system is prepared in a pseudopure state (PPS) using the spatial averaging technique [42; 43], with the density matrix corresponding to the PPS being given by \[\rho_{000}=\frac{(1-\epsilon)}{8}\mathbbm{1}_{8}+\epsilon|000\rangle\langle 000|, \tag{11}\] where \(\epsilon\sim 10^{-5}\) is the spin polarization at room temperature and \(\mathbbm{1}_{8}\) is the \(8\times 8\) identity operator. The identity part of the density operator plays no role and the NMR signal arises solely from the traceless part of the density matrix given in Eq. (11). The \(T_{1}\) and \(T_{2}\) relaxation times in NMR describe the return to equilibrium and the loss of phase coherence of nuclear spins. \(T_{1}\) measures the recovery of longitudinal magnetization, while \(T_{2}\) measures the decay of transverse magnetization. The experimentally determined \(T_{1}\) and \(T_{2}\) relaxation times for the three qubits range on average between 1 and 5 s. The experimentally measured scalar couplings are given by \(J_{12}\)= 69.65 Hz, \(J_{13}\)= 47.67 Hz and \(J_{23}\)= -128.32 Hz. The radiofrequency (rf) pulses required for creating the PPS were designed using the Gradient Ascent Pulse Engineering (GRAPE) technique [44], along with pulsed magnetic field gradients [45]. The \({}^{19}\)F 90\({}^{\circ}\) rf pulse duration was set to 16.2 \(\mu\)s, at a power level of -14.56 dB. The length of the GRAPE pulses varied between 700 and 2500 \(\mu\)s. The system was evolved from the PPS to the other states via state-to-state transfer unitaries, and all states were created with high fidelities \(\geq 0.99\). The standard methods for quantum state reconstruction in NMR quantum information processing typically involve performing full state tomography [46; 47], which is computationally expensive, although some alternatives involving maximum likelihood estimation have been proposed and used [48]. For this work, we used a least-squares constrained convex optimization method to reconstruct the density matrix of the desired state [49; 50]. Fidelities of the experimentally reconstructed states (as compared to the theoretically expected state) were computed using the Uhlmann-Jozsa measure [51; 52], \[\mathcal{F}(\chi_{\text{expt}},\chi_{\text{theo}})=\frac{|\text{Tr}[\chi_{\text{expt}}\chi_{\text{theo}}^{\dagger}]|}{\sqrt{\text{Tr}[\chi_{\text{expt}}^{\dagger}\chi_{\text{expt}}]\text{Tr}[\chi_{\text{theo}}^{\dagger}\chi_{\text{theo}}]}}, \tag{12}\] where \(\chi_{\text{theo}}\) and \(\chi_{\text{expt}}\) denote the theoretical and experimental density matrices, respectively. Figure 2: The structure of the molecule trifluoroiodoethylene with three NMR-active spin\(-1/2\) \({}^{19}\)F nuclei acting as three qubits, along with the NMR spectra of the pseudopure state \(|000\rangle\), which represents the initial state of the three-qubit system.
Figure 1: (a) Schematic of the circuit used to simulate the dynamical map obtained from the convex combination of two and three Pauli dynamical maps. For both two- and three-dynamical-map mixing, the controlled operation \(U\) is the same. \(\sigma_{i}\) denote the Pauli matrices, with \(\sigma_{0}\) being the identity matrix. The unitary operation \(V\) is different for the cases of two-way and three-way mixing. The \(W\) operation is equivalent to the identity operation and hence not implemented experimentally. (b) The NMR pulse sequence used to simulate the map. The rectangular shapes represent radiofrequency (rf) pulses of differing angles and phases (which are written on top of each pulse). CNOT operations between two qubits are represented by blue lines between the corresponding qubits. Step 1 corresponds to the preparation of the input state. Gradient pulses are represented by shaped green curves, while the GRAPE-optimized pulse to implement Step 2 of the circuit is represented by a large dark green curve, applied simultaneously on all three qubits. Step 3 corresponds to measurements on all three qubits. We experimentally prepared the PPS with a fidelity of 0.96\(\pm\)0.01. #### iii.1.1 Mixing of Two Pauli Semigroups We experimentally demonstrate the mixing of two Pauli semigroups for two cases, each with the decoherence parameter \(p(t)=[1-\exp(-2t)]/2\). To this end, we consider the convex mixture \[\tilde{\Lambda}(t)(\rho)=a\Lambda_{3}(t)(\rho)+(1-a)\Lambda_{2}(t)(\rho). \tag{13}\] The two cases considered are * equal mixing with the mixing parameter \(a=0.5\), and * unequal mixing with the mixing parameter \(a=0.25\). For the simulation of mixing two Pauli semigroups, the algorithm described above leads to the following matrix: \[V=\left(\begin{array}{cccc}\sqrt{1-p(t)}&\sqrt{p(t)}&0&0\\ 0&0&1&0\\ \sqrt{p(t)(1-a)}&-\sqrt{(1-a)(1-p(t))}&0&\sqrt{a}\\ \sqrt{ap(t)}&-\sqrt{a(1-p(t))}&0&-\sqrt{1-a}\end{array}\right). \tag{14}\] To experimentally implement the unitaries for the convex combinations of two and three Pauli semigroups, we utilized the quantum circuit shown in Fig. 1. For the mixing of both two and three semigroups, the controlled operation \(U\) is the same, as in Eq. (8). The unitary operation \(V\) is different for the two-way and three-way mixing. The \(W\) operation is equivalent to the identity operation for both cases and is hence not implemented experimentally. For the implementation of the NMR pulse sequence, GRAPE-optimized pulses are used. The unitaries \(V\) and \(U\) are designed so as to be implemented with a single pulse for each time point in all the cases. The experimental procedure involves three steps. * Step 1 - Initialization: The system is prepared in the state \(|000\rangle\langle 000|\) with the help of optimized pulses and magnetic field gradients. * Step 2 - Simulation of the non-unitary dynamics: The unitaries \(V\) and \(U\) are implemented with GRAPE-optimized pulses. * Step 3 - Measurement: The acquisition and tomography pulses are applied. The rectangular shapes in Fig. 1 depict the rf pulses used to prepare the initial pseudopure state required for Step 1 of the algorithm. Each rectangle is associated with specific phases, which are indicated above it. The magnetic field direction is assumed to align with the \(z\)-axis. The rf pulses are applied along the \(x\)- or \(y\)-axis at specific angles, allowing precise control over qubit rotations and transformations.
With the knowledge of the desired phases and angles of the rf pulses, we can perform operations like single-qubit rotations and two-qubit gates. For example, the first qubit is rotated by an angle of \(\theta_{1}=\frac{5\pi}{12}\) radians around the \(y\)-axis, while the second qubit is rotated by an angle of \(\theta_{2}=\frac{\pi}{3}\) radians. CNOT operations between two qubits are represented by blue lines between the corresponding qubits. The complete pulse sequence corresponding to the CNOT gate can be found in [35]. Before the CNOT gate operation, an \(x\) pulse with an angle of \(\frac{\pi}{4}\) is applied. This pulse rotates the state of the qubit around the \(x\)-axis. Following the CNOT gate, a \(y\) pulse with an angle of \(-\frac{\pi}{4}\) is applied, which rotates the state around the \(y\)-axis. The angles and phases of the rf pulses and gate operations are carefully chosen to achieve the desired output state or perform the targeted operation. The specific choice of angles and gates depends on our goal, which in this case is to prepare the PPS. After the initialization, a GRAPE pulse corresponding to Step 2 of the algorithm is applied. This pulse applies the unitary operations \(V\) and \(U\), depending on the specific case being considered. #### iii.1.2 Mixing of Three Pauli Semigroups We next consider the case of the convex combination of three Pauli semigroups. We experimentally demonstrate this for three cases, each with the decoherence parameter \(p(t)=[1-\exp(-3t)]/2\): * Equal mixing with mixing parameters \(x_{1}=x_{2}=x_{3}=0.33\), * unequal mixing with mixing parameters \(x_{1}=x_{3}=0.3,x_{2}=0.4\), and * unequal mixing with mixing parameters \(x_{1}=0.2,x_{2}=x_{3}=0.4\). The \(V\) matrix in this case is evaluated to be \[V=\left(\begin{array}{cccc}\sqrt{1-p(t)}&\sqrt{p(t)}&0&0\\ \sqrt{x_{1}p(t)}&-\sqrt{x_{1}(1-p(t))}&\sqrt{1-x_{1}}&0\\ \sqrt{x_{2}p(t)}&-\sqrt{x_{2}(1-p(t))}&-\sqrt{\frac{x_{1}x_{2}}{1-x_{1}}}&\sqrt{\frac{x_{3}}{1-x_{1}}}\\ \sqrt{x_{3}p(t)}&-\sqrt{x_{3}(1-p(t))}&-\sqrt{\frac{x_{1}x_{3}}{1-x_{1}}}&-\sqrt{\frac{x_{2}}{1-x_{1}}}\end{array}\right). \tag{15}\] The decay of the decoherence parameter \(p(t)\) depends on the chosen constant \(c\). Therefore, determining the optimal time interval required to study the behavior of the system is directly linked to the selection of \(c\). Shorter time periods are preferable to minimize decoherence over the experimental duration. The appropriate choice of \(c\) is crucial to effectively study the impact of the resulting dynamical map on the system, while minimizing noise interference. The final three-qubit density matrix was reconstructed using the least-squares constrained convex optimization method. For the experimental matrices, we achieved fidelities ranging from 0.95 to 0.98. The experimental output matrix for the single-qubit system is obtained after tracing over the ancilla qubits. We plot bar graphs (Fig. 3) to visually compare the real and imaginary parts of the theoretical and experimental density matrices for the specific example of the second case of mixing two semigroups at \(t=0.1\) ms. The fidelity of the experimental state, in this case, is 0.98. The decoherence parameter \(p(t)\) is computed at every time point from the output matrix, and the experimental data are fitted to obtain the experimental parameter \(p_{e}(t)\) and its time evolution \(\dot{p}_{e}(t)\). The experimental decay rates are subsequently computed with the help of Eq. (6).
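A schematic of this post-processing step is sketched below (our own illustration; synthetic data points stand in for the experimentally extracted values of \(p_{e}(t)\), and the rate constant used to generate them is arbitrary). The measured decoherence parameters are fitted to \(p(t)=(1-e^{-ct})/2\), and the fitted curve is then inserted into Eq. (6).

```python
import numpy as np
from scipy.optimize import curve_fit

def p_model(t, c):
    """Decoherence function p(t) = (1 - e^{-ct})/2."""
    return 0.5 * (1.0 - np.exp(-c * t))

def gammas(t, c, x):
    """Decay rates of Eq. (6) for mixing weights x = (x1, x2, x3)."""
    p, pdot = p_model(t, c), 0.5 * c * np.exp(-c * t)
    f = lambda xi: (1.0 - xi) / (1.0 - 2.0 * (1.0 - xi) * p)
    x1, x2, x3 = x
    return 0.5 * pdot * np.array([f(x2) + f(x3) - f(x1),
                                  f(x1) + f(x3) - f(x2),
                                  f(x1) + f(x2) - f(x3)])

# Synthetic stand-in for the experimentally extracted p_e(t) values (generated with c = 3 plus noise).
t_data = np.linspace(0.05, 1.0, 12)
p_data = p_model(t_data, 3.0) + np.random.default_rng(0).normal(0.0, 0.01, t_data.size)
(c_fit,), _ = curve_fit(p_model, t_data, p_data, p0=[1.0])

print("fitted c:", round(float(c_fit), 3))
# For x = (0.2, 0.4, 0.4), gamma_1(t) eventually turns negative (cf. case (iii) above).
print("rates at t = 0.8:", gammas(0.8, c_fit, (0.2, 0.4, 0.4)))
```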
Figures 4 and 5 depict a comparison of the theoretical and experimental results for the two-way mixing case, for equal and unequal mixing, respectively. For each case, the decoherence parameter \(p(t)\) is plotted in the top panel. The blue dots represent the experimental data with error bars, the blue curves represent the experimental fits, and the red dashed curves represent the theoretical parameters. The experimental decay rate \(\gamma_{1}(t)\) is negative for both case (i) and case (ii), indicating that the resultant dynamical map, when two Pauli semigroups are mixed, is non-Markovian, which is consistent with Theorem 1 in [20]. Figures 6-8 present a comparison of the theoretical and experimental results for the case of three-way mixing. For each case, the decoherence parameter \(p(t)\) is plotted in the top panel. The blue dots represent the experimental data with error bars, the blue curves represent the experimental fits, and the red dashed curves represent the theoretical parameters. To determine whether the resultant dynamical map is Markovian or non-Markovian, the decay rates are analyzed. The decay rates \(\gamma_{1}(t),\gamma_{2}(t),\gamma_{3}(t)\) were all positive for case (i) and case (ii), as shown in plots (b), (c) and (d) respectively, indicating that the resultant dynamical maps are Markovian. However, for case (iii), the negative decay rate \(\gamma_{1}(t)\) suggests that the resultant dynamical map is non-Markovian, which is consistent with the theoretical results. Figures 4-8 provide clear evidence of the agreement between the theoretical and experimental results. The experimental results clearly corroborate the Markovian or non-Markovian nature of the dynamical map in both the two- and three-way mixing cases, which is consistent with Theorem 1 and the Pauli simplex in [20]. The outcomes presented here, which successfully demonstrate the effects of combining different Pauli semigroups with arbitrary mixing parameters, provide valuable insights for the study of memory effects in open quantum systems. Moreover, these results are significant for the development of quantum error correction and fault-tolerant quantum computing. Figure 8: Convex combination of three Pauli dynamical maps for the case of unequal mixing. (a) Comparison of the theoretical and experimental decoherence parameters \(p(t)\). (b), (c), (d) Comparison of theoretical and experimental decay rates \(\gamma_{1}(t),\gamma_{2}(t),\gamma_{3}(t)\) with mixing parameters \(x_{1}=0.2,x_{2}=x_{3}=0.4\), respectively. The red dashed and blue curves represent the theoretical and the fit to the experimental parameters, respectively. Experimental data points with error bars are represented by blue dots. The negativity of the decay rate \(\gamma_{1}(t)\) indicates non-Markovianity of the resulting map. Figure 5: Convex combination of two Pauli dynamical maps for the case of unequal mixing. (a) Comparison of the theoretical and experimental decoherence parameters \(p(t)\). (b) Comparison of theoretical and experimental decay rates \(\gamma_{1}(t),\gamma_{2}(t),\gamma_{3}(t)\) with mixing parameter \(a=0.25\). The red dashed and blue curves represent the theoretical and the fit to the experimental parameters, respectively. Experimental data points with error bars are represented by blue dots. The decay rate \(\gamma_{1}(t)\) is negative throughout, indicating non-Markovianity. Figure 7: Convex combination of three Pauli dynamical maps for the case of unequal mixing.
(a) Comparison of the theoretical and experimental decoherence parameters \(p(t)\). (b) Comparison of theoretical and experimental decay rates \(\gamma_{1}(t),\gamma_{2}(t),\gamma_{3}(t)\) with mixing parameters \(x_{1}=x_{3}=0.3,x_{2}=0.4\). The red dashed and blue curves represent the theoretical and the fit to the experimental parameters, respectively. Experimental data points with error bars are represented by blue dots. All the decay rates are positive, indicating that the resulting map is Markovian. ## IV Conclusions In our experimental study, we have successfully demonstrated the combination of two and three Pauli semigroups with different mixing parameters. The main objective was to investigate the Markovianity and non-Markovianity of the resulting dynamical maps. By analyzing the decay rates associated with these dynamical maps, we were able to assess the characteristics of the quantum maps under investigation. We compared our experimental analysis with the theoretical predictions. The comparative analysis allowed us to validate the accuracy of our experimental findings and establish the reliability of our approach. The good agreement between the experimental results and theoretical expectations highlights the efficacy of our methodology in capturing the underlying dynamics of the system-environment interactions. This research represents a significant step forward in advancing our understanding of quantum correlations and the interplay between the system and its surrounding environment. Overall, our experimental investigation contributes to the growing body of knowledge in the field of quantum dynamics, paving the way for further studies on the characterization and manipulation of quantum information in realistic environments. NMR, with its precise control, long coherence times and accurate measurements, serves as a good platform for simulating the dynamics of open quantum systems and understanding the correlations between quantum systems and their environment. ###### Acknowledgements. V.J. acknowledges financial support by the Foundation for Polish Science through TEAM-NET project (contract no. POIR.04.04.00-00-17C1/18-00). R.S. and K.D. acknowledge financial support from Department of Science and Technology (DST), India, Grants Nos:DST/ICPS/QuST/Theme-1/2019/14 and DST/ICPS/QuST/Theme-2/2019/Q-74, respectively. RS also acknowledges the support of the Govt. of India DST/SERB grant CRG/2022/008345.
2307.14023
Are Transformers with One Layer Self-Attention Using Low-Rank Weight Matrices Universal Approximators?
Existing analyses of the expressive capacity of Transformer models have required excessively deep layers for data memorization, leading to a discrepancy with the Transformers actually used in practice. This is primarily due to the interpretation of the softmax function as an approximation of the hardmax function. By clarifying the connection between the softmax function and the Boltzmann operator, we prove that a single layer of self-attention with low-rank weight matrices possesses the capability to perfectly capture the context of an entire input sequence. As a consequence, we show that one-layer and single-head Transformers have a memorization capacity for finite samples, and that Transformers consisting of one self-attention layer with two feed-forward neural networks are universal approximators for continuous permutation equivariant functions on a compact domain.
Tokio Kajitsuka, Issei Sato
2023-07-26T08:07:37Z
http://arxiv.org/abs/2307.14023v3
Are Transformers with One Layer Self-Attention Using Low-Rank Weight Matrices Universal Approximators? ###### Abstract Existing analyses of the expressive capacity of Transformer models have required excessively deep layers for data memorization, leading to a discrepancy with the Transformers actually used in practice. This is primarily due to the interpretation of the softmax function as an approximation of the hardmax function. By clarifying the connection between the softmax function and the Boltzmann operator, we prove that a single layer of self-attention with low-rank weight matrices possesses the capability to perfectly capture the context of an entire input sequence. As a consequence, we show that single-layer Transformer has a memorization capacity for finite samples, and that Transformers consisting of one self-attention layer with two feed-forward neural networks are universal approximators for continuous functions on a compact domain. ## 1 Introduction The Transformer model has been ubiquitously used in deep learning since its proposal by Vaswani et al. (2017). Its widespread application spans several domains, not only revolutionizing Natural Language Processing (NLP) through models like BERT (Devlin et al., 2019; Liu et al., 2019) and GPT (Brown et al., 2020; Radford et al., a,b) but also making significant advancements in image and graph processing as an alternative to conventional models like convolutional neural networks (CNNs) and graph neural networks (GNNs) (Dosovitskiy et al., 2022; Ying et al., 2022). One of the key reasons behind the success of the Transformer model is its ability to represent a wide range of functions. Various studies have been conducted to investigate this aspect, including the universal approximation theorem for Transformer models and their memorization capacity Yun et al. (2023); Kim et al. (2023); Mahdavi et al. (2023); Edelman et al. (2022); Likhosherstov et al. (2021). The main challenge in proving the universal approximation theorem for Transformer models lies in the fact that the Transformer needs to account for the context of the entire input sequence. Unlike feed-forward neural networks, where each input is processed independently, the self-attention mechanism in Transformer models must take into account the dependencies between all elements in each input sequence. In constructive proofs (Edelman et al., 2022; Yun et al., 2023; Kim et al., 2023), these dependencies are often aggregated into a scalar value, which we here call a "context id" and which is calculated by a self-attention mechanism. The drawback of existing analyses is that they require excessively deep layers for data memorization (Yun et al., 2023; Kim et al., 2023), which leads to a discrepancy with the Transformers deployed in practice. This discrepancy primarily arises from the interpretation of the softmax function as an approximation of the hardmax function. Consequently, to compute the "context id" within the self-attention mechanism, the required number of self-attention blocks scales linearly with the length of an input sequence. In this work, we address this gap by closely examining the softmax function itself. First, we show that it is impossible to output the "context id" using just one layer of self-attention with the hardmax function. At the same time, we also demonstrate that a single layer of one-head and softmax-based self-attention with low-rank weight matrices possesses the capability to perfectly capture the context of an entire input sequence.
This result implies that the Transformer with one self-attention layer is a universal approximator by using two feed-forward neural networks connected before and after the self-attention mechanism. Our contributions are summarized as follows. 1. We show that one-layer self-attention with the hardmax function is not a contextual mapping; that is, a one-layer Transformer has no memorization capacity. 2. In contrast, we provide a framework for constructing a contextual mapping with one layer of self-attention using the softmax function. 3. As a result, we prove that a one-layer Transformer has a memorization capacity, and that Transformers with one self-attention layer are universal approximators. ### Related Works **Universal approximation theorems.** The history of the universal approximation theorem begins around 1990 (Cybenko, 1989; Carroll and Dickinson, 1989; Hornik, 1991; Funahashi, 1989). In particular, Cybenko (1989) analyzed the ability of one-hidden-layer neural networks with a step function to approximate continuous functions, and later Hornik (1991) extended the analysis to bounded activation functions. Recent studies on this topic include analyses of how network width and depth affect the expressive power (Lu et al., 2017), and proofs of the universal approximation theorems for specific architectures (Lin and Jegelka, 2018). In parallel with studies on the universal approximation theorem, there have also been analyses of the memorization capacity of models, i.e., the number of parameters required to memorize a finite number of samples perfectly (Baum, 1988; Huang and Babri, 1998). This research topic is similar to the proof of the universal approximation theorem, but the focus of memorization capacity is mainly on the analysis of parameter efficiency for storing finite samples (Huang, 2003; Vershynin, 2020; Park et al., 2021; Vardi et al., 2022; Yun et al., 2019; Bubeck et al., 2020; Hardt and Ma, 2016; Rajput et al., 2021; Zhang et al., 2016). Notably, Zhang et al. (2016) demonstrates that a neural network of the size used in practice can perfectly memorize a randomly labeled data set. In addition, Belkin et al. (2019); Nakkiran et al. (2019) have pointed out that the minimum number of parameters required to memorize a dataset is related to the double descent threshold. **Expressive capacity of Transformer.** Ever since Vaswani et al. (2017) first proposed the Transformer architecture, there have been various theoretical analyses on its expressive capacity. Yun et al. (2023) proved for the first time the universal approximation theorem for the Transformer, showing that a continuous function can be approximated with arbitrary precision if the number of Transformer blocks is on the order of a power of \(n\), where \(n\) is the length of each input sequence. Later, Kim et al. (2023) developed a more efficient way to construct a contextual mapping, showing that \(2n\) self-attention blocks are sufficient for the memorization of finite samples. Since the studies of Yun et al. (2023) and Kim et al. (2023) are closely related to our paper, we discuss the details in more depth in Section 3.2 later. Their results were based on the assumption that the inputs are separated to some extent, which is an assumption we also make in this paper. Alternatively, under the assumption that input sequences are linearly independent, Mahdavi et al. (2023) recently showed that a one-layer \(H\)-head self-attention mechanism can memorize \(O(Hn)\) samples. Relatedly, Edelman et al.
(2022) investigated the inductive bias of the self-attention mechanism and demonstrated that a bounded self-attention head is capable of expressing a sparse Boolean function, while obtaining an upper bound on the covering number of self-attention. In an approach opposite to ours, in which inputs are assumed to be given, Likhosherstov et al. (2021) showed that, given parameters, there exists an input such that self-attention approximates an arbitrary sparse pattern. While Bhojanapalli et al. (2020) proved that Transformers with small head size, which is typical for multi-head self-attention, cannot express certain positive column-stochastic matrices, Aghajanyan et al. (2021) demonstrated empirically that pre-trained Transformers have a very low intrinsic dimension. ## 2 Preliminaries ### 2.1 Notation We use bold lowercase letters to represent vectors and bold uppercase letters to represent matrices. For any vector \(\mathbf{v}\in\mathbb{R}^{a}\), we denote by \(v_{i}\) the \(i\)-th element of \(\mathbf{v}\). For any matrix \(\mathbf{A}\in\mathbb{R}^{a\times b}\), we denote its \(i\)-th row by \(\mathbf{A}_{i,:}\), its \(k\)-th column by \(\mathbf{A}_{:,k}\) and the element at its \(i\)-th row and \(k\)-th column by \(A_{i,k}\). For any positive integer \(m\in\mathbb{N}_{+}\), \([m]\) represents the set \(\{1,\ldots,m\}\). For any real numbers \(a<b\), \([a,b]\) represents the interval \(\{x\in\mathbb{R}\mid a\leq x\leq b\}\), \((-\infty,a)\) represents \(\{x\in\mathbb{R}\mid x<a\}\) and \((b,\infty)\) represents \(\{x\in\mathbb{R}\mid x>b\}\). Let \(\sigma_{S}[\mathbf{v}]\) and \(\sigma_{H}[\mathbf{v}]\) for any input vector \(\mathbf{v}\) be the softmax function and hardmax function, respectively. Note that when there are multiple indices with maximum values, the hardmax function is defined such that the sum of the values at these indices equals one. By abuse of notation, for any input matrix \(\mathbf{A}\), \(\sigma_{S}\left[\mathbf{A}\right]\) and \(\sigma_{H}\left[\mathbf{A}\right]\) are defined as column-wise softmax and column-wise hardmax, respectively. We denote the ReLU activation function by \(\sigma_{R}\). Unlike \(\sigma_{S}\) and \(\sigma_{H}\), \(\sigma_{R}\) is always an element-wise operator, regardless of whether the input is a vector or a matrix. Let \(\|\cdot\|\) be the \(\ell^{2}\) norm and \(\|\cdot\|_{p}\) (\(1\leq p<\infty\)) be the \(\ell^{p}\) norm. We define the distance between two functions \(f_{1},f_{2}:\mathbb{R}^{d\times n}\rightarrow\mathbb{R}^{d\times n}\) by \[\mathbf{d}_{p}\left(f_{1},f_{2}\right):=\left(\int\|f_{1}(\mathbf{X})-f_{2}(\mathbf{X})\|_{p}^{p}\,\mathrm{d}\mathbf{X}\right)^{1/p}.\] In this paper, \(n\) denotes the length of an input sequence, \(N\) the number of input sequences, \(C\) the number of output classes, and \(d\) the embedding dimension. In addition, \(i,j\) are generally used as indices over the finite samples and \(k,l\) as indices within each input sequence. ### 2.2 Transformer Block The Transformer was first introduced in Vaswani et al. (2017). Here we follow the definitions adopted in Kim et al. (2023): the Transformer block is composed of the self-attention mechanism and the feed-forward neural network, each accompanied by a skip connection.
Given an input sequence \(\mathbf{Z}\in\mathbb{R}^{d\times n}\), composed of \(n\) tokens each with an embedding dimension of size \(d\), a dot-product self-attention mechanism with \(h\) heads outputs the following values: \[\mathcal{F}_{S}^{(SA)}(\mathbf{Z})=\mathbf{Z}+\sum_{i=1}^{h}\mathbf{W}_{l,i}^{(O)}\left(\mathbf{W}_{l,i}^{(V)}\mathbf{Z}\right)\sigma_{S}\left[\left(\mathbf{W}_{l,i}^{(K)}\mathbf{Z}\right)^{\top}\left(\mathbf{W}_{l,i}^{(Q)}\mathbf{Z}\right)\right]\in\mathbb{R}^{d\times n},\] where \(\mathbf{W}_{l,i}^{(O)}\in\mathbb{R}^{d\times s}\) and \(\mathbf{W}_{l,i}^{(V)},\mathbf{W}_{l,i}^{(K)},\mathbf{W}_{l,i}^{(Q)}\in\mathbb{R}^{s\times d}\) are the weight matrices, and \(s\) is the head size. Note that here, as with Yun et al. (2023) and Kim et al. (2023), we adopt the definition of the self-attention mechanism which excludes layer normalization from the original definition of Vaswani et al. (2017), for the sake of simplicity. In contrast, given an input \(\mathbf{H}\in\mathbb{R}^{d\times n}\), the output of the feed-forward neural network with a skip connection at index \(k\in[n]\) is \[\mathcal{F}^{(FF)}\left(\mathbf{H}\right)_{:,k}=\mathbf{H}_{:,k}+\mathbf{W}^{(2)}\sigma_{R}\left[\mathbf{W}^{(1)}\mathbf{H}_{:,k}+\mathbf{b}^{(1)}\right]+\mathbf{b}^{(2)}\in\mathbb{R}^{d},\] where \(q\) is the hidden dimension, \(\mathbf{W}^{(1)}\in\mathbb{R}^{q\times d}\) and \(\mathbf{W}^{(2)}\in\mathbb{R}^{d\times q}\) are weight matrices, and \(\mathbf{b}^{(1)}\in\mathbb{R}^{q}\) and \(\mathbf{b}^{(2)}\in\mathbb{R}^{d}\) are bias terms. On the basis of the above definition, the Transformer block is represented as a combination of a self-attention mechanism and a feed-forward neural network: for any input sequence \(\mathbf{Z}\in\mathbb{R}^{d\times n}\), composed of \(n\) tokens each with an embedding dimension of size \(d\), the Transformer block \(\mathcal{F}:\mathbb{R}^{d\times n}\rightarrow\mathbb{R}^{d\times n}\) outputs \[\mathcal{F}\left(\mathbf{Z}\right)=\mathcal{F}^{(FF)}\left(\mathcal{F}_{S}^{(SA)}\left(\mathbf{Z}\right)\right).\] From the above definition, we see that the interaction of each token occurs only in the self-attention mechanism.
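To make the shapes concrete, the following is a minimal numpy transcription of these two blocks (a sketch of our own; single head \(h=1\), with random placeholder weights), following the \(d\times n\) convention and the column-wise softmax defined above.

```python
import numpy as np

def softmax_cols(A):
    """Column-wise softmax, as used in the self-attention block."""
    A = A - A.max(axis=0, keepdims=True)          # numerical stability
    E = np.exp(A)
    return E / E.sum(axis=0, keepdims=True)

def self_attention(Z, WO, WV, WK, WQ):
    """Single-head self-attention with skip connection: F_S^(SA)(Z) with h = 1."""
    scores = (WK @ Z).T @ (WQ @ Z)                # (n, n); column k holds the scores for query token k
    return Z + WO @ (WV @ Z) @ softmax_cols(scores)

def feed_forward(H, W1, b1, W2, b2):
    """Token-wise feed-forward block with skip connection: F^(FF)(H)."""
    return H + W2 @ np.maximum(W1 @ H + b1, 0.0) + b2

d, n, s, q = 8, 5, 1, 16
rng = np.random.default_rng(0)
Z = rng.normal(size=(d, n))
WO, WV = rng.normal(size=(d, s)), rng.normal(size=(s, d))
WK, WQ = rng.normal(size=(s, d)), rng.normal(size=(s, d))
W1, b1 = rng.normal(size=(q, d)), rng.normal(size=(q, 1))
W2, b2 = rng.normal(size=(d, q)), rng.normal(size=(d, 1))

out = feed_forward(self_attention(Z, WO, WV, WK, WQ), W1, b1, W2, b2)
print(out.shape)   # (8, 5): one d-dimensional output per token
```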
## 3 Attention is a Contextual Mapping ### 3.1 Problem Setting Let \((\mathbf{X}^{(1)},\mathbf{Y}^{(1)}),\ldots,(\mathbf{X}^{(N)},\mathbf{Y}^{(N)})\subset\mathbb{R}^{d\times n}\times[C]^{1\times n}\) be \(N\) input-output pairs, each of which consists of a sequence \(\mathbf{X}^{(i)}\) of \(n\) tokens with embedding dimension \(d\) and an output \(\mathbf{Y}^{(i)}\), where \(\mathbf{Y}^{(i)}_{:,k}\) corresponds to the label of the token \(\mathbf{X}^{(i)}_{:,k}\) at index \(k\). In addition, we define the \(i\)-th vocabulary set for \(i\in[N]\) by \(\mathcal{V}^{(i)}=\bigcup_{k\in[n]}\mathbf{X}^{(i)}_{:,k}\subset\mathbb{R}^{d}\), and the whole vocabulary set \(\mathcal{V}\) is defined by \(\mathcal{V}=\bigcup_{i\in[N]}\mathcal{V}^{(i)}\subset\mathbb{R}^{d}\). In order to analyze the memorization capacity and the universal approximation theorem in the following sections without contradiction, we impose the following natural consistency condition on the data. **Assumption 1** (Consistency).: _The \(N\) input-output pairs \((\mathbf{X}^{(1)},\mathbf{Y}^{(1)}),\ldots,(\mathbf{X}^{(N)},\mathbf{Y}^{(N)})\subset\mathbb{R}^{d\times n}\times[C]^{1\times n}\) satisfy the following consistency condition: for any \(i,j\in[N]\) and \(k,l\in[n]\),_ \[\mathbf{Y}^{(i)}_{:,k}=\mathbf{Y}^{(j)}_{:,l}\] _holds if \(\mathcal{V}^{(i)}=\mathcal{V}^{(j)}\) and \(\mathbf{X}^{(i)}_{:,k}=\mathbf{X}^{(j)}_{:,l}\) are satisfied._ _Remark 1_.: It is important to note that if general input-output pairs are to be considered, the condition in the above definition of consistency should not be defined as \(\mathcal{V}^{(i)}=\mathcal{V}^{(j)}\) but rather as a condition where the _multiset_ of tokens in each sentence matches, and simultaneously the tokens in the position of interest are the same. However, when considering consistency in this paper, it is only for input sequences that do not contain duplicate tokens, so matching the vocabulary set as described above is sufficient. ### 3.2 Background Yun et al. (2023) proved affirmatively one of the most fundamental questions on the expressive capacity of Transformer models, namely, whether the universal approximation theorem for Transformer models holds. Their proof approach is to quantize the input domain and reduce the universal approximation theorem to an analysis of memorization of finite samples, i.e., the construction of a model that achieves zero loss for a finite number of training data, which was also analyzed later by Kim et al. (2023). In the analysis of memorization capacity, assumptions are usually made on the inputs in order to perform a meaningful analysis beyond the lower bound of Sontag (1997), and here, as with the assumptions adopted by Yun et al. (2023); Kim et al. (2023), we assume that the input tokens are separated by a certain distance: **Definition 1** (Tokenwise Separatedness).: Let \(m\in\mathbb{N}\) and \(\mathbf{Z}^{(1)},\ldots,\mathbf{Z}^{(N)}\in\mathbb{R}^{m\times n}\) be input sequences. Then, \(\mathbf{Z}^{(1)},\ldots,\mathbf{Z}^{(N)}\) are called tokenwise \((r_{\min},r_{\max},\epsilon)\)-separated if the following three conditions hold. 1. For any \(i\in[N]\) and \(k\in[n]\), \(\left\|\mathbf{Z}^{(i)}_{:,k}\right\|>r_{\min}\) holds. 2. For any \(i\in[N]\) and \(k\in[n]\), \(\left\|\mathbf{Z}^{(i)}_{:,k}\right\|<r_{\max}\) holds. 3. For any \(i,j\in[N]\) and \(k,l\in[n]\) with \(\mathbf{Z}^{(i)}_{:,k}\neq\mathbf{Z}^{(j)}_{:,l}\), \(\left\|\mathbf{Z}^{(i)}_{:,k}-\mathbf{Z}^{(j)}_{:,l}\right\|>\epsilon\) holds. Note that we refer to \(\mathbf{Z}^{(1)},\ldots,\mathbf{Z}^{(N)}\) as tokenwise \((r_{\max},\epsilon)\)-separated instead if the sequences satisfy only conditions 2 and 3. The achievement of Yun et al. (2023) was not only to prove the universal approximation theorem for Transformers, but also to clarify the difficulties in the analysis of this kind of expressive capacity of Transformers and elucidate an approach to establishing the proof. Namely, what makes Transformers' memorization different from that of feed-forward neural networks is that Transformers need to capture the context of each input sequence as a whole, rather than simply associating each token with a label. Remarkably, Yun et al. (2023); Kim et al. (2023) formulated this concept as a contextual mapping, which assigns a unique id to a pair of input sequences and each of their tokens. We define it here using the notion of \((r,\delta)\)-separatedness.
**Definition 2** (Contextual Mapping).: Let \(\mathbf{X}^{(1)},\ldots,\mathbf{X}^{(N)}\in\mathbb{R}^{d\times n}\) be input sequences. Then, a map \(q:\mathbb{R}^{d\times n}\rightarrow\mathbb{R}^{d\times n}\) is called an \((r,\delta)\)-contextual mapping if the following two conditions hold: 1. For any \(i\in[N]\) and \(k\in[n]\), \(\left\|q\left(\mathbf{X}^{(i)}\right)_{:,k}\right\|\leq r\) holds. 2. For any \(i,j\in[N]\) and \(k,l\in[n]\) with \(\mathcal{V}^{(i)}\neq\mathcal{V}^{(j)}\) or \(\mathbf{X}^{(i)}_{:,k}\neq\mathbf{X}^{(j)}_{:,l}\), \(\left\|q\left(\mathbf{X}^{(i)}\right)_{:,k}-q\left(\mathbf{X}^{(j)}\right)_{:,l}\right\|>\delta\) holds. In particular, \(q(\mathbf{X}^{(i)})\) for \(i\in[N]\) is called a context id of \(\mathbf{X}^{(i)}\). If we have such a contextual mapping, a label sequence can be associated with a unique id for each input sequence using the existing analysis of memorization in feed-forward neural networks. So the central question is: how to construct a contextual mapping in Transformer models? The only place in Transformer models where interaction between tokens can be taken into account is the self-attention mechanism, and therefore the self-attention mechanism must be used to construct the contextual mapping. Yun et al. (2023) first constructed a contextual mapping by using \(|\mathcal{V}|^{d}+1\) self-attention layers1, and later Kim et al. (2023) improved it to \(2n\) self-attention layers. However, this is still far from the practical implementation of Transformers, and it remains unclear whether a reasonably-sized Transformer would possess memorization capacity or if the universal approximation theorem would hold. This leads to the following question. Footnote 1: To be precise, when the continuous input range is quantized into \(1/\delta\) pieces for some \(0<\delta<1\), they demonstrated that there exists a contextual mapping composed of \(\delta^{-d}\) self-attention layers. **How many self-attention layers are both necessary and sufficient to construct a contextual mapping?** We first point out the reason for requiring a significant number of self-attention layers in the construction of a contextual mapping in the analyses of Yun et al. (2023); Kim et al. (2023). Their approach entails interpreting the softmax function in the self-attention mechanism as an approximation of the hardmax function, which also hindered the detailed analysis of the specific properties of the softmax function. As evidence of this, we illustrate in Section 3.3 that using a single layer of self-attention with the hardmax function does not suffice to construct a contextual mapping. Next, in Section 3.4, we demonstrate that a contextual mapping can be constructed by using only 1 self-attention layer with the softmax function. This is somewhat surprising because this implies that it is possible to fully capture the context of each input sequence only through the attention coefficients computed by the pairwise dot-product of the softmax function and its weighted average. ### Self-attention with hardmax In previous studies analyzing the memorization capacity of the Transformer (Yun et al., 2023; Kim et al., 2023), softmax is taken to be an approximation of hardmax. However, we show here that the attention block with hardmax is not a contextual mapping.
First we define the attention block with hardmax: for an input sequence \(\mathbf{Z}\in\mathbb{R}^{d\times n}\), the attention with hardmax is calculated as \[\mathcal{F}_{H}^{(SA)}(\mathbf{Z})=\mathbf{Z}+\sum_{i=1}^{h}\mathbf{W}_{l,i}^{(O)}\left(\mathbf{W}_{l,i}^{(V)}\mathbf{Z}\right)\sigma_{H}\left[\left(\mathbf{W}_{l,i}^{(K)}\mathbf{Z}\right)^{\top}\left(\mathbf{W}_{l,i}^{(Q)}\mathbf{Z}\right)\right], \tag{1}\] where \(\mathbf{W}_{l,i}^{(O)}\in\mathbb{R}^{d\times s}\) and \(\mathbf{W}_{l,i}^{(V)},\mathbf{W}_{l,i}^{(K)},\mathbf{W}_{l,i}^{(Q)}\in\mathbb{R}^{s\times d}\) are the weight matrices. The following theorem holds for such a model. **Theorem 1**.: \(1\)_-layer multi-head self-attention \(\mathcal{F}_{H}^{(SA)}\) with the hardmax function cannot be a contextual mapping._ Since the self-attention mechanism is the only place in Transformer models where interaction between tokens can be considered, this theorem indicates that a one-layer Transformer does not have a memorization capacity. ### Self-attention with softmax In this subsection, we show that a \(1\)-layer attention block with softmax is a contextual mapping for almost all input sequences. **Theorem 2**.: _Let \(\mathbf{X}^{(1)},\ldots,\mathbf{X}^{(N)}\in\mathbb{R}^{d\times n}\) be input sequences with no duplicate word token in each sequence, that is,_ \[\mathbf{X}^{(i)}_{:,k}\neq\mathbf{X}^{(i)}_{:,l} \tag{2}\] _for any \(i\in[N]\) and \(k,l\in[n]\). Also assume that \(\mathbf{X}^{(1)},\ldots,\mathbf{X}^{(N)}\) are tokenwise \((r_{\min},r_{\max},\epsilon)\)-separated. Then, there exist weight matrices \(\mathbf{W}^{(O)}\in\mathbb{R}^{d\times s}\) and \(\mathbf{W}^{V},\mathbf{W}^{K},\mathbf{W}^{Q}\in\mathbb{R}^{s\times d}\) such that the ranks of \(\mathbf{W}^{V},\mathbf{W}^{K}\) and \(\mathbf{W}^{Q}\) are all \(1\), and \(1\)-layer single head attention with softmax, i.e., \(\mathcal{F}_{S}^{(SA)}\) with \(h=1\), is an \((r,\delta)\)-contextual mapping for the input sequences \(\mathbf{X}^{(1)},\ldots,\mathbf{X}^{(N)}\in\mathbb{R}^{d\times n}\) with \(r\) and \(\delta\) defined by_ \[r =r_{\max}+\frac{\epsilon}{4}, \tag{3}\] \[\delta =\frac{\epsilon r_{\min}\log n}{r_{\max}^{2}(|\mathcal{V}|+1)^{4}\pi d\cdot\max\left(2+e,6\log n\right)}\] \[\qquad\qquad\cdot\exp\left(-\left(|\mathcal{V}|+1\right)^{4}\frac{\pi dr_{\max}^{2}\cdot\max\left(2+e,6\log n\right)}{4\epsilon r_{\min}}\right). \tag{4}\] Here we provide a simple proof sketch. The full proof can be found in Appendix A.2. Proof Overview.: For simplicity, we here assume \(s=1\). If we have a unique id, i.e., a sequence id, corresponding to each input sequence \(\mathbf{X}^{(i)}\) for \(i\in[N]\), a context id can be constructed from a suitable linear combination of the sequence id and the value of each token. Since this procedure of calculating the linear combination can be achieved by the output projection matrix \(\mathbf{W}^{(O)}\) and the skip connection, the problem is how to configure the weight parameters \(\mathbf{W}^{(V)},\mathbf{W}^{(K)},\mathbf{W}^{(Q)}\in\mathbb{R}^{1\times d}\) so that the softmax-weighted average of the values, \[\left(\mathbf{W}^{(V)}\mathbf{X}^{(i)}\right)\sigma_{S}\left[\left(\mathbf{W}^{(K)}\mathbf{X}^{(i)}\right)^{\top}\left(\mathbf{W}^{(Q)}\mathbf{X}^{(i)}\right)\right]\in\mathbb{R}^{1\times n}, \tag{5}\] outputs the unique sequence id of \(\mathbf{X}^{(i)}\).
Actually, an even weaker condition is sufficient for an attention block to be a contextual mapping: there is no need to have just one unique sequence id for each input sequence. In fact, it is possible to construct a contextual mapping, provided that for each token \(\mathbf{v}\in\mathcal{V}\), input sequences in which the token appears can be identified by some \(v\)-specific sequence ids. This condition can be expressed in a mathematical form as follows: for any distinct \(i,j\in[N]\) and any \(k,l\in[n]\) such that \(\mathbf{X}^{(i)}_{:,k}=\mathbf{X}^{(j)}_{:,l}\), what we have to show is to construct weight matrices \(\mathbf{W}^{(V)},\mathbf{W}^{(K)},\mathbf{W}^{(Q)}\in\mathbb{R}^{1\times d}\) such that \[\left(\mathbf{W}^{(V)}\mathbf{X}^{(i)}\right)\sigma_{S}\left[\left(\mathbf{W }^{(K)}\mathbf{X}^{(i)}\right)^{\top}\left(\mathbf{W}^{(Q)}\mathbf{X}^{(i)}_{:,k}\right)\right]\] \[\qquad\qquad-\left(\mathbf{W}^{(V)}\mathbf{X}^{(j)}\right)\sigma_{S}\left[ \left(\mathbf{W}^{(K)}\mathbf{X}^{(j)}\right)^{\top}\left(\mathbf{W}^{(Q)}\mathbf{X}^{(j)}_{:,l}\right)\right]>\epsilon \tag{6}\] holds for some \(\epsilon>0\). For simplicity, we choose \(\mathbf{W}^{(V)}=\mathbf{W}^{(K)}=\mathbf{W}^{(Q)}=\mathbf{w}^{\top}\)2 such that the linear operator \(\mathbf{w}\in\mathbb{R}^{d}\) projects each token to a scalar value while approximately preserving distance between each pair of tokens: for any pair of tokens \(\mathbf{v}_{a},\mathbf{v}_{b}\in\mathcal{V}\), Footnote 2: In our actual proof, there exist unit vectors \(\mathbf{v},\mathbf{v}^{\prime}\in\mathbb{R}^{d}\) such that \(\mathbf{W}^{(V)},\mathbf{W}^{(K)}\) and \(\mathbf{W}^{(Q)}\) may be defined by \(\mathbf{W}^{(V)}=\mathbf{u}^{\prime\prime}\mathbf{v}^{\top},\mathbf{W}^{(K)}=\mathbf{u}^{\prime} \mathbf{v}^{\top}\) and \(\mathbf{W}^{(Q)}=\mathbf{u}\mathbf{v}^{\prime\top}\) for arbitrary vectors \(\mathbf{u},\mathbf{u}^{\prime},\mathbf{u}^{\prime\prime}\in\mathbb{R}^{s}\) satisfying certain constraints. \[C\|\mathbf{v}_{a}-\mathbf{v}_{b}\|\leq\left|\mathbf{w}^{\top}\mathbf{v}_{a}-\mathbf{w}^{\top}\mathbf{ v}_{b}\right|\leq\|\mathbf{v}_{a}-\mathbf{v}_{b}\| \tag{7}\] with some constant \(0<C<1\). Then, by using the assumption \(\mathbf{t}=\mathbf{X}^{(i)}_{:,k}=\mathbf{X}^{(j)}_{:,l}\) for some token \(\mathbf{t}\in\mathbb{R}^{d}\), we have \[\left|\mathbf{w}^{\top}\mathbf{t}\right|\cdot\left|\left(\mathbf{w}^{\top}\mathbf{ X}^{(i)}\right)\sigma_{S}\left[\left(\mathbf{w}^{\top}\mathbf{X}^{(i)}\right)^{\top} \left(\mathbf{w}^{\top}\mathbf{t}\right)\right]-\left(\mathbf{w}^{\top}\mathbf{X}^{(j)}\right) \sigma_{S}\left[\left(\mathbf{w}^{\top}\mathbf{X}^{(j)}\right)^{\top}\left(\mathbf{w}^{ \top}\mathbf{t}\right)\right]\right|\] \[\geq\left|\left(\mathbf{a}^{(i)}\right)^{\top}\sigma_{S}\left[\mathbf{a}^ {(i)}\right]-\left(\mathbf{a}^{(j)}\right)^{\top}\sigma_{S}\left[\mathbf{a}^{(j)} \right]\right|, \tag{8}\] where we denote \(\mathbf{a}^{(i)}=\left(\mathbf{w}^{\top}\mathbf{X}^{(i)}\right)^{\top}\left(\mathbf{w}^{\top} \mathbf{t}\right)\in\mathbb{R}^{n}\) and \(\mathbf{a}^{(j)}=\left(\mathbf{w}^{\top}\mathbf{X}^{(j)}\right)^{\top}\left(\mathbf{w}^{\top} \mathbf{t}\right)\in\mathbb{R}^{n}\). Therefore, in order to prove that a self-attention block serves as a contextual mapping, we only have to focus on the separability of the function \[\mathbf{boltz}:\mathbb{R}^{n}\rightarrow\mathbb{R},\mathbf{a}\mapsto\mathbf{a}^{\top} \sigma_{S}[\mathbf{a}], \tag{9}\] which is known as the Boltzmann operator (Littman, 1996; Asadi & Littman, 2017). 
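The Boltzmann operator is straightforward to evaluate; the short sketch below (our own, with random example inputs) computes it and illustrates the behavior that the next lemma quantifies: distinct input vectors are mapped to scalar values that remain visibly separated.

```python
import numpy as np

def boltz(a):
    """Boltzmann operator: a^T softmax(a), Eq. (9)."""
    e = np.exp(a - a.max())            # numerically stable softmax
    return float(a @ (e / e.sum()))

rng = np.random.default_rng(0)
vals = [boltz(rng.normal(size=8)) for _ in range(5)]
print(["%.4f" % v for v in vals])      # distinct inputs give distinct scalar outputs
```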
The following lemma shows that the Boltzmann operator is a mapping that projects input sequences to scalar values while preserving some distance, and is central to our proof that the self-attention function is a contextual mapping. **Lemma 1**.: _Let \(\mathbf{a}^{(1)},\ldots,\mathbf{a}^{(m)}\in\mathbb{R}^{n}\) be tokenwise \((r,\delta)\)-separated vectors with no duplicate element in each vector and_ \[\delta=\max(2+e,6\log n). \tag{10}\] _Then, the outputs of the Boltzmann operator are \((r,\delta^{\prime})\)-separated, that is,_ \[\left|\mathbf{boltz}(\mathbf{a}^{(i)})\right| \leq r \tag{11}\] \[\left|\mathbf{boltz}(\mathbf{a}^{(i)})-\mathbf{boltz}(\mathbf{a}^{(j)})\right| >\delta^{\prime}=\frac{\log n}{2}e^{-2r} \tag{12}\] _hold for each \(i,j\in[m]\) with \(\mathbf{a}^{(i)}\neq\mathbf{a}^{(j)}\)._ Taking into account the above arguments, this separability of the Boltzmann operator allows us to construct one self-attention layer to be a contextual mapping. ## 4 Applications of Contextual Mapping ### Memorization capacity of one-layer Transformer As a first application of Theorem 2, we prove that a 1-layer Transformer can completely memorize finite samples, each of which has no duplicate token. This result especially indicates that, in contrast to the proof of Kim et al. (2023), which required \(2n\) self-attention layers for Transformer memorization, one layer of self-attention is actually sufficient. In addition, it is worth noting that hardmax-based Transformers do not have a memorization capacity, which follows straightforwardly from Theorem 1. **Corollary 1** (Memorization capacity of one-layer Transformer).: _Let \(\epsilon>0,r_{\max}>r_{\min}>0\) and \((\mathbf{X}^{(1)},\mathbf{Y}^{(1)}),\ldots,(\mathbf{X}^{(N)},\mathbf{Y}^{(N)})\subset\mathbb{R}^{d\times n}\times[C]^{1\times n}\) be sequences of input-output pairs such that \(\mathbf{X}^{(1)},\ldots,\mathbf{X}^{(N)}\) are tokenwise \((r_{\min},r_{\max},\epsilon)\)-separated input sequences with no duplicate token in each sentence. Then, there exist weight parameters such that for any \(i\in[N]\)_ \[\mathcal{F}\left(\mathbf{X}^{(i)}\right)=\mathcal{F}^{(FF)}\left(\mathcal{F}_{S}^{(SA)}\left(\mathbf{X}^{(i)}\right)\right)=\mathbf{Y}^{(i)} \tag{13}\] _holds._ In addition, it is straightforward to show that a 1-layer Transformer with trainable positional encodings has a memorization capacity for arbitrary input sequences, possibly with duplicate tokens. **Corollary 2** (Memorization capacity of one-layer Transformer with positional encodings).: _Let \(\epsilon>0,r_{\max}>r_{\min}>0\) and \((\mathbf{X}^{(1)},\mathbf{Y}^{(1)}),\ldots,(\mathbf{X}^{(N)},\mathbf{Y}^{(N)})\subset\mathbb{R}^{d\times n}\times[C]^{1\times n}\) be sequences of input-output pairs such that \(\mathbf{X}^{(1)},\ldots,\mathbf{X}^{(N)}\) are tokenwise \((r_{\min},r_{\max},\epsilon)\)-separated input sequences. Then, there exist weight parameters and positional encodings \(\mathbf{E}\in\mathbb{R}^{d\times n}\) such that for any \(i\in[N]\)_ \[\mathcal{F}\left(\mathbf{X}^{(i)}+\mathbf{E}\right)=\mathcal{F}^{(FF)}\left(\mathcal{F}_{S}^{(SA)}\left(\mathbf{X}^{(i)}+\mathbf{E}\right)\right)=\mathbf{Y}^{(i)} \tag{14}\] _holds._ ### Transformers with one self-attention layer are universal approximators As a further application of Theorem 2, we here provide a proof that a Transformer with one self-attention layer is a universal approximator.
More precisely, let \(\mathcal{F}_{\mathrm{PE}}\) be the set of all permutation equivariant continuous functions on a compact domain in \(\mathbb{R}^{d\times n}\), and let \(\mathcal{T}_{2}\) be the set of all two-layer Transformers with one self-attention layer, that is, \[\mathcal{T}_{2}=\left\{\mathcal{F}_{2}^{(FF)}\circ\mathcal{F}_{S}^{(SA)}\circ\mathcal{F}_{1}^{(FF)}:\mathbb{R}^{d\times n}\rightarrow\mathbb{R}^{d\times n}\right\}, \tag{15}\] where \(\mathcal{F}_{1}^{(FF)},\mathcal{F}_{2}^{(FF)}\) and \(\mathcal{F}_{S}^{(SA)}\) are feed-forward neural network layers and a self-attention layer with the softmax function, respectively. Then the following proposition holds: **Proposition 1** (Transformers with one-layer self-attention are universal approximators).: _Let \(1\leq p<\infty\). Then, for any \(f\in\mathcal{F}_{\mathrm{PE}}\) and \(\epsilon>0\), there exists a Transformer \(g\in\mathcal{T}_{2}\) with one-layer self-attention such that_ \[\mathbf{d}_{p}(f,g)=\left(\int_{\mathbb{R}^{d\times n}}\|f(\mathbf{Z})-g(\mathbf{Z})\|_{p}^{p}\,\mathrm{d}\mathbf{Z}\right)^{1/p}<\epsilon\] _holds._ ## 5 Conclusions We demonstrated that a contextual mapping can be implemented by one layer of self-attention with low-rank matrices, by clarifying the connection between a self-attention mechanism and the Boltzmann operator. This particularly indicates that one-layer Transformers have a memorization capacity for finite samples, and that Transformers with one self-attention layer are universal approximators for continuous functions on a compact domain. Our proof of the universal approximation theorem requires one feed-forward neural network layer before the self-attention layer to quantize continuous inputs. We leave it as future work to clarify whether one-layer Transformers without such a quantization layer are universal approximators or not.
2301.08784
Visual Semantic Relatedness Dataset for Image Captioning
Modern image captioning system relies heavily on extracting knowledge from images to capture the concept of a static story. In this paper, we propose a textual visual context dataset for captioning, in which the publicly available dataset COCO Captions (Lin et al., 2014) has been extended with information about the scene (such as objects in the image). Since this information has a textual form, it can be used to leverage any NLP task, such as text similarity or semantic relation methods, into captioning systems, either as an end-to-end training strategy or a post-processing based approach.
Ahmed Sabir, Francesc Moreno-Noguer, LluΓ­s PadrΓ³
2023-01-20T20:04:35Z
http://arxiv.org/abs/2301.08784v2
# Visual Semantic Relatedness Dataset for Image Captioning ###### Abstract Modern image captioning system relies heavily on extracting knowledge from images to capture the concept of a static story. In this paper, we propose a textual visual context dataset for captioning, in which the publicly available dataset COCO Captions Lin et al. (2014) has been extended with information about the scene (such as objects in the image). Since this information has a textual form, it can be used to leverage any NLP task, such as text similarity or semantic relation methods, into captioning systems, either as an end-to-end training strategy or a post-processing based approach.1 Footnote 1: [https://github.com/ahmedssabir/Textual-Visual-Semantic-Dataset](https://github.com/ahmedssabir/Textual-Visual-Semantic-Dataset) ## 1 Introduction Caption generation is a task that lies at the intersection of computer vision and natural language processing. This task aims to generate a synthetic language description of an image. Recently, the Transformer Vaswani et al. (2017) has become the new standard for image caption generation systems Huang et al. (2019); Cornia et al. (2020); Zhang et al. (2021); Li et al. (2022). However, most diverse image captioning systems employ visual context information to generate accurate synthetic caption descriptions from an image. Early work Fang et al. (2015) uses visual information from the image to build a caption re-ranking system via similarity. Another work Wang et al. (2018) focuses on the importance of object information in images, such as frequency count, size, and position. The work of Cornia et al. (2019) employs object information to control the caption generation as a visual grounding task. Gupta et al. (2020) proposes a contrastive learning based approach via language modeling and object information for phrase grounding in a caption system. Zhang et al. (2021) explores semantic coherency in image captioning by aligning the visual context to the language graph, which results in capturing both the correct linguistic characteristics and visual relevance. More recently, Sabir et al. (2022) proposes a belief revision based visual relatedness score that re-ranks the most visually related caption using the object information. Learning the semantic relation between the text and its environmental context is an important task in computer vision, _i.e_. a visual grounding task. While there are some publicly available visual context datasets for image captioning Lin et al. (2014); Agrawal et al. (2019); Changpinyo et al. (2021), none includes textual-level information about the visual context in the image. Therefore, we propose a visual semantic relatedness dataset for the caption pipeline, as our aim is to combine language and vision to learn textual semantic similarity and relatedness between the text and its related context from the image. Figure 1: Examples of our proposed COCO based textual visual context dataset. (Top) the visual context associated with each image, (Bottom) the overlapping dataset in blue. We use out-of-the-box tools to extract visual concepts from the image. Figure 3 shows our proposed strategy to estimate the most closely related/not-related visual concepts using the caption description. Building on Sabir et al. (2022), in this paper we describe in depth the construction of the visual context dataset.
This dataset is based on COCO (Lin et al., 2014), and we further extend it using state-of-the-art, out-of-the-box tools to extract the most closely related visual semantic context information from each image. Our main contribution is this combined visual context dataset, which provides the language and vision research community with the opportunity to use semantic similarity at the textual level between text and image to improve their results. Also, unlike the computer vision community, which tackles this problem by relying on visual features (Lu et al., 2020; Li et al., 2020; Desai and Johnson, 2021), our approach relies only on the textual information. Therefore, it can be used as an end-to-end or post-processing approach. In addition, we propose a similarity based re-ranking task, where the main concept is to learn a scoring function that assigns higher similarity to better caption hypotheses that correlate with their visual context from the same image. ## 2 Visual Context Information To obtain the _visual context_ from each image, we use out-of-the-box classifiers to extract the image context information. We extract two kinds of context information: objects and scenarios present/seen in the image. **ResNet-152 (He et al., 2016).** A residual, shortcut connection-based deep network that relies heavily on batch normalization. The shortcut connections act as identity mappings, which makes it possible to train a deeper network while maintaining less complexity. **CLIP (Radford et al., 2021).** (Contrastive Language-Image Pre-training) This is a pre-trained model with a contrastive loss in which an image-text pair needs to be distinguished from randomly selected sample pairs. CLIP is trained on 400M image-text pairs collected from across the Internet without human annotation, and achieves state-of-the-art performance on a wide range of image classification tasks in zero-shot learning. **Inception-Resnet FRCNN2 (Huang et al., 2017).** An improved variant of Faster R-CNN that trades off better accuracy against fast inference via high-level features extracted from Inception-ResNet. Faster R-CNN has two stages: (1) a region proposal stage that suggests regions of interest and (2) region-of-interest scoring. It is a pre-trained model trained on the COCO categories, with 80 object classes. Footnote 2: TensorFlow Object Detection API The objects extracted from all the pre-trained approaches mentioned above are obtained by taking the top-3 object classes/categories from each classifier after filtering out low-confidence instances via a probability threshold (\(<0.2\)). ## 3 Dataset In this section, we first outline the existing datasets in more detail, and then we describe our proposed textual visual context information datasets. Figure 2: Visual context dataset. (**Left**) COCO-visual dataset: top frequency count of the extracted visual context from COCO Captions, filtered by a semantic relatedness \(th\)reshold with the human-annotated caption. (**Middle**) COCO-overlapping dataset: top frequency count of the overlapping visual context with the human annotation. (**Right**) The figure shows the raw frequency count output of two visual classifiers (ResNet152 and CLIP). The result indicates that each classifier gave a different degree of confidence regarding the object in the image. ### Related Work While there are a number of publicly available datasets for captioning and visual context, none of them includes the context in textual form (only in the form of features, _e.g._ Visual Genome (Krishna et al., 2017)
and Bottom-Up Top-Down features (Anderson et al., 2018)). In this section, we outline several publicly available datasets for image captioning. **COCO (Lin et al., 2014).** This dataset (COCO Captions) contains more than 120K images, each annotated by humans with five different captions per image. We follow the standard split provided by Karpathy and Fei-Fei (2015), where 5K images are used for validation, 5K for testing and the rest for training. **Novel Object Captioning (Agrawal et al., 2019).** A dataset built from the Open Images dataset (Krasin et al., 2017) and extended for the image captioning task, with the capability of describing novel objects that are not seen in the training set. The dataset consists of 15k images divided into validation and testing sets of 4,500 and 10,600 images, respectively. The images are grouped into subsets depending on their nearness to COCO classes. **Conceptual Captions 12M (Changpinyo et al., 2021).** The most recent dataset, with image and text annotations acquired from a web crawl. It contains around 12 million pairs automatically collected from the internet using relaxed filtering to increase the variety in caption styles. ### Resulting Datasets As mentioned before, we rely on the COCO Captions dataset to extract the visual context information, as it is the most used by the language and vision community and it was human annotated, as shown in Figure 1. **COCO-visual.** It consists of 413,915 captions with their associated top-3 visual context objects for training and 87,721 for validation. We rely on the confidence of the classifier to filter out objects that do not exist in the image. For testing, we use VilBERT (Lu et al., 2020), with beam search \(k\) = 9, to generate 45,000 captions with their visual context using the 5K Karpathy test split. **COCO-overlapping.** Inspired by Wang et al. (2018), which investigates object counts in image captioning, we also create a dataset of objects that overlap with the caption, as shown at the bottom of Figure 1. It consists of 71,540 overlap-annotated captions and their visual context information. Although we extract the top-3 objects from each image, we use three filtering approaches to ensure the quality of the dataset: (1) a threshold, to filter out predictions where the object classifier is not confident enough; (2) semantic alignment via semantic similarity, to remove duplicated objects; and (3) a semantic relatedness score as a soft label, to guarantee that the visual context and the caption have a strong relation. In particular, we use Sentence RoBERTa (Reimers and Gurevych, 2019) to give a soft label via cosine similarity3 (_i.e_. the degree of visual relatedness) and then we use a \(th\)reshold to annotate the final label (if \(th\geq 0.2,0.3,0.4\) then [1,0]). Figure 2 shows the visual context dataset with different \(th\)resholds. Footnote 3: Sentence-BERT uses a siamese network to derive meaningful sentence embeddings that can be compared via cosine similarity. Figure 3 shows the proposed model to establish the visual context relatedness between the caption and the visual content of the image. We omit higher \(th\)resholds as the data becomes imbalanced with more negative labels. Note that all the textual visual contexts are extracted by the pre-trained models mentioned above, which have fast inference times and are therefore suitable for adoption in new tasks. Therefore, we avoid computationally hungry pre-computed features, _e.g_. Bottom-Up Top-Down features, as they are too computationally expensive for our task.
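As a rough illustration of this construction step, the sketch below extracts the top-3 object labels from an image with an off-the-shelf classifier and assigns the soft relatedness label via sentence-embedding cosine similarity. The specific checkpoints (torchvision's ResNet-152 ImageNet weights and the lightweight "all-MiniLM-L6-v2" sentence encoder) and the example file name are illustrative stand-ins rather than the exact models used to build the dataset; the \(<0.2\) confidence filter and the relatedness \(th\)reshold follow the values given in the text.

```python
# Sketch of the dataset-construction step: top-3 visual context + soft label.
import torch
from PIL import Image
from torchvision.models import resnet152, ResNet152_Weights
from sentence_transformers import SentenceTransformer, util

weights = ResNet152_Weights.DEFAULT
classifier = resnet152(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def visual_context(image_path, top_k=3, min_conf=0.2):
    """Return the top-k object labels whose softmax confidence exceeds min_conf."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(classifier(img), dim=1)[0]
    conf, idx = probs.topk(top_k)
    return [categories[i] for c, i in zip(conf.tolist(), idx.tolist()) if c >= min_conf]

def relatedness_label(caption, objects, th=0.3):
    """Soft label: cosine similarity between caption and visual context, binarised at th."""
    emb = encoder.encode([caption, ", ".join(objects)], convert_to_tensor=True)
    score = float(util.cos_sim(emb[0], emb[1]))
    return score, int(score >= th)

# Example with a hypothetical COCO file name:
# objs = visual_context("COCO_val2014_000000000042.jpg")
# score, label = relatedness_label("a man riding a surfboard on a wave", objs)
```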
## 4 Experiment In this section, we describe the task and the experiments performed, and we compare the performance of our model against several existing baselines. Figure 3: System overview. We propose an end-to-end system that (1) generates the visual semantic relatedness context dataset and (2) estimates the semantic relation between the candidate caption (provided by an off-the-shelf image captioning approach) and its environmental context in the image. **Task.** To evaluate the dataset, we frame a re-ranking task, where the goal is to re-rank the caption hypotheses produced by the baseline beam search using only similarity metrics. However, unlike previous works [14, 15], we rely only on semantic similarity4 \(sim(\text{visual},\text{caption})\) to re-rank the captions, and we therefore use the top-3 multi-visual context information at the same time. We employ a BERT/BERT-CNN based model, as shown in Figure 3, to compute the similarity/relatedness score: Footnote 4: Semantic similarity is a more specific term than semantic relatedness. However, here we use similarity to refer to both semantic similarity and general semantic relatedness (_e.g. car_ is similar to a _truck_, but is also related to _parking_). **BERT [1].** BERT achieves remarkable results in semantic similarity, and we therefore fine-tune \(\text{BERT}_{\text{base}}\) on the proposed dataset with a binary classifier and a cross-entropy loss over the [0,1] labels (related/not related). **BERT-CNN.** To take advantage of the overlap between the visual context and the caption, and to extract global information from each visual, we use the same BERT mentioned above as an embedding layer followed by a shallow CNN [13]. Let \(X=\{w_{1},\dots,w_{L}\}\) be the sentence, where \(w_{i}\in\mathbb{R}^{D}\) is the \(D\)-dimensional BERT embedding of the \(i\)-th word in the sentence, and \(L\) denotes the sentence length. We pass the sentence \(X\) through a kernel \(\mathbf{f}\in\mathbb{R}^{K\times n\times D}\) that is convolved over a window of \(n\) words, with \(K\) kernels. By doing this operation, we generate local features of word \(n\)-gram fragments. The local feature of the \(i\)-th fragment is computed as: \[z_{i}=\mathrm{R}\left(\mathbf{f}*w_{i:i+n-1}+b\right) \tag{1}\] where \(*\) is the convolution operator, \(b\) is the bias, and \(\mathrm{R}(\cdot)\) is the Rectified Linear Unit (ReLU) function [10]. By applying this convolution to all sentence fragments, we obtain the corresponding feature map for the \(n\)-grams at all locations: \[\mathbf{z}=[z_{1},z_{2},\dots,z_{L-n+1}] \tag{2}\] where \(\mathbf{z}\) (or \(\mathbf{z}^{(n)}\)) is computed from the BERT embeddings with an \(n\)-gram window of size \(n=3\). All feature maps are then aggregated with a max pooling operator, followed by a sigmoid classification layer. We first experimented with fine-tuning only the upper 3 layers of BERT (BERT-3L) to capture more semantic information with the CNN. However, we gained more improvement in some metrics when fine-tuning all 12 layers in an end-to-end fashion. Note that lower layers capture lexical, phrase-level information [1]; therefore, our approach also benefits from fine-tuning both lexical and semantic information. **Evaluation Metric.** We use the official COCO offline evaluation suite, producing several widely used caption quality metrics: **BLEU**[14], **METEOR**[14], **ROUGE**[15], **CIDEr**[16], **SPICE**[1] and the semantic-similarity based metric **BERTScore** (B-S) [14].
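A minimal PyTorch rendition of this BERT-CNN scorer (Eqs. 1-2) is sketched below: BERT token embeddings are convolved with an \(n=3\) window, max-pooled, and passed through a sigmoid output. The checkpoint name, the number of filters, and the way the visual context and caption are paired in one input are assumptions made for illustration rather than the paper's exact configuration.

```python
# Sketch of the BERT-CNN relatedness scorer (Eqs. 1-2).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertCNN(nn.Module):
    def __init__(self, model_name="bert-base-uncased", n_filters=256, n=3):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size                    # D
        self.conv = nn.Conv1d(hidden, n_filters, kernel_size=n)  # kernel f in Eq. 1
        self.relu = nn.ReLU()
        self.out = nn.Linear(n_filters, 1)

    def forward(self, input_ids, attention_mask):
        # w_1..w_L: contextual BERT embeddings of shape (batch, L, D)
        w = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        z = self.relu(self.conv(w.transpose(1, 2)))    # feature maps z (Eq. 2)
        z = torch.max(z, dim=2).values                 # max pooling over positions
        return torch.sigmoid(self.out(z)).squeeze(-1)  # relatedness score in [0, 1]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertCNN()
batch = tokenizer(["surfboard, wave, wetsuit"],             # visual context
                  ["a man riding a surfboard on a wave"],   # candidate caption
                  padding=True, truncation=True, max_length=50, return_tensors="pt")
score = model(batch["input_ids"], batch["attention_mask"])
```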
**Human Evaluation.** We conducted a small-scale human study to investigate human preferences over the visually re-ranked captions. We randomly selected 19 test images and gave 12 reliable human subjects the option to choose between two captions: (1) the baseline and (2) the similarity based visual re-ranker. We observed that in around 50% of the cases, the human subjects agreed with our re-ranking. Figure 4 shows the user interface presented to the human subjects, asking them to select the most descriptive caption. **Baseline.** We use visual semantic information to re-rank candidate captions produced by out-of-the-box state-of-the-art caption generators. We extract the top-9 beam search candidate captions from a general pre-trained language and vision model, VilBERT (Lu et al., 2020), fine-tuned on a total of 12 different vision and language datasets such as caption-based image retrieval and visual question answering. Figure 4: The user interface presented to our human subjects through the survey website, asking them to re-rank the most descriptive caption candidates based on the visual information. **Implementation.** We apply different similarity based re-rankers, as shown in Table 1. The re-rankers compute the similarity between the visual context and the candidate caption via fine-tuned BERT. The model is fine-tuned on each dataset labeled with a different \(th\)reshold, as shown in Table 1, with batch size 16 for 1 epoch, a learning rate of \(2\)e-\(5\) and _max length_ 50; we kept the rest of the hyperparameter settings as in the original implementation. For the BERT-CNN, the model is trained end-to-end for five epochs. ## 5 Result and Discussion We compared the performance of our model against several existing baselines that improve captions with object information. All baselines are trained on the same dataset (without any filtering, \(th\)reshold = 0): object based word re-ranking (Fang et al., 2015), an LSTM with an object counter (Wang et al., 2018) and a language grounding based caption re-ranker (Cornia et al., 2019). The experiment consists of re-ranking the captions produced by the baseline pre-trained language and vision model VilBERT using only the similarity. In this experiment, each candidate caption is compared to multiple objects and concepts appearing in the image, and re-ranked according to the obtained similarity scores. The results of our model and the comparison against the different baselines are reported in Table 1. With BERT, the improvement is across all metrics except BLEU and SPICE. Therefore, we added a CNN on top of BERT to capture word-level global information, and thereby gained an improvement at the word level, as shown in Figure 5. **Diversity Evaluation.** We follow the standard diversity evaluation (Shetty et al., 2017; Deshpande et al., 2019): (1) _Div-\(1\)_ (D1), the ratio of unique unigrams to the number of words in the caption; (2) _Div-\(2\)_ (D2), the ratio of unique bi-grams to the number of words in the caption; (3) _mBLEU_ (mB), the BLEU score of the candidate caption against all human captions (a lower value indicates higher diversity); and finally (4) _Unique_, the number of unique words in the caption before and after re-ranking. Although, as shown in Table 3, the first two _Div_ metrics are not able to capture the small changes, our results have lower _mBLEU_ and more _Unique_ words per caption. Also, we use SBERT (SB) to measure the semantic diversity at the sentence level between the desired caption and the five human annotations.
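For concreteness, the following is one plausible way to compute the _Div-1_, _Div-2_ and _mBLEU_ scores described above; the tokenisation and BLEU smoothing settings are assumptions, since the paper does not specify them.

```python
# Sketch of the diversity metrics (Div-1, Div-2, mBLEU).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def div_n(caption, n):
    """Ratio of unique n-grams to the number of words in the caption."""
    tokens = caption.lower().split()
    ngrams = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    return len(ngrams) / max(len(tokens), 1)

def m_bleu(candidate, human_captions):
    """BLEU of the candidate against all human captions (lower = more diverse)."""
    refs = [h.lower().split() for h in human_captions]
    smooth = SmoothingFunction().method1
    return sentence_bleu(refs, candidate.lower().split(), smoothing_function=smooth)

cand = "a man riding a wave on top of a surfboard"
humans = ["a surfer rides a large wave", "a man on a surfboard in the ocean"]
print(div_n(cand, 1), div_n(cand, 2), m_bleu(cand, humans))
```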
**Experiments with Pre-trained Models.** Although this approach is proposed to take advantage of the dataset, we also investigate the use of out-of-the-box similarity based re-rankers on the generated captions with visual context. For this, we use the similarity-to-probability model SimProb (Blok et al., 2007), but we rely only on the similarity and the confidence of the classifier: \[\text{P}(w\mid c)=\text{sim}(w,c)^{\text{P}(c)} \tag{3}\] where \(\text{sim}(w,c)\) is the similarity between the visual context \(c\) and the caption \(w\), and \(\text{P}(c)\) is the averaged confidence of the visual classifier over the top-3 objects. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Model & B-4 & M & R & C & S & B-S \\ \hline VilBERT (Lu et al., 2020) &.330 &.272 &.554 & 1.104 &.207 &.9352 \\ + Best Beam &.351 &.274 &.557 & 1.115 &.206 &.9363 \\ \hline +V\({}_{\text{avg}}\)(Fang et al., 2015) &.348 &.274 &.559 & 1.123 &.206 &.9365 \\ +V\({}_{\text{v0nn}}\)(Wang et al., 2018) &.348 &.274 &.559 & 1.123 &.206 &.9364 \\ +V\({}_{\text{v0nn}}\)(Cornia et al., 2019) &.345 &.274 &.557 & 1.116 &.206 &.9361 \\ \hline +S\({}_{\text{RoBERT}}\)(as baseline) &.348 &.272 &.557 & 1.115 &.204 &.9362 \\ +BERT \(th=0\) &.345 &.274 &.558 & 1.117 &.207 &.9363 \\ +BERT \(th>0.2\) &.349 &.275 &.560 & 1.125 &.207 &.9364 \\ +BERT \(th\leq 0.3\) &.351 &.275 &.560 & 1.127 &.207 &.9365 \\ +BERT \(th\leq 0.4\) &.351 &.276 & **.861** & **1.118** &.207 & **.9367** \\ +BERT \(th=0\) &.358 &.274 &.585 & 1.121 &.206 &.9362 \\ +BERT-CNN \(th=0\) &.349 &.275 &.559 & 1.128 &.207 &.9364 \\ +BERT-CNN \(th\leq 0.3\) &.350 &.275 &.560 & **1.131** &.207 &.9365 \\ +BERT-CNN \(th\leq 0.4\) &.304 &.274 &.559 & 1.124 &.206 &.9365 \\ +BERT \(th=0\) &.346 &.275 &.557 & 1.117 &.207 &.9361 \\ +BERT \(th=0\) &.349 &.277 &.560 & 1.128 &.208 &.9366 \\ +BERT \(th=0\) &.352 &.275 &.560 & 1.131 & **.208** &.9366 \\ +BERT \(th\geq 0.4\) &.348 &.274 &.560 & 1.123 &.206 &.9364 \\ \hline \hline \end{tabular} \end{table} Table 1: Caption re-ranking performance results on the COCO-Caption "Karpathy" test split. The results show that the model benefits from having a \(th\)reshold and an \(n\)-gram extractor CNN over the baseline. BERT-3L indicates that only the upper 3 layers are fine-tuned. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Model & B-4 & M & R & C & S & B-S \\ \hline VilBERT (Lu et al., 2020) &.330 &.272 &.554 & 1.104 & **.207** &.9352 \\ + Best Beam &.341 &.274 &.557 & 1.115 &.205 &.9363 \\ \hline +V\({}_{\text{v0nn}}\)(Fang et al., 2015) &.348 &.274 &.589 & **1.123** &.206 &.9365 \\ +V\({}_{\text{v0nn}}\)(Wang et al., 2018) &.348 &.274 &.559 & 1.120 &.206 &.9364 \\ +V\({}_{\text{v0nn}}\)(Cornia et al., 2019) &.345 &.274 &.557 & 1.116 &.206 &.9362 \\ +SBERT (Reimers and Gurevych, 2019) &.348 &.274 &.559 & **1.123** &.206 &.9365 \\ +S\({}_{\text{RoBERT}}\)(disl) &.345 &.273 &.556 & 1.116 &.206 &.9362 \\ +SimCSE (Gao et al., 2021) &.346 &.273 &.557 & 1.116 &.206 &.9362 \\ +SimCSE (unsupervised) &.346 &.274 &.558 & 1.120 &.206 &.9364 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance results on the "Karpathy" test split via pre-trained models. All the pre-trained BERT models are RoBERTa\({}_{\text{large}}\) based. Figure 5: Improvement of BERT on the BLEU score after adding the CNN layer. Example with 30 images randomly selected from the "Karpathy" test split.
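As an illustration of Eq. 3, the sketch below scores each caption hypothesis with \(\text{sim}(w,c)^{\text{P}(c)}\) using a sentence-embedding model for the similarity term; the lightweight "all-MiniLM-L6-v2" checkpoint and the example values are stand-ins chosen for illustration, not the exact models reported in Table 2.

```python
# Sketch of SimProb-style re-ranking (Eq. 3): P(w|c) = sim(w, c) ** P(c).
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def simprob(caption, visual_context, classifier_confidences):
    p_c = sum(classifier_confidences) / len(classifier_confidences)  # averaged top-3 confidence
    emb = encoder.encode([caption, ", ".join(visual_context)], convert_to_tensor=True)
    sim = max(float(util.cos_sim(emb[0], emb[1])), 1e-6)             # similarity term
    return sim ** p_c

hypotheses = ["a man riding a wave on a surfboard", "a person standing in the water"]
context, confs = ["surfboard", "wave", "wetsuit"], [0.9, 0.7, 0.4]
best = max(hypotheses, key=lambda h: simprob(h, context, confs))
```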
We rely on two variations of RoBERTa (Liu et al., 2019) based models to compute the similarity: (1) SBERT, which is tuned on the STS-B dataset (Cer et al., 2017), and (2) SimCSE, a contrastive learning based semantic embedding (a supervised version trained with the NLI dataset (Conneau et al., 2017) and an unsupervised version). In particular, for the unsupervised approach, the model passes the sentence twice with dropout to obtain two embeddings as a positive pair; the model then predicts the positive one among the other sentences in the same mini-batch, which serve as negatives. The results are shown in Table 2: the out-of-the-box SBERT and SimCSE models have the best results against the baselines in different metrics, especially in CIDEr, and are slightly worse on BLEU-4 and SPICE. Figure 6 shows a comparison between the pre-trained models and our proposed similarity or visual semantic score. The pre-trained model relies on the visual classifier confidence and on an unstable low/high relatedness score from pre-trained cosine similarity, and therefore struggles to associate the closest caption with its related visual context. **Bias in Visual Context.** COCO-Caption is a dataset with a gender bias towards men (Zhao et al., 2017; Hendricks et al., 2018), and our visual context dataset suffers from the same bias. However, the neutral gender dominates in most cases, as shown in Table 5. We follow Zhao et al. (2017) in calculating the gender bias ratio towards men as: \[\frac{\text{count}(\text{obj, m})}{\text{count}(\text{obj, m})+\text{count}(\text{obj, w})} \tag{4}\] where **m**an and **w**oman refer to the visual content of the image, and **count** is the number of co-occurrences of the **obj**ect with that gender as a pair in the dataset. The ratio to _person_ is computed as: \[\frac{\text{count}(\text{obj, m/w})}{\text{count}(\text{obj, person})} \tag{5}\] To investigate this further, and to show how balanced data affects the final accuracy, we replace each specific gender with a gender-neutral word (person/people) (_e.g_.
a **man**_person_ on a skateboard in a park). Then, we train our best model again, as shown in Table 4. \begin{table} Model & B-4 & M & R & C & S & B-S \end{table} Table 4: Caption re-ranking results on the Karpathy test split after replacing gender-specific words with gender-neutral ones. The result, as we expected, is a lower accuracy, as in some cases specifying the gender influences the similarity score. For example, _a woman is putting makeup on another woman in a chair_ is more human-like natural language than _a person is putting makeup on another person in a chair_. However, when making both the visual context and the caption gender-neutral, the model achieves a stronger result. By having the gender-neutral _person_, the result is better than having the wrong gender _man_ or _woman_, as the model overcomes the cases where the gender is not obvious. **Limitation.** The drawback of this dataset is that the visual classifier struggles with complex backgrounds (_i.e_. wrong visual context, object hallucination Rohrbach et al. (2018), _etc_.), as shown in Figure 7. These problems can be tackled either by relying on human annotation or by using a more computationally expensive visual classifier or a semantic segmentation based model. Another limitation is the low/high cosine label score (_i.e_. a low relatedness context score), which can lead to wrong annotations of the relation between the visual context and the caption. For example, a _paddle_ and _a man riding a surfboard on a wave_ have a low cosine score. We tackled this problem by adding multiple concepts at the same time to give more context to the sentence (_i.e_. the caption). ## 6 Application **Visual Context based Image Search.** One of the intuitive applications of this approach is **V**isual **C**ontext based Image **S**earch (VCS). The model takes the visual context as an input _query_ and attempts to retrieve the most closely related image via caption matching (_i.e_. semantic sentence matching, as in query-to-document matching). Following the same procedure as in this work: (1) we extract the visual context from the image with the visual classifier; then (2) the textual visual context is used as a keyword for the semantic search, to extract the caption most closely related to its visual context; and (3) a sentence matching algorithm with cosine distance and semantic similarity (_e.g_. SBERT) is employed to re-rank the top-\(k\) semantically related captions in the test set to retrieve the image.
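A minimal sketch of these three steps is given below, using SBERT-style cosine matching to re-rank test-set captions for a visual context query; the captions, image ids and the "all-MiniLM-L6-v2" checkpoint are illustrative placeholders. The FAISS-based exact search discussed next is a drop-in replacement for the brute-force scoring here.

```python
# Sketch of Visual Context based Image Search (VCS) via caption matching.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical test-set captions keyed by image id.
captions = {
    "img_001": "a man riding a surfboard on a large wave",
    "img_002": "a glass of beer sitting on a wooden table",
    "img_003": "a dog catching a frisbee in a park",
}
ids = list(captions)
caption_emb = encoder.encode([captions[i] for i in ids],
                             convert_to_tensor=True, normalize_embeddings=True)

def search(visual_context_query, top_k=2):
    """Re-rank captions by cosine similarity to the visual context query."""
    q = encoder.encode(visual_context_query, convert_to_tensor=True,
                       normalize_embeddings=True)
    scores = util.cos_sim(q, caption_emb)[0]
    best = scores.topk(min(top_k, len(ids)))
    return [(ids[i], float(s)) for s, i in zip(best.values, best.indices)]

print(search("beer"))  # single-concept query, as suggested in the text
```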
The most direct approach to performing a semantic search is to extract the embedding from the last hidden layer after fine-tuning the model (_i.e_. VCS\({}_{\text{BERT}}\))5 and then to use a \(k\) Nearest Neighbor (\(k\)NN) search to retrieve the caption given the visual context. We adopt an efficient similarity search, an _exact search_ run directly on GPU with FAISS [12]. The _exact search_ is a brute-force search that extracts the nearest neighbor via the inner product of length-normalized embeddings, which is equal to the cosine similarity. \begin{table} \begin{tabular}{l c c c c} \hline Model & R@1 & R@5 & R@10 & R@15 \\ \hline VCS-\(k_{1}\) &.89 & **.88** & **.87** & **.84** \\ VCS-\(k_{2}\) & **.90** & **.88** &.85 &.83 \\ VCS-\(k_{3}\) & **.90** &.87 &.85 &.83 \\ \hline \end{tabular} \end{table} Table 6: Retrieval results with the top-3 visual context on the Karpathy test split. Results depicted in terms of Recall@K (R@K). \begin{table} \begin{tabular}{l} \hline **Visual context:** fanstain, sax, obo **Human:** black and white of two women sitting on a marble-looking bench, one of them looking at the camera, holding and eating a watermelon wedge, with another woman from the back in a chair. \\ \hline **Visual context:** parachute, volleyball, pole \(\mathbf{\mathcal{X}}\) **Human:** a woman wearing a multi-colored striped sweater holds her arms up triumphantly as a kite flies high in the sky. \\ \hline \end{tabular} \end{table} Table 5: Examples of extracted visual context that does not match the human-annotated caption (\(\mathbf{\mathcal{X}}\) marks an incorrect visual context). Figure 8: Visual Context based Image Search via **visual context from an image**. The example shows how the visual context is used to retrieve the image via caption information. The **\(k\)NN** result is the original retrieved caption from the fine-tuned model and **top-\(k\)** is the top re-ranked matching generated caption from the Karpathy test split. Note that using a single-concept _query_ results in more accurate retrieval than multiple concepts, as in the _beer_ example. Figure 7: Limitation of the dataset. The model struggles with complex backgrounds and out-of-context objects. Table 6 shows that, without extra training, the model achieves good retrieval results with the top-3 visual context on the caption Karpathy test split. Figure 8 shows some successful cases of context-based retrieval. Also, we found that using a single-concept _query_ results in more accurate image retrieval than multiple concepts, as shown in the same figure with the _beer_ glass example. **Limitations.** The limitations of this approach are the following. First, it is very sensitive to out-of-vocabulary words in the query-to-caption matching. For example, for a rare _query_ such as _lama_, the model randomly outputs words without any relation (_puck_, _pole_ and _stupa_). Secondly, it relies on the quality of the classifiers (_i.e_. object and caption) to retrieve the related image. For instance, in Figure 8, the false positive _guitar_ appears instead of _knife_, and a false positive caption appears alongside a correctly retrieved description. ## 7 Conclusions In this work, we have proposed a COCO-based textual visual context dataset. This dataset can be used to leverage any text-based task, such as learning the semantic relation/similarity between a visual context and a candidate caption, either as post-processing or as end-to-end training. Also, we proposed a task and an application that can take advantage of this dataset.
2307.10245
Measuring Online Emotional Reactions to Events
The rich and dynamic information environment of social media provides researchers, policy makers, and entrepreneurs with opportunities to learn about social phenomena in a timely manner. However, using this data to understand social behavior is difficult due to the heterogeneity of topics and events discussed in the highly dynamic online information environment. To address these challenges, we present a method for systematically detecting and measuring emotional reactions to offline events using change point detection on the time series of collective affect, and further explaining these reactions using a transformer-based topic model. We demonstrate the utility of the method on a corpus of tweets from a large US metropolitan area between January and August 2020, covering a period of great social change. We demonstrate that our method is able to disaggregate topics to measure a population's emotional and moral reactions. This capability allows for better monitoring of a population's reactions during crises using online data.
Siyi Guo, Zihao He, Ashwin Rao, Eugene Jang, Yuanfeixue Nan, Fred Morstatter, Jeffrey Brantingham, Kristina Lerman
2023-07-17T06:52:30Z
http://arxiv.org/abs/2307.10245v2
# Measuring Online Emotional Reactions to Offline Events ###### Abstract The rich and dynamic information environment on social media provides researchers, policy makers, and entrepreneurs with opportunities to learn about social phenomena in a timely manner. However, using this data to understand human affect and behavior poses multiple challenges, such as the heterogeneity of topics and events discussed in the highly dynamic online information environment. To address these challenges, we present a methodology for systematically detecting and measuring emotional reactions to offline events using change point detection on the time series of collective affect, and further explaining these reactions using a transformer-based topic model. We demonstrate the utility of the methodology on a corpus of tweets collected from a large US metropolitan area between January and August 2020, covering a period of great social change, including the COVID-19 pandemic and racial justice protests. We demonstrate that our method is able to disaggregate topics to measure a population's emotional and moral reactions to events. This capability allows for better monitoring of a population's reactions to offline events using online data. emotional reaction, social media data, change point detection, topic modeling ## I Introduction Social media platforms connect billions of people worldwide, enabling them to exchange information and opinions, express emotions [1, 2], and respond to others [3, 4]. Researchers, policy makers, and entrepreneurs have grown interested in learning what the unfettered exchange of information reveals about current social conditions [5, 6]. Social scientists rely on social media data to track public opinion on socially important issues [7, 8], monitor the well-being of populations at an unprecedented spatial scale and temporal resolution [2, 9], and investigate the psychology of hard-to-reach groups [10]. Using social media data to learn about human behavior, however, poses significant challenges. Social media represents a heterogeneous, highly dynamic information environment where some topics are widely discussed while others are barely mentioned [11]. It includes people's self-reports of their own lives, as well as their reactions to a continuous stream of external events. Researchers have developed methods to track shifts in discussion topics in response to events [12] and even detect events from online discussions [13, 14]. However, social media data provides evidence for learning about human behavior beyond shifts in topics. For example, it can also shed light on human affect, emotions, and morality, important drivers of individual attitudes, beliefs, psychological well-being, and social interactions [15, 16, 17, 18]. To study collective affect, some works investigated emotional engagement with news [19, 20], while others studied collective reactions to specific types of events, such as elections [21] or natural disasters [22]. These works, however, leave a gap in our understanding of collective emotional reactions to a broad spectrum of socio-political events, which could shed light on opinion dynamics and the emergence of polarization, and even help identify online influence campaigns. To bridge these gaps, we present a methodology for detecting, measuring and explaining the collective emotional reactions to offline events. Using state-of-the-art transformer-based models, we construct the time series of aggregate affect from social media posts.
We detect emotional reactions as discontinuities in these time series, and then explain the associated offline events using topic modeling. We demonstrate the utility of the methodology on a corpus of tweets collected from a large US metropolitan area between January and August 2020. This time span represents a complex period in American history with important social, political and cultural changes. We successfully detect the simultaneous crises of the COVID-19 pandemic and the racial justice reckoning, as well as other important events like political primaries. We show how these developments had a profound impact on the psychological state of the population in our data. For example, as the COVID-19 pandemic began to unfold, people expressed more anger, significantly more fear, and more moral sentiments like care and authority. Furthermore, we disaggregate COVID-related tweets by topic to more accurately measure how a population's emotions and moral sentiments change regarding different subtopics. We identify stronger reactions to daily-life issues, such as grocery panics and leisure activities, than to topics directly mentioning the coronavirus. Our results suggest that studying the collective emotional reactions on social media can provide valuable insights into understanding people's opinions and responses to timely socio-political events, and can aid policy makers in crafting messages that align with the values and concerns of the population. ## II Related Works **Event Detection:** With the rich and dynamic information on social media that is tightly related to offline events, researchers have developed methods for event detection on online platforms, including topic detection techniques such as Latent Dirichlet Allocation (LDA) [23] and Topic2Vec [24], clustering documents based on their textual similarity [25], studying term co-occurrence and performing term frequency analysis [13], and detecting bursty terms [14, 25]. Recent methods also incorporate deep learning techniques [26, 27]. Although these methods help detect events from social media data, we also want to understand the dynamics of emotions and moral sentiments in the aggregate online population, as opinions and emotions expressed online have a complex interplay that can impact and manifest in offline behaviors. **Sentiments and Emotions:** Early research on quantifying online emotions relied on dictionary-based approaches to measure the sentiment of messages by counting the occurrences of positive or negative words [1, 28]. Researchers found that the sentiment of tweets in aggregate revealed hourly, diurnal and weekly patterns of mood variation [1, 9]. Some studies demonstrated the feasibility of monitoring the subjective wellbeing of populations [2] at an unprecedented temporal scale and resolution [29]. Other works used social media sentiment analysis to study user reactions to political campaigns [28], as an alternative to costly public opinion polls [7], to predict stock market prices [30], and to predict the results of elections [31]. In terms of studying emotional reactions to offline events, Hauthal et al. [19] used emojis to analyze the online reaction to Brexit. A recent work conducted emotion analysis of user reactions to online news [20], focusing on the relationship among news content, emotions and user engagement reactions. Another study investigates user reactions to news articles by predicting emotional audience reactions before and after the posts are published [32].
These works utilize user reactions such as comments, likes and shares to specific pre-defined news articles, and they focus more on how social media content and news articles influence emotional user engagement online. Different from the above studies, we are interested in understanding the online dynamics of emotions and moral sentiments in response to the continuous stream of real-life events, because affect is tightly related to opinions and induces offline actions, providing us with valuable information to study human behavior. We incorporate both event detection and emotion analysis techniques to detect, measure and explain emotional and moral reactions on social media. ## III Methods and Materials To understand the dynamics of online emotions, we propose a pipeline (Fig. 1) that detects, measures and explains online emotional reactions to offline events. Given a set of timestamped texts, such as a collection of tweets, we first perform emotion and morality detection on each tweet. We then calculate the daily fractions of tweets with each emotion and moral category to construct the time series of aggregate affect. Next, to detect reactions, i.e., changes in affect in response to offline events, we perform change point detection on each emotion and morality time series. We measure the magnitude of the change at each detected change point and perform topic modeling to explain what offline event triggered the specific online reaction. ### _Data_ The data used in this study was collected using Twitter's Filter API by specifying a geographic bounding box over a large metropolitan area. This method collects every tweet that is either geotagged within the bounding box (using the device's coordinates with the user's permission) or tagged using the Twitter "place" feature, where the user tags their location with a point of interest in a large US metropolitan area. The data was collected in real time, with the crawler operating during the entirety of the analysis period. We collected 16,912,165 tweets from 344,638 unique users. While we perform analyses on Twitter data, the proposed pipeline is generalizable to other social media platforms and news. ### _Emotion and Morality Detection_ We first measure the emotions and moral sentiments expressed in an individual tweet. For emotion detection, we use the state-of-the-art language model SpanEmo [33], fine-tuned on the SemEval 2018 Task 1e-c [34] dataset. This transformer-based model outperforms prior methods by learning the correlations among the emotions. Given the text of a tweet, the model returns the confidence of each emotion being present in the text. The model can measure a wide range of emotions at scale, including _anticipation_, _joy_, _love_, _trust_, _optimism_, _anger_, _disgust_, _fear_, _sadness_, _pessimism_ and _surprise_. In addition to emotions, we also quantify the moral sentiments of tweets using the moral foundations theory [17] along five dimensions, which are related to dislike of suffering (_care/harm_), dislike of cheating (_fairness/cheating_), group loyalty (_loyalty/betrayal_), respect of authority and tradition (_authority/subversion_), and concerns with purity and contamination (_purity/degradation_).
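As a rough sketch of how per-tweet predictions become the affect time series described here, the snippet below tags each tweet with a multi-label emotion classifier and aggregates daily fractions. The checkpoint name is a placeholder, since the paper's SpanEmo and custom-trained morality models are not assumed to be available under a public name, and the 0.5 decision threshold is an assumption.

```python
# Sketch: multi-label emotion tagging of tweets and daily-fraction time series.
import pandas as pd
from transformers import pipeline

clf = pipeline("text-classification",
               model="your-org/multilabel-emotion-model",  # placeholder checkpoint
               top_k=None)                                  # scores for every label

def label_tweets(df, threshold=0.5):
    """df has columns ['timestamp', 'text']; adds one 0/1 column per emotion label."""
    preds = clf(df["text"].tolist(), truncation=True)
    for row, scores in zip(df.index, preds):
        for s in scores:
            df.loc[row, s["label"]] = int(s["score"] >= threshold)
    return df

def daily_fractions(df):
    """Daily fraction of tweets expressing each emotion."""
    df = df.copy()
    df["date"] = pd.to_datetime(df["timestamp"]).dt.date
    emotion_cols = [c for c in df.columns if c not in ("timestamp", "text", "date")]
    return df.groupby("date")[emotion_cols].mean()
```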
We use the transformer-based pretrained language model BERT [35], and train it with a large amount of data from different sources, including the Moral Foundation Twitter Corpus dataset [22], the dataset of political tweets published by US congress members [36], a manually annotated COVID dataset [37], and the Extended Moral Foundations Dictionary data [38]. The large amount and variety of topics in our training data help mitigate the data distribution shift when applying the model to our test data [39]. Fig. 1: Pipeline to detect and measure online emotional reactions. After labeling each tweet with the different emotion and moral categories, we calculate the daily fractions of tweets to construct the time series. We evaluate the effectiveness of emotion and morality detection on a random subset of 850 tweets, which were annotated by five trained annotators. Table I shows the cross-annotator agreement measured by Fleiss' Kappa, which is comparable to that reported in prior works [22, 34]. We compare our emotion and morality detection methods with widely used dictionary-based methods, namely keyword matching using Emolex (which does not include a "love" category) for emotions and Distributed Dictionary Representations (DDR) [40] for morality. Our methods outperform the baselines for most categories (Table I), although the performance inevitably varies with the support for different categories, as also observed in previous studies [22, 41]. Despite the variation in model performance, prior research has validated that, when aggregating on the collective level, the time series of sentiments constructed with supervised deep learning detection and with dictionary-based methods have strong correlations with those from self-reports [42]. ### _Change Point Detection_ The time series of emotions and moral sentiments reveal the dynamics of aggregate affect on social media, including how people react to social phenomena. We define an emotional reaction as a change point in the corresponding time series. To detect such change points, we combine two popular methods. The first, the cumulative sum (CUSUM) method [43], aims to detect a shift of the mean in a time series. This method is good at detecting change points like the COVID-19 outbreak, which shifted the baseline of emotions and moral sentiments. To detect multiple change points, we use a sliding window strategy to scan the whole time series. We set the window size to four weeks and slide the window with a stride of five days for the best precision and recall. Another type of event, such as Valentine's Day, which manifests as a sharp, short-lived surge of emotions, can be better detected with Bayesian Online Change Point Detection (BOCPD) [44]. This method uses Bayesian inference to determine if the next data point is improbable, and is good at detecting sudden changes. We combine the results from both CUSUM and BOCPD. As a rule of thumb, we consider a change point to be significant if its confidence is higher than \(0.5\). We perform change point detection separately for each time series of emotion or morality, because different types of events may elicit different reactions; a simplified sketch of the sliding-window detection step is shown below. ### _Measuring the Magnitude of Reactions_ For each detected change point, we quantify the magnitude of the collective reaction as the percent change before and after it. We compute the baseline level before the change point as the mean of the time series over the two-week period before. Then, we measure two types of changes: short-term and long-term changes.
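The following is a simplified illustration of the sliding-window, CUSUM-style mean-shift detection referred to above, using the four-week window, five-day stride, and 0.5 confidence rule-of-thumb from the text; the specific test statistic and the bootstrap confidence estimate are assumptions rather than the authors' exact implementation.

```python
# Simplified sliding-window CUSUM-style change point detection.
import numpy as np

def cusum_stat(x):
    """Maximum absolute cumulative deviation from the window mean, and its position."""
    s = np.cumsum(x - x.mean())
    k = int(np.argmax(np.abs(s)))
    return np.abs(s[k]), k

def detect_change_points(series, window=28, stride=5, n_boot=200, conf_level=0.5):
    series = np.asarray(series, dtype=float)
    rng = np.random.default_rng(0)
    changes = []
    for start in range(0, len(series) - window + 1, stride):
        x = series[start:start + window]
        stat, k = cusum_stat(x)
        # bootstrap confidence: fraction of shuffled windows with a smaller statistic
        conf = np.mean([cusum_stat(rng.permutation(x))[0] < stat for _ in range(n_boot)])
        if conf > conf_level:
            changes.append((start + k, float(conf)))
    return changes

# e.g. change_points = detect_change_points(daily_fractions(df)["fear"].values)
```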
To calculate the short-term change, we compare the baseline to the extremum of the time series (the peak or dip value) in the two weeks after the change point and compute the percent change. To calculate the long-term change, we compare the baseline to the time series value two weeks after the event (we take a five-day average around the two-week mark). The size of the window is chosen to be two weeks so that enough observations are made, while remaining unlikely to be affected by another event earlier or later. ### _Topic Modeling_ We try to explain the changes in emotions detected by our method using topic modeling. We choose BERTopic [45], a transformer-based model that extracts highly coherent topics compared to the traditional topic model LDA [46]. We evaluate both methods on a set of 10% randomly selected tweets from our data, using different numbers of topics ranging from 10 to 50 in steps of 10. Over different runs, BERTopic gives higher NPMI [47] coherence scores (\(0.14\pm 0.01\)) compared to LDA (\(0.03\pm 0.01\)), and similar diversity [48] scores (\(0.75\pm 0.04\)) compared to LDA (\(0.76\pm 0.04\)). For each emotional reaction, we extract the topics of tweets that are tagged with that emotion or morality category. We apply BERTopic to tweets within the three-day time window [12] before and after the change point separately. For example, for the Black Lives Matter protests starting on 2020-05-26, we extract the topics from tweets posted between 05-23 and 05-25 to develop a baseline and then separately extract the topics from tweets posted between 05-26 and 05-28. By comparing the top 10 baseline topics before the change point with the top 10 topics appearing after the change point, we determine the new topics that emerged after the change point and that are possibly relevant to the event associated with it. BERTopic uses transformer-based models that take in whole sentences to account for contextual information. We first preprocess the tweets to remove URLs and name mentions, transform emojis to their textual descriptions, and split hashtags into individual words. We use the Sentence-BERT "all-MiniLM-L6-v2" model [49] to directly embed the processed tweets. After topic modeling, we remove English stopwords from the learned topic keywords. Table II shows some example topics associated with different events. For each topic we show the top three keywords. The newly emerged topics are highlighted in bold. For each emerging topic, we manually verify whether there is an associated offline event by examining the tweets belonging to this topic and by searching related news articles. Such manual verification is a necessary and common practice in the event detection literature [14]. For most reactions, regardless of how small or impactful, the identified topics clearly relate to an offline event (e.g. Table II, rows 1 and 4). For the second change point, the newly emerged topics point us to several different events. However, by examining the tweets belonging to these topics, we found that the Black Lives Matter protests were the most predominant event, and that other emerging topics such as "america, vote, trump" and "covid, coronavirus, tested" were related to the protests. Finally, there are also some change points for which we cannot identify meaningful emerging topics (e.g. row 3). We decide whether such a point is a false positive through further inspection of the tweets (see Section IV-B). ## IV Results ### _Online Emotional Reactions to Offline Events_ Time series of the aggregate affect from January to August 2020 (Fig.
2) shows complex dynamics with seasonal variation (weekly cycles in joy), short-term bursts (a spike in love on Valentine's Day), and long-term changes in emotions and moral sentiment. This time span represents a difficult period in the life of the city. In addition to the world-wide pandemic, which led to lockdowns all over the country by the middle of March, political primaries were also taking place during this time period, which also saw one of the largest social justice protests, triggered by the murder of George Floyd in police custody, as well as the death of a beloved sports figure. These developments had a profound impact on the city's population, as demonstrated by the many inflection points, rises and dips in emotions and moral sentiments. Fig. 2: Time series of emotions and moral sentiments from January 1 to August 1, 2020. We show the daily fraction of tweets with different affect labels. The notable peaks and dips in the time series can be associated with the external events marked as vertical lines. We run the proposed pipeline to detect and explain the online emotional reactions to various offline events. Table III shows that our method is able to identify not only large and impactful events such as the COVID-19 pandemic and the Black Lives Matter protests, but also smaller events such as earthquakes and baseball playoffs. We see the complex reactions to the COVID-19 pandemic in multiple dimensions of emotions and morality. The unsupervised method also enables us to discover reactions to smaller events that might be easily missed, such as a rise in subversion in response to Trump's tweets about Obamagate (event 11). We also show that running BERTopic on tweets posted near the event reveals the relevant topics very well. Furthermore, because we detect changes separately in each emotion, we can disentangle events based on different emotional reactions, even when they take place on the same day. For example, Trump's impeachment trial was associated with an increase in betrayal and subversion, MLK Day with joy, loyalty and fairness, and an earthquake with fear (events 6, 8 and 14 in Table III). ### _Evaluation of the Proposed Pipeline_ Our method is unsupervised and can automatically discover even small events that might be missed in the news. For the purpose of evaluation, however, we formulate detecting emotional reactions as a binary task. We consider a successfully detected change point (true positive) in a time series of emotion or moral sentiment to be one that we can successfully link to an offline event via at least one relevant topic (Section III-E). We detected 54 change points in total. Their confidence given by the change point detection method ranged from 0.65 to 1.00, with 85% having confidence scores above 0.9. In Table III we also show the date range of the change points for an event if it is detected in more than one emotion/morality category. Most detected change points are close to the actual event date. We consider a false positive to be a change point that cannot be explained by any topic (e.g. Table II, row 3). We found 10 false positives, giving a precision of 0.84. We searched news articles at these time points to investigate whether these false positives are due to the failure of BERTopic to identify meaningful topics or whether there were indeed no newsworthy real-life events. For all 10 false positives, we were not able to find significant news events.
We consider a false negative to be an emotional reaction that shows up as an obvious peak or dip in the time series and can also be related to an offline event, but is not detected by the change point detection methods. Figure 3 shows examples of false negatives, which are related to Easter, Mother's Day, Father's Day, and an earthquake. There are 14 false negatives, giving a recall of 0.79. In the future, we plan to combine change point detection and anomaly detection to mitigate these false negative cases. ### _Short-term and Long-term Changes in Affect_ Our proposed method enables us to study collective reactions to an event along multiple dimensions of affect. For example, the Black Lives Matter protests were associated with 16 different emotional and moral changes. We quantify the change in emotions and moral sentiments around different events as the percent change in the corresponding collective affect before and after the event. Figure 4 shows these changes for four of the more impactful events. Consistent with our intuition, Kobe Bryant's death was associated with a short-term increase in negative emotions like _pessimism_ and _sadness_ and a decrease in _joy_, as well as a short-term rise in moral language related to _care_ and _harm_. On the other hand, Valentine's Day brought a short-term increase of positive emotions like _love_ and a decrease in the negative emotions _anger_ and _disgust_. There were no long-term changes associated with these events. The COVID-19 outbreak triggered a cascade of events aimed at mitigating the pandemic that were associated with complex short-term and long-term changes in emotions and moral sentiments. People expressed more negative emotions such as _anger_, _disgust_, _sadness_, and, more significantly, _fear_, both in the short term and in the long term. Positive emotions like _joy_ and _love_ simultaneously decreased. People also expressed more moral sentiments like _care_ in tweets such as "Stay safe. We thank you", as well as more _harm_ blaming the virus. Interestingly, the moral concerns about _authority_ also increased, possibly because new policies, including lockdowns to mitigate the pandemic, were put in place (e.g. "I think governor Newsom is doing a great job, we are complying with his guidance and the CDC at my hospital. Thank you governor"), while some were critical of the government's response, e.g., "we need leadership not a politician". The Black Lives Matter protests were also associated with complex short- and long-term emotional and moral reactions. We observe increases in negative emotions and decreases in positive emotions. In addition, compared to the other three events, we observe greater increases in different moral sentiments associated with the BLM protests. The moral concerns about _fairness_ and _betrayal_ increased especially strongly. People keenly felt the injustice and betrayal in George Floyd's murder. ### _Disentangling COVID-related Emotions_ The COVID-19 pandemic was associated with complex and long-term emotional changes. Here we use this example to further show the benefit of disentangling emotional reactions by disaggregating topics. From the BERTopic model, we discover topics that are directly related to COVID-19, for example the topic "coronavirus, corona, covid". More interestingly, we discover many other topics that reflect the impact of COVID-19 on people's lives. Figure 5 shows some of these topics and the change of their tweet frequencies two weeks before and after the WHO pandemic announcement on March 11, 2020.
People not only talked about the negative aspects of the pandemic, such as grocery shortages, unemployment, and housing and rent issues, but also about leisure activities during quarantine. Because large gatherings were forbidden during lockdowns, the Disneyland and ticket-selling topics decreased in frequency, but some at-home activities like cooking and watching TV increased in frequency. Fig. 4: Short-term and long-term changes of emotions and moral sentiments around four events. The short-term change compares the peak/dip value after an event to the baseline level before the event. The long-term change compares the time series value around two weeks after the event to the baseline level. Fig. 3: Time series of tweets expressing the _LOVE_ emotion. Events like Valentine's Day, the COVID-19 pandemic and the Black Lives Matter (BLM) protests are true positives identified (red lines). Smaller spikes, such as Easter, Mother's Day and Father's Day, are false negatives that were not detected. Other discussions include school closures and the switch to remote learning. We select the four top categories discussed and group related topics into these categories: directly COVID-related topics, grocery panics, leisure activities, and school and education. We study emotions and moral expressions aggregated over all the tweets, as well as in these topic categories. Figure 6 shows the fraction of tweets, among all tweets and in each of these four categories, that express each emotion or moral sentiment. We find that aggregating emotions from all tweets can give misleading impressions. Positive emotions like _joy_ were highly expressed in all tweets (aggregated), but in fact they were mostly dominated by people talking about leisure activities. In COVID-related tweets, few positive emotions were expressed. _Anger_ and _disgust_ were higher in topics about grocery panics than in topics directly related to COVID. Another example is the expression of the _care_ and _harm_ moral sentiments. Their expressions were diluted by other topics in the aggregate tweets. By disaggregating, we see that they were highly expressed in directly COVID-related tweets. These results suggest that during times of maximal crisis and uncertainty, people find outlets for positive emotions. They also demonstrate the importance of disaggregating by topic when studying specific issues. ## V Conclusion In this work, we have demonstrated the effectiveness of an unsupervised method to detect and measure public reactions to newsworthy events. We applied our method to a large corpus of tweets drawn from the population of a large metropolitan area and disentangled the dynamics of online emotions during a time period punctuated by complex social, health, and political events. We showed that our method can discover significant and less significant events and measure the emotional and moral reactions to these events. To further understand the complex impact of the COVID-19 outbreak, we disaggregated COVID-related tweets and discovered topics directly related to the virus and topics related to changes in lifestyle, including unemployment, grocery panics and education. The emotions expressed on these different topics suggest that people had negative feelings and thoughts during the height of the pandemic, but were also searching for the positive and holding on to optimism.
Together, these results suggest the potential of using social media data for tracking public reactions to events, as well as for discovering significant events that may have been missed by traditional news sources. **Limitations and Future Works:** When there is a change point that is a dip, we cannot use topic modeling to explain it, as a dip in an emotion or moral sentiment indicates a decrease in discussion related to an event. However, a decrease in some emotions is usually accompanied by an increase in other emotions, and we can study the tweets tagged with the surging emotions to understand the topics. Another area for improvement is how we verify an offline event from the relevant topics obtained from BERTopic. Currently we rely on manual verification by reading tweets and searching for related news articles. This is still a common practice in the event detection area. In the future, we plan to incorporate a more automated method, such as obtaining relevant news articles through a Google Search API. In future work we also plan to move toward causal analysis. In principle, the offline events _cause_ the online emotional and moral reactions. However, there are many confounding factors in this causal process. For example, the same event might have very different effects on heterogeneous online populations. We plan to expand this work by performing causal analysis to further disentangle the causal relationships between offline events and online emotions, and to measure the heterogeneous effects on different online populations.
2305.11722
Geometric Learning of Knot Topology
Knots are deeply entangled with every branch of science. One of the biggest open challenges in knot theory is to formalise a knot invariant that can unambiguously and efficiently distinguish any two knotted curves. Additionally, the conjecture that the geometrical embedding of a curve encodes information on its underlying topology is, albeit physically intuitive, far from proven. Here we attempt to tackle both these outstanding challenges by proposing a neural network (NN) approach that takes as input a geometric representation of a knotted curve and tries to make predictions of the curve's topology. Intriguingly, we discover that NNs trained with a so-called geometrical "local writhe" representation of a knot can distinguish curves that share one or many topological invariants and knot polynomials, such as mutant and composite knots, and can thus classify knotted curves more precisely than some knot polynomials. Additionally, we also show that our approach can be scaled up to classify all prime knots up to 10-crossings with more than 95\% accuracy. Finally, we show that our NNs can also be trained to solve knot localisation problems on open and closed curves. Our main discovery is that the pattern of "local writhe" is a potentially unique geometric signature of the underlying topology of a curve. We hope that our results will suggest new methods for quantifying generic entanglements in soft matter and even inform new topological invariants.
Joseph Lahoud Sleiman, Filippo Conforto, Yair Augusto Gutierrez Fosado, Davide Michieletto
2023-05-19T14:58:18Z
http://arxiv.org/abs/2305.11722v2
# Learning Knots Beyond Topological Invariants ###### Abstract **Knots are deeply entangled with every branch of science. For more than a hundred years, the biggest open challenge in knot theory has been to formalise a knot invariant that can unambiguously distinguish any two knotted curves [1; 2; 3]. At the same time, the intuitive hypothesis that the geometry of a curve may encode information on the underlying curve topology remains an open conjecture [4]. Here, we propose a neural network (NN) approach that takes as input a geometric representation of a knotted curve and can classify knots more accurately than any existing knot polynomial. More specifically, we show that NNs trained using a "local writhe" representation can distinguish knots that share one or many topological invariants and knot polynomials, such as knot mutants. We also show that our approach can be scaled up to classify all knots up to 10-crossings with 95% accuracy. Our results not only pave the way to create artificial intelligence tools that can unambiguously classify and eventually generate curves with any knot topology but, perhaps more importantly, identifies local 3D writhe as a geometric signature of knot topology potentially leading to new topological invariants.** ## Main Knots are fascinating objects that have captured the attention of humans for centuries. From Incas who used knotted Quipus to keep records [3] to Lord Kelvin who hypothesised elements were made of knotted ether [5] and to sailors and climbers whose lives often rely on some knotted rope. Knots are also deeply intertwined with history and art and often carry mystical meaning. The human obsession with knots brought Peter Guthrie Tait to compile the first knot tabulation of up to 10 crossings by hand [3]; currently more than one million unique knots up to 16 crossings have been tabulated using computer programs [6]. The rigorous proof that the early tabulated knots did not contain duplicates required the development of so-called topological invariants and knot polynomials, the first of which was the Alexander polynomial [1], followed more recently by the Jones and HOMFLY polynomials [3; 7]. Knot polynomials are mathematical constructs that can be computed on knot diagrams and are invariant under smooth deformations of a curve, i.e. that preserve the curve topology. A famous exemplification of the power of knot polynomials came when two 10-crossings knots, erroneously classified as topologically distinct for 74 years, were found by Perko to in fact be the same knot [3]. In contrast to this, however, some knots share the same topological invariants and cannot be distinguished by these standard measures. Famously, the 11-crossing Conway knot has the same Alexander polynomial as the unknot and shares the same Jones polynomial of its mutant, the Kinoshita-Terasaka (KT) knot [8]. In fact, in general, all mutants of a knot have the same HOMFLY polynomials and the same hyperbolic volume [3]. Alongside the development of knot polynomials, several attempts were made to identify a relationship between a specific geometrical embedding of a knot and its underlying topology. This relationship differs from the one sought between so-called geometric and algebraic invariants [9; 10], e.g. the hyperbolic volume of a knot and its Jones polynomial. Perhaps the most famous result in this direction is the Fary-Milnor theorem stating that the total absolute curvature of non-trivially knotted curves must be greater than \(4\pi\). 
This result only imposes a weak constrain on the topology of the underlying curve, as the unknot can itself have large curvature due to fluctuations of its contour. Likewise, Gauss proposed a formula to determine the number of times closed curves are linked with each other based only on the geometry of the curves [11]. A generalisation of the Gauss linking integral applied to a single closed curve is associated with its writhe [11] and average crossing number [12; 13]. Inspired by the intuition that writhe captures the level of geometrical entanglement of a curve with itself, we define a local segment-to-segment (StS) writhe generalisation \[\omega_{StS}(x,y)=\frac{(\mathbf{t}(x)\times\mathbf{t}(y))\cdot(\mathbf{r}(x)-\mathbf{r}(y))} {|\mathbf{r}(x)-\mathbf{r}(y)|^{3}}\,, \tag{1}\] where \(\mathbf{r}(x)\) and \(\mathbf{t}(x)\) are the 3D position of, and the tangent at, segment \(x\) belonging to the closed curve \(\gamma\), respectively. Intuitively, Eq. (1) captures the overall magnitude and the chirality of the entanglement between segment \(x\) and segment \(y\) (Fig. 1A-B). The quan ity \(\omega_{StA}(x)=\oint_{\gamma}\omega_{StS}(x,y)dy\) is the local segment-to-all (StA) writhe that characterises how geometrically entangled segment \(x\) is with respect to the rest of the curve. The StA writhe, \(\omega_{StA}(x)\), is a 1D geometrical representation of a knot that we hypothesise may display some features that are topology-dependent. Complex pattern recognition is a task that naturally lends itself to being addressed using a machine learning approach; we thus ask if a neural network trained at recognising patterns within \(\omega_{StA}(x)\) is able to solve ambiguous knot classification problems. To do this we build feed forward and recurrent (long-short term memory, LSTM) neural networks (FFNN and RNN, respectively) and train them using \(\sim 5\,10^{5}\) statistically uncorrelated and pre-labelled conformations of knotted bead-spring polymers which we simulated using LAMMPS [14]. To generate these configurations, we initialised a bead-spring polymer with known topology and \(N=100\) beads (unless otherwise stated) using KnotPlot ([https://knotplot.com/](https://knotplot.com/)), and subsequently evolved the polymer configurations via Langevin dynamics in an implicit solvent and fixed temperature using a Kremer-Grest model [15] to preserve polymer topology (see SI for more details). We confirmed that the topology was conserved either by computing their Alexander determinant via KymoKnot ([http://kymoknot.sissa.it](http://kymoknot.sissa.it)) [16] or, when ambiguous, visually. The NNs are built with an input layer that was determined according to the representation being used; e.g., the Cartesian (XYZ) coordinate representation used 3 neurons (one for each dimension) per polymer bead. The other local input features used one neuron per bead, while the StS writhe feature requires \(N\times N\) input neurons. The optimal number of hidden layers, hidden units, learning rate and batch size were determined via an automated hyperparameter tuning (_KerasTuner_[17]). Unless otherwise stated, our NNs contain 4 hidden layers, with around \(4\,10^{5}\) trainable parameters. The output layer consists of \(C\) output neurons, corresponding to the \(C\) knot types being classified, each implemented with a softmax activation function in order to return the probability that a given input is a certain knot type. 
We took the sparse categorical cross-entropy as the loss function, as it is the most appropriate for individual class probabilities and integer target labels, i.e. our knot types (Fig. 1D). We first tackle a 5-knot classification problem with the 5 simplest knots, which can be satisfactorily solved using NNs trained on center-of-mass-corrected Cartesian coordinates (XYZ) or adjacent bead input features [18; 19]. In line with these previous works, we find that our NNs can accurately predict the correct topology of around 80% of unseen conformations (80.1% with a FFNN and 86% with a recurrent NN architecture, Fig. 1E).

Figure 1: **A** Examples of equilibrium knotted polymer conformations used as the training set. We consider the 5 simplest knots: \(0_{1}\), \(3_{1}\), \(4_{1}\), \(5_{1}\) and \(5_{2}\). The colors follow the knot contour from red to white and then blue. **B** Graphical representation of StS writhe \(\omega_{StS}(x,y)\). **C** Examples of patterns for \(\omega_{StA}(x)\) for three different knots. **D** Graphical representation of the (feed-forward) network employed. The input layer contains \(N\) (or \(3N\)) neurons corresponding to the size of the input feature representation and the output layer yields a probability for each knot class. **E** Accuracy score, tested on unseen polymer conformations for different input features. The StA writhe classifies the 5 simplest knots with 99.9% accuracy. **F** Confusion matrices obtained by training the network with XYZ and StA writhe input features.

These values are lower than the ones reported in Ref. [18] because we use a smaller training dataset. We then trained the same NNs using a range of other geometric features, such as local curvature, density and 1D writhe [20] (see SI for details), and found that most of them perform more poorly, or at best equally, with respect to the XYZ and adjacent representations (Fig. 1E). A similar outcome was also obtained in Ref. [19]. In striking contrast, models trained using \(\omega_{StA}(x)\) outperform all other models and are found to achieve 99.9% accuracy, irrespective of whether FFNN or RNN architectures are used (we also tested random forests, see SI). Additionally, these networks reached the early stopping criterion in about 50% fewer epochs than the ones trained with the XYZ representation. When plotted as a confusion matrix, the results clearly show that the XYZ input feature struggles to classify knots with a similar number of crossings, e.g. the \(5_{1}\) and \(5_{2}\) knots. On the contrary, our local 3d writhe feature generates a near-perfect confusion matrix (Fig. 1F). Given that our NN approach can distinguish knots that share some knot invariants (e.g. the crossing number in the case of the 5-crossing knots), we now ask if our NN can also distinguish more complicated knots that share knot polynomials. To this end, we first consider three knots with identical Alexander polynomial, the square, granny and \(8_{20}\) knots (see Fig. 2A). The first two knots are composites of the trefoil with different chirality, and hence have 6 crossings, whereas the latter is an 8-crossing knot. Once again, we train a FFNN using the \(\omega_{StA}(x)\) profiles (Fig. 2B) and obtain a striking accuracy of 99.98%, compared with 91.8% obtained by training with COM-shifted Cartesian coordinates (Fig. 2C). We then asked if our NN could also perform in situations where the knots share multiple knot polynomials.
As mentioned above, mutant knots share the same hyperbolic volume and several knot polynomials, including HOMFLY. We therefore performed simulations of the Conway (K11n34) knot and one of its mutants, the Kinoshita-Terasaka (KT, K11n42) knot. These 11-crossing knots have a number of identical knot invariants, as they share the same Jones, Alexander and Conway polynomials [8]. Intriguingly, the latter two are also shared with the unknot. Thus, we generated \(10^{5}\) statistically uncorrelated conformations of 200-bead-long polymers with the Conway, KT or unknot topologies (Fig. 2D) and trained our FFNN to classify these different topologies using either the COM-subtracted XYZ or the \(\omega_{StA}(x)\) (Fig. 2E) representation. When tested on unseen conformations, we find that the NN trained with XYZ input features cannot distinguish between the Conway and KT knots, but both are accurately distinguished from the unknot (Fig. 2F).

Figure 2: **A** Snapshots of three knots with identical Alexander polynomial: square (\(3_{1}^{+}\#3_{1}^{-}\)), granny (\(3_{1}^{+}\#3_{1}^{+}\)) and \(8_{20}\) knots. **B** Examples of StA writhe patterns from the three knots. **C** Confusion matrices obtained from a 3-class classification problem training a FFNN with XYZ (91.7% accuracy) or StA writhe (99.9% accuracy) features. **D** Snapshots of Conway (blue) and KT (orange) knots. **E** Examples of StA writhe patterns, including the one from the unknot (black). **F** Confusion matrices obtained from a 3-class classification problem training a FFNN with XYZ (67% accuracy) and StA writhe (99.5% accuracy).

On the other hand, we find that the NN trained with the local segment-to-all writhe once again perfectly disentangles the three knots with 99.6% accuracy (Fig. 2F). We therefore conclude that the NN trained with \(\omega_{StA}(x)\) has the potential to unambiguously classify knots that share multiple knot polynomials, including knot mutants. In turn, we argue that this capability may be related to the fact that our \(\omega_{StA}(x)\)-trained NN may contain a new topological invariant that outperforms existing knot polynomials. To understand to what extent StA-trained NNs can be used to classify knotted curves, we decided to train our NN on increasingly large data sets, up to all 10-crossing knots. Note that these 250 knots include 30 that share the same Alexander polynomial (see SI for a table), rendering them challenging to classify using standard tools such as Kymoknot. We first noticed that XYZ-trained NNs rapidly lost accuracy when we included knots with 6 or more crossings (Fig. 3A-B). On the contrary, the CMs from StA-trained NNs remained more diagonal. In spite of this, we noticed that the knots \(5_{1}\) and \(7_{2}\) also created some confusion in the StA-trained NNs, which dropped the accuracy to 98% (Fig. 3C-D). We noticed that this was due to \(\omega_{StA}(x)\) displaying similar patterns between the two knots (Fig. 3C). To further distinguish these (and potentially other knots with similar \(\omega_{StA}(x)\) curves) we then considered our original proposition of local StS writhe (Eq. (1)), reported in Fig. 3E, for the same \(5_{1}\) and \(7_{2}\) knot configurations used to compute \(\omega_{StA}(x)\) in Fig. 3C. Interestingly, the \(\omega_{StS}(x,y)\) heat maps appear very different, albeit, when integrated along \(y\) around the polymer contour, they yield similar StA curves.
We therefore trained our NNs using the StS writhe representation of the knots and re-established 99.8% accuracy (Fig. 3G). Ultimately, the NNs trained using the \(\omega_{StS}(x,y)\) geometrical representation of knots yield the most accurate model, achieving 95% accuracy for a 250-class classification task including all knots up to 10 crossings. In comparison, the XYZ-trained and StA-trained NNs achieved 17% and 72%, respectively (Fig. 3G).

Figure 3: **A** Two example conformations of \(5_{1}\) and \(7_{2}\) knots. **B** The XYZ-trained NN yields 63.8% accuracy and a rather sparse confusion matrix. **C** Examples of \(\omega_{StA}(x)\) curves for the two knots, showing a degree of similarity between the pattern of maxima and minima. **D** The \(\omega_{StA}(x)\)-trained NN achieves 98% accuracy and the confusion matrix shows that \(5_{1}\) and \(7_{2}\) are the knots that are most confused with each other. **E-F** Examples of the \(\omega_{StS}(x,y)\) geometric feature for the two knots corresponding to the \(\omega_{StA}(x)\) profiles shown in **C**. **G** Accuracy for a 5-class classification problem with increasing average complexity of the knots being classified. **H** Accuracy as a function of the number of knot classes to be distinguished.

## Conclusions

In conclusion, we have discovered that local 3d writhe (Eq. (1)) is a geometric descriptor of a curve that contains information about the underlying topology. We showed that NNs can access this information to classify the curve topology more accurately than they can by using the Cartesian coordinates of the curve's segments (Fig. 1). We argue that NNs trained on the local 3d writhe representation of knot conformations may numerically encode a new topological invariant that is more powerful than current ones. This conjecture is supported by the fact that even a simple FFNN architecture can distinguish the topology of knot mutants that share several knot polynomials (Fig. 2). Our AI-driven approach can therefore classify complex topologies that would be otherwise impossible to disentangle using current tools. Finally, we show that our NN can naturally be turned into a tool to accurately classify a large number of random curves belonging to otherwise ambiguous knot topologies (Fig. 3). This approach naturally lends itself to being applied to protein folding [21], DNA [22; 23] and, in general, entanglements in complex systems [24; 25; 26].

## Methods

Knotted curves are modelled as semiflexible bead-spring polymers with \(N=100\) beads of size \(\sigma\). The beads interact via a repulsive Lennard-Jones potential, and are connected by FENE springs [15]. A bending rigidity is added via a Kratky-Porod potential. Each bead's motion is then evolved via a Langevin equation with friction and noise terms that obey the fluctuation-dissipation theorem. The numerical evolution of the Langevin equation is done with a velocity-Verlet scheme in LAMMPS [14]. The codes to generate these conformations are open access at [https://git.ecdf.ed.ac.uk/taplab/mlknotsproject](https://git.ecdf.ed.ac.uk/taplab/mlknotsproject). We sample these simulations every \(10^{5}\) LAMMPS steps (or \(10^{4}\) Brownian times) to ensure that the conformations are statistically independent and uncorrelated (see SI for more details). The NNs were built using TensorFlow [27] and the network hyperparameters were optimised with Keras-Tuner [17] using the XYZ representation (see SI for the values of hyperparameters used).
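A minimal TensorFlow sketch of the classifier described above is given below. It follows the setup in the text (a feed-forward network with four hidden layers, a softmax output over the knot classes, sparse categorical cross-entropy, and early stopping as described in the Methods), but the layer widths, learning rate and batch size are placeholders standing in for the KerasTuner-optimised values reported in the SI.

```python
import tensorflow as tf

def build_knot_classifier(n_features: int, n_classes: int,
                          hidden=(256, 256, 128, 64), lr=1e-3):
    """Feed-forward knot classifier: StA writhe (N values), XYZ (3N) or StS (N*N) input,
    softmax probabilities over the knot classes as output."""
    model = tf.keras.Sequential([tf.keras.Input(shape=(n_features,))])
    for units in hidden:                       # four hidden layers, widths are placeholders
        model.add(tf.keras.layers.Dense(units, activation="relu"))
    model.add(tf.keras.layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Early stopping as in the Methods: a minimum improvement of 0.001 over 10 epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_accuracy",
                                              min_delta=0.001, patience=10,
                                              restore_best_weights=True)

# Usage (X holds the chosen feature representation, y the integer knot labels):
# model = build_knot_classifier(n_features=X_train.shape[1], n_classes=5)
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=500, batch_size=64, callbacks=[early_stop])
```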
We also implemented early stopping with a minimum of 0.001 accuracy improvement over 10 epochs. Datasets were randomly partitioned into training-validation-testing in a 90%/2.5%/7.5% split, whilst maintaining equal proportions of each knot type.

## Acknowledgements

DM thanks the Royal Society for support through a University Research Fellowship. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 947918, TAP). JLS thanks the Physics of Life network for a student summer bursary in 2021, which provided initial funds for this research. The authors thank Enzo Orlandini, Marco Baiesi, Pawel Dabrowski-Tumanski, Ken Millett, and Luigi del Debbio for insightful discussions.
2304.10715
Flares from merged magnetars: their prospects as a new population of gamma-ray counterparts of binary neutron star mergers
Long-lived massive magnetars are expected to be remnants of some binary neutron star (BNS) mergers. In this paper, we argue that the magnetically powered flaring activities of these merged magnetars would occur dominantly in their early millisecond-spin-period phase, which lasts on the timescale of days. Such flares endure significant absorption by the ejecta from the BNS collision, and their detectable energy range is 0.1-10 MeV, with a time-lag of $\sim$ days after the merger events indicated by the gravitational wave chirps. We estimate the rate of such flares in different energy ranges, and find that there could have been ~0.1-10 cases detected by Fermi/GBM. A careful search for $\sim10$ millisecond spin-period modulation in weak short gamma-ray bursts (GRBs) may identify them in the archival data. The next generation of MeV detectors could detect them at a mildly higher rate. The recent report of quasi-periodic oscillations found in two BATSE GRBs should not be considered as cases of such flares, as they were detected in a lower energy range and with a much shorter-period spin modulation.
Shu-Xu Yi, Zhen Zhang, Xilu Wang
2023-04-21T02:56:31Z
http://arxiv.org/abs/2304.10715v2
# Flares from merged magnetars: ###### Abstract Long-lived massive magnetars are expected to be remnants of some binary neutron star (BNS) mergers. In this paper, we argue that the magnetic powered flaring activities of these merged magnetars would occur dominantly in their early millisecond-period-spin phase, which is in the timescale of days. Such flares endure significant absorption by the ejecta from the BNS collision, and their detectable energy range is from 0.1-10 MeV, in a time-lag of \(\sim\) days after the merger events indicated by the gravitational wave chirps. We estimate the rate of such flares in different energy ranges, and find that there could have been 0.1-10 cases detected by Fermi/GBM. A careful search for \(\sim 10\) milliseconds spin period modulation in weak short gamma-ray bursts (GRBs) may identify them from the archival data. Future MeV detectors can detect them at a rate from a few to tens per year. The recent report on the Quasi-Period-Oscillation found in two BASTE GRBs should not be considered as cases of such flares, for they were detection in a lower energy range and with a much shorter period spin modulation. ## 1 Introduction Magnetars are a kind of neutron stars (NSs) which have extremely strong magnetic fields. The magnetar's magnetic field can be as strong as \(\sim 10^{13-15}\) G (Ferario and Wickramasinghe, 2008; Rea and Esposito, 2011; Mereghetti et al., 2015; Turolla et al., 2015; Kaspi and Beloborodov, 2017), while that of an ordinary NS is \(\sim 10^{10-12}\) G (but see "low-magnetic field" magnetars (Rea et al., 2010; Turolla and Esposito, 2013)). The typical radiation activities are believed to be powered by the huge energy reservoir in the magnetic fields of magnetars, rather than their rotational energy or gravitational energy as those in spin powered or accretion powered NSs. Such magnetars radiation activities were observed as anomalous X-ray pulsars (AXPs) and Soft Gamma-ray Repeaters (SGRs). AXPs appear to be isolated pulsars with X-ray emission, whose spin down luminosity are thought to be insufficient to power their observed luminosity (Fahlman and Gregory, 1981; Gavriil et al., 2002; Kaspi and Gavriil, 2004); SGRs are thought to be magnetars which give off bursts in gamma-ray at irregular time intervals (Golenetskii et al., 1984; Norris et al., 1991; Hartmann, 1995; Thompson and Duncan, 1995; Mereghetti, 2008). Besides, there are rare cases, where much more energetic flares are emitted from magnetars, which are referred to as "Giant Flares" (Palmer et al., 2005; Hurley et al., 2005; Minaev and Pozanenko, 2020; Zhang et al., 2020; Roberts et al., 2021; Svinkin et al., 2021). The latter two flaring activities are believed to originate from magnetic energy releasing from occasional magnetic field recombination of magnetars. There are various theories to explain the underlying triggers of such recombination. Following the dichotomy of Sharma et al. (2023), the first class of mechanisms attributes the trigger to crustal destructive or defective events (the "star quake" paradigm, see models e.g., Blaes et al., 1989; Thompson and Duncan, 1996; Levin, 2006; Beloborodov, 2020; Bransgrove et al., 2020; Yuan et al., 2020); while the second attributes to the reconfiguration in the twisted magnetosphere due to some instabilities (the "solar flare" paradigm, e.g., Lyutikov, 2003; Komissarov et al., 2007; Ripperda et al., 2019; Mahlmann et al., 2019). 
Unlike those isolated evolved magnetars, there is a population of magnetars that were born in the remnants of binary neutron star (BNS) collisions. We refer to these magnetars as merged-magnetars, which are the focus of this manuscript. It is widely believed that, BNS collision, if the remnant is not massive enough to cause a prompt collapse into a black hole (BH), will result a massive magnetar in millisecond time scale (Duncan & Thompson 1992; Usov 1992; Thompson 1994; Yi & Blackman 1998; Blackman & Yi 1998; Kluzniak & Ruderman 1998; Nakamura 1998; Spruit 1999; Wheeler et al. 2000; Ruderman et al. 2000). Such a magnetar inherits most of the orbital angular momentum of the progenitor NSs, therefore possesses short spin period of milliseconds initially, much shorter than those of typical magnetars of seconds. Because of their faster spin, larger mass and younger age, it's intuitively to suspect that they are even less stable than isolated evolved magnetars, and therefore could be more likely to emit gamma-ray flares. In this work, we will investigate the possibility that flares from merged-magnetars be identified, and especially as electromagnetic wave counterparts (EMC) of gravitational wave (GW) signals of BNS mergers. The manuscript is arranged as follows: In section II, we give semi-quantitative arguments that the flare activities should dominantly occur in the merged magnetar's early phase. Then in the next section, the detectable event rate of merged-magnetar flares is estimated from a population of BNS mergers. In section III, we consider their potential to be identified as EMC of GW, following by a section where the absorption from ejected matter is taken into consideration. We discuss several relevant aspects and summarize the main findings in the last two sections. ## 2 Flare Mechanisms In the "star quake" paradigm, where gamma-ray flares are triggered by crustal defective events, when the magnetar is fast spinning down due to the gigantic magnetic breaking torque, the centrifugal force decreases rapidly. The crust of the NS will re-adjust to its new balance configuration, where the centrifugal force is counteracting against the self-gravity. In this re-configuring process, crustal defective events can be expect to be much frequent than when the NS entered the later slowly spinning-down stage (such spin-down induced star quake was first discuss by Baym & Pines 1971); in the second paradigm, where gamma-ray flares are attributes to the instabilities in the magnetosphere, the boundary of the magnetar's magnetosphere (near the light cylinder, whose radius \(R_{\rm LC}=cP\), where \(c\) is the velocity of light, and \(P\) is the spin period) expand fast as its spin period rapidly increase. The magnetic field lines near the boundary will have to re-adjust accordingly, and thus to be expect to be more likely to give-off gamma-ray flares1. In fact, from our following semi-quantitative estimation we show that, the rate of gamma-ray flares in the fast spinning-down millisecond period phase is much higher than its second period phase, so that the total flare energy released in the former phase (with the time scale of days) is comparable or higher than the latter phase (with the time scale of \(10^{5}\) years). Footnote 1: unlike in those conventional models in the β€œsolar flare” paradigm, where the magnetic twist leaks from the NS interior into the magnetosphere, through the slow crustal deformation, here the magnetic twist fast expands along with the magnetosphere. 
Therefore we refer to it as the β€œmagnetosphere instability” paradigm for distinction In the crust crack scenarios, let us assume that each crustal destructive event which is energetic enough to Figure 1: Illustrations of two scenarios in which the fast spinning-down millisecond magnetar is more probable to have gamma-ray flares trigger a magnetar gamma-ray flare has a characteristic energy \(E_{\rm crack}\). Denote the change in the centrifugal force as \(\Delta F_{\rm cent}\) in a small interval of time \(\Delta t\). The corresponding linear deformation of the NS is \(\Delta l\), which we suppose is proportional to \(\Delta F_{\rm cent}\) as \[\Delta l=\frac{\Delta F_{\rm cent}}{\kappa},\] where \(\kappa\) is the elastic factor of the crust. The work done by the gravity is thus: \[\Delta E=\frac{\Delta F_{\rm cent}}{\kappa}F_{\rm G}, \tag{1}\] where \(F_{\rm G}\) denote the gravitational force acting on the crust, which remains almost constant as the deformation is small. If we equalize the work done by gravity with that released in crust cracks, we have the following equation: \[\frac{\Delta F_{\rm cent}}{\kappa}F_{\rm G}=\Delta N_{\rm crack}E_{\rm crack}, \tag{2}\] where \(\Delta N_{\rm crack}\) is the number of crust cracks in \(\Delta t\), where their ratio is the magnetar gamma-ray flare rate \[R_{\rm B}=\frac{\Delta N_{\rm crack}}{\Delta t}\] , which, according to equation (2), has the following proportional relationship: \[R_{\rm B}\propto\dot{F}_{\rm cent}\propto\frac{\dot{P}}{P^{3}}. \tag{3}\] Note that if the spinning-down torque is dominated by the magnetic dipole braking2, then we have: Footnote 2: GW braking will only dominate over the magnetic braking when the spin period is less than 0.1 s (Usov 1992; Blackman & Yi 1998). In this case, GW braking will fast spin-down the magnetar to a regime where the magnetic braking takes over (Zhang & Meszaros 2001). \[B_{\rm s}^{2}\propto P\dot{P}, \tag{4}\] where \(B_{\rm s}\) the surface magnetic field strength, we assume to be an constant. As a result, equation (3) becomes: \[R_{\rm B}\propto\frac{1}{P^{4}}. \tag{5}\] Now, since in the first phase, the spin period is in the order of \(\sim 10\) ms, and in the second phase it is \(\sim\)s, the \(R_{\rm B}\) in the first phase can be eight orders of magnitude larger than that in the second phase. On the other hand, the time-span of the first phase is about \(10^{-8}-10^{-7}\) of that of the second phase. Therefore, the magnetar gamma-ray flare energy releasing in the first phase is the same or one order of magnitude larger than that in the second phase, as we claimed in above. If we are in the magnetosphere instability scenarios, in a short time interval \(\Delta t\), the boundary of the magnetosphere (where close and open magnetic field lines transit) expand in distance: \(\Delta R_{\rm LC}\). The volume which has been swept is: \[\Delta V=4\pi R_{\rm LC}^{2}\Delta R_{\rm LC}. \tag{6}\] The volume times the magnetic field energy density is the energy got evolved. This energy is likely to be release by process such as magnetic field re-connection near the light cylinder. We have: \[\Delta E\propto B_{r=R_{\rm LC}}^{2}R_{\rm LC}^{2}\Delta R_{\rm LC}, \tag{7}\] where \(B_{r=R_{\rm LC}}\) is the magnetic field strength at the light cylinder. 
For a dipole magnetic field, \[B_{r=R_{\rm LC}}\propto\frac{B_{\rm s}}{R_{\rm LC}^{3}}.\] Consequently, equation (7) can be reformed into: \[\Delta E\propto B_{\rm s}^{2}\frac{\Delta R_{\rm LC}}{R_{\rm LC}^{4}}\propto \frac{\dot{P}}{P^{4}}\propto\frac{1}{P^{5}}. \tag{8}\] Using the similar argument as in the crust crack scenarios, we can see the ratio of the \(R_{\rm B}\) between the first and second phase can be ten orders of magnitude, and thus the ratio of the corresponding total energy releasing can be \(10^{3}\) in the magnetosphere instability scenarios. ## 3 The Rate of Flares from the Population of Merged Magnetars Define the magnetar flare number density distribution from a single merged-magnetar as: \(n(\tau,E)\), where \(\tau\) is the age of the magnetar, and \(E\) is the energy release during the flare. The total number of flares above some certain energy limit (\(E_{\rm limit}\)) during the life time of the magnetar is: \[N_{\rm B}=\int_{E_{\rm limit}}^{\infty}\int_{0}^{\infty}n(\tau,E)d\tau dE. \tag{9}\] and the total energy released is: \[E_{\rm B}=\int_{0}^{\infty}\int_{0}^{\infty}n(\tau,E)Ed\tau dE. \tag{10}\] which should be less than the total energy stored in the magnetosphere. Now the rate of bursts from all merged-magnetars in the local Universe within a sphere shell of radius from \(D\) to \(D+dD\), in the energy range from \(E\) to \(E+dE\) is: \[d^{2}R_{B}=4\pi D^{2}dDR_{\rm m}\big{(}\int_{0}^{\infty}n(\tau,E)d\tau\big{)}dE. \tag{11}\] where \(R_{\rm m}\) is the merger rate density of double neutron stars _whose remnants are NSs instead of prompt collapsed BHs_. The above equation can be further formulated to: \[d^{2}R_{B}=4\pi D^{2}dDR_{\rm m}\big{(}\int_{0}^{\infty}n(\tau,E)d\tau\big{)} \frac{dE}{dF}dF, \tag{12}\] where \(F\) is the fluence. Since \(E=4\pi D^{2}F\), we have from the above equation that: \[\frac{d^{2}R_{\rm B}}{dFdD}=(4\pi D^{2})^{2}dDR_{\rm m}\big{(}\int_{0}^{\infty }n(\tau,E)d\tau\big{)}. \tag{13}\] As a result, the rate of such bursts with in a limiting distance \(D_{\rm u}\) and above a limiting fluence is: \[R_{\rm B}=(4\pi)^{2}R_{\rm m}\int_{F_{\rm limit}}^{\infty}\int_{0}^{D_{\rm u} }\big{(}\int_{0}^{\infty}n(\tau,E(F))d\tau\big{)}D^{4}dDdF \tag{14}\] where \(F_{\rm limit}\) is the fluence limit of the gamma-ray detector. The volumetric integral in equation (11) should be limit in local Universe, where the merger rate density can be viewed as a constant, and cosmic expansion has a negligible effect. When considering the joint observation of such flares with GW detection of the BNS mergers, the integral over the luminosity distance in equation (12) should be truncated at the BNS horizon of the GW detector. The key quantities are \(n(\tau,E)\) and \(R_{\rm m}\), the latter we formulated with: \[R_{\rm m}=\eta\mathcal{R}_{\rm m}\] , where \(\mathcal{R}_{\rm m}\) is the merger rate density of all BNS population, and \(\eta\) is the fraction of those have long-lived magnetar remnants. \(\mathcal{R}_{\rm m}\) can be constrained by previous GW observation at \(39-1900\)\(\rm Gpc^{-3}s^{-1}\)(The LIGO Scientific Collaboration et al., 2021). 
Here we suppose that the age and energy dependence of \(n(\tau,E)\) is separable: \[n(\tau,E)=n(\tau)f(E),\] where \(f(E)\) is the normalized probability density distribution of the energy of flares, for which we assume a power-law form of: \[f(E)=f_{0}E^{-\beta},E_{l}<E<E_{u} \tag{15}\] Studies (Cheng et al., 2020) found the index in broad consistent with that expected from a SOC process (\(\beta=5/3\)), and the normalization factor \[f_{0}=(\beta-1)E_{l}^{\beta-1}\] Now the equation (14) can be simplified to: \[R_{\rm B}=R_{\rm m}(4\pi)^{2-\beta}N_{\rm B}f_{0}\frac{F_{\rm limit}^{1-\beta }}{(\beta-1)}\frac{D_{\rm u}^{5-2\beta}}{(5-2\beta)} \tag{16}\] The total energy to be released should be limited by the magnetic energy stored in the magnetosphere: \[E_{\rm mag}\geq\int_{\tau_{l}}^{\tau_{u}}n(\tau)d\tau\int_{E_{l}}^{E_{u}}Ef(E )dE\sim N_{\rm B}\overline{E}, \tag{17}\] \[\overline{E}=\int_{E_{l}}^{E_{u}}Ef(E)dE\sim\frac{\beta-1}{2-\beta}\big{(} \frac{E_{u}}{E_{l}}\big{)}^{1-\beta}E_{u}. \tag{18}\] the approximant in the above equation is valid only when \(1<\beta<2\). Cheng et al. (2020) found \(\beta\sim 1.66\), which meets the above mentioned conditions. If we assume that the flares in ordinary SGR and GFs follows the same energy distribution law \(f(E)\), the energy of those flares can range more than five orders of magnitudes, with \(E_{\rm u}\) corresponds to the most energetic giant flare is \(E\sim 10^{46}\,\rm ergs\). Therefore, from equation (18) we find that: \(\overline{E}\sim 4\times 10^{43}E_{l,41}^{\beta-1}\,\rm ergs\), where \(E_{l,42}\) is the lower energy end of the \(f(E)\) in unit of \(10^{41}\,\rm ergs\). The magnetic energy stored in the magnetosphere is (Zhang et al., 2022): \[E_{\rm mag}\sim 8\times 10^{46}B_{15}^{2}\,\rm ergs, \tag{19}\] where \(B\) is the surface magnetic field strength of the magnetar scaled with \(10^{15}\) G. If we equalize the both sides of inequality (17), we can have a rough estimation of \(N_{\rm B}\) as: \[N_{\rm B}\sim 2\times 10^{3}\frac{B_{15}^{2}\,\rm ergs}{E_{l,42}^{\beta-1}}, \tag{20}\] Taking the \(N_{\rm B}\) from above, and taking \(E_{l}=10^{42}\rm ergs\), the expression of \(f_{0}\) into equation (16), we have: \[R_{\rm B}=2\times 10^{3}\frac{(4\pi)^{2-\beta}}{5-2\beta}R_{\rm m}D_{\rm u}^{3} \big{(}\frac{10^{41}\,\rm ergs}{F_{\rm limit}D_{\rm u}^{2}}\big{)}^{\beta-1}. \tag{21}\] If we insert the numbers into above equation with \(\beta=5/3\), we obtain that: \[R_{\rm B}=5\times 10^{-3}\eta\mathcal{R}_{\rm m}B_{15}^{2}D_{u,100}^{5/3}F_{ \rm limit,-8}^{-2/3}\,\rm yr^{-1}, \tag{22}\] where \(\mathcal{R}_{\rm m}\) is in unit of \(\rm yr^{-1}/Gpc^{3}\), \(D_{u,100}\) is the distance limit in unit of \(100\) Mpc, \(F_{\rm limit,-8}\) is the fluence limit in unit of \(10^{-8}\,\rm ergs/cm^{2}\). The detection horizon \(D_{u}\) is limited by the fluence cut of a gamma-ray detector as: \[D_{u,\gamma}\sim 300\,F_{\rm limit,-8}^{-1/2}\,\rm Mpc \tag{23}\] when taking the \(E_{u}=10^{46}\,\rm ergs\), which corresponds to a conservative estimation of the total magnetic energy stored in the magnetosphere (Zhang et al., 2022). ## 4 Merged-Magnetar Flares as Em Counterpart of GW Events and Its Spin Period Modulation A magnetar which was born with a millisecond spin period will experience two evolutionary phases. In its first phase of millisecond period of spinning, the magnetar's spin period rapidly slowed down to seconds by the strong magnetic braking torque. 
In the later phase, when the spin period is settled to second scale and evolve less rapid. We can define a transition time between the first and the second phases: \[\tau_{\rm trans}\sim\frac{P_{10\,{\rm ms}}^{2}}{B_{0,15}^{2}}\,{\rm day}. \tag{24}\] As argued in previous sections, the bursts rate before \(\tau_{\rm trans}\) overwhelms that after it. Therefore, the rate in equation (22) is mostly describe those bursts happens before \(\tau_{\rm trans}\). Equivalently, those bursts to be detected is likely to following a GW chirp from BNS merger with a time lag \(\tau_{\rm lag}<\tau_{\rm trans}\). On the other hand, \(\tau_{\rm lag}\) should be larger than \(\tau_{\rm limit}\), which is the time limit less than which, the ejecta from the BNS merger is still optically thick, thus the flares from the magnetars will be largely absorbed and the temporal structure within the flares is smeared. \(\tau_{\rm limit}\) is also in time scale of days (Li & Paczynski, 1998, and see discussion in following section). Since the duration of a typical magnetar GF is \(\sim 0.1\)-\(1\) s, the flares detected in this phase can exhibit significant spin modulation, which can serve as an unambiguous evidence of the existence of a merged magnetar. In this case, the \(D_{\rm u}\) in equation (22) is the minimum between the gamma-ray detection horizon and the GW horizon: \[D_{u}=\min\left(D_{u,\gamma},D_{\rm GW}\right) \tag{25}\] The flare rate as function of fluence limit is plotted in figure 2, see figure caption for the detailed description of the plot. When plotting figure 2, we calculate the rate using equation (22) with Monte Carlo samplings of \(\eta\), \(R_{\rm m}\) and \(D_{\rm GW}\): \(\eta\) is uniformly random sampled in log-space from 0.01 to 0.1; \(R\) is sampled from a log-Gaussian distribution with 1-\(\sigma\) upper and lower limits correspond to 39 and 1900 yr\({}^{-1}\)/Gpc\({}^{3}\); \(D_{\rm GW}\) is sampled from a Gaussian random with mean 300 Mpc and a standard deviation of 40 Mpc, which corresponds to the BNS detection horizon of a GW detector network with LIGO-Virgo-KAGRA (LVK) in O4 period3. Footnote 3: as simulated here: [https://emfollow.docs.ligo.org/userguide/capabilities.html](https://emfollow.docs.ligo.org/userguide/capabilities.html) ## 5 Absorption by the BNS Ejecta During the collision of BNS, abundant material will be ejected from both the tidal tail and the disk (Bovard et al., 2017; Just et al., 2015). Actually BNS is the confirmed site for rapid neutron capture nucleosynthesis (\(r\)-process) (Abbott et al., 2017, 2017; Cowperthwaite et al., 2017; Kasen et al., 2017), which is responsible for about half of the elements heavier than iron measured in our solar system (Burbidge et al., 1957; Sneden et al., 2008). Thus, it is expected that the BNS will be surrounded by dense \(r\)-process material at early time, with a total ejected mass ranging from \(\sim 0.005-0.1M_{\odot}\)(Bovard et al., 2017; Radice et al., 2018; Cote et al., 2018; Just et al., 2015; Fernandez et al., 2015), and is optically thick to the flare gamma-ray radiation from the center remnant largely due to the Compton scattering. In the mean time, as the \(r\)-process material is ejected from BNS merger with high speed, the ejecta will become optically thin at \(\sim\) days after the merging event (Li & Paczynski, 1998; Korobkin et al., 2020; Wang et al., 2020). 
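The Monte Carlo behind Figure 2 can be sketched directly from Eq. (22), the gamma-ray horizon of Eq. (23), the choice of \(D_{\rm u}\) in Eq. (25), and the sampling distributions listed above. In the sketch below the sample size, random seed and \(B_{15}=1\) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def flare_rate(eta, R_m, B15, F_limit, D_gw_mpc):
    """Eq. (22) with the gamma-ray horizon of Eq. (23) and the GW horizon.

    eta:      fraction of BNS mergers leaving a long-lived magnetar
    R_m:      BNS merger rate density [yr^-1 Gpc^-3]
    B15:      surface field in units of 1e15 G
    F_limit:  detector fluence limit [erg cm^-2]
    D_gw_mpc: BNS horizon of the GW detector network [Mpc]
    """
    F8 = F_limit / 1e-8
    D_gamma = 300.0 * F8 ** -0.5                 # Eq. (23), in Mpc
    D_u = min(D_gamma, D_gw_mpc) / 100.0         # Eq. (25), in units of 100 Mpc
    return 5e-3 * eta * R_m * B15**2 * D_u**(5.0 / 3.0) * F8**(-2.0 / 3.0)

# Monte Carlo over the parameter choices quoted above.
n = 100_000
eta = 10 ** rng.uniform(np.log10(0.01), np.log10(0.1), n)    # log-uniform 0.01-0.1
# log-Gaussian with 1-sigma limits at 39 and 1900 yr^-1 Gpc^-3
mu = 0.5 * (np.log(39.0) + np.log(1900.0))
sigma = 0.5 * (np.log(1900.0) - np.log(39.0))
R_m = np.exp(rng.normal(mu, sigma, n))
D_gw = rng.normal(300.0, 40.0, n)                             # Mpc, LVK O4 horizon

rates = np.array([flare_rate(e, r, 1.0, 1e-8, d)
                  for e, r, d in zip(eta, R_m, D_gw)])
print(np.percentile(rates, [16, 50, 84]))                     # yr^-1
```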
Figure 2: The flare rate as a function of the fluence limit. The blue band shows the possible range of the rate of bursts that are associated with GW observations in LVK O4, and the dashed dark lines indicate the rate of bursts regardless of GW counterparts. The upper and lower limits of the range correspond to the 86% quantiles (1-sigma) in a Monte Carlo simulation. The rate is calculated with the following choice of parameters: \(\eta\) is sampled uniformly in log-space from 0.01 to 0.1; \(R\) is sampled from a log-Gaussian distribution with 1-\(\sigma\) upper and lower limits corresponding to 39 and 1900 yr\({}^{-1}\)/Gpc\({}^{3}\); \(D_{\rm GW}\) is sampled from a Gaussian distribution with mean 300 Mpc and a standard deviation of 40 Mpc.

The kilonova models of the GW170817 observations suggested that such ejecta has a speed ranging from \(0.1c\) to \(0.3c\) on average (e.g., Kasen et al., 2017; Rosswog et al., 2018; Wollaeger et al., 2018; Watson et al., 2019),
Only non-scattered photons are included in the observed gamma-ray signal here; scattered photons are ignored as their effects are minimal at late times when the ejecta is nearly optically thin. We obtain the \(r\)-process nuclei abundance distribution in the BNS merger ejecta using the nuclear reaction network code Portable Routines for Integrated nucleoSynthesis Modeling, or PRISM (Mumpower et al., 2018) as in Wang et al. (2020). We adopt a BNS merger dynamical ejecta with robust \(r\)-process productions (Rosswog et al., 2013) for the baseline calculation. The opacity values for the total BNS collision ejecta are calculated following the method in Wang et al. (2020), and the opacity values of individual \(r\)-process nuclei are adopted from the XCOM website4. The resulting spectra after absorption by the ejecta at 5 different times (0.2 day, 0.3 day, 0.6 day, 1day and 2 day) after BNS collision are shown in figure 3. Compared to the emitted spectrum shown as black line, we conclude that the detection window of such bursts should be in energy range from \(\sim\)1 MeV to 10 MeV, and in the time window between 0.5 and 2 days after BNS collision. Footnote 4: [https://www.nist.gov/pml/xcom-photon-cross-sections-database](https://www.nist.gov/pml/xcom-photon-cross-sections-database) We note that in addition to the burst flare radiation, the BNS collision ejecta itself also emit gamma-ray photons through the decays of the radioactive \(r\)-process nuclei. The total gamma radiation rate from the \(r\)-process ejecta is estimated to be \(\epsilon_{0}(\tau)\sim 2\times 10^{10}{\rm erg~{}g^{-1}}s^{-1}(\tau/{\rm day })^{-1/3}\)(Metzger & Berger, 2012; Korobkin et al., 2020), and the \(r\)-process gamma-ray energy at \(\sim 1{\rm day}\) is then \(\sim 10^{41}\) erg/s for a 0.01 \(M_{\odot}\) BNS merger ejecta. Thus, such signal would be small compared to the flare emissions, and the BNS ejecta spectrum shapes with nuclear decay lines (Korobkin et al., 2020; Wang et al., 2020) are also different from the flare signal discussed here. Figure 3: Plot of the flare spectra flux \(f_{E}/f_{0}\) vs energy E before (emitted, black) and after absorption by a 0.01 \(M_{\odot}\) BNS merger ejecta with 0.3c expansion velocity and a robust main \(r\)-process components, at 5 different times after BNS collision: 0.2 day (red), 0.3 day (orange), 0.6 day (green), 1 day (cyan), 2 day (blue) Then we conduct the integral in equation (27) to obtain \(\xi\) as function of \(\tau\) in Figure 4. The uncertainty bands are due to variations in BNS ejecta properties, including velocity, ejecta mass and the components. Here we varied the ejecta mass between 0.005 to 0.03 \(M_{\odot}\), the velocity between 0.1c to 0.3c. To test the sensitivity of the signal to the ejecta component, we adopted the parameterized BNS outflow conditions (Just et al., 2015; Radice et al., 2018) with a range of initial electron fractions as in Wang et al. (2020), so that the ejecta component varies from the weak \(r\)-process (no third peak and heavier actinides elements) to robust \(r\)-process (with actinides). From Figure 4, we can see that at the detection window discussed above, the corresponding absorption factor is \(\xi\sim 0.5-1\) with an order of magnitude uncertainty. The burst detection rate after absorption of BNS ejecta considered is re-plotted in Figure 5. When plotting the figure, we calculate the rate with a \(\xi\) randomly drawn from its corresponding range evaluated above with uniform distribution. 
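Equations (26)-(28) reduce to a one-dimensional integral once an opacity curve is specified. The sketch below, for a uniform spherical ejecta, evaluates the transmitted fraction \(1-\xi(\tau)\) in a chosen detector band; the power-law \(\kappa(E)\) is a placeholder of our own purely for illustration, whereas the actual calculation uses XCOM-based opacities for the \(r\)-process composition.

```python
import numpy as np

MSUN = 1.989e33    # g
C = 3e10           # cm/s
DAY = 86400.0      # s

def spectrum(E_mev):
    """Flare spectral shape assumed in the text: E^-0.2 * exp(-E / 0.48 MeV)."""
    return E_mev ** -0.2 * np.exp(-E_mev / 0.48)

def kappa_placeholder(E_mev):
    """Placeholder opacity [cm^2/g]; the paper uses XCOM-based r-process values."""
    return 0.2 * (E_mev / 0.1) ** -0.4

def transmitted_fraction(t_day, E_low, E_high, M_ej=0.01 * MSUN, v=0.3 * C):
    """1 - xi(tau) of Eq. (27) for a uniform spherical ejecta (cf. Eq. (28))."""
    r = v * t_day * DAY                        # ejecta radius ~ path length l
    rho = M_ej / (4.0 / 3.0 * np.pi * r**3)    # uniform-density ejecta
    E = np.linspace(E_low, E_high, 2000)       # energy grid in MeV
    f = spectrum(E)
    absorbed = f * np.exp(-rho * kappa_placeholder(E) * r)
    # Uniform grid, so the common dE cancels in the ratio of the band integrals.
    return absorbed.sum() / f.sum()

for t in (0.2, 0.5, 1.0, 2.0):
    print(f"{t} day, 1-10 MeV band:", transmitted_fraction(t, 1.0, 10.0))
```

With these toy inputs the transmitted fraction rises from nearly zero at a fraction of a day to order unity after about a day, consistent with the detection window discussed above.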
During the variation test, we find that the flare signal is more sensitive to the ejecta velocity and mass. On the other hand, the detection rate obtained in real observations could therefore enable us to put a constraint on the BNS ejecta properties.

## 6 Discussion

### Potential cases in archival data from the Fermi/GBM sGRB catalogue

The Fermi/GBM detector has an energy range of \(\sim 0.01-1\) MeV, and a fluence limit (for a 1 s burst) of \(\sim 2\times 10^{-8}\) erg/cm25. It has been monitoring GRBs for \(\sim 10\) years. From our estimation, there should have been \(\sim 0.1-10\) such bursts detected in its burst catalogue. As mentioned above, such flares may exhibit spin modulation. Although a search for QPOs in some of Fermi/GBM's bright GRBs (Dichiara et al., 2013) found no positive results, a more careful survey focusing on those weak short bursts with a fast-increasing period of \(\sim 0.1\) s might identify such bursts in the archival data, although with foreseeable difficulties due to their fewer photon counts. Recently, Chirenti et al. (2023) reported the detection of kHz QPOs in two archival sGRBs of the Burst and Transient Source Experiment (BATSE). BATSE worked in a lower energy range, from 50 keV to 300 keV, in which we expect significant absorption if these were merged-magnetar flares. Besides, the QPOs found are above 1 kHz, which is much higher than we expect for the stable merged-magnetar flares. Therefore, these two sGRBs with QPOs should not be considered as cases of the proposed merged-magnetar flares.

Footnote 5: Following the practice of Hendriks et al. (2022), we use the lowest observed fluence of the sGRB catalogue (Narayana Bhat et al., 2016) as the fluence limit for second-duration bursts.

### Prospects for next generation MeV detectors

For the next generation MeV telescopes, such as COSI6, AMEGO7, or MeVG, the energy range between \(\sim 0.1-10\) MeV will be well covered, and the detectors' sensitivities in this energy range are expected to be at the level between \(\sim 3\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\) and \(10^{-10}\) erg cm\({}^{-2}\) s\({}^{-1}\), which are at least 1-2 orders of magnitude better than current and previous MeV detectors like INTEGRAL9 and COMPTEL10. Thus, we expect a much larger detection rate, up to \(\sim 100\) per year, as a new population of GW gamma-ray counterparts.

Figure 4: Plot of the absorption factor \(\xi\) versus time (\(\tau\)) for 4 different energy bands: 45 keV-10 MeV (black); 10-100 keV (blue); 100 keV-1 MeV (green); 1 MeV-10 MeV (red). The solid lines are the absorption results for a 0.01 \(M_{\odot}\) BNS ejecta with 0.3c expansion velocity and a robust main \(r\)-process component; the color shades indicate uncertainties due to the variations in the BNS ejecta properties including mass, velocity and composition.

Figure 5: Same plot as the GW-counterpart band in Figure 2, but with absorption considered, for energy ranges from 1 MeV to 10 MeV and from 100 keV to 1 MeV.
Footnote 8: [https://indico.icranet.org/event/1/contributions/777/](https://indico.icranet.org/event/1/contributions/777/) Footnote 9: [https://www.cosmos.esa.int/web/integral](https://www.cosmos.esa.int/web/integral) Footnote 10: [https://heasarc.gsfc.nasa.gov/docs/cgro/comptel/](https://heasarc.gsfc.nasa.gov/docs/cgro/comptel/)

The main sources of uncertainty in the burst rate estimation are: 1) the local rate density of BNS collisions; and 2) the fraction of BNS mergers which leave a long-lived magnetar, i.e. \(\eta\) as we denote it in this paper. A future detection of a population of such bursts, together with multi-messenger GW observations, will in turn allow inference on these aspects. The current BNS \({\cal R}_{\rm m}\) estimation is based on two BNS events (GW170817 and GW200311_115853) in the LIGO-Virgo Collaboration (LVC) O2-O3 period (The LIGO Scientific Collaboration et al., 2021). In the LVK-O4 period, it is estimated that \(36^{+49}_{-22}\) BNS mergers shall be detected11, which will result in much tighter constraints on \({\cal R}_{\rm m}\). The value of \(\eta\) depends on the equation of state of NS matter, and also on the NS mass function. The latter can be better constrained by a larger sample of BNS mergers observed with GWs. If multiple bursts could be observed from a single merged magnetar, the dependence of the burst properties on its spin period could be studied, which would provide valuable insights into the emission mechanism of magnetar activities.

Footnote 11: as reported in [https://emfollow.docs.ligo.org/userguide/capabilities.html#datadrivenexpectations](https://emfollow.docs.ligo.org/userguide/capabilities.html#datadrivenexpectations), using the same methodology as in Abbott et al. (2020) but with updated input models for the detector network and sources.

### Solidification of the crust of the newly born magnetar

In order to justify the domination of the magnetar activity in its rapid braking stage, at an age of \(\sim\)days, under the star quake paradigm, we made an order-of-magnitude estimation of the burst rate in Section 2. A crucial presumption was that the elastic properties of the neutron star crust remain unchanged. First, we need to check whether the surface of the NS has already cooled enough to have a solidified crust. According to Negele & Vautherin (1973); Haensel & Pichon (1994); Douchin & Haensel (2000), the melting temperature of the NS crust lies well above \(10^{8}\) K, and Lattimer et al. (1994) showed that the core of a newly born NS can quickly cool down to a temperature \(T\) on the timescale: \[t=20\,(T/10^{9}\,{\rm K})^{-4}\,{\rm s}. \tag{29}\] Therefore, at an age of \(\sim\) a day, the newly born magnetar already has a solid crust. As for its elastic properties, they are in general not constant and are temperature dependent (_e.g._ Strohmayer et al., 1991). Therefore, a more careful quantitative calculation of the burst rate as a function of the magnetar's age should consider a realistic cooling curve of the NS crust and its temperature-dependent elastic properties.

## 7 Summary

From our above arguments and calculations, we conclude that flares from merged massive magnetars can be expected as a population of gamma-ray transients associated with the GW chirp events of BNS mergers.
Such a gamma-ray counterpart of GW may look like a short gamma-ray burst (sGRB) in terms of its duration, but it can be identified by several distinct features:

* it tends to be weak in flux, and the time-lag between the burst and the GW chirp is \(\sim\)1-2 days, rather than \(\sim\) s as in sGRBs;
* its spectrum has a lower energy cut at \(\sim\)100 keV, due to absorption by the ejecta.

Besides, it may show spin modulation with a significant spin down, although significantly detecting such a short-timescale temporal structure is very challenging in practice. Due to the absorption by the ejecta from the BNS collision, such flares are optimally observed in the energy range from 0.1 to 10 MeV. The estimated detection rate increases towards lower fluence limits, following a power law with an index of -1.5. The rate of such bursts in association with GW events will also be limited by the detection reach of GW detector networks when the fluence limit of the high-energy detector is below some turn-over sensitivity. Below this turn-over fluence limit, the rate follows another power law with index -2/3. When observing with a detector with an energy range of 0.1-1 MeV, the turn-over flux limit is at \(\sim 2\times 10^{-9}\) erg/cm\({}^{2}\), while that for a detector of 1-10 MeV is at \(\sim 10^{-8}\) erg/cm\({}^{2}\). Based on our evaluation of a population of BNS mergers, a GRB monitor with an energy range of \(\sim 0.1-1\) MeV and a fluence limit of \(\sim 2\times 10^{-8}\) erg/cm\({}^{2}\) could detect such flares as gamma-ray counterparts of GW events at a rate from 0.01 to 1 per year, while a MeV detector working in a range from \(\sim 1-10\) MeV with a fluence limit of \(\sim 10^{-9}\) erg/cm\({}^{2}\) could detect such signals at a rate of a few to a few tens per year. We would like to acknowledge the insightful discussions we had with Profs. Shuang-Nan Zhang and Ming-Yu Ge. This work is supported by the National Key R&D Program of China (2021YFA0718500). SXY acknowledges the support from the Chinese Academy of Sciences (Grant No. E329A3M1). The work of X.W. is supported in part by the Chinese Academy of Sciences (Grant No. E329A6M1).
2305.07140
Linear Codes with Prescribed Hull Dimension and Minimum Distance
The hull of a linear code (i.e., a finite field vector space)~\({\mathcal C}\) is defined to be the vector space formed by the intersection of~\({\mathcal C}\) with its dual~\({\mathcal C}^{\perp}.\) Constructing vector spaces with a specified hull dimension has important applications and it is therefore of interest to study minimum distance properties of such spaces. In this paper, we use the probabilistic method to obtain spaces with a given hull dimension and minimum distance and also derive Gilbert-Varshamov type sufficient conditions for their existence.
Ghurumuruhan Ganesan
2023-05-11T21:05:30Z
http://arxiv.org/abs/2305.07140v1
# Linear Codes with Prescribed Hull Dimension and Minimum Distance ###### Abstract The hull of a linear code (i.e., a finite field vector space) \(\mathcal{C}\) is defined to be the vector space formed by the intersection of \(\mathcal{C}\) with its dual \(\mathcal{C}^{\perp}\). Constructing vector spaces with a specified hull dimension has important applications and it is therefore of interest to study minimum distance properties of such spaces. In this paper, we use the probabilistic method to obtain spaces with a given hull dimension and minimum distance and also derive Gilbert-Varshamov type sufficient conditions for their existence. **Key words:** Hull dimension, minimum distance, finite field vector spaces, linear codes. **AMS 2000 Subject Classification:** Primary: 60J35, 15B33; ## 1 Introduction The minimum distance of a vector space or a linear code is a measure of its error correction capability and algebraic constructions are available to obtain codes with specified minimum distance (Huffman and Pless (2010)). Recently, there has been increasing interest in constructing codes with a given hull dimension, due to wide applications ranging from cryptography to quantum error correction (Carlet et al. (2019)). The hull of a linear code \(\mathcal{C}\) is simply the intersection of \(\mathcal{C}\) with its dual (formal definitions in Section 2). The hull is itself a linear code and Sendrier (1997) shows that the _expected_ hull dimension of a randomly chosen code with a given dimension asymptotically converges to a constant (see also Skersys (2003)). Recently, Sangwisut et al. (2015) and Luo et al. (2018) study linear codes with a given hull dimension and specific parameters, for cyclic linear codes and maximum distance separable codes, respectively and in a related work, Carlet et al. (2019) use prime ideal decompositions to construct codes with a one dimensional hull. In this paper, we use the probabilistic method to derive sufficient conditions for the existence of linear codes with a _given_ hull dimension and minimum distance. In the following Section, we state and prove our main result Theorem 1, regarding codes with given hull dimension and minimum distance and illustrate our result with a brief example. For generality, we henceforth use the term vector spaces instead of linear codes, throughout. ## 2 Hull dimension of vector spaces Let \(q\) be a power of a prime number and let \(\mathbb{F}_{q}\) be the finite field containing \(q\) elements. For integer \(m\geq 1\) we say that a set of vectors \(\mathcal{G}=\{h_{1},\ldots,h_{k}\}\subset\mathbb{F}_{q}^{m}\) is a _basis_ if the vectors in \(\mathcal{G}\) are linearly independent. If \(\mathcal{G}\) is a basis and \(\mathcal{C}\subset\mathbb{F}_{q}^{m}\) is the space spanned by \(\mathcal{G},\) then the dimension of \(\mathcal{C}\) is \(k.\) We then denote \(\mathcal{C}\) to be a \([m,k]_{q}-\)space. We say that a vector \(x=(x_{1},\ldots,x_{m})\in\mathbb{F}_{q}^{m}\) is _orthogonal_ to \(y=(y_{1},\ldots,y_{m})\) if \(x\cdot y^{T}=\sum_{i=1}^{m}x_{i}\cdot y_{i}=0.\) Throughout, \(T\) in the superscript refers to the transpose operation and all vectors are row vectors. 
The dual space \(\mathcal{C}^{\perp}\) of a space \(\mathcal{C}\subset\mathbb{F}_{q}^{m}\) is the set of all vectors \(v\in\mathbb{F}_{q}^{m}\) such that \(v\) is orthogonal to each vector in \(\mathcal{C}.\) The _hull_ of \(\mathcal{C}\) is the vector space \(\mathcal{C}\cap\mathcal{C}^{\perp}.\) If \(k\) and \(t\) denote the dimension of \(\mathcal{C}\) and its hull respectively, then we say that \(\mathcal{C}\) is a \([m,k]_{q}-\)space with hull dimension \(t.\) Spaces with hull dimension \(t=0\) and \(t=k\) are called complementary dual (Massey (1992)) and self-orthogonal (Kohnert and Wassermann (2009)) spaces, respectively. The distance between two vectors \(x=(x_{1},\ldots,x_{m})\) and \(y=(y_{1},\ldots,y_{m})\) in a space \(\mathcal{C}\) is defined as \(d(x,y):=\sum_{i=1}^{m}\mathbf{1}(x_{i}\neq y_{i}),\) where \(\mathbf{1}(.)\) refers to the indicator function. The weight of \(x\) is the number of non-zero entries in \(x.\) We say that \(\mathcal{C}\) has a minimum distance of at least \(d\) if any two distinct vectors in \(\mathcal{C}\) have a distance of at least \(d.\) Because \(\mathcal{C}\) is a vector space, this is equivalent to saying that the weight of any non-zero vector in \(\mathcal{C}\) is at least \(d.\) The following is the main result of the paper. **Theorem 1**.: _Let \(m\geq k\geq 1,1\leq d\leq m\) be integers and let \(q\geq 2\) be a prime power satisfying_ \[1+\sum_{j=0}^{d-1}(q-1)^{j+1}\cdot{m\choose j}<q^{m-2k+2}, \tag{2.1}\] _strictly. Letting \(0\leq t\leq k\) be any integer we have the following: \((i)\) If \(q\) is even then there exists an \([m+k,k]_{q}-\)space \(\mathcal{C}_{1}\) with hull dimension \(t\) and minimum distance at least \(d.\) \((ii)\) If \(q\equiv 1\mod 4\) then there exists a \([2m+k,k]_{q}-\)space \(\mathcal{C}_{2}\) with hull dimension \(t\) and minimum distance at least \(2d.\) \((iii)\) If \(q\equiv 3\mod 4\) then there exists a \([3m+k,k]_{q}-\)space \(\mathcal{C}_{3}\) with hull dimension \(t\) and minimum distance at least \(3d.\)_ The inequality (2.1) is a sufficient condition that guarantees the existence of spaces with a given hull dimension and minimum distance. Before proving Theorem 1, we illustrate (2.1) with an example. For \(d-1\leq\frac{m}{2}\) we use the unimodality of the binomial coefficient to upper bound the left side of (2.1) as \[(d+1)\cdot(q-1)^{d}\cdot{m\choose d-1}\leq(d+1)\cdot q^{d}\cdot{m\choose d-1}\] so that \[(d+1)\cdot{m\choose d-1}<q^{m-2k-d+2} \tag{2.2}\] is sufficient for (2.1) to hold. 
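Condition (2.1) and its coarser form (2.2) are straightforward to evaluate numerically. The following short Python sketch (not part of the paper; the parameter values are purely illustrative) checks both inequalities for a given tuple \((q,m,k,d)\):

```python
from math import comb

def gv_condition(q: int, m: int, k: int, d: int) -> bool:
    """Check the sufficient condition (2.1):
    1 + sum_{j=0}^{d-1} (q-1)^(j+1) * C(m, j) < q^(m - 2k + 2)."""
    lhs = 1 + sum((q - 1) ** (j + 1) * comb(m, j) for j in range(d))
    return lhs < q ** (m - 2 * k + 2)

def gv_condition_simplified(q: int, m: int, k: int, d: int) -> bool:
    """Check the coarser condition (2.2), valid when d - 1 <= m / 2:
    (d + 1) * C(m, d - 1) < q^(m - 2k - d + 2)."""
    assert d - 1 <= m / 2
    return (d + 1) * comb(m, d - 1) < q ** (m - 2 * k - d + 2)

if __name__ == "__main__":
    # Illustrative parameters only (q even, i.e. case (i) of Theorem 1).
    q, m, k, d = 2, 63, 10, 5
    print(gv_condition(q, m, k, d), gv_condition_simplified(q, m, k, d))
```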
Now let \(d=\delta\cdot m,t=\gamma\cdot m\) and \(k=\epsilon\cdot m\) where \(0<\gamma<\epsilon<1\) and \(0<\delta<\frac{1}{2}\) are constants not depending on \(m.\) Using the Stirling approximation, we see that the term \({m\choose d-1}\) grows roughly as \(2^{mH(\delta)},\) where \[H(\delta):=-\delta\cdot\log\delta-(1-\delta)\cdot\log(1-\delta)\] is the entropy function and the logarithm is to base \(2,\) while \(q^{m-2k-d+2}\) grows as \(q^{m(1-2\epsilon-\delta)}.\) Thus if \(2^{H(\delta)}<q^{1-2\epsilon-\delta}\) or equivalently \(\epsilon<\frac{1}{2}\left(1-\delta-\frac{H(\delta)}{\log q}\right)\) strictly and \(q\) is even, then there exists an \([m+k,k]_{q}-\)space \(\mathcal{C}_{1}\) with hull dimension \(t\) and minimum distance at least \(d.\) _Proof of Theorem 1_: A set of vectors \(\{h_{1},\ldots,h_{k}\}\subset\mathbb{F}_{q}^{m}\) is said to be _mutually orthogonal_ if \[h_{i}\cdot h_{j}^{T}=0\text{ for any }1\leq i\neq j\leq k.\] Our proof consists of two steps: In the first step, we use the probabilistic method to construct a set of mutually orthogonal vectors. Next, we use these mutually orthogonal vectors to construct self-orthogonal vectors and obtain vector spaces with a given hull dimension. Details follow. _Step 1_: In this step, we use the probabilistic method to show that there are \(1\times m\) vectors \(g_{1},\ldots,g_{k}\in\mathbb{F}_{q}^{m}\) satisfying the following properties: (1) The set of vectors \(\mathcal{G}:=\{g_{i}\}_{1\leq i\leq k}\) is linearly independent and mutually orthogonal. (2) The \([m,k]_{q}-\)space spanned by \(\mathcal{G}\) has a minimum distance of at least \(d.\) Letting \(g_{1},\ldots,g_{k}\) be independent and identically distributed (i.i.d.) vectors in \(\mathbb{F}_{q}^{m},\) we show below that properties \((1)-(2)\) hold with positive probability. We begin with a couple of definitions. For \(2\leq i\leq k\) let \(D_{i}\) be the event that \(\mathcal{G}_{i}:=\{g_{1},\ldots,g_{i}\}\) is a basis and let \(E_{i}\) be the event that the set \(\mathcal{G}_{i}\) is mutually orthogonal. Also let \(F_{i}\) be the event that the space \(\mathcal{C}_{i}\) spanned by \(\mathcal{G}_{i}\) has a minimum distance of at least \(d.\) Finally let \(J_{i}:=D_{i}\cap E_{i}\cap F_{i}\) and define \(J_{1}=F_{1}.\) In what follows, we estimate the conditional probabilities of the events \(D_{i},E_{i}\) and \(F_{i},\) given \(\mathcal{G}_{i-1},\) in that order. Given that \(\mathcal{G}_{i-1}=\{g_{1},\ldots,g_{i-1}\}\) is a basis, the size of the space spanned by \(\mathcal{G}_{i-1}\) is \(q^{i-1}.\) Therefore the set \(\mathcal{G}_{i}\) is a basis if and only if \(g_{i}\) is chosen from the remaining \(q^{m}-q^{i-1}\) vectors and so \[\mathbb{P}(D_{i}\mid\mathcal{G}_{i-1})\mathbf{1}(D_{i-1})=\frac{q^{m}-q^{i-1} }{q^{m}}\mathbf{1}(D_{i-1})=\left(1-\frac{1}{q^{m-i+1}}\right)\mathbf{1}(D_{i -1}),\] where \(\mathbf{1}(.)\) refers to the indicator function. Consequently, we get that \[\mathbb{P}(D_{i}\mid\mathcal{G}_{i-1})\mathbf{1}(J_{i-1})=\left(1-\frac{1}{q ^{m-i+1}}\right)\mathbf{1}(J_{i-1}). \tag{2.3}\] Next we look at the conditional probability of the event \(E_{i}.\) Given that \(\mathcal{G}_{i-1}\) is a mutually orthogonal basis, we would like to estimate the probability that \[g_{i}\cdot g_{j}^{T}=0\text{ for each }1\leq j\leq i-1. 
\tag{2.4}\] Because \(i-1\leq k\leq m\) (see statement of Lemma) and the event \(E_{i-1}\) occurs, the \((i-1)\times m\) matrix \[B_{i-1}:=[g_{1}^{T},g_{2}^{T},\ldots,g_{i-1}^{T}]^{T} \tag{2.5}\] has a full rank of \(i-1.\) Moreover, since the matrix \(B_{i-1}\) is completely determined by the vectors in \(\mathcal{G}_{i-1},\) we assume for simplicity that the first \(i-1\) columns of \(B_{i-1}\) are linearly independent. The conditions in (2.4) can then be rewritten as \[B_{i-1}^{(1)}\cdot\left(g_{i}^{(1)}\right)^{T}=h_{i-1}:=-B_{i-1}^{(2)}\cdot \left(g_{i}^{(2)}\right)^{T}, \tag{2.6}\] where \(g_{i}^{(1)}\) is the \(1\times(i-1)\) vector formed by the first \(i-1\) entries of \(g_{i}\) and \(g_{i}^{(2)}\) is the vector formed by the remaining entries of \(g_{i}.\) Similarly, \(B_{i-1}^{(1)}\) is the \((i-1)\times(i-1)\) invertible square matrix formed by the first \(i-1\) columns of \(B_{i-1}\) and \(B_{i-1}^{(2)}\) is formed by the remaining columns of \(B_{i-1}.\) Given \(g_{i}^{(2)}\) and \(\mathcal{G}_{i-1},\) we see that (2.6) holds with probability \(\frac{1}{q^{i-1}}\) and so averaging over \(g_{i}^{(2)}\) we get that \[\mathbb{P}(E_{i}\mid\mathcal{G}_{i-1})\mathbf{1}(J_{i-1})=\frac{1}{q^{i-1}} \cdot\mathbf{1}(J_{i-1}). \tag{2.7}\] Finally, we estimate the conditional probability of the event \(F_{i},\) that the space \(\mathcal{C}_{i}\) spanned by the columns of the matrix \(B_{i}\) as defined in (2.5), has a minimum distance at least \(d.\) Let \(x=(x_{1},\ldots,x_{i})\) be any vector in \(\mathbb{F}_{q}^{i}\) with \(x_{i}\neq 0\) and \(y\) be any vector in \(\mathbb{F}_{q}^{m}.\) Given \(\mathcal{G}_{i-1},\) the conditional probability \(\mathbb{P}\left(x\cdot B_{i}=y\mid\mathcal{G}_{i-1}\right)=\frac{1}{q^{m}}\) since the \(j^{th}\) relation \(1\leq j\leq m\) in the matrix equation \(x\cdot B_{i}=y\) is of the form \[x_{i}\cdot g_{i,j}=y_{i}-\sum_{1\leq u\leq i-1}x_{u}\cdot g_{u,j}\] and \(g_{i}=(g_{i,1},\ldots,g_{i,m})\) is the random \(i^{th}\) row of the matrix \(B_{i}.\) In effect, given \(\mathcal{G}_{i-1},\) the probability that the vector \(x\) maps to _some_ vector with weight at most \(d-1\) is bounded above by \(\frac{1}{q^{m}}\cdot\sum_{l=0}^{d-1}(q-1)^{l}\cdot\binom{m}{l}\) and since there are \((q-1)\cdot q^{i-1}\) choices for \(x,\) we get the following: Given \(\mathcal{G}_{i-1},\) the probability that _some_ vector \(x\) with \(x_{i}\neq 0\) maps to some vector with weight at most \(d-1\) is bounded above by \[\frac{(q-1)q^{i-1}}{q^{m}}\cdot\sum_{j=0}^{d-1}(q-1)^{j}\cdot\binom{m}{j}=: \frac{\theta}{q^{m-i+1}}.\] By definition, if the event \(F_{i-1}\) occurs, then no vector \(x\in\mathbb{F}_{q}^{i}\) with \(x_{i}=0\) maps to a vector in \(\mathbb{F}_{q}^{m}\) with weight at most \(d-1.\) Summarizing we therefore get that \[\mathbb{P}\left(F_{i}\mid\mathcal{G}_{i-1}\right)\mathbf{1}(J_{i-1})\geq \left(1-\frac{\theta}{q^{m-i+1}}\right)\mathbf{1}(J_{i-1}) \tag{2.8}\] Combining (2.3), (2.7) and (2.8) and using \[\mathbb{P}(A\cap B\cap C)\geq\mathbb{P}(A)-\mathbb{P}\left(B^{c}\cup C^{c} \right)\geq\mathbb{P}(A)-\mathbb{P}(B^{c})-\mathbb{P}(C^{c})\] with \(A=E_{i}\) and \(B=D_{i}\) and \(C=F_{i},\) we get \[\mathbb{P}(J_{i}|\mathcal{G}_{i-1})\mathbf{1}(J_{i-1})\geq\left(\frac{1}{q^{i -1}}-\frac{1+\theta}{q^{m-i+1}}\right)\mathbf{1}(J_{i-1}). 
\tag{2.9}\] Taking expectations and using the fact that \(J_{i}\subset J_{i-1}\) we then get that \(\mathbb{P}(J_{i})\geq\frac{\epsilon_{i}}{q^{i-1}}\cdot\mathbb{P}(J_{i-1})\) where \(\epsilon_{i}:=1-\frac{1+\theta}{q^{m-2i+2}}.\) By iteration we therefore get that \[\mathbb{P}(J_{k})\geq\prod_{i=2}^{k}\frac{\epsilon_{i}}{q^{i-1}}\cdot\mathbb{ P}(J_{1})=q^{-\binom{k}{2}}\cdot\prod_{i=2}^{k}\epsilon_{i}\cdot\mathbb{P}(F_{1}) \geq q^{-\binom{k}{2}}\cdot\prod_{i=2}^{k}\epsilon_{i}\cdot\left(1-\frac{ \theta}{q^{m}}\right)\] since \(J_{1}=F_{1}\) and \(\mathbb{P}(F_{1})\geq 1-\frac{\theta}{q^{m}}\) by (2.8). Therefore to get \(\mathbb{P}(J_{k})>0,\) it suffices to ensure that \(\epsilon_{k}>0,\) which is true if (2.1) holds. This proves that properties \((1)-(2)\) hold and thereby completes the proof of Step 1. _Step \((2)\):_ We begin with the proof of \((i).\) Suppose \(q=2^{r}\) for some integer \(r\geq 1\) and let \(\beta\) be any primitive element of \(\mathbb{F}_{q}.\) The order of \(\beta\) is \(q-1,\) which is odd, and so \(\alpha:=\beta^{2}\) is also a primitive element. Letting \(\{g_{1},\ldots,g_{k}\}\subset\mathbb{F}_{q}^{m}\) be the vectors satisfying properties \((1)-(2)\) as obtained in Step 1 above, we define elements \(\alpha_{i}\in\mathbb{F}_{q},1\leq i\leq k\) as follows. For \(1\leq i\leq t\) set \(\alpha_{i}=0\) if \(g_{i}\cdot g_{i}^{T}=0.\) Else let \(\omega_{i}\) be such that \[\alpha^{\omega_{i}}=-g_{i}\cdot g_{i}^{T}\] and set \(\alpha_{i}=\beta^{\omega_{i}}.\) For \(t+1\leq i\leq k,\) we let \(\alpha_{i}\) be any element such that \[\alpha_{i}^{2}+g_{i}\cdot g_{i}^{T}\neq 0.\] Consider the \([m+k,k]_{q}-\)space \(\mathcal{C}_{1}\) spanned by the rows of the matrix \[G_{1}:=[A\mid B] \tag{2.10}\] where \(A\) is the diagonal \(k\times k\) matrix \(\mathrm{diag}(\alpha_{1},\ldots,\alpha_{k})\) and \(B\) is the \(k\times m\) matrix \[B:=[g_{1}^{T},g_{2}^{T},\ldots,g_{k}^{T}]^{T}. \tag{2.11}\] By construction we get that \(G_{1}\cdot G_{1}^{T}\) is a diagonal matrix containing exactly \(t\) zeros in its diagonal. This implies that \({\cal C}_{1}\cap{\cal C}_{1}^{\perp}\) has dimension \(t.\) To see this is true, let \(h_{1},\ldots,h_{k}\) be the rows of \(G_{1}\) and let \(v\in{\cal C}_{1}\cap{\cal C}_{1}^{\perp}\) be any vector so that \(v=\sum_{i=1}^{k}a_{i}\cdot h_{i}\) with \(a_{i}\in{\mathbb{F}}_{q}.\) Taking the dot product with \(h_{i}\) for \(t+1\leq i\leq k\) we get that \[0=v\cdot h_{i}^{T}=a_{i}\cdot\left(h_{i}\cdot h_{i}^{T}\right)\] and since \(h_{i}\cdot h_{i}^{T}\neq 0\) we get that \(a_{i}=0\) for \(t+1\leq i\leq k.\) This implies that \(v\) is a linear combination of \(h_{1},\ldots,h_{t};\) conversely, each \(h_{i}\) with \(1\leq i\leq t\) lies in \({\cal C}_{1}^{\perp}\) because \(G_{1}\cdot G_{1}^{T}\) is diagonal with zeros in its first \(t\) diagonal entries, and therefore \({\cal C}_{1}\cap{\cal C}_{1}^{\perp}\) has indeed dimension \(t.\) Finally, by property (2) of the vectors \(\{g_{i}\}\) obtained above, the space \({\cal C}_{1}\) also has a minimum distance of at least \(d.\) This proves Theorem 1\((i).\) We prove \((ii)\) as follows. As before let \(\{g_{1},\ldots,g_{k}\}\subset{\mathbb{F}}_{q}^{m}\) be the vectors obtained in Step 1 and let \(\Delta\) be a \(k\times k\) diagonal matrix containing exactly \(t\) zeros in its diagonal. Because \(q=p^{r}\) and \(p\equiv 1\mod 4\) there exists an integer \(0\leq a\leq p-1\) such that \(a^{2}\equiv-1\mod p\) (Theorem 2.12, pp. 53, Niven et al. (1991)). 
Therefore letting \(B\) as in (2.11) and setting \[G_{2}:=[\Delta\mid B\mid aB], \tag{2.12}\] we get that \(G_{2}\cdot G_{2}^{T}=\Delta^{2}.\) As in case \((i),\) the space \({\cal C}_{2}\) spanned by the rows of \(G_{2}\) is a \([2m+k,k]_{q}-\)space with hull dimension \(t\) and moreover, has a minimum distance at least \(2d\) by construction. This completes the proof of \((ii).\) Finally, for any odd prime \(p,\) there are integers \(0\leq a,b\leq p-1\) such that \(a^{2}+b^{2}\equiv-1\mod p\) (Theorem 5.14, pp. 246, Niven et al. (1991)). We proceed as in case \((ii)\) and set \[G_{3}:=[\Delta\mid B\mid aB\mid bB], \tag{2.13}\] to get that \(G_{3}\cdot G_{3}^{T}=\Delta^{2}.\) Consequently, the space \({\cal C}_{3}\) spanned by the rows of \(G_{3}\) is a \([3m+k,k]_{q}-\)space with hull dimension \(t\) and minimum distance at least \(3d.\) This completes the proof of \((iii)\) and therefore the Theorem. _Acknowledgements_: I thank Professors Rahul Roy and C. R. Subramanian for crucial comments and also thank IMSc and IISER Bhopal for my fellowships.
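To make the Step 2 construction concrete, the following Python sketch builds \(G_{1}=[A\mid B]\) for the special case \(q=2\) (where \(\beta=\alpha=1\)) from a pair of hand-picked mutually orthogonal rows, and verifies that \(G_{1}\cdot G_{1}^{T}\) is diagonal with exactly \(t\) zeros, that the hull dimension equals \(t\) (via the standard identity \(\dim\mathrm{Hull}=k-\mathrm{rank}(GG^{T})\) for a full-rank generator matrix), and that the minimum distance is as expected. The vectors are chosen by hand for illustration rather than produced by the probabilistic argument of Step 1.

```python
import numpy as np

def gf2_rank(mat: np.ndarray) -> int:
    """Rank of a 0/1 matrix over F_2 via Gaussian elimination."""
    a = mat.copy() % 2
    rank, rows, cols = 0, a.shape[0], a.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if a[r, col]), None)
        if pivot is None:
            continue
        a[[rank, pivot]] = a[[pivot, rank]]
        for r in range(rows):
            if r != rank and a[r, col]:
                a[r] ^= a[rank]
        rank += 1
    return rank

# Hand-picked mutually orthogonal, linearly independent rows over F_2 (m=4, k=2).
B = np.array([[1, 1, 1, 1],
              [1, 1, 0, 0]], dtype=np.int64)
k, t = B.shape[0], 1  # target hull dimension t

# alpha_i as in the proof of part (i), specialised to q = 2 (beta = alpha = 1):
# for the first t rows, alpha_i^2 = -g_i.g_i^T, i.e. alpha_i = g_i.g_i mod 2;
# for the remaining rows, alpha_i is chosen so that alpha_i^2 + g_i.g_i^T != 0.
diag = [(B[i] @ B[i]) % 2 if i < t else (1 + B[i] @ B[i]) % 2 for i in range(k)]
G1 = np.hstack([np.diag(diag), B]) % 2           # G1 = [A | B], an [m+k, k]_2 space

gram = (G1 @ G1.T) % 2                           # diagonal with exactly t zeros
hull_dim = k - gf2_rank(gram)                    # dim Hull = k - rank(G G^T)
codewords = [(np.array(c) @ G1) % 2 for c in np.ndindex(*(2,) * k) if any(c)]
min_dist = min(int(w.sum()) for w in codewords)
print(gram, hull_dim, min_dist)                  # expect hull_dim == 1, min_dist == 3
```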
2304.07269
Learning-Assisted Optimization for Transmission Switching
The design of new strategies that exploit methods from Machine Learning to facilitate the resolution of challenging and large-scale mathematical optimization problems has recently become an avenue of prolific and promising research. In this paper, we propose a novel learning procedure to assist in the solution of a well-known computationally difficult optimization problem in power systems: The Direct Current Optimal Transmission Switching (DC-OTS) problem. The DC-OTS problem consists in finding the configuration of the power network that results in the cheapest dispatch of the power generating units. With the increasing variability in the operating conditions of power grids, the DC-OTS problem has lately sparked renewed interest, because operational strategies that include topological network changes have proved to be effective and efficient in helping maintain the balance between generation and demand. The DC-OTS problem includes a set of binaries that determine the on/off status of the switchable transmission lines. Therefore, it takes the form of a mixed-integer program, which is NP-hard in general. In this paper, we propose an approach to tackle the DC-OTS problem that leverages known solutions to past instances of the problem to speed up the mixed-integer optimization of a new unseen model. Although our approach does not offer optimality guarantees, a series of numerical experiments run on a real-life power system dataset show that it features a very high success rate in identifying the optimal grid topology (especially when compared to alternative competing heuristics), while rendering remarkable speed-up factors.
Salvador Pineda, Juan Miguel Morales, Asunción Jiménez-Cordero
2023-04-14T17:24:25Z
http://arxiv.org/abs/2304.07269v2
# Learning-Assisted Optimization for Transmission Switching ###### Abstract The design of new strategies that exploit methods from Machine Learning to facilitate the resolution of challenging and large-scale mathematical optimization problems has recently become an avenue of prolific and promising research. In this paper, we propose a novel learning procedure to assist in the solution of a well-known computationally difficult optimization problem in power systems: The Direct Current Optimal Transmission Switching (DC-OTS). This model consists in finding the configuration of the power network that results in the cheapest dispatch of the power generating units. For this, the model includes a set of binaries that determine the on/off status of the switchable transmission lines. Therefore, the DC-OTS problem takes the form of a mixed-integer program, which is NP-hard in general. Its solution has been approached by way of exact and heuristic methods. The former employ techniques from mixed-integer programming to solve the problem to certified global optimality, while the latter seek to identify good solutions quickly. While the heuristic methods tend to be comparatively much faster, they may suggest suboptimal or even infeasible networks topologies. The proposed approach in this paper leverages known solutions to past instances of the DC-OTS problem to speed up the mixed-integer optimization of a new unseen model. Although it does not offer optimality guarantees, a series of numerical experiments run on a real-life power system dataset show that it features a very high success rate in identifying the optimal grid topology (especially when compared to alternative competing heuristics), while rendering remarkable speed-up factors. Machine Learning, Mathematical Optimization, Mixed-Integer Programming, Optimal Transmission Switching, Optimal Power Flow ## 1 Introduction Power systems are colossal and complex networks engineered to reliably supply electricity where it is needed at the _lowest_ possible cost. For this, operational routines based on the _Optimal Power Flow_ (OPF) problem are executed daily and in real time to guarantee the most cost-efficient dispatch of power generating units that satisfy the grid constraints. In particular, the way power flows through a power network is determined by the so-called _Kirchhoff's laws_. These laws are responsible for the fact that _switching off_ a transmission line in the grid can actually result in a lower electricity production cost (a type of "Braess' Paradox") and have provided power system operators with a complementary control action, namely, changes in the grid _topology_, to reduce this cost even further. The possibility of flexibly exploiting the topological configuration of the grid was first suggested in [1] and later formalized in [2] into what we know today as the _Optimal Transmission Switching_ (OTS) problem. Essentially, the OTS problem is the OPF problem enriched with a whole new set of on/off variables that model the status of each _switchable_ transmission line in the system. The OPF formulation we use as a basis to pose the OTS problem is built on the widely used _direct current_ (DC) linear approximation of the power flow equations. Even so, the resulting formulation of the OTS problem, known as DC-OTS, takes the form of a mixed-integer program, which has been proven to be NP-hard for general network classes [3; 4]. 
To date, the resolution of the DC-OTS has been approached from two distinct methodological points of view, namely, by means of _exact_ methods and by way of _heuristics_. The former exploit techniques from mixed-integer programming such as bounding, tightening, and the generation of valid cuts to solve the DC-OTS to (certified) global optimality, while the latter seek to quickly identify good solutions of the problem, but potentially forgoing optimality and even at the risk of suggesting infeasible grid topologies. Among the methods that are exact, we highlight the works in [3], [4], [5], and [6]. More specifically, the authors in [3] propose a cycle-based formulation of the DC-OTS problem, which results in a mixed-integer linear program. They prove the NP-hardness of the DC-OTS even if the power grid takes the form of a series-parallel graph with only one generation-demand pair, and derive classes of strong valid inequalities for a relaxation of their formulation that can be separated in polynomial time. In [4], the authors work instead with the mixed-integer linear formulation of the DC-OTS that employs a big-M to model the disjunctive nature of the equation linking the power flow through a switchable line and the voltage angles at the nodes the line connects. This is the formulation of the DC-OTS we also consider in this paper. The big-M must be a valid upper bound of the maximum angle difference when the switchable line is open. In [4], it is proven that determining this maximum is NP-hard and, consequently, they propose to set the big-M to the shortest path between the nodes concerned over a spanning subgraph that is assumed to exist. The authors in [5] conduct a computational study of a mixed-integer linear reformulation of the DC-OTS problem alternative to that considered in [4]. This reformulation makes use of the so-called _power transfer distribution factors_ (PTDFs) and the notion of _flow-cancelling transactions_ to model open lines. They argue that this reformulation comparatively offers significant computational advantages, especially for large systems and when the number of switchable lines is relatively small. Finally, a family of cutting planes for the DC-OTS problem are developed in [6]. These cutting planes are derived from the polyhedral description of the integer hull of a certain constraint set that appears in the DC-OTS problem. Specifically, this constraint set is made up of a nodal power balance equation together with the power flow limits of the associated incident lines. Those of these limits that correspond to switchable lines are multiplied by the respective binary variable. Among the heuristic methods that have been proposed in the technical literature, we can distinguish two main groups. The first group includes the heuristic approaches that do not rely on the solutions of previous instances of the OTS problem. For example, some heuristics trim down the computational time by reducing the number of lines that can be switched off [7; 8; 9]. While these approaches do not reach the maximum cost savings, the reported numerical studies show that the cost increase with respect to the optimal solution is small in most cases. Other related works maintain the original set of switchable lines and determine their on/off status using greedy algorithms [10; 11]. They use dual information of the OPF problem to rank the lines according to the impact of their status on the operational cost. 
Finally, the authors of [12] propose solving the OTS problem in parallel with heuristics that generate good candidate solutions to speed up conventional MIP algorithms. The second group comprises data-based heuristic methods that require information about the optimal solution of past OTS problems. For instance, the authors of [13] use a \(K\)-nearest neighbor strategy to drastically reduce the search space of the integer solution to the DC-OTS problem. In particular, given a collection of past instances of the problem (whose solution is assumed to be known and available), they restrict the search space to the \(K\) integer solutions of those instances which are the closest to the one to be solved in terms of the problem parameters (for example, nodal demands). They then provide as solution to the instance of the DC-OTS problem under consideration the one that results in the lowest cost. This last step requires solving \(K\) linear programs, one per candidate integer solution. Similarly, references [14; 15] present more sophisticated methodologies to learn the status of switchable lines using neural networks. Against this background, in this paper we propose a novel method to address the DC-OTS by exploiting known solutions to _past_ instances of the problem in two distinct, but potentially synergistic ways. First, from these past solutions, we infer those switchable lines that are most likely to be operational (resp. inoperative) in the current instance of the problem (the one we want to solve). Mathematically, this translates into fixing a few binaries to one (resp. zero), an apparently small action that brings, however, substantial benefits in terms of computational speed. Second, beyond the speed-up that one can expect from simply reducing the number of binaries in a MILP, this strategy also allows us to leverage the shortest-path-based argument invoked in [4] to further tighten the big-Ms in the problem formulation, with the consequent extra computational gain. Alternatively, we additionally explore the possibility of directly _inferring_ the big-Ms from the past solutions to the problem instead of resorting to the shortest-path calculation. In any case, the inference of the binaries to be fixed and/or the values of the big-Ms to be used is conducted through a Machine Learning algorithm of the decision-maker's choice. In this paper, we use \(K\)-nearest neighbors because, aside from its obvious, but appealing simplicity, we believe it delivers results that are more than convincing to support the main takeaway messages of this work. Importantly, while our proposal is not endowed with theoretical guarantees of optimality (and thus, belongs to the group of heuristics discussed above), the role that Machine Learning plays in it is supportive rather than surrogative (we still need to solve the MILP problem), which results in significantly lower rates of infeasibility and suboptimality, as demonstrated in the numerical experiments. The remainder of this paper is structured as follows. Section 2 introduces the DC-OTS problem mathematically and discusses how to equivalently reformulate it as a mixed-integer linear program (MILP) through the use of large enough constants (the so-called _big-Ms_). Section 3 describes the different methods we consider in this paper to identify the most cost-efficient grid topology of a power system, including those we propose and those we use for benchmarking. 
A series of numerical experiments run on a 118-bus power system typically used in the context of the DC-OTS problem are presented and discussed in Section 4. Finally, conclusions are duly drawn in Section 5. ## 2 Optimal transmission switching We start this section by introducing the standard and well-known formulation of the _Direct Current Optimal Transmission Switching_ problem (DC-OTS), which will serve us a basis to construct and motivate its mixed-integer reformulation immediately after. Consider a power network consisting of a collection of nodes \(\mathcal{N}\) and transmission lines \(\mathcal{L}\). To lighten the mathematical formulation of the DC-OTS, we assume _w.l.o.g_ that there is one generator and one power load per node \(n\in\mathcal{N}\). The power dispatch of the generator and the power consumed by the power load are denoted by \(p_{n}\) and \(d_{n}\), respectively. Each generator is characterized by a minimum and maximum power output, \(\underline{p}_{n}\) and \(\overline{p}_{n}\), and a marginal production cost \(c_{n}\). We represent the power flow through the line \((n,m)\in\mathcal{L}\) connecting nodes \(n\) and \(m\) by \(f_{nm}\), with \(f_{nm}\in[-\overline{f}_{nm},\overline{f}_{nm}]\). For each node \(n\) we distinguish between the set of transmission lines whose power flow _enters_ the node, \(\mathcal{L}_{n}^{+}\), and the set of transmission lines whose power flow _leaves_ it, \(\mathcal{L}_{n}^{-}\). The power network includes a subset \(\mathcal{L}_{\mathcal{S}}\subseteq\mathcal{L}\) of lines that can be switched on/off. If the line \((n,m)\in\mathcal{L}_{\mathcal{S}}\), its status is determined by a binary variable \(x_{nm}\), which takes value \(1\) if the line is fully operational, and \(0\) when disconnected. In a DC power network, the flow \(f_{nm}\) through an operational line is given by the product of the susceptance of the line, \(b_{nm}\), and the difference of the voltage angles at nodes \(n\) and \(m\), i.e., \(\theta_{n}-\theta_{m}\). We use bold symbols to define the vectors of variables \(\mathbf{p}=[p_{n},n\in\mathcal{N}]\), \(\boldsymbol{\theta}=[\theta_{n},n\in\mathcal{N}]\), \(\mathbf{f}=[f_{nm},(n,m)\in\mathcal{L}]\), and \(\mathbf{x}=[x_{nm},(n,m)\in\mathcal{L}_{\mathcal{S}}]\). With this notation in place, the DC-OTS problem can be formulated as follows: \[\min_{p_{n},f_{nm},\theta_{n},x_{nm}} \sum_{n}c_{n}\,p_{n} \tag{1a}\] \[\mathrm{s.t.} \underline{p}_{n}\leq p_{n}\leq\overline{p}_{n},\quad\forall n\in \mathcal{N}\] (1b) \[\sum_{(n,m)\in\mathcal{L}_{n}^{-}}f_{nm}-\sum_{(n,m)\in\mathcal{L }_{n}^{+}}f_{nm}=p_{n}-d_{n},\quad\forall n\in\mathcal{N}\] (1c) \[f_{nm}=x_{nm}b_{nm}(\theta_{n}-\theta_{m}),\quad\forall(n,m)\in \mathcal{L}_{\mathcal{S}}\] (1d) \[f_{nm}=b_{nm}(\theta_{n}-\theta_{m}),\quad\forall(n,m)\in \mathcal{L}\setminus\mathcal{L}_{\mathcal{S}}\] (1e) \[-x_{nm}\overline{f}_{nm}\leq f_{nm}\leq x_{nm}\overline{f}_{nm}, \quad\forall(n,m)\in\mathcal{L}_{\mathcal{S}}\] (1f) \[-\overline{f}_{nm}\leq f_{nm}\leq\overline{f}_{nm},\quad\forall( n,m)\in\mathcal{L}\setminus\mathcal{L}_{\mathcal{S}}\] (1g) \[x_{nm}\in\{0,1\},\quad\forall(n,m)\in\mathcal{L}_{\mathcal{S}}\] (1h) \[\theta_{1}=0 \tag{1i}\] The objective is to minimize the electricity generation cost, expressed as in (1a). For this, the power system operator essentially decides the lines that are switched off and the power output of generating units, which must lie within the interval \([\underline{p}_{n},\overline{p}_{n}]\), as imposed in (1b). 
The flows through the transmission lines are governed by the so-called _Kirchhoff's laws_, which translate into the nodal power balance equations (1c) and the flow-angle relationship stated in (1d) and (1e). In the case of a switchable line, this relationship must be enforced only when the line is in service. This is why the binary variable \(x_{nm}\) appears in (1d). Naturally, \(x_{nm}=0\) must imply \(f_{nm}=0\). Constraints (1f) and (1g) impose the capacity limits of the switchable and non-switchable lines, respectively. Constraint (1h) states the binary character of variables \(x_{nm}\), while equation (1i) arbitrarily sets one of the nodal angles to zero to avoid solution multiplicity. Problem (1) is a mixed-integer nonlinear programming problem due to the product \(x_{nm}(\theta_{n}-\theta_{m})\) in (1d). This problem has been proven to be NP-hard even when the power network includes a spanning subnetwork connected by non-switchable lines only [4] or takes the form of a series-parallel graph with a single generator/load pair [3]. The disjunctive nature of Equation (1d) allows for a linearization of Problem (1) at the cost of introducing a pair of large enough constants \(\underline{M}_{nm}\), \(\overline{M}_{nm}\) per switchable line [16]. Indeed, Equation (1d) can be replaced by the inequalities \[b_{nm}(\theta_{n}-\theta_{m})-\overline{M}_{nm}(1-x_{nm})\leq f_{nm}\leq b_{nm} (\theta_{n}-\theta_{m})-\underline{M}_{nm}(1-x_{nm}) \tag{2}\] provided that the large constants \(\underline{M}_{nm},\overline{M}_{nm}\) respectively constitute a lower and an upper bound of \(b_{nm}(\theta_{n}-\theta_{m})\) when the line \((n,m)\) is disconnected (\(x_{nm}=0\)), that is, \[\underline{M}_{nm} \leq\underline{M}_{nm}^{\mathrm{OPT}}:=b_{nm}\times\min_{\mathcal{ F}}(\theta_{n}-\theta_{m}) \tag{3a}\] \[\overline{M}_{nm} \geq\overline{M}_{nm}^{\mathrm{OPT}}:=b_{nm}\times\max_{\mathcal{ F}}(\theta_{n}-\theta_{m}) \tag{3b}\] where \(\mathcal{F}:=\{(\mathbf{p},\theta,\mathbf{f},\mathbf{x})\in\mathbb{R}^{2| \mathcal{N}|+|\mathcal{L}|+|\mathcal{L}_{\mathcal{S}}|}\) satisfying (1b), (1c), (1e)-(1i), \(x_{nm}=0\), and (2) for all \((n^{\prime},m^{\prime})\in\mathcal{L}_{\mathcal{S}}\setminus(n,m)\}\). First of all, for (3) to be of any use, \(\underline{M}_{nm}^{\mathrm{OPT}}\) and \(\overline{M}_{nm}^{\mathrm{OPT}}\) must be finite, as illustrated in [4]. This may not be the case in power systems where switching off lines can result in isolated subnetworks. However, in practice, islanding in power grids is to be avoided in general for many reasons other than the minimization of the operational cost (e.g., due to reliability and security standards). Consequently, in what follows, we assume that the set of switchable lines \(\mathcal{L}_{\mathcal{S}}\) is such that the connectivity of the whole power network is always guaranteed. Unfortunately, the authors in [4] show that, even when \(\underline{M}_{nm}^{\mathrm{OPT}}\) and \(\overline{M}_{nm}^{\mathrm{OPT}}\) are finite, computing them is as hard as solving the original DC-OTS problem. Therefore, we are obliged to be content with a lower and an upper bound. 
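To make the disjunctive big-M reformulation in (2) concrete, the following sketch assembles a toy DC-OTS model with a single switchable line using the gurobipy interface. The three-bus data and the big-M value are invented placeholders (in practice the bound would come from the shortest-path rule discussed next, and generator lower bounds are simply set to zero), so this is an illustrative sketch rather than the model or data used in the paper.

```python
import gurobipy as gp
from gurobipy import GRB

nodes = [1, 2, 3]
lines = [(1, 2), (2, 3), (1, 3)]                  # (1, 3) is the only switchable line
switchable = [(1, 3)]
b = {(1, 2): 10.0, (2, 3): 10.0, (1, 3): 5.0}     # susceptances (placeholder values)
fmax = {(1, 2): 1.0, (2, 3): 1.0, (1, 3): 0.5}    # line capacities
c = {1: 10.0, 2: 30.0, 3: 50.0}                   # marginal costs
pmax = {1: 2.0, 2: 1.0, 3: 1.0}                   # generator upper bounds
d = {1: 0.0, 2: 0.5, 3: 1.2}                      # nodal demands
M = 5.0                                           # hand-picked big-M placeholder

model = gp.Model("dc_ots_bigM")
p = model.addVars(nodes, lb=0.0, name="p")
theta = model.addVars(nodes, lb=-GRB.INFINITY, name="theta")
f = model.addVars(lines, lb=-GRB.INFINITY, name="f")
x = model.addVars(switchable, vtype=GRB.BINARY, name="x")

model.setObjective(gp.quicksum(c[n] * p[n] for n in nodes), GRB.MINIMIZE)
model.addConstrs((p[n] <= pmax[n] for n in nodes), name="gen_ub")
# Nodal balance (1c): flow leaving minus flow entering equals the net injection.
model.addConstrs((gp.quicksum(f[l] for l in lines if l[0] == n)
                  - gp.quicksum(f[l] for l in lines if l[1] == n)
                  == p[n] - d[n] for n in nodes), name="balance")
for (i, j) in lines:
    if (i, j) in switchable:
        # Big-M disjunction (2) plus the switch-dependent limits (1f).
        model.addConstr(f[i, j] <= b[i, j] * (theta[i] - theta[j]) + M * (1 - x[i, j]))
        model.addConstr(f[i, j] >= b[i, j] * (theta[i] - theta[j]) - M * (1 - x[i, j]))
        model.addConstr(f[i, j] <= fmax[i, j] * x[i, j])
        model.addConstr(f[i, j] >= -fmax[i, j] * x[i, j])
    else:
        # Exact flow-angle relation (1e) and limits (1g) for non-switchable lines.
        model.addConstr(f[i, j] == b[i, j] * (theta[i] - theta[j]))
        model.addConstr(f[i, j] <= fmax[i, j])
        model.addConstr(f[i, j] >= -fmax[i, j])
model.addConstr(theta[1] == 0.0)                  # reference angle (1i)

model.optimize()
if model.Status == GRB.OPTIMAL:
    print("cost:", model.ObjVal, "switch status:", {l: x[l].X for l in switchable})
```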
The choice of these bounds, or rather, of the large constants \(\underline{M}_{nm},\overline{M}_{nm}\) (for all \((n,m)\in\mathcal{L}_{\mathcal{S}}\)) is of utmost importance, because it has a major impact on the relaxation bound of the mixed-integer _linear_ program that results from replacing (1d) with the inequalities (2), that is, \[\min_{p_{n},f_{nm},\theta_{n},x_{nm}} \sum_{n}c_{n}\,p_{n}\] (4a) s.t. \[\underline{p}_{n}\leq p_{n}\leq\overline{p}_{n},\quad\forall n\in \mathcal{N} \tag{4b}\] \[\sum_{(n,m)\in\mathcal{L}_{n}^{-}}f_{nm}-\sum_{(n,m)\in\mathcal{ L}_{n}^{+}}f_{nm}=p_{n}-d_{n},\quad\forall n\in\mathcal{N}\] (4c) \[b_{nm}(\theta_{n}-\theta_{m})-\overline{M}_{nm}(1-x_{nm})\leq f_ {nm},\quad\forall(n,m)\in\mathcal{L}_{\mathcal{S}}\] (4d) \[f_{nm}\leq b_{nm}(\theta_{n}-\theta_{m})-\underline{M}_{nm}(1-x_{ nm}),\quad\forall(n,m)\in\mathcal{L}_{\mathcal{S}}\] (4e) \[f_{nm}=b_{nm}(\theta_{n}-\theta_{m}),\quad\forall(n,m)\in \mathcal{L}\setminus\mathcal{L}_{\mathcal{S}}\] (4f) \[-x_{nm}\overline{f}_{nm}\leq f_{nm}\leq x_{nm}\overline{f}_{nm}, \quad\forall(n,m)\in\mathcal{L}_{\mathcal{S}} \tag{4g}\] \[-\overline{f}_{nm}\leq f_{nm}\leq\overline{f}_{nm},\quad\forall(n,m) \in\mathcal{L}\setminus\mathcal{L}_{\mathcal{S}} \tag{4h}\] \[x_{nm}\in\{0,1\},\quad\forall(n,m)\in\mathcal{L}_{\mathcal{S}}\] (4i) \[\theta_{1}=0 \tag{4j}\] Tighter constants \(\underline{M}_{nm},\overline{M}_{nm}\) lead to stronger linear relaxations of (4), which, in turn, is expected to impact positively on the performance of the branch-and-cut algorithm used to solve it. On the assumption that the power network includes a spanning tree comprising non-switchable lines, the authors in [4] propose the following symmetric bound: \[-\underline{M}_{nm}=\overline{M}_{nm}=b_{nm}\sum_{(k,l)\in\mathrm{SP}_{nm}} \frac{\overline{f}_{kl}}{b_{kl}},\quad\forall(n,m)\in\mathcal{L}_{\mathcal{S}} \tag{5}\] where \(\mathrm{SP}_{nm}\) is the shortest path between nodes \(n\) and \(m\) through said spanning tree. This symmetric bound is inexpensive to compute using Dijkstra's algorithm [17]. In this paper, we propose and test simple, but effective data-driven scheme based on nearest neighbors to estimate lower bounds on \(\underline{M}_{nm}^{\mathrm{OPT}}\) and upper bounds on \(\overline{M}_{nm}^{\mathrm{OPT}}\). This scheme is also used to fix some of the binaries \(x_{nm}\) in (4). While the inherent sampling error of the proposed methodology precludes optimality guarantees, our numerical experiments show that it is able to identify optimal or nearly-optimal solutions to the DC-OTS problem very fast. ## 3 Solution methods In this section we present the different methods we consider to solve the DC-OTS problem. First, we describe the exact method proposed in [4], which we use as a benchmark. Second, we explain a direct learning-based approach that utilizes the \(K\) nearest neighbors technique and the learning-based heuristic approach investigated in [18]. Finally, we introduce the data-based methodologies proposed in this paper. Suppose that the DC-OTS problem (4) has been solved using the big-M values suggested in [4] for different instances to form a training set \(\mathcal{T}=\{(\mathbf{d}^{t},\mathbf{x}^{t},\boldsymbol{\theta}^{t}),t=1, \ldots,|\mathcal{T}|\}\), where, the symbol \(|\mathcal{T}|\) indicates the cardinal of set \(\mathcal{T}\). 
For each instance, \(t\), \(\mathbf{d}^{t}=[d_{n}^{t},n\in\mathcal{N}]\) denotes the vector of nodal loads, \(\mathbf{x}^{t}=[x_{nm}^{t},(n,m)\in\mathcal{L}_{\mathcal{S}}]\) is the vector of optimal binary variables, which determine whether line \((n,m)\) in instance \(t\) is connected or not; and \(\boldsymbol{\theta}^{t}=[\theta_{n}^{t},n\in\mathcal{N}]\) is the vector of optimal voltage angles. For notation purposes, we use \(C(\mathbf{d}^{t},\mathbf{x}^{t})\) to denote the value of the objective function (1a) when model (1) is solved for demand values \(\mathbf{d}^{t}\) and the binary variables fixed to \(\mathbf{x}^{t}\). Thus, evaluating this function requires to solve a linear programming problem. If this linear problem is infeasible, then \(C(\mathbf{d}^{t},\mathbf{x}^{t})=\infty\). In what follows, we present different strategies to solve the DC-OTS problem for an unseen test instance \(\hat{t}\) with demand values \(\mathbf{d}^{t}\). The goal is to employ the information from the training set, \(\mathcal{T}\), to reduce the computational burden of solving the DC-OTS reformulation (4) for the test instance \(\hat{t}\). Note that depending on the strategy that is applied, the response variable of the test instance to be learned can be \(\mathbf{x}^{\hat{t}}\), \(\boldsymbol{\theta}^{\hat{t}}\) or the tuple \((\mathbf{x}^{\hat{t}},\boldsymbol{\theta}^{\hat{t}})\). ### Exact benchmark approach In the benchmark approach (BEN) the optimal solution of the test DC-OTS problem is obtained using the proposal in [4]. Particularly, problem (4) is solved using the big-M values computed according to Equation (5). This strategy is an exact approach that does not make use of previously solved instances of the problem, but guarantees that its global optimal solution is eventually retrieved. Nevertheless, the computational time employed by this approach may be extremely high. Algorithm 1 shows a detailed description of this approach. ``` Input: load vector for test instance \(\hat{t}\), \(\mathbf{d}^{\hat{t}}\). 1. Determine the shortest path \(\text{SP}_{nm}\) between each pair of nodes \((n,m)\in\mathcal{L}_{S}\) through the spanning tree of non-switchable lines. 2. Compute \(\underline{M}_{nm}\) and \(\overline{M}_{nm}\) for each \((n,m)\in\mathcal{L}_{S}\) using Equation (5). 3. Solve Problem (4) for the load vector \(\mathbf{d}^{\hat{t}}\) and using the big-M values computed in the previous step. Output: Optimal network configuration \(\mathbf{x}^{\hat{t}}\). ``` **Algorithm 1** Benchmark method, BEN ### Existing learning-based approaches In this subsection we present two existing learning approaches based on the \(K\) nearest neighbors technique [18]. The first approach is a pure machine-learning strategy that directly predicts the binary variables of the test instance using the information of the \(K\) closest training data. Such closeness is measured in terms of the \(\ell_{2}\) distance among the load values of the training and test points, even though other distance measures can be applied. The set of closest instances is denoted as \(\mathcal{T}_{K}\). We denote this method by \(K\)nn-D since it _directly_ predicts the value of all binary variables from the data. In the particular case of the DC-OTS problem, we adapt the \(K\)nn strategy as follows: for a fixed number of neighbors \(K\), we fix the binary variables of the test problem (1) to the rounded mean of the decision binary variables of such \(K\) nearest neighbors. 
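As an illustration of this rounded-mean rule, the following Python sketch implements the neighbour search and the rounding step on invented toy data (the array shapes and values are placeholders, not the 118-bus dataset used later):

```python
import numpy as np

def knn_d_predict(D_train: np.ndarray, X_train: np.ndarray,
                  d_test: np.ndarray, K: int) -> np.ndarray:
    """Knn-D style prediction of the line statuses: take the K training instances
    whose load vectors are closest in l2 norm and round the mean of their binary
    solutions to the nearest integer."""
    dist = np.linalg.norm(D_train - d_test, axis=1)         # l2 distances to the test loads
    idx = np.argsort(dist)[:K]                              # K closest training instances
    return np.rint(X_train[idx].mean(axis=0)).astype(int)   # component-wise rounded mean

# Toy data: 6 past instances, 3 nodal loads, 4 switchable lines (all values invented).
rng = np.random.default_rng(0)
D_train = rng.uniform(0.9, 1.1, size=(6, 3))
X_train = rng.integers(0, 2, size=(6, 4))
print(knn_d_predict(D_train, X_train, d_test=np.array([1.0, 0.95, 1.05]), K=3))
```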
Once all binary variables are fixed, model (1) becomes a linear programming problem that can be rapidly solved. Algorithm 2 shows a detailed explanation of the procedure. Note that in this strategy we only need the information about the load vector and the optimal binary variables in the training data, i.e., we only need \(\{(\mathbf{d}^{t},\mathbf{x}^{t}),\,t=1,\ldots,|\mathcal{T}|\}\). This approach is very simple and fast. However, fixing the binary variables using a rounding procedure may yield a non-negligible number of infeasible and suboptimal problems. ``` 0: number of neighbors, \(K\); training set, \(\mathcal{T}=\{(\mathbf{d}^{t},\mathbf{x}^{t}),\,t=1,\ldots,|\mathcal{T}|\}\); and load vector for test instance \(\hat{t}\), \(\mathbf{d}^{\hat{t}}\). 1. Compute the \(\ell_{2}\) distances between the loads of the training data \(t=1,\ldots,|\mathcal{T}|\) and that of the test point \(\hat{t}\). In other words, compute \(\|\mathbf{d}^{\hat{t}}-\mathbf{d}^{t}\|_{2}\) for \(t=1,\ldots,|\mathcal{T}|\). 2. Select the closest \(K\) neighbors according to the distances computed in 1), and save them in the set \(\mathcal{T}_{K}\). 3. Compute the binary variables \(\mathbf{x}^{\hat{t}}\) as the mean of the binary decision values from \(\mathcal{T}_{K}\), rounded to the closest integer. That is to say: \[\mathbf{x}^{\hat{t}}=\left\lceil\frac{1}{|\mathcal{T}_{K}|}\sum_{t\in \mathcal{T}_{K}}\mathbf{x}^{t}\right\rfloor\] where \(\lceil\mathbf{x}\rfloor\) denotes the component-wise nearest integer function. Output: Network configuration \(\mathbf{x}^{\hat{t}}\). ``` **Algorithm 2**\(K\)nn-D The second learning-based methodology explained in this subsection is proposed in [13] and also employs the \(K\)nn technique. As in the previous strategy, the authors assume that the set \(\{(\mathbf{d}^{t},\mathbf{x}^{t}),\,t=1,\ldots,|\mathcal{T}|\}\) is given. In short, their proposal works as follows: for a fixed value of \(K\), the \(K\) closest instances to the test point are saved in the set \(\mathcal{T}_{K}\). Then, we evaluate the function \(C(\mathbf{d}^{\hat{t}},\mathbf{x}^{t})\) for each \(t\in\mathcal{T}_{K}\) by solving \(K\) linear problems. The optimal binary variables for the test instance \(\mathbf{x}^{\hat{t}}\) are set to those \(\mathbf{x}^{t}\) that lead to the lowest value of \(C(\mathbf{d}^{\hat{t}},\mathbf{x}^{t})\). This approach is denoted as \(K\)nn-LP and more details about it are provided in Algorithm 3. Note that the value of \(K\) strongly affects the speed of the algorithm as well as the number of suboptimal or infeasible problems. Larger values of \(K\) imply taking into account more training points to get the estimation of the test response. As a consequence, a larger number of LPs should be solved, and the computational burden increases. However, the probability of having suboptimal or, even worse, infeasible solutions is reduced. On the contrary, lower values of \(K\) diminish the computational time of the procedure but increase the risk of obtaining suboptimal or infeasible solutions. ``` 0: number of neighbors, \(K\); training set, \(\mathcal{T}=\{(\mathbf{d}^{t},\mathbf{x}^{t}),\,t=1,\ldots,|\mathcal{T}|\}\); and load vector for test instance \(\hat{t}\), \(\mathbf{d}^{\hat{t}}\). 1. Steps 1) and 2) of Algorithm 2. 2. Evaluate \(C(\mathbf{d}^{\hat{t}},\mathbf{x}^{t})\) for each instance \(t\in\mathcal{T}_{K}\). 3. Select \(\tilde{t}=\arg\min_{t\in\mathcal{T}_{K}}C(\mathbf{d}^{\hat{t}},\mathbf{x}^{t})\). 4. 
Set \(\mathbf{x}^{\hat{t}}=\mathbf{x}^{\tilde{t}}\). Output: Network configuration \(\mathbf{x}^{\hat{t}}\). ``` **Algorithm 3**\(K\)nn-LP, [13] ### Proposed learning-based approaches In this subsection, we propose two improved methodologies which combine the benefits of exact and learning methods. Both approaches start by finding the \(K\) closest training points to the test instance \(\hat{t}\), to be saved in \(\mathcal{T}_{K}\), and fixing those binary variables that reach the same value for all nearest neighbors according to a _unanimous vote_. The two proposed approaches also find, in a different fashion, lower values of the big-Ms than those computed in [4]. Since some binary variables may have been fixed to one thanks to the neighbors' information, the first approach we propose consists in recomputing the shortest paths and the corresponding big-M values using (5). In contrast, the second methodology proposed in this paper directly sets the big-M values to the maximum and minimum values of the angle differences observed in the closest DC-OTS instances. Either way, smaller big-Ms are obtained, and hence, the associated feasible region of the DC-OTS problem is tighter. As a consequence, we solve a single MILP with a tighter feasible region and a smaller number of binary variables. More specifically, in the first proposed approach (denoted as \(K\)nn-BM) the binary variables of the test instance are set to 1 (resp. to 0) if all the training instances in \(\mathcal{T}_{K}\) concur that the value should be 1 (resp. 0). On the other hand, for those binary variables that are not fixed, the corresponding big-M values are updated using the information of the previously fixed variables. In particular, these fixed binaries are used to recompute the shortest path that determines the big-M values in Equation (5). This strategy relies on the unanimity of all the nearest neighbors and therefore, this learning-based approach is expected to be quite conservative, especially for high values of \(K\). In order to further assess the computational savings yielded by this approach we also investigate two variations. For instance, we denote by \(K\)nn-B the approach in which binary variables are fixed but big-M values are computed using only the information from the original spanning tree. We also consider the \(K\)nn-M approach that does not fix any binary decision variable but only uses the information of the closest neighbors to recompute the shortest paths and update the big-M values with Equation (5). By comparing the computational burden of these three approaches we can analyze whether the numerical improvements are caused by the lower number of binary variables or the tighter values of the big-M parameters. Algorithm 4 shows a detailed description of the methods \(K\)nn-B, \(K\)nn-M and \(K\)nn-BM, respectively. In particular, we highlight in the algorithm the steps that should be ignored when approaches \(K\)nn-B and \(K\)nn-M are considered. **Input:** number of neighbors, \(K\); training set, \(\mathcal{T}=\{(\mathbf{d}^{t},\mathbf{x}^{t}),\,t=1,\ldots,|\mathcal{T}|\}\); and load vector for test instance \(\hat{t}\), \(\mathbf{d}^{\hat{t}}\). 1. Compute big-M values using the information from the original spanning tree as shown in Equation (5). 2. Steps 1) and 2) of Algorithm 2. 3. 
Fix the binary variables for the test instance taking into account the unanimity of the closest training instances in \(\mathcal{T}_{K}\) by adding the following constraint to model (4): \[\left\lfloor\frac{1}{|\mathcal{T}_{K}|}\sum_{t\in\mathcal{T}_{K}}\mathbf{x}^{t}\right\rfloor\leq\mathbf{x}^{\hat{t}}\leq\left\lceil\frac{1}{|\mathcal{T}_{K}|}\sum_{t\in\mathcal{T}_{K}}\mathbf{x}^{t}\right\rceil\] Note that this step is ignored in method \(K\)nn-M. 4. Using the information of the binary variables obtained in the previous step, compute the updated shortest path by running Dijkstra's algorithm. 5. Update big-M values using Equation (5) and taking into account the updated shortest path. Remember that steps 4) and 5) of this algorithm are omitted when the strategy \(K\)nn-B is run. 6. Solve model (4). **Output:** Network configuration \(\mathbf{x}^{\hat{t}}\). **Algorithm 4**\(K\)nn-BM, \(K\)nn-B and \(K\)nn-M The value of \(K\) also plays an important role in these approaches. Low values of \(K\) increase the chances of unanimous consensus among the nearest neighbors and therefore, a higher number of binary variables are expected to be fixed, and tighter big-M values are obtained. This way, the computational burden of the OTS problem is reduced at the expense of increasing the risk of obtaining infeasible or suboptimal problems. In the extreme case, if \(K=1\), all binary variables are fixed to the values of the closest instance of the training set. On the contrary, large values of \(K\) increase the computational burden but the resulting problems have a high chance of being feasible. In the extreme case, if the whole training set is considered, very few binary variables are expected to be fixed and the computational savings are reduced. The three methodologies presented above compute the big-M values using past observed data through the shortest path algorithm. However, as can be derived from Equation (3), the values \(\overline{M}_{nm}\) and \(\underline{M}_{nm}\) for a switchable line are just the maximum and minimum values of the difference between the voltage angles at nodes \(n\) and \(m\) multiplied by \(b_{nm}\). Therefore, following this idea, the second data-driven approach that we propose (denoted as \(K\)nn-B\(\widehat{M}\)) estimates the big-M values using the information of \((\theta_{n}^{t}-\theta_{m}^{t})\), for all \(t\in\mathcal{T}\) such that \(x_{nm}^{t}=0\). Note that the accent " \(\widehat{\ }\)" above the \(M\) indicates that the big-M values have been estimated using the information from the observed voltage angles. This strategy is riskier than the one used in \(K\)nn-BM since it leads to much tighter feasible regions, which significantly reduces the computational burden of solving the OTS problem, but also increases the chances of yielding infeasible problems. For the sake of comparison, we also consider the approach All-\(\widehat{M}\) in which no binary variables are fixed and big-M values are set using the observed angle differences. More details about the approaches \(K\)nn-B\(\widehat{M}\) and All-\(\widehat{M}\) are provided in Algorithm 5. It is worth noticing that while the big-M values computed by (5) are symmetric, those derived by Algorithm 5 are not. To sum up, Table 1 provides a brief description of the different methods explained throughout Section 3. The first column of the table includes the name of each strategy. The second column shows whether the final problem to be solved is a linear program (LP) or a mixed-integer linear program (MILP). 
In the third column, the total number of problems to be solved is indicated. Column four shows the number of binary decision variables of the MILPs to be solved. Particularly, _original_ means that the number of variables is exactly the same as the one from the original OTS formulation (4). In contrast, _reduced_ means that the number of binary variables of the resulting MILP has been reduced compared to the original formulation. Finally, the last column indicates how the big-M values have been computed. If _shortest (spanning)_ is written, then we indicate that the bounds are computed by means of the shortest path method and only using the information from the original spanning subgraph. On the contrary, the choice _shortest (update)_ means that the shortest paths needed to compute the big-M values have been updated with the information provided by the closest neighbors. Finally, the word _historic angles_ implies that the bounds are computed using the voltage angle information of previously solved instances. ## 4 Numerical simulations In this section, we present the computational results of the different methodologies discussed in Section 3 for a realistic network. In particular, we compare all approaches using a 118-bus network that includes 186 lines [19]. This network size is sufficiently substantial to render the instances nontrivial for current algorithms, yet not so large as to make them computationally intractable. Indeed, this is the most commonly used network to test OTS solving strategies in the literature [2; 3; 4; 6; 13]. As justified in Section 2, we consider a fixed connected spanning subgraph of 117 lines, while the remaining 69 lines can be switched on or off to minimize the operation cost. The spanning subgraph has been chosen in order to obtain sufficiently challenging problems. For this network, we generate 500 different instances of the OTS problem that differ on the nodal demand \(d_{n}\), which is randomly sampled using independent uniform distributions in the range \([0.9\widehat{d}_{n},1.1\widehat{d}_{n}]\), where \(\widehat{d}_{n}\) is the baseline demand. The database information can be downloaded from [20]. We use a leave-one-out cross-validation technique under which all the available data except for one data point is used as the training set, and the left-out data point is used as the test set. Consequently, the number of nearest neighbors \(K\) ranges from 1 to 499. This process is repeated for all data points and the resulting performance metrics are averaged to get an estimate of the model's generalization performance. All optimization problems have been solved using GUROBI 9.1.2 [21] on a Linux-based server with CPUs clocking at 2.6 GHz, 1 thread and 8 GB of RAM. In all cases, the optimality GAP has been set to 0.01% and the time limit to 1 hour. To illustrate the economic advantages of disconnecting some lines, Figure 1 depicts an histogram of the relative difference between the DC-OTS cost if model (4) is solved by the benchmark approach described in Section 3.1, and the cost obtained if all the 186 lines are connected. This second cost is computed by fixing binary variables \(x_{nm}\) to one and solving model (1) as a linear program. Figure 1 does not include the instances for which this linear problem is infeasible. As observed, the cost savings are significant in most instances, and in the most favorable cases it reaches 15%. The average cost savings for this particular network and the 500 instances considered is 13.2%. 
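The sampling and leave-one-out protocol described above can be sketched in a few lines of Python; the baseline demand below is a random placeholder rather than the actual 118-bus data available from [20]:

```python
import numpy as np

# 500 demand vectors drawn uniformly in [0.9*d_hat, 1.1*d_hat] and a leave-one-out
# split in which each instance is in turn the test point and the remaining 499
# form the training set used by the Knn-based methods.
rng = np.random.default_rng(seed=1)
n_nodes, n_instances = 118, 500
d_hat = rng.uniform(10.0, 100.0, size=n_nodes)            # placeholder baseline demand
D = d_hat * rng.uniform(0.9, 1.1, size=(n_instances, n_nodes))

for t_hat in range(n_instances):
    train_idx = np.delete(np.arange(n_instances), t_hat)  # all but the test instance
    dist = np.linalg.norm(D[train_idx] - D[t_hat], axis=1)
    neighbours = train_idx[np.argsort(dist)]               # K can range from 1 to 499
    # ... solve the chosen Knn-based variant for instance t_hat using neighbours[:K] ...
```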
On the other hand, solving model (4) is computationally hard and to prove it, Figure 2 plots the number of problems solved as a function of the computational time. For illustration purposes, the left plot shows the 439 problems solved in less than 100 seconds ("easy" instances) and the right plot the remaining "hard" instances that require a longer time. The average time of all instances is 145s, while the average time of the hard instances amounts to 1085 seconds, which demonstrates the difficulty of solving model (4) to certified optimality. In addition, some instances are not solved to optimality (gap lower than 0.01%) after one hour even though model (4) "only" includes 69 binary variables associated to the 69 switchable lines. Throughout this section, we compare the different methodologies with the best solution found in one hour by the BEN approach, although it is not the _true_ optimum, i.e., it may have a gap greater than 0.01%. Next, we discuss the results provided by the \(K\)nn-D approach described in Section 3.2, where the binary variables are just fixed to the values predicted by the nearest neighbor technique. Table 2 collates, for different number of neighbors \(K\), the number of instances in which \(K\)nn-D delivers the same solution obtained by the benchmark BEN (# opt), the number of instances with a suboptimal solution (# sub), the maximum relative gap with respect to the \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & LP/MILP & \# problems & \# binary & big-M computation \\ \hline BEN & MILP & 1 & original & shortest (spanning) \\ \(K\)nn-D & - & - & - & - \\ \(K\)nn-LP & LP & K & - & - \\ \(K\)nn-B & MILP & 1 & reduced & shortest (spanning) \\ \(K\)nn-M & MILP & 1 & original & shortest (update) \\ \(K\)nn-BM & MILP & 1 & reduced & shortest (update) \\ \(K\)nn-B\(\widehat{M}\) & MILP & 1 & reduced & historic angles \\ All-\(\widehat{M}\) & MILP & 1 & original & historic angles \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the methods explained in Section 3 benchmark approach (gap-max) and the average computational time. Unsurprisingly, this approach is extremely fast and the computational time is just negligible. On the other hand, the vast majority of the instances only attain suboptimal solutions for any number of neighbors \(K\), and the maximum gap is above 8% in all cases. These results illustrate that the use of machine learning approaches to directly predict the value of the binary variables of mixed-integer problems is likely to be extremely fast but potentially suboptimal. Now we run similar experiments using the \(K\)nn-LP approach described in Section 3.2 and proposed in [13]. The corresponding results are presented in Table 3. Logically, the \(K\)nn-LP solves a higher number of LP problems for different combinations of the binary variables and therefore, some instances are solved to optimality, specially for large values of \(K\). Although this methodology could be parallelized, Table 3 includes the sum of the computational times required to solve all the LP problems and therefore, this time increases with \(K\). Remark that the time needed to find the nearest neighbors is negligible for all values of \(K\). Although the computational burden is insignificant if compared Figure 1: DC-OTS cost savings distribution Figure 2: Computational burden of the BEN approach with the benchmark, the number of suboptimal cases and maximum gap are still considerable. 
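For comparison, the \(K\)nn-LP strategy can be sketched in a few lines: each of the \(K\) neighboring switching configurations is turned into a linear program by fixing the binaries, and the cheapest feasible one is kept. This is only a schematic rendering of that idea; the helper `solve_dc_opf_lp` is a placeholder for a routine that builds and solves model (4) with fixed \(x_{nm}\), and is an assumption of this sketch rather than an interface from the paper.

```python
import numpy as np

def knn_lp(demand_new, demands, statuses, solve_dc_opf_lp, K=50):
    """Sketch of the K-nn LP strategy: reuse the switching configurations of
    the K closest solved instances, solve one LP per distinct configuration,
    and keep the cheapest feasible one.

    `solve_dc_opf_lp(demand, status)` is an assumed helper that fixes the
    binaries of model (4) to `status`, solves the remaining LP and returns
    a pair (cost, feasible).
    """
    idx = np.argsort(np.linalg.norm(demands - demand_new, axis=1))[:K]
    candidates = {tuple(statuses[i]) for i in idx}        # drop duplicates

    best_cost, best_status = np.inf, None
    for status in candidates:                             # embarrassingly parallel
        cost, feasible = solve_dc_opf_lp(demand_new, np.array(status))
        if feasible and cost < best_cost:
            best_cost, best_status = cost, status
    return best_status, best_cost
```

If every candidate configuration turned out infeasible, a natural fallback would be the full MILP (4); and, as the table above shows, even a feasible best candidate may still be suboptimal.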
We continue this numerical study by comparing approaches \(K\)nn-B, \(K\)nn-M y \(K\)nn-BM discussed in Section 3.3. For simplicity, Table 4 provides the results for \(K=50\) (\(10\%\) of the training data). Unlike \(K\)nn-D and \(K\)nn-LP, these three approaches lead to the optimal solution for all instances, which confirms their robustness for a sufficiently high number of neighbors. Therefore, although these approaches require a higher computational burden than \(K\)nn and \(K\)nn-LP, they still involve significant computational savings with respect to the benchmark model, while reducing the probabilities of yielding suboptimal solutions. Table 4 also shows that approaches \(K\)nn-B, \(K\)nn-M and \(K\)nn-BM differ in terms of their computational burden. The \(K\)nn-M approach reports higher times than \(K\)nn-B, which allows us to conclude that fixing some binary variables involves higher computational savings than tightening the big-M constants. Notwithstanding this, the highest computational gains are obtained if both effects are combined under the \(K\)nn-BM approach. Figure 3 plots the number of problems solved as a function of time. In the left subplot the x-axis ranges from 0 to 100s, while in the right subplot the x-axis goes from 100s to 3600s. In the left subplot we can observe that approaches \(K\)nn-B and \(K\)nn-BM are able to solve most of the instances in less than 100s, while approach \(K\)nn-M has a similar performance as the bencharmk. In the right subplot we see that the hardest instance solved by \(K\)nn-B and \(K\)nn-BM requires 1645s and 296s, \begin{table} \begin{tabular}{c c c c c} \hline \hline \(K\) & \# opt & \# sub & gap-max & time (s) \\ \hline 5 & 2 & 498 & 13.78 & 0.0 \\ 10 & 0 & 500 & 16.40 & 0.0 \\ 20 & 0 & 500 & 13.84 & 0.0 \\ 50 & 0 & 500 & 14.13 & 0.0 \\ 100 & 0 & 500 & 12.53 & 0.0 \\ 200 & 0 & 500 & 12.28 & 0.0 \\ 499 & 0 & 500 & 8.38 & 0.0 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of the \(K\)nn-D approach \begin{table} \begin{tabular}{c c c c c} \hline \hline \(K\) & \# opt & \# sub & gap-max & time (s) \\ \hline 5 & 10 & 490 & 3.59 & 0.00 \\ 10 & 19 & 481 & 3.56 & 0.01 \\ 20 & 24 & 476 & 1.61 & 0.02 \\ 50 & 51 & 449 & 1.06 & 0.04 \\ 100 & 77 & 423 & 0.71 & 0.08 \\ 200 & 104 & 396 & 0.71 & 0.16 \\ 499 & 127 & 273 & 0.71 & 0.39 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance of the \(K\)nn-LP approach respectively. On the contrary, although \(K\)nn-M outperforms the benchmark, this approach is not able to solve all instances in less than one hour. It is also relevant to point out that the higher the value of \(K\), the lower the chances of achieving unanimity on the status of switchable lines, and thus, the lower the number of binary variables that are fixed in the OTS problem. To illustrate this fact, Table 5 collects the results of approach \(K\)nn-BM for different values of \(K\) including the average number of binary variables fixed to one or zero using the training data (# bin). For \(K=5\), 28 binary variables (out of the original 69 binary variables) are fixed in average, then leading to low computational times but a larger number of suboptimal instances. For \(K=499\), only 8 binary variables are fixed (in average), no suboptimal solutions are obtained, but the computational time is increased. Figure 4 also illustrates the impact of \(K\) on the performance of the \(K\)nn-BM approach. Note that setting \(K\) equal to 5 yields the lowest computational times and all instances are solved in less than 100s. 
However, this method leads to 47 suboptimal solutions. On the other hand, if \(K\) is set to 499, the maximum time reaches 400s but all instances are solved to optimality. Next, we analyze the results of the two remaining approaches: the \(K\)nn-BM approach that uses the nearest neighbors to fix some binary variables and all the elements in the training to learn the big-M values as explained in Section 3.3, and the All-M approach described in the same section. The \begin{table} \begin{tabular}{l l l l l} \hline \hline & \# opt & \# sub & gap-max & time (s) \\ \hline \(K\)nn-B & 500 & 0 & - & 16.39 \\ \(K\)nn-M & 500 & 0 & - & 109.95 \\ \(K\)nn-BM & 500 & 0 & - & 12.33 \\ BEN & 500 & 0 & - & 145.44 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance of approaches \(K\)nn-B, \(K\)nn-M, \(K\)nn-BM for \(K\)=50 Figure 3: Computational burden of \(K\)nn-B, \(K\)nn-M, \(K\)nn-BM for \(K\)=50 results of these two methods are provided in Table 6 and allow us to draw some interesting conclusions. First, both approaches lead to suboptimal solutions for some instances. This is understandable since, as explained in Section 3.3, these methods set the big-M constants fully relying on the maximum angle difference revealed in the training set. Therefore, if the training set does not include an instance in which the actual maximum angle difference realizes, then the learned values of the big-M may leave the optimal solution out of the feasible region. In other words, while this strategy usually leads to very tight big-M values, it also increases the probabilities of having suboptimal or even infeasible solutions. This strategy is substantially different from approaches \(K\)nn-M and \(K\)nn-BM that learn shorter paths of connected lines based on the optimal solution of the OTS problem for the training data and recompute the big-M constants using (5). Since shorter paths are only updated under the unanimity of the nearest neighbors, this strategy leads to more conservative big-M values and, consequently, larger feasibility regions and computational times. These facts are confirmed by comparing Tables 5 and 6. For instance, for \(K=50\), \(K\)nn-BM solves all instances to optimality and takes 12.33s in average, the \(K\)nn-BM yields five suboptimal solutions but the average computational times is reduced to 0.7s only. The third relevant fact arises from \begin{table} \begin{tabular}{c c c c c c} \hline \hline \(K\) & \# bin & \# opt & \# sub & gap-max & time (s) \\ \hline 5 & 28.31 & 453 & 47 & 1.92 & 2.41 \\ 10 & 21.32 & 486 & 14 & 0.92 & 6.98 \\ 20 & 18.19 & 499 & 1 & 0.10 & 9.56 \\ 50 & 15.65 & 500 & 0 & - & 12.33 \\ 100 & 13.61 & 500 & 0 & - & 15.27 \\ 200 & 11.29 & 500 & 0 & - & 16.65 \\ 499 & 8.00 & 500 & 0 & - & 16.46 \\ \hline \hline \end{tabular} \end{table} Table 5: Impact of \(K\) on the performance of \(K\)nn-BM Figure 4: Impact of \(K\) on the computational burden of \(K\)nn-BM the comparison of the average computational times of the two approaches in Table 6. As observed, these times are particularly similar for all values of \(K\). This leads us to conclude that the obtained big-M constants are so tight that fixing some binary variables does not have a significant impact of the computational burden. For completeness, Figure 5 compares the number of problems solved by \(K\)nn-B\(\widehat{\mathrm{M}}\) for 50 neighbors and All-\(\widehat{\mathrm{M}}\) with the benchmark. 
Notice that these two methodologies are able to solve most instances in less than 5 seconds, while only 250 instances are solved by the benchmark in that time. This figure also proves that fixing the binary variables has a negligible effect on the computational savings. To further illustrate the performance of the two data-driven strategies to learn the big-M constants, Table 7 provides, for ten of the switchable lines, the big-M values for approaches BEN, \(K\)nn-BM and \(K\)nn-B\(\widehat{\mathrm{M}}\) for \(K=50\). For the first two methods, \(\underline{M}_{nm}\) and \(\overline{M}_{nm}\) are symmetric for all lines, whereas approach \(K\)nn-B\(\widehat{\mathrm{M}}\) computes asymmetric values as explained in Section 3.3. Since the learned large constants may change for each instance, Table 7 includes value ranges. Thanks to the status of switchable lines of the nearest neighbors, the \(K\)nn-BM approach is able to reduce the shortest paths used in \begin{table} \begin{tabular}{c c c c c c} \hline \hline & \(K\) & \# opt & \# sub & gap-max & time (s) \\ \hline \(K\)nn-B\(\widehat{\mathrm{M}}\) & 5 & 450 & 50 & 1.92 & 0.41 \\ \(K\)nn-B\(\widehat{\mathrm{M}}\) & 10 & 482 & 18 & 0.92 & 0.59 \\ \(K\)nn-B\(\widehat{\mathrm{M}}\) & 20 & 494 & 6 & 0.39 & 0.61 \\ \(K\)nn-B\(\widehat{\mathrm{M}}\) & 50 & 495 & 5 & 0.39 & 0.70 \\ \(K\)nn-B\(\widehat{\mathrm{M}}\) & 100 & 495 & 5 & 0.39 & 0.71 \\ \(K\)nn-B\(\widehat{\mathrm{M}}\) & 200 & 495 & 5 & 0.39 & 0.70 \\ \(K\)nn-B\(\widehat{\mathrm{M}}\) & 499 & 495 & 5 & 0.39 & 0.71 \\ All-\(\widehat{\mathrm{M}}\) & 495 & 5 & 0.39 & 0.88 \\ BEN & 500 & 0 & - & 145.44 \\ \hline \hline \end{tabular} \end{table} Table 6: Performance of approahces \(K\)nn-B\(\widehat{\mathrm{M}}\) (\(K\)=50) and All-\(\widehat{\mathrm{M}}\) Figure 5: Computational burden of approaches \(K\)nn-B\(\widehat{\mathrm{M}}\) (\(K\)=50) and All-\(\widehat{\mathrm{M}}\) (5) and significantly decrease the values of the big-Ms for some lines. For lines 2, 58 and 103, these values remain, however, unaltered. The approach \(K\)nn-B\(\widehat{\mathrm{M}}\) learns from the observed angle differences and therefore, the big-M are tightened even further. In fact, for lines 58, 85, 135, 164, this methodology is able to infer the direction of the power flow through these lines and consequently one of the big-M values is set to 0. This bound reduction effectively tightens the DC-OTS model (4) and significantly reduces its computational burden. ## 5 Conclusions In the field of power systems, the optimal transmission switching problem (OTS) determines the on/off status of transmission lines to reduce the operating cost. The OTS problem can be formulated as a mixed-integer linear program (MILP) that includes large enough constants. This problem belongs to the NP-hard class and its computational burden is, consequently, significant even for small networks. While _pure_ end-to-end learning approaches can solve the OTS problem extremely fast, the obtained solutions are usually suboptimal, or even infeasible. Alternatively, we propose in this paper some learning-based approaches that reduce the computational burden of the MILP model by leveraging information of previously solved instances. These computational savings arise from the fact that some binary variables are fixed and tighter big-M values are found. 
Numerical simulations on a 118-bus power network show that the first proposed approach is able to solve all instances to optimality in less than 300 seconds, while the benchmark approach is unable to solve all of them in 3600 seconds. The second approach we propose is more _aggressive_ and solves all instances in less than 10 seconds, but 1% of them do not reach the optimal solution. Throughout the paper, we assume a given connected spanning subgraph of lines that cannot be switched off. How to adapt our approach to networks in which all lines are switchable deserves further attention. Finally, running the proposed strategy with other learning procedures different from \(K\)nn may be considered as a future research line. \begin{table} \begin{tabular}{c c c c} \hline \hline line & BEN & \(K\)nn-BM & \(K\)nn-B\(\widehat{\mathrm{M}}\) \\ & \(-\underline{M}=\overline{M}\) & \(-\underline{M}=\overline{M}\) & \(\overline{M}\) & \(-\underline{M}\) \\ \hline 2 & 1080 & 1080 & [212,218] & [388,383] \\ 23 & 10267 & [6615,10267] & [1441,1575] & [639,607] \\ 28 & 16806 & [7434,16806] & [553,628] & [604,510] \\ 31 & 1417 & [1309,1417] & [248,252] & [176,175] \\ 46 & 5279 & [2287,5279] & [289,325] & [34,9] \\ 58 & 247 & 247 & 0 & 81 \\ 85 & 776 & [0,776] & [376,391] & 0 \\ 103 & 486 & 486 & [184,185] & 381 \\ 135 & 1458 & [0,294] & [122,127] & 0 \\ 164 & 3231 & [0,837] & & [115,114] \\ \hline \hline \end{tabular} \end{table} Table 7: Comparison of big-M values for BEN, \(K\)nn-BM, \(K\)nn-B\(\widehat{\mathrm{M}}\) for \(K\)=50 Acknowledgments.This work was supported in part by the European Research Council (ERC) under the EU Horizon 2020 research and innovation program (grant agreement No. 755705), in part by the Spanish Ministry of Science and Innovation (AEI/10.13039/501100011033) through project PID2020-115460GB-I00, and in part by the Research Program for Young Talented Researchers of the University of Malaga under Project B1-2020-15. Finally, the authors thankfully acknowledge the computer resources, technical expertise, and assistance provided by the SCBI (Supercomputing and Bioinformatics) center of the University of Malaga.
2304.00506
Noncommutative GrΓΆbner Bases and Ext groups; Application to the Steenrod Algebra
We consider a theory of noncommutative Gr\"obner bases on decreasingly filtered algebras whose associated graded algebras are commutative. We transfer many algorithms that use commutative Gr\"obner bases to this context. As an important application, we implement very efficient algorithms to compute the Ext groups over the Steenrod algebra $\mathscr{A}$ at the prime $2$. Especially, the cohomology of the Steenrod algebra $Ext_{\mathscr{A}}^{*, *}(\mathbb{F}_2, \mathbb{F}_2)$, which plays an important role in algebraic topology, is calculated up to total degree of 261, including the ring structure in this range.
Weinan Lin
2023-04-02T10:48:31Z
http://arxiv.org/abs/2304.00506v1
# Noncommutative Grobner bases and Ext Groups; Application to the Steenrod Algebra ###### Abstract. We consider a theory of noncommutative Grobner bases on decreasingly filtered algebras whose associated graded algebras are commutative. We transfer many algorithms that use commutative Grobner bases to this context. As an important application, we implement very efficient algorithms to compute the Ext groups over the Steenrod algebra \(\mathscr{A}\) at the prime \(2\). Especially, the cohomology of the Steenrod algebra \(\operatorname{Ext}_{\mathscr{A}}^{*,\mathscr{A}}(\mathbb{F}_{2},\mathbb{F}_{2})\), which plays an important role in algebraic topology, is calculated up to total degree of \(261\), including the ring structure in this range. Project funded by China Postdoctoral Science Foundation, 2021TQ0015 ###### Contents * 1 Introduction * 2 Filtered-Commutative Algebras * 3 Bases and Monomial Orderings * 4 Grobner bases over Filtered-Commutative Algebras * 5 Modules * 6 Syzygies * 7 Ext Groups * 8 The Steenrod Algebra * 9 Computing the Ext Groups over the Steenrod Algebra ## 1. Introduction The commutative Grobner basis theory [7] has been a very powerful tool in different areas of mathematics such as commutative algebra, algebraic geometry, homological algebra and applied mathematics. There are several works on generalizing the Grobner bases to noncommutative algebras, including [2] for enveloping algebras of Lie algebras, [3] and [17] for two-sided ideals in noncommutative free algebras and [10] exploring the relationship between commutative and noncommutative Grobner bases. However, most of the Grobner basis theories rely on restrictive monomial orderings that require \(M\geq N\) when a monomial \(M\) is a multiple of another monomial \(N\). This paper introduces a different type of monomial orderings (Definition 3.1) which allows us to apply the theory of Grobner bases to broader noncommutative algebras including the Steenrod algebra more effectively. We define a noncommutative Grobner basis theory on a class of algebras called filtered-commutative algebras (Definition 2.1). The main results are the following. **Theorem 1.1** (Algorithm 4.5).: _Given a finite generating set \(H\) (\(0\notin H\)) of a left ideal \(J\) of a filtered-commutative algebra \(A\), there is a generalized Buchberger's algorithm that expands \(H\) to a Grobner basis of \(J\)._ **Theorem 1.2** (Algorithm 7.3).: _Let \(N=A^{r}/M\) be a graded left module over a filtered-commutative graded algebra \(A\) where \(M\) is a left submodule of \(A^{r}\). There is an algorithm that constructs the first \(s+1\) terms \(F_{0},\ldots,F_{s}\) of a free \(A\)-resolution_ \[\cdots\to F_{s}\xrightarrow{d_{s}}\cdots\xrightarrow{d_{3}}F_{2}\xrightarrow {d_{2}}F_{1}\xrightarrow{d_{1}}F_{0}=A^{r}\xrightarrow{\epsilon}N\] _for any \(s\)._ The algorithms above have many applications and we focus on developing an algorithm for computing Ext groups which is important in homological algebra and algebraic topology. We then apply the algorithm to the mod 2 Steenrod algebra \(\mathscr{A}\) which yields the following computational result. **Theorem 1.3**.: _The bigraded algebra_ \[\bigoplus_{t\leq 261}\operatorname{Ext}_{\mathscr{A}}^{s,t}(\mathbb{F}_{2}, \mathbb{F}_{2})\] _in total degrees up to 261 is an algebra with 2914 indecomposables, 23822 basis elements and 227498 indecomposable relations. 
The complete charts are given in [13]._ This algebra is the \(E_{2}\) page of the Adams spectral sequence converging to the stable homotopy groups of spheres completed at the prime 2 (see [1]). It plays a fundamental role in stable homotopy theory and is related to many important problems such as the Hopf invariant one problem and the Kervaire invariant one problem. The computation of the stable homotopy groups of spheres relies on these machine outputs for the Adams \(E_{2}\) page. We refer the interested reader to Isaksen, Wang and Xu [6] for the most up-to-date calculation. Previously, the most extensive and publicly available calculation of \(\operatorname{Ext}_{\mathscr{A}}^{*,*}(\mathbb{F}_{2},\mathbb{F}_{2})\) is due to Bruner and Rognes [4], who compute the Ext groups up to dimension 200. There are other computer programs made by Nassau [18] and Wang [21]. Our new algorithm is very efficient in both memory and time: it can compute the resolution in the range of 200 within one day on a personal computer, and we spent 88+63 days on a 64-core CPU machine to obtain the resolution and products for Theorem 1.3. This extends their calculation by a few orders of magnitude. The interested reader can find the latest implementation of the algorithm on GitHub [11], which runs twice as fast as the old version used for Theorem 1.3. ## 2. Filtered-Commutative Algebras Let \(k\) be any field and \(A\) an algebra defined below. **Definition 2.1**.: An algebra \(A\) over \(k\) is called a _(decreasingly-) filtered-commutative algebra_ if it satisfies the following conditions. 1. The algebra \(A\) is equipped with a filtration of ideals \[A=F_{0}A\supset F_{1}A\supset F_{2}A\supset\cdots\] such that \[F_{p}A\cdot F_{q}A\subset F_{p+q}A\] and \(F_{p_{\mathrm{nul}}}A=0\) for some number \(p_{\mathrm{nul}}>0\). 2. The associated graded algebra \(\mathrm{gr}(A)\) defined by \[\mathrm{gr}_{p}(A)=F_{p}A/F_{p+1}A\] is commutative. **Definition 2.2**.: We say that a nonzero element \(a\in A\) projects to \(f\in\mathrm{gr}_{p}(A)\) if \(p\) is the maximal number such that \(a\in F_{p}A\) and \(f\) is the image of \(a\) via the map \(F_{p}A\to F_{p}A/F_{p+1}A\). We write \[v(a)=p\text{ and }\mathrm{pr}(a)=f.\] If \(a\) is zero, we define \[v(0)=\infty\text{ and }\mathrm{pr}(0)=0.\] We also call \(a\) a _lift_ of \(f\). It is easy to see that another element \(a^{\prime}\in A\) projects to the same element \(f\in\mathrm{gr}_{p}(A)\) if and only if \(a-a^{\prime}\in F_{p+1}A\). The projection map is _not_ a linear map between vector spaces since \(\mathrm{pr}(a-a^{\prime})\) here could be nonzero. **Proposition 2.3**.: _For nonzero elements \(a,b\in A\), we have_ \[v(ab)\geq v(a)+v(b)\] _or equivalently,_ \[|\mathrm{pr}(ab)|\geq|\mathrm{pr}(a)|+|\mathrm{pr}(b)|\] _where \(|\cdot|\) is the degree of elements of \(\mathrm{gr}(A)\). The equality holds if and only if_ \[\mathrm{pr}(a)\mathrm{pr}(b)\neq 0\] _in \(\mathrm{gr}(A)\)._ Proof.: This is a direct consequence of the condition \(F_{p}A\cdot F_{q}A\subset F_{p+q}A\). The condition \(F_{p_{\mathrm{nul}}}A=0\) in Definition 2.1 ensures that a basis of \(\mathrm{gr}(A)\) lifts to a basis of \(A\). More precisely, if \(\mathscr{B}=\{m_{i}\}\) is a \(k\)-basis of \(\mathrm{gr}(A)\), and \(\tilde{m}_{i}\in A\) projects to \(m_{i}\), then \(\tilde{\mathscr{B}}=\{\tilde{m}_{i}\}\) is a \(k\)-basis of \(A\).
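A minimal sketch of the valuation \(v\) and projection \(\mathrm{pr}\) of Definition 2.2 is given below, assuming elements are stored on a lifted basis \(\tilde{\mathscr{B}}\) that is adapted to the filtration (as in the Steenrod-algebra example of Section 8); the dictionary encoding and the toy weight table are our own illustrative choices.

```python
from math import inf

def valuation_and_projection(a, weight):
    """Sketch of v(a) and pr(a) from Definition 2.2 for an element written
    in a lifted basis: `a` maps basis monomials to nonzero coefficients and
    `weight(m)` is the filtration degree of the basis monomial m."""
    if not a:
        return inf, {}                          # v(0) = infinity, pr(0) = 0
    p = min(weight(m) for m in a)               # largest p with a in F_p A
    return p, {m: c for m, c in a.items() if weight(m) == p}

# Toy example in the spirit of Section 8: monomials are frozensets of
# generator labels and the weight is additive on generators.
gen_weight = {"P01": 1, "P02": 3, "P11": 1}     # illustrative values |P^i_j| = 2j - 1
weight = lambda m: sum(gen_weight[g] for g in m)
a = {frozenset({"P01"}): 1, frozenset({"P01", "P02"}): 1}
print(valuation_and_projection(a, weight))      # (1, {frozenset({'P01'}): 1})
```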
_Remark 2.4_.: The definition of a filtered-commutative algebra is a generalization of commutative algebras since if \(A\) is commutative, we can filter it with \[A=F_{0}A\subset F_{1}A=\{0\}.\] It is obvious that \(A\) satisfies the conditions in Definition 2.1. There are also non-commutative examples. The important ones that we will study in this paper are the truncated Steenrod algebra at the prime \(2\). From now on, we study filtered-commutative algebras \(A\) such that \(\mathrm{gr}(A)\) is finitely generated. Hence the associated graded algebra can be presented by \[\mathrm{gr}(A)=k[x_{1},\ldots,x_{n}]/I\] for some \(n\) and some ideal \(I\) of the polynomial ring \(P=k[x_{1},\ldots,x_{n}]\). Since \(\mathrm{gr}(A)\) is graded, we assume that \(x_{i}\) and \(I\) are homogeneous. The degree of a homogeneous element \(f\in P\) is denoted by \(|f|\). ## 3. Bases and Monomial Orderings In order to develop a theory of Grobner bases over \(A\), we want to choose a basis for \(A\) and a well-order on the basis. Since \(\operatorname{gr}(A)\) is a commutative algebra, we will do it on \(\operatorname{gr}(A)\) first and then lift it to \(A\). **Definition 3.1**.: A _degree-reversed admissible monomial ordering_ for the graded polynomial ring \(P=k[x_{1},\ldots,x_{n}]\) is a total ordering on the monomials such that for any monomials \(M,N,L\), 1. \(M\leq N\Longleftrightarrow ML\leq NL\), 2. \(|L|=0\Longrightarrow M\leq ML\). 3. \(|M|>|N|\Longrightarrow M<N\). _Remark 3.2_.: This is not a conventional monomial ordering because it is not a well-order on all monomials. However, we have \(F_{p_{\mathrm{null}}}A=0\) for some \(p_{\mathrm{null}}>0\) and hence \(\operatorname{gr}_{i}(A)=0\) for all \(i\geq p_{\mathrm{null}}\). The conditions (1)(2) make sure that it is a well-order in each degree, and the condition (3) makes sure that it is a well-order on all monomials in degrees less than \(p_{\mathrm{null}}\). Assume that \(P=k[x_{1},\ldots,x_{n}]\) is equipped with a degree-reversed admissible monomial ordering defined above. We denote the leading monomial of \(f\in P\) by \(\operatorname{LM}(f)\), which is the largest monomial in \(f\). The leading term \(\operatorname{LT}(f)\) is the leading monomial together with its coefficient in \(f\). Recall the following definition of a commutative Grobner basis. **Definition 3.3**.: A Grobner basis \(G\) of an ideal \(I\) of \(P\) is a finite generating set of \(I\) such that the following two ideals of \(P\) are equal: \[(\operatorname{LM}(f):f\in I)=(\operatorname{LM}(g):g\in G).\] Assume that \(G\) is a Grobner basis of the ideal \(I\). Let \(\mathscr{M}\) be the set of all monomials in \(P\). The definition of Grobner bases indicates that the set \[\mathscr{B}=\{m\in\mathscr{M}:\operatorname{LM}(g)\nmid m\text{ for all }g\in G\}\] (modulo \(I\)) is a basis of \(\operatorname{gr}(A)=P/I\) as a vector space. Note that if \(m\in\mathscr{B}\), then all monomials that divide \(m\) also belong to \(\mathscr{B}\) by definition. **Definition 3.4**.: For any nonzero \(f\in\operatorname{gr}(A)=P/I\), it can be uniquely written as \[f=c_{1}m_{1}+\cdots+c_{l}m_{l}\mod I\] where \(c_{i}\in k\), \(m_{i}\in\mathscr{B}\) and \(m_{1}>\cdots>m_{l}\). The leading monomial of \(f\) is defined to be \(\operatorname{LM}_{\operatorname{gr}}(f)=m_{1}\), while the leading term of \(f\) is defined to be \(\operatorname{LT}_{\operatorname{gr}}(f)=c_{1}m_{1}\). Note that if \(m\notin\mathscr{B}\), then \(\operatorname{LM}_{\operatorname{gr}}(m)<m\). 
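The following sketch encodes one possible degree-reversed admissible ordering in the sense of Definition 3.1: monomials of higher total degree compare as smaller (condition (3)), and ties within a fixed degree are broken lexicographically, which is a tie-break of our own choosing rather than anything prescribed here. The generator degrees are likewise made up for illustration.

```python
from functools import cmp_to_key

GEN_DEGREES = (1, 3, 7)            # illustrative degrees |x_1|, |x_2|, |x_3|

def degree(m):
    """Total degree of a monomial given as an exponent tuple."""
    return sum(e * d for e, d in zip(m, GEN_DEGREES))

def cmp_monomials(m1, m2):
    """One degree-reversed admissible ordering (Definition 3.1): a larger
    total degree makes a monomial smaller; ties inside a fixed degree are
    broken lexicographically on exponent tuples (our own choice)."""
    d1, d2 = degree(m1), degree(m2)
    if d1 != d2:
        return -1 if d1 > d2 else 1            # higher degree => smaller
    if m1 == m2:
        return 0
    return -1 if m1 < m2 else 1                # lex tie-break is multiplicative

def leading_monomial(f):
    """LM of a nonzero element of P encoded as {exponent tuple: coefficient}."""
    return max(f, key=cmp_to_key(cmp_monomials))

# x_1^3 and x_2 both have degree 3, so the lex tie-break decides between them;
# x_1^2 has smaller degree 2 and is therefore *larger* in this ordering.
f = {(3, 0, 0): 1, (0, 1, 0): 1, (2, 0, 0): 1}
print(leading_monomial(f))                     # (2, 0, 0)
```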
We choose and fix \(X_{i}\in A\) such that \(X_{i}\) projects to \(x_{i}\) in the associated graded algebra for each \(i\). Then for \(m=x_{1}^{r_{1}}\cdots x_{n}^{r_{n}}\in\mathscr{B}\), the product \[\tilde{m}=X_{1}^{r_{1}}\cdots X_{n}^{r_{n}}\in A\] projects to \(m+I\) since \(m+I\) is non-trivial in \(\operatorname{gr}(A)=P/I\). Hence the set \[\tilde{\mathscr{B}}=\{\tilde{m}:m\in\mathscr{B}\}\] is a basis of \(A\). We order \(\tilde{\mathscr{B}}\) the same way as we order \(\mathscr{B}\) via the lifting. Note that both \(\mathscr{B}\) and \(\tilde{\mathscr{B}}\) are well-ordered. **Definition 3.5**.: For any \(f\in\operatorname{gr}(A)\), we write \[f=c_{1}m_{1}+\cdots+c_{l}m_{l}\mod I\] where \(c_{i}\in k\) and \(m_{i}\in\mathscr{B}\). We define \[\tilde{f}=c_{1}\tilde{m}_{1}+\cdots+c_{l}\tilde{m}_{l}\in A\] which is a lift of \(f\). Sometimes we also write \(\ell(f)=\tilde{f}\). **Definition 3.6**.: For any nonzero \(a\in A\), we can write \(a\) in the form of \[a=c_{1}\tilde{m}_{1}+\cdots+c_{l}\tilde{m}_{l}\] where \(c_{i}\in k\), \(m_{i}\in\mathscr{B}\) and \(m_{1}>\cdots>m_{l}\). We call \(\operatorname{LM}_{A}(a)=\tilde{m}_{1}\in\tilde{\mathscr{B}}\) the _leading monomial_ of \(a\), and \(\operatorname{LT}_{A}(a)=c_{1}\tilde{m}_{1}\) the _leading term_ of \(a\). For convenience we also define \(\operatorname{LM}_{\mathscr{B}}(a)=m_{1}\in\mathscr{B}\) and we see that \(\operatorname{LM}_{\mathscr{B}}=\operatorname{pr}\circ\operatorname{LM}_{A}\). **Proposition 3.7**.: _For any nonzero \(a\in A\), we have_ \[\operatorname{LM}_{\mathscr{B}}(a)=\operatorname{pr}(\operatorname{LM}_{A}(a) )=\operatorname{LM}_{\operatorname{gr}}(\operatorname{pr}(a))\] _or equivalently_ \[\operatorname{LM}_{A}=\ell\circ\operatorname{LM}_{\operatorname{gr}}\circ \operatorname{pr}\] Proof.: If \[a=c_{1}\tilde{m}_{1}+\cdots+c_{l}\tilde{m}_{l}\] where \(c_{i}\in k\), \(m_{i}\in\mathscr{B}\) and \[m_{1}>\cdots>m_{l},\] then it is clear that \(\operatorname{pr}(a)=\sum\limits_{\begin{subarray}{c}1\leq i\leq l\\ |m_{i}|=|m_{1}|\end{subarray}}m_{i}\). Therefore \(\operatorname{LM}_{\operatorname{gr}}(\operatorname{pr}(a))=m_{1}\). The following propositions show that the ordering on \(\tilde{\mathscr{B}}\) is "admissible" meaning that it behaves well with the multiplication. For our convenience, when the context is clear, sometimes we omit "modulo \(I\)" and consider some elements of \(P\) as elements of \(\operatorname{gr}(A)=P/I\). **Proposition 3.8**.: _Assume that \(m_{1},m_{2}\in\mathscr{B}\) and \(\tilde{m}_{1}\tilde{m}_{2}\neq 0\). We have_ \[\operatorname{LM}_{\mathscr{B}}(\tilde{m}_{1}\tilde{m}_{2})\leq m_{1}m_{2}.\] _The equality holds if and only if \(m_{1}m_{2}\in\mathscr{B}\)._ Proof.: By the previous proposition, we have \[\operatorname{LM}_{\mathscr{B}}(\tilde{m}_{1}\tilde{m}_{2})=\operatorname{LM }_{\operatorname{gr}}(\operatorname{pr}(\tilde{m}_{1}\tilde{m}_{2})).\] If \(m_{1}m_{2}\) is trivial in \(\operatorname{gr}(A)\), then \(|\operatorname{pr}(\tilde{m}_{1}\tilde{m}_{2})|>|m_{1}m_{2}|\). Therefore \[\operatorname{LM}_{\operatorname{gr}}(\operatorname{pr}(\tilde{m}_{1}\tilde{m }_{2}))<m_{1}m_{2}.\] If \(m_{1}m_{2}\) is nontrivial in \(\operatorname{gr}(A)\), we have \[\operatorname{LM}_{\operatorname{gr}}(\operatorname{pr}(\tilde{m}_{1}\tilde{m }_{2}))=\operatorname{LM}_{\operatorname{gr}}(m_{1}m_{2})\leq m_{1}m_{2}.\] The last equality holds if and only \(m_{1}m_{2}\in\mathscr{B}\). This proposition directly implies the following. 
**Corollary 3.9**.: _Assume \(a\in A\), \(\operatorname{LM}_{A}(a)=\tilde{m}\) and \(q\in\mathscr{B}\). If \(qm\in\mathscr{B}\), then \(\operatorname{LM}_{\mathscr{B}}(\tilde{q}a)=qm\). Otherwise \(\operatorname{LM}_{\mathscr{B}}(\tilde{q}a)<qm\)._ ## 4. Grobner bases over Filtered-Commutative Algebras Let \(A\) be a filtered-commutative algebra and we proceed with notations defined in the previous section. We focus on left ideals of \(A\) first. We will consider left modules over \(A\) in the next section. **Definition 4.1**.: A Grobner basis \(H\) of a left ideal \(J\) of \(A\) is a finite generating set of \(J\) such that the following two ideals of \(P\) are equal: \[(\mathrm{LM}_{\mathscr{B}}(a):a\in J)=(\mathrm{LM}_{\mathscr{B}}(h):h\in H).\] In order to give an algorithm that calculates a Grobner basis of \(J\subset A\) starting from a given generating set of \(J\), we need to define reductions in \(A\). **Definition 4.2**.: Given \(a,b\in A\), we write \[a=c_{1}\tilde{m}_{1}+\cdots+c_{l}\tilde{m}_{l}\] where \(c_{i}\in k\), \(m_{i}\in\mathscr{B}\) and \(m_{1}>\cdots>m_{l}\). Let \(c\tilde{m}=\mathrm{LT}_{A}(b)\) for some \(c\in k\) and \(m\in\mathscr{B}\). One says that \(a\) is _reducible_ by \(b\) if \(m_{i}\) is divisible by \(m\) for some \(i\). In this case we choose the smallest \(i\) and define the _one-step reduction_ of \(a\) by \(b\) by \[\mathrm{red}_{1}(a,b)=a-\frac{c_{i}}{c}\widetilde{\left(\frac{m_{i}}{m}\right) }b.\] Note that compared with \(a\), \(\mathrm{red}_{1}(a,b)\) replaces \(c_{i}\tilde{m}_{i}\) in \(a\) with other summands strictly less than \(\tilde{m}_{i}\) since \[\mathrm{LT}_{\mathrm{gr}}(\mathrm{pr}(\frac{c_{i}}{c}\widetilde{\left(\frac{m_ {i}}{m}\right)}b))=\frac{c_{i}}{c}\frac{m_{i}}{m}cm=c_{i}m_{i}.\] **Definition 4.3**.: For \(a\in A\) and a finite ordered subset \(H\subset A\), we say that \(a\) is _reducible_ by \(H\) if \(a\) is reducible by some \(h\in H\). In this case we assume that \(\tilde{m}\) is the largest summand of \(a\) that is reducible by \(H\), and \(h\in H\) is the first element of \(H\) that reduces \(\tilde{m}\). We define the one-step reduction of \(a\) by \(H\) by \[\mathrm{red}_{1}(a,H)=\mathrm{red}_{1}(a,h).\] We replace \(a\) with \(\mathrm{red}_{1}(a,H)\) and iterate this until \(a\) is not reducible by \(H\). This will terminate because the monomial ordering is well-ordered on \(\tilde{\mathscr{B}}\). We call the final outcome the _reduction of \(a\) by \(H\)_, denoted by \(\mathrm{red}(a,H)\). _Remark 4.4_.: By the definition of reductions, if \(\mathrm{red}(a,H)=0\), we can write \[a=c_{1}\tilde{m}_{1}h_{1}+\cdots+c_{l}\tilde{m}_{l}h_{l}\] where \(c_{i}\in k\), \(m_{i}\in\mathscr{B}\) and \(h_{i}\in H\) such that \(m_{i}\mathrm{LM}_{\mathscr{B}}(h_{i})\in\mathscr{B}\) and \[m_{1}\mathrm{LM}_{\mathscr{B}}(h_{1})>\cdots>m_{l}\mathrm{LM}_{\mathscr{B}}(h_ {l}).\] We can see \(\mathrm{LM}_{\mathscr{B}}(a)=m_{1}\mathrm{LM}_{\mathscr{B}}(h_{1})\). The following is a generalized Buchberger's algorithm that calculates a Grobner basis of an ideal of \(A\). **Algorithm 4.5**.: _Given a finite ordered generating set \(H\) (\(0\notin H\)) of a left ideal \(J\) of \(A\), we can expand \(H\) to a Grobner basis of \(J\) by doing the following_ 1. 
_For_ \(a\in H\) _and_ \(g\in G\)_, let_ \(L\) _be the least common multiple_ \[L=\operatorname{lcm}(\operatorname{LM}_{\mathscr{B}}(a),\operatorname{LM}(g))\] _and_ \[q=\frac{L}{\operatorname{LM}_{\mathscr{B}}(a)}.\] _If_ \(\operatorname{red}(\tilde{q}a,H)\) _is nontrivial, append it to_ \(H\)_._ 2. _For_ \(a,b\in H\)_, let_ \[L=\operatorname{lcm}(\operatorname{LM}_{\mathscr{B}}(a),\operatorname{LM}_{ \mathscr{B}}(b))\] _and_ \[t_{1}=\frac{L}{\operatorname{LT}_{\operatorname{gr}}(\operatorname{pr}(a))}, \ t_{2}=\frac{L}{\operatorname{LT}_{\operatorname{gr}}(\operatorname{pr}(b))}.\] _If_ \(\operatorname{red}(\tilde{t}_{1}a-\tilde{t}_{2}b,H)\) _is nontrivial, append it to_ \(H\)_._ 3. _Repeat (1)(2) until no more elements can be added to_ \(H\)_._ We verify the correctness of the algorithm by the following two propositions. **Proposition 4.6**.: _Algorithm 4.5 always terminates._ Proof.: It is clear that the new elements added to \(H\) belong to \(J\). Since each new element is reduced before it is added to \(H\), the ideal \[(\operatorname{LM}_{\mathscr{B}}(h):h\in H)\] of \(P\) will be strictly larger when the element is added. This cannot continue infinitely because \(P=k[x_{1},\ldots,x_{n}]\) is a Noetherian ring. **Proposition 4.7**.: _When Algorithm 4.5 terminates, for any \(a\in J\), we have_ \[\operatorname{LM}_{\mathscr{B}}(a)\in(\operatorname{LM}_{\mathscr{B}}(h):h\in H).\] Proof.: Since \(H\) is a generating set of \(J\), it suffices to prove that for all \(c_{i}\in k\), \(m_{i}\in\mathscr{B}\) and \(h_{i}\in H\), \(1\leq i\leq l\), such that \[m_{1}\operatorname{LM}_{\mathscr{B}}(h_{1})\geq\cdots\geq m_{l}\operatorname{ LM}_{\mathscr{B}}(h_{l}),\] the proposition is true for \[a=c_{1}\tilde{m}_{1}h_{1}+\cdots+c_{l}\tilde{m}_{l}h_{l}. \tag{4.8}\] Without loss of generality, we assume that the leading coefficient of any \(h\in H\) is one which implies that \(\operatorname{LT}_{A}(h)=\operatorname{LM}_{A}(h)\). If \(l=0\), the proof is done since this only happens when \(a=0\). If \(m_{1}\operatorname{LM}_{\mathscr{B}}(h_{1})\notin\mathscr{B}\), then there exists \(g\in G\) such that \(\operatorname{LM}(g)\ |\ m_{1}\operatorname{LM}_{\mathscr{B}}(h_{1})\). Let \[L=\operatorname{lcm}(\operatorname{LM}_{\mathscr{B}}(h_{1}),\operatorname{LM} (g))\] and \[q=\frac{L}{\operatorname{LM}_{\mathscr{B}}(h_{1})}.\] We have \(q|m_{1}\) and by the termination of Algorithm 4.5, \[\operatorname{red}(\tilde{q}h_{1},H)=0.\] Thus by Remark 4.4 we can write \[\tilde{q}h_{1}=c_{1}^{\prime}\tilde{m}_{1}^{\prime}h_{1}^{\prime}+\cdots+c_{s} ^{\prime}\tilde{m}_{s}^{\prime}h_{s}^{\prime}\] for some \(c^{\prime}_{i}\in k\), \(m^{\prime}_{i}\in\mathscr{B}\) and \(h^{\prime}_{i}\in H\) such that \[m^{\prime}_{1}\mathrm{LM}_{\mathscr{B}}(h^{\prime}_{1})>\cdots>m^{\prime}_{s} \mathrm{LM}_{\mathscr{B}}(h^{\prime}_{s}).\] We have \[m^{\prime}_{j}\mathrm{LM}_{\mathscr{B}}(h^{\prime}_{j})\leq m^{\prime}_{1} \mathrm{LM}_{\mathscr{B}}(h^{\prime}_{1})=\mathrm{LM}_{\mathscr{B}}(\tilde{q} h_{1})<q\mathrm{LM}_{\mathscr{B}}(h_{1})\] for all \(1\leq j\leq s\). 
Now we consider \[\begin{split}\tilde{m}_{1}h_{1}&=\Big{(}\tilde{m}_ {1}-\widetilde{\left(\frac{m_{1}}{q}\right)}\tilde{q}\Big{)}h_{1}+\widetilde{ \left(\frac{m_{1}}{q}\right)}(\tilde{q}h_{1})\\ &=\Big{(}\tilde{m}_{1}-\widetilde{\left(\frac{m_{1}}{q}\right)} \tilde{q}\Big{)}h_{1}+\sum_{j=1}^{s}c^{\prime}_{j}\widetilde{\left(\frac{m_{1 }}{q}\right)}\tilde{m}^{\prime}_{j}h^{\prime}_{j}.\end{split} \tag{4.9}\] Since \(\mathrm{gr}(A)\) is commutative, we have \[v\Big{(}\tilde{m}_{1}-\widetilde{\left(\frac{m_{1}}{q}\right)}\tilde{q} \Big{)}>v(\tilde{m}_{1}).\] Thus \[\mathrm{LM}_{\mathscr{B}}\Big{(}\tilde{m}_{1}-\widetilde{\left(\frac{m_{1}}{ q}\right)}\tilde{q}\Big{)}\cdot\mathrm{LM}_{\mathscr{B}}(h_{1})<m_{1}\mathrm{LM}_{ \mathscr{B}}(h_{1}).\] We also have \[\begin{split}\mathrm{LM}_{\mathscr{B}}\Big{(}\widetilde{\left( \frac{m_{1}}{q}\right)}\tilde{m}^{\prime}_{j}\Big{)}\cdot\mathrm{LM}_{ \mathscr{B}}(h^{\prime}_{j})&\leq\frac{m_{1}}{q}\cdot m^{\prime}_ {j}\mathrm{LM}_{\mathscr{B}}(h^{\prime}_{j})\\ &<\frac{m_{1}}{q}\cdot q\mathrm{LM}_{\mathscr{B}}(h_{1})\\ &=m_{1}\mathrm{LM}_{\mathscr{B}}(h_{1})\end{split}\] for all \(1\leq j\leq s\). The above two equalities and (4.9) show that \(\tilde{m}_{1}h_{1}\) can be rewritten as a linear combination of elements in the form of \(\tilde{m}h\) where \(m\in\mathscr{B}\), \(h\in H\) and \(m\mathrm{LM}_{\mathscr{B}}(h)<m_{1}\mathrm{LM}_{\mathscr{B}}(h_{1})\). We can keep doing the rewriting on (4.8) as long as \(m_{1}\mathrm{LM}_{\mathscr{B}}(h_{1})\notin\mathscr{B}\) and this has to stop at some point because \(|m_{1}\mathrm{LM}_{\mathscr{B}}(h_{1})|<2p_{\mathrm{nul}}\) and the monomial ordering is always a well-order on monomials up to a finite degree. Now we can always assume that \(m_{1}\mathrm{LM}_{\mathscr{B}}(h_{1})\in\mathscr{B}\) on (4.8). If \(l>1\) and \[m_{1}\mathrm{LM}_{\mathscr{B}}(h_{1})=m_{2}\mathrm{LM}_{\mathscr{B}}(h_{2}),\] we can rewrite (4.8) in a way similar to the previous part of this proof. In fact, let \[L=\mathrm{lcm}(\mathrm{LM}_{\mathscr{B}}(h_{1}),\mathrm{LM}_{\mathscr{B}}(h_{2 }))\] and \[q_{1}=\frac{L}{\mathrm{LM}_{\mathscr{B}}(h_{1})},\ q_{2}=\frac{L}{\mathrm{LM} _{\mathscr{B}}(h_{2})}.\] It is easy to see that there exists \(m^{\prime}\in\mathscr{B}\) such that \(m_{1}=m^{\prime}q_{1}\) and \(m_{2}=m^{\prime}q_{2}\). By the termination of Algorithm 4.5, \[\mathrm{red}(\tilde{q}_{1}h_{1}-\tilde{q}_{2}h_{2},H)=0.\] Thus we can write \[\tilde{q}_{1}h_{1}-\tilde{q}_{2}h_{2}=c^{\prime}_{1}\tilde{m}^{\prime}_{1}h^{ \prime}_{1}+\cdots+c^{\prime}_{s}\tilde{m}^{\prime}_{s}h^{\prime}_{s}\] for some \(c^{\prime}_{i}\in k\), \(m^{\prime}_{i}\in\mathscr{B}\) and \(h^{\prime}_{i}\in H\) such that \[m^{\prime}_{1}\mathrm{LM}_{\mathscr{B}}(h^{\prime}_{1})>\cdots>m^{\prime}_{s} \mathrm{LM}_{\mathscr{B}}(h^{\prime}_{s}).\] We have \[m^{\prime}_{j}\mathrm{LM}_{\mathscr{B}}(h^{\prime}_{j})\leq m^{\prime}_{1} \mathrm{LM}_{\mathscr{B}}(h^{\prime}_{1})=\mathrm{LM}_{\mathscr{B}}(\tilde{q}_ {1}h_{1}-\tilde{q}_{2}h_{2})<q_{1}\mathrm{LM}_{\mathscr{B}}(h_{1})\] for all \(1\leq j\leq s\). 
Now we consider \[\tilde{m}_{1}h_{1}-\tilde{m}_{2}h_{2}\] \[= \Big{(}\tilde{m}_{1}-\tilde{m}^{\prime}\tilde{q}_{1}\Big{)}h_{1} -\Big{(}\tilde{m}_{2}-\tilde{m}^{\prime}\tilde{q}_{2}\Big{)}h_{2}+\tilde{m}^{ \prime}(\tilde{q}_{1}h-\tilde{q}_{2}h)\] \[= \Big{(}\tilde{m}_{1}-\tilde{m}^{\prime}\tilde{q}_{1}\Big{)}h_{1} -\Big{(}\tilde{m}_{2}-\tilde{m}^{\prime}\tilde{q}_{2}\Big{)}h_{2}+\sum_{j=1}^{ s}c^{\prime}_{j}\tilde{m}^{\prime}\tilde{m}^{\prime}\tilde{m}^{\prime}_{j}h^{ \prime}_{j}. \tag{4.10}\] Again, for \(i=1,2\) we have \[v\Big{(}\tilde{m}_{i}-\tilde{m}^{\prime}\tilde{q}_{i}\Big{)}>v(\tilde{m}_{i}).\] Thus \[\mathrm{LM}_{\mathscr{B}}\Big{(}\tilde{m}_{i}-\tilde{m}^{\prime}\tilde{q}_{i} \Big{)}\cdot\mathrm{LM}_{\mathscr{B}}(h_{i})<m_{i}\mathrm{LM}_{\mathscr{B}}(h _{i})=m_{1}\mathrm{LM}_{\mathscr{B}}(h_{1}).\] We also have \[\mathrm{LM}_{\mathscr{B}}\Big{(}\tilde{m}^{\prime}\tilde{m}^{ \prime}_{j}\Big{)}\cdot\mathrm{LM}_{\mathscr{B}}(h^{\prime}_{j}) \leq\frac{m_{1}}{q}\cdot m^{\prime}_{j}\mathrm{LM}_{\mathscr{B}}( h^{\prime}_{j})\] \[<\tilde{m}^{\prime}\cdot q\mathrm{LM}_{\mathscr{B}}(h_{1})\] \[=m_{1}\mathrm{LM}_{\mathscr{B}}(h_{1})\] for all \(1\leq j\leq s\). The above two equalities and (4.10) show that \(\tilde{m}_{1}h_{1}\) can be rewritten as a linear combination of \(\tilde{m}_{2}h_{2}\) and elements in the form of \(\tilde{m}h\) where \(m\in\mathscr{B}\), \(h\in H\) and \(m\mathrm{LM}_{\mathscr{B}}(h)<m_{1}\mathrm{LM}_{\mathscr{B}}(h_{1})\). We can keep doing the rewriting on (4.8) as long as \(m_{1}\mathrm{LM}_{\mathscr{B}}(h_{1})=m_{2}\mathrm{LM}_{\mathscr{B}}(h_{2})\) and this will stop at some point. Eventually we get \(l=0\) or \(m_{1}\mathrm{LM}_{\mathscr{B}}(h_{1})\in\mathscr{B}\) and \(m_{1}\mathrm{LM}_{\mathscr{B}}(h_{1})>m_{2}\mathrm{LM}_{\mathscr{B}}(h_{2})\) (when \(l>1\)). In the second case \[\mathrm{LM}_{\mathscr{B}}(a)=m_{1}\mathrm{LM}_{\mathscr{B}}(h_{1})\in( \mathrm{LM}_{\mathscr{B}}(h):h\in H).\] The Grobner bases over \(A\) behave very similarly to commutative Grobner bases. It is quite straightforward to see that \(H\) is a Grobner bases of \(J\subset A\) if and only if \(\mathrm{red}(a,H)=0\) for any \(a\in J\), and the set \[\{\tilde{m}\in\tilde{\mathscr{B}}:\mathrm{LM}_{\mathscr{B}}(h)\nmid m\text{ for all }h\in H\}\] (modulo \(J\)) is a \(k\)-basis of the left \(A\)-module \(A/J\). ## 5. Modules In previous sections, we study the Grobner bases of left ideals of \(A\). Now we consider a finitely generated left module \(N\) over \(A\). We can find a number \(r\) and a left submodule \(M\) of \(A^{r}\) such that \(N\cong A^{r}/M\). Therefore we can study left submodules of \(A^{r}\) in order to study finitely generated left modules over \(A\). Consider the truncated graded ring \(\mathbb{A}=\mathbb{A}_{0}\oplus\mathbb{A}_{1}=A\oplus A^{r}\), where the multiplication is given by \[(a_{1},x_{1})\cdot(a_{2},x_{2})=(a_{1}a_{2},a_{1}x_{2}+a_{2}x_{1})\] for \(a_{i}\in A\), \(x_{i}\in A^{r}\), \(i=1,2\). We can see that left submodules of \(A^{r}\) over \(A\) are in one-to-one correspondence to left ideals of \(\mathbb{A}\) that are contained in \(\mathbb{A}_{1}\). 
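The reduction \(\mathrm{red}(a,H)\) of Definition 4.3 is the workhorse of Algorithm 4.5 and of its module version below. The sketch that follows specializes to coefficients in \(\mathbb{F}_{2}\), where subtraction is symmetric difference of monomial sets; the multiplication oracle `mul`, the leading-monomial, divisibility and quotient helpers and the ordering key are all placeholders for whatever concrete algebra is being used (for instance Milnor multiplication for the Steenrod algebra), not an API taken from the paper.

```python
def reduce_by(a, H, mul, lm, divides, quotient, order_key):
    """Sketch of red(a, H) from Definition 4.3 over F_2, where subtracting a
    multiple of h is the symmetric difference of monomial sets.

    a         : set of basis monomials (an element of A with F_2 coefficients)
    H         : ordered list of nonzero elements of A, in the same encoding
    mul(q, h) : assumed oracle returning the product lift(q) * h as a set
    lm(h)     : leading monomial LM_B(h) of h
    divides   : divisibility test for monomials of gr(A)
    quotient  : quotient(p, q) returns the monomial q / p when p divides q
    order_key : sort key realizing the chosen monomial ordering
    """
    a = set(a)
    while True:
        step = None
        # the largest monomial of a reducible by H, and the first h reducing it
        for m in sorted(a, key=order_key, reverse=True):
            h = next((h for h in H if divides(lm(h), m)), None)
            if h is not None:
                step = (m, h)
                break
        if step is None:
            return a                            # a is reduced with respect to H
        m, h = step
        q = quotient(lm(h), m)                  # monomial with q * LM_B(h) = m
        a ^= mul(q, h)                          # one-step reduction; cancels m
```

In the exterior-algebra case relevant to \(\mathrm{gr}(\mathscr{A})\), monomials can be encoded as frozensets of generators, `divides` is subset testing and `quotient` is set difference; a return value of the empty set exhibits \(a\) as a left \(A\)-combination of the elements of \(H\).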
We can filter \(\mathbb{A}\) by \[F_{p}\mathbb{A}=F_{p}A\oplus(F_{p}A)^{r}\subset\mathbb{A}.\] Then \(\mathbb{A}\) is actually a filtered-graded commutative algebra and \[\operatorname{gr}(\mathbb{A})=\operatorname{gr}(A)\oplus\operatorname{gr}(A) ^{r}\cong k[x_{1},\dots,x_{n},e_{1},\dots,e_{r}]/(I,e_{i}e_{j}:1\leq i\leq j \leq r)\] where \(\{\tilde{e}_{i}\}\) is the \(A\)-basis of the free \(A\)-module \(\mathbb{A}_{1}=A^{r}\) and \(\operatorname{pr}(\tilde{e}_{i})=e_{i}\). Since \(M\subset\mathbb{A}_{1}\), we are only concerned with monomials in the following set \[\mathbb{B}=\{me_{i}:m\in\mathscr{M},1\leq i\leq r\}.\] A monomial ordering on those monomials should satisfy \[m_{1}e_{i}<m_{2}e_{i}\Longleftrightarrow m_{1}<m_{2}\text{ for all }i.\] One example of such orderings could be \[m_{1}e_{i}<m_{2}e_{j}\Longleftrightarrow i>j\text{ or }(i=j\text{ and }m_{1}<m_{2}). \tag{5.1}\] There are many other examples but we assume that we have chosen one of them for \(\mathbb{B}\). Note that the set \(\tilde{\mathbb{B}}=\{\tilde{m}\tilde{e}_{i}:m\in\mathscr{M},1\leq i\leq r\}\) is a \(k\)-basis of \(A^{r}\). If \(\tilde{m}\tilde{e}_{i}\) is the leading monomial of \(x\in A^{r}\), we write \(\operatorname{LM}_{\mathbb{B}}(x)=me_{i}\). **Definition 5.2**.: A Grobner basis \(H\) of a left submodule \(M\subset A^{r}\) over \(A\) is a Grobner basis of \(M\subset\mathbb{A}_{1}\) as a left ideal of the filtered-commutative algebra \(\mathbb{A}\). If we expand the definition in terms of \(A\), we get the following. **Proposition 5.3**.: _A Grobner basis \(H\) of a left submodule \(M\subset A^{r}\) over \(A\) is a finite generating set of \(M\) such that the following two ideals of \(k[x_{1},\dots,x_{n},e_{1},\dots,e_{r}]\) are equal:_ \[(\operatorname{LM}_{\mathbb{B}}(x):x\in M)=(\operatorname{LM}_{\mathbb{B}}(h): h\in H).\] Since \(M\subset A^{r}\) can be considered as an ideal of \(\mathbb{A}\), Algorithm 4.5 can be used to compute the Grobner bases of \(M\) if we replace \(A\) with \(\mathbb{A}\). Here we rewrite the algorithm in terms of \(A\) and the auxiliary symbols \(e_{1},\dots,e_{r}\). **Algorithm 5.4**.: _Given a finite ordered generating set \(H\) (\(0\notin H\)) of a left submodule \(M\subset A^{r}\) over \(A\), we can expand \(H\) to a Grobner basis of \(M\) by doing the following._ 1. _For_ \(a\in H\) _and_ \(g\in G\)_, let_ \(L\) _be the least common multiple_ \[L=\operatorname{lcm}(\operatorname{LM}_{\mathbb{B}}(a),\operatorname{LM}(g))\] _and_ \[q=\frac{L}{\operatorname{LM}_{\mathbb{B}}(a)}.\] _If_ \(\operatorname{red}(\tilde{q}a,H)\) _is nontrivial, append it to_ \(H\) _._ 2. _For_ \(a,b\in H\) _such that_ \(\operatorname{LM}_{\mathbb{B}}(a)\) _and_ \(\operatorname{LM}_{\mathbb{B}}(b)\) _contain the same_ \(e_{i}\) _factor, let_ \[L=\operatorname{lcm}(\operatorname{LM}_{\mathbb{B}}(a),\operatorname{LM}_{ \mathbb{B}}(b))\] _and_ \[t_{1}=\frac{L}{\operatorname{LT}_{\operatorname{gr}(\mathbb{A})}(\operatorname{ pr}(a))},\ t_{2}=\frac{L}{\operatorname{LT}_{\operatorname{gr}(\mathbb{A})}( \operatorname{pr}(b))}.\] _If_ \(\operatorname{red}(\tilde{t}_{1}a-\tilde{t}_{2}b,H)\) _is nontrivial, append it to_ \(H\)_._ 3. _Repeat (1)(2) until no more elements can be added to_ \(H\)_._ ## 6. Syzygies **Definition 6.1**.: Let \((x_{1},\cdots,x_{s})\) be a tuple of elements of \(A^{r}\). 1. A _syzygy_ of \((x_{1},\ldots,x_{s})\) is a tuple \((a_{1},\ldots,a_{s})\in A^{s}\) such that \(a_{1}x_{1}+\cdots+a_{s}x_{s}=0\). 2. 
The set of all syzygies of \((x_{1},\cdots,x_{s})\) is call the _(first) syzygy module_ of \((x_{1},\cdots,x_{s})\), denoted by \(\operatorname{Syz}(x_{1},\cdots,x_{s})\). It is obvious that \(\operatorname{Syz}(x_{1},\cdots,x_{s})\) is a left submodule of \(A^{s}\). Our goal in this section is to find a generating set of \(\operatorname{Syz}(x_{1},\cdots,x_{s})\). **Definition 6.2**.: Let \(H=(h_{1},\ldots,h_{s})\) be an ordered Grobner basis of \(M\subset A^{r}\). For \(1\leq i\leq s\) and \(g\in G\), let \[L=\operatorname{lcm}(\operatorname{LM}_{\mathbb{B}}(h_{i}),\operatorname{LM} (g))\] and \[q=\frac{L}{\operatorname{LM}_{\mathbb{B}}(h_{i})}.\] We have \(\operatorname{red}(\tilde{q}h_{i},H)=0\) and it expands to \[\tilde{q}h_{i}=a_{1}h_{1}+\cdots+a_{s}h_{s}\] for some \(a_{l}\in A\), \(1\leq l\leq s\). This implies that \[(-a_{1},\ldots,-a_{i-1},\tilde{q}-a_{i},-a_{i+1},\ldots,-a_{s})\] is a syzygy of \(H\). We denote this syzygy of \(H\) by \(\sigma_{i,g}\). **Definition 6.3**.: Let \(H=(h_{1},\ldots,h_{s})\) be an ordered Grobner basis of \(M\subset A^{r}\). For \(1\leq i<j\leq s\), let \[L=\operatorname{lcm}(\operatorname{LM}_{\mathbb{B}}(h_{i}),\operatorname{LM}_ {\mathbb{B}}(h_{j}))\] and \[t_{i}=\frac{L}{\operatorname{LT}_{\operatorname{gr}(\mathbb{A})}(\operatorname {pr}(h_{i}))},\ t_{j}=\frac{L}{\operatorname{LT}_{\operatorname{gr}(\mathbb{A })}(\operatorname{pr}(h_{j}))}.\] We have \(\operatorname{red}(\tilde{t}_{i}h_{i}-\tilde{t}_{j}h_{j},H)=0\) and it expands to \[\tilde{t}_{i}h_{i}-\tilde{t}_{j}h_{j}=a_{1}h_{1}+\cdots+a_{s}h_{s}\] for some \(a_{l}\in A\), \(1\leq l\leq s\). This implies that \[(-a_{1},\ldots,-a_{i-1},\tilde{t}_{i}-a_{i},a_{i+1},\ldots,-a_{j-1},-\tilde{t} _{j}-a_{j},-a_{j+1},\ldots,-a_{s})\] is a syzygy of \(H\). We denote this syzygy of \(H\) by \(\sigma_{i,j}\). The following proposition finds a generating set of \(\operatorname{Syz}(h_{1},\ldots,h_{s})\) for a Grobner basis \((h_{1},\ldots,h_{s})\). **Theorem 6.4**.: _If \(H=(h_{1},\dots,h_{s})\) is an ordered Grobner basis of some left submodule \(M\subset A^{r}\), then the syzygy module \(\operatorname{Syz}(H)\) is generated by the following two sets of elements._ \[\{\sigma_{i,g}\}_{1\leq i\leq s,g\in G},\ \{\sigma_{i,j}\}_{1\leq i\leq s,1\leq j \leq s}.\] Proof.: This actually follows from the proof of Proposition 4.7 when we replace \(A\) with \(\mathbb{A}\) and consider the case \(a=0\) in (4.8). We can see that the right-hand side of (4.8) becomes zero after we rewrite it by linear relations between \(H\) that correspond to \(m\sigma_{i,g}\) or \(m\sigma_{i,j}\) for some \(m\in\mathscr{M}\). For any tuple \(X=(h_{1},\dots,h_{t})\) of elements of \(A^{r}\), we can use the generalized Buchberger's algorithm to compute a Grobner basis \(H=(h_{1},\dots,h_{s})\) of the submodule of \(A^{r}\) generated by \(h_{1},\dots,h_{t}\) (\(s\geq t\)). Since all elements of \(H\) belong to the submodule, we can find a matrix \(Q=\begin{pmatrix}I_{t}\\ Q^{\prime}\end{pmatrix}\in M_{s\times t}(A)\) such that \[H^{T}=QX^{T}.\] Where \(I_{t}\) is the \(t\times t\) identity matrix and \((\cdot)^{T}\) is the transposition of matrices. **Theorem 6.5**.: _If \(S\) is a matrix such that the row vectors of \(S\) generate the syzygy module \(\operatorname{Syz}(H)\), then the row vectors of \(SQ\) generate \(\operatorname{Syz}(X)\)._ Proof.: Since \(SQX^{T}=SH^{T}=0\) we know that the row vectors of \(SQ\) belong to \(Syz(X)\). On the other hand, assume \(vX^{T}=0\) for some row vector \(v\in R^{t}\). 
Let \[P=\begin{pmatrix}I_{t}&0\end{pmatrix}\in M_{t\times s}(A).\] Then we have \(X^{T}=PX^{T}\) and \(PQ=I_{t}\). Thus \[0=vX^{T}=(vPQ)X^{T}=vPH^{T}\implies vP\in\operatorname{Syz}(H).\] Therefore \(vP\) lies in span of row vectors of \(S\) and \(v=vPQ\) lies in the span of row vectors of \(SQ\). Combining the two theorems above solves the computation of \(\operatorname{Syz}(X)\) for any tuple \(X\). ## 7. Ext Groups In this section, we further assume that \(A=\oplus_{i\geq 0}A_{i}\) is a connected graded algebra, which means that \(A_{0}=k\). The degree of a (homogeneous) element \(a\in A\) is denoted by \(\deg(a)\). We require the filtration \(F_{\bullet}A\) to be homogeneous. Thus the associated graded algebra \(\operatorname{gr}(A)\) has two gradings. To avoid confusion, the degree function on \(\operatorname{gr}(A)\) induced by the grading of \(A\) is also denoted by \(\deg(\cdot)\) while the degree function induced by the filtration is denoted by \(|\cdot|\) as before. Let \(N=A^{r}/M\) be a graded left \(A\)-module. Here \(A^{r}=A\{\tilde{v}_{i}:1\leq i\leq r\}\) is a direct sum of shifts of \(A\) where the degrees of basis \(\deg(\tilde{v}_{i})\) are integers. The goal of this section is to compute the Ext groups \(\operatorname{Ext}_{A}^{*}(N,k)\) by constructing a minimal free resolution of \(N\). Most of the algorithms are adapted from the theory of commutative Grobner bases (See [8]). The only differences are the steps that construct noncommutative Grobner bases. First we have to make sure that the number \(r\) in the representation \(N=A^{r}/M\) is as small as possible. Let \(A_{+}=\oplus_{i>0}A_{i}\) be the "graded maximal ideal" of \(A\). By Nakayama's lemma, we see that a minimal generating set of the graded left module \(N\) of \(A\) corresponds to a basis of the \(k\)-vector space \(N/A_{+}N\). Hence we can use the following algorithm to minimize \(r\). **Algorithm 7.1**.: _Let \(N\) be a graded left submodule over \(A\) with the following presentation_ \[N=A\{\tilde{v}_{1},\dots,\tilde{v}_{r}\}/(Ax_{1}+\dots+Ax_{l})\] _where \(x_{i}\) are \(A\)-linear combinations of \(\tilde{v}_{j}\). We can minimize \(r\) in the presentation by the following steps._ 1. _For_ \(1\leq i\leq l\)_, let_ \(y_{i}\) _be the sum of all monomials in_ \(x_{i}\) _that equals_ \(\tilde{v}_{j}\) _for some_ \(j\)_. In other words, we remove monomials in_ \(A_{+}\{\tilde{v}_{1},\dots,\tilde{v}_{r}\}\) _from summands of_ \(x_{i}\)_. Then we have_ \[N/A_{+}N\cong k\{\tilde{v}_{1},\dots,\tilde{v}_{r}\}/(ky_{1}+\dots+ky_{l}).\] 2. _Assume that_ \(\dim_{k}(N/A_{+}N)=r^{\prime}\) _for some_ \(r^{\prime}\leq r\)_. Perform row reductions on_ \(y_{i}\) _to find a subset of_ \(\{\tilde{v}_{1},\dots,\tilde{v}_{r}\}\) _that generates_ \(N/A_{+}N\)_. Without loss of generality we assume that_ \(\{\tilde{v}_{1},\dots,\tilde{v}_{r^{\prime}}\}\) _generates_ \(N/A_{+}N\) _and we can write_ \(\tilde{v}_{j}\) _as a_ \(k\)_-linear combination of_ \(\tilde{v}_{1},\dots,\tilde{v}_{r^{\prime}}\) _for_ \(j>r^{\prime}\) _in_ \(N/A_{+}N\)_._ 3. _Perform the exact same row reductions on_ \(x_{i}\) _instead of_ \(y_{i}\)_. Then in_ \(N\) _we can write_ \(\tilde{v}_{j}\) _as a_ \(k\)_-linear combination of_ \(\tilde{v}_{1},\dots,\tilde{v}_{r^{\prime}}\) _and other monomials in_ \(A_{+}\{\tilde{v}_{1},\dots,\tilde{v}_{r}\}\) _for_ \(j>r^{\prime}\)_. By iterating this process_ \(\tilde{v}_{j}\) _can be further rewritten as an_ \(A\)_-linear combination of_ \(\tilde{v}_{1},\dots,\tilde{v}_{r^{\prime}}\)_. 
Thus we can replace_ \(x_{i}\) _with_ \[x_{i}^{\prime}\in Av_{1}+\dots+Av_{r^{\prime}}\] _such that_ \[N\cong A\{\tilde{v}_{1},\dots,\tilde{v}_{r^{\prime}}\}/(Ax_{1}^{\prime}+ \dots+Ax_{l}^{\prime})\] _where_ \(\{\tilde{v}_{1},\dots,\tilde{v}_{r^{\prime}}\}\) _is a minimal generating set of_ \(N\)_._ Next we need the following simple algorithm which computes a minimal generating set of of a submodule \(M\subset A^{r}\). **Algorithm 7.2**.: _Let \(M\) be a graded left submodule of \(A^{r}\) generated by \(x_{1},\dots,x_{l}\). We can find a subset of \(\{x_{1},\dots,x_{l}\}\) that is a minimal generating set of \(M\) by the following steps._ 1. _Order_ \(x_{1},\dots,x_{l}\) _by degrees such that_ \[\deg(x_{1})\leq\dots\leq\deg(x_{l}).\] 2. _Compute the Grobner basis_ \(H_{i}\) _of the submodule generated by_ \(x_{1},\dots,x_{i}\) _for_ \(0\leq i\leq l\) _(_\(H_{0}=\emptyset\)_). We can apply Algorithm_ 5.4 _on_ \(H_{i-1}\cup\{\operatorname{red}(x_{i},H_{i-1})\}\) _to obtain_ \(H_{i}\)_._ 3. _When_ \(\operatorname{red}(x_{i},H_{i-1})=0\)_, mark_ \(x_{i}\) _as redundant for_ \(1\leq i\leq l\)_._ 4. _Remove the redundant elements from_ \(x_{1},\dots,x_{l}\)_. The remaining elements form a minimal generating set of of_ \(M\)_._ Now we have all the ingredients to construct a minimal resolution of \(N=A^{r}/M\). **Algorithm 7.3**.: _Let \(N=A^{r}/M\) be a graded left \(A\)-module where \(M\subset A^{r}\) is generated by \(x_{1},\dots,x_{l}\). We construct the first \(s+1\) terms \(F_{0},\dots,F_{s}\) of a free resolution_ \[\dots\to F_{s}\xrightarrow{d_{s}}\dots\xrightarrow{d_{3}}F_{2}\xrightarrow{ d_{2}}F_{1}\xrightarrow{d_{1}}F_{0}=A^{r}\xrightarrow{\epsilon}N\] _by the following steps._ 1. _Apply Algorithm_ 7.1 _to minimize_ \(r\) _in the presentation of_ \(N\)_._ 2. _Let_ \(d_{0}=\epsilon:F_{0}=A^{r}\to N\) _be the quotient map. Apply Algorithm_ 7.2 _on_ \(M=\ker(d_{0})\) _to obtain a minimal generating set_ \(\{x_{11},\dots,x_{l_{1}}\}\) _of_ \(M\)_._ 3. _For_ \(i=1,2,\dots,s\)_, assume that_ \(\{x_{i1},\dots,x_{il_{i}}\}\) _is a minimal generating set of_ \(\ker(d_{i-1})\)_. Define_ \[F_{i}=A\{v_{i1},\dots,v_{il_{i}}\},\ \deg(v_{ij})=\deg(x_{ij})\] _and_ \[d_{i}(v_{ij})=x_{ij},\text{ for }1\leq j\leq l_{i}.\] _Apply Theorem_ 6.5 _to compute a minimal generating set of_ \(\ker(d_{i})\cong\operatorname{Syz}(x_{i1},\dots,x_{il_{i}})\)_._ By executing the algorithm we can find that \[\operatorname{Ext}^{i}_{A}(N,k)\cong\begin{cases}k^{r}&i=0,\\ k^{l_{i}}&1\leq i\leq s.\end{cases}\] We call this algorithm a vertical method for computing a minimal resolution. We can write a horizontal version of this algorithm which executes this algorithm degree by degree (from the grading of \(A\)). We leave it for the reader to write up this version. We refer the reader to [8, Theorem 4.8.16] of a commutative version of a horizontal method which incorporates the elimination algorithm as an optimization and it can be directly transferred to our case. ## 8. The Steenrod Algebra In the rest of this paper, we would like to apply our noncommutative Grobner bases to the Steenrod algebra at the prime 2 and compute the Ext groups over it. In this section we recall some facts about the Steenrod algebra. Let \(\mathscr{A}\) be the Steenrod algebra at the prime 2. It is a Hopf algebra generated by symbols \(Sq^{n}\), \(n\geq 1\) in degree \(n\) with relations given by \[Sq^{i}Sq^{j}=\sum_{k=0}^{[i/2]}\binom{j-k-1}{i-2k}Sq^{i+j-k}Sq^{k}\ \ \ \ (Sq^{0}=1)\] for all \(i,j>0\) such that \(i<2j\). 
They are called the Adem relations. The coproduct is given by \[\psi(Sq^{n})=\sum_{i=0}^{n}Sq^{i}\otimes Sq^{n-i}.\] In order to define a suitable filtration for \(\mathscr{A}\), first we consider the dual of \(\mathscr{A}\). By the work of Milnor [15], the dual of the Steenrod algebra \(\mathscr{A}_{*}\) can be characterized by \[\mathscr{A}_{*}=\mathbb{F}_{2}[\xi_{1},\xi_{2},\dots],\ \ \ \ \deg(\xi_{i})=2^{i}-1\] with coproduct given by \[\psi(\xi_{n})=\sum_{i=0}^{n}\xi_{n-i}^{2^{i}}\otimes\xi_{i}.\ \ \ \ (\xi_{0}=1)\] The dual basis of the monomial basis of \(\mathscr{A}_{*}\) is denoted by \(\{P(R)\}\), where \(R=(r_{1},r_{2},\dots)\) ranges over all sequences of non-negative integers which are almost all zero, and \(P(R)\) is dual to \[\xi(R)=\xi_{1}^{r_{1}}\xi_{2}^{r_{2}}\cdots.\] Milnor gives a product formula which computes a product \(P(R_{1})\cdot P(R_{2})\) as a linear combination of \(P(R)\) again. We define a weight function \(w\) on \(\mathscr{A}\) by \[w(P(r_{1},r_{2},\dots))=\sum_{k,i}(2k-1)a_{k,i}\] where \(r_{k}=\sum_{i}a_{k,i}2^{i}\) is the \(2\)-adic expansion. We define a decreasing filtration on \(\mathscr{A}\) by \[F_{p}\mathscr{A}=k\{P(R):w(P(R))\geq p\}.\] It is not hard to check via the coproduct formula in the dual that \(F_{\bullet}\mathscr{A}\) satisfies \[F_{p}\mathscr{A}\cdot F_{q}\mathscr{A}\subset F_{p+q}\mathscr{A}\] and in each degree \(n\), we have \(F_{p}\mathscr{A}_{n}=0\) when \(p\) is large enough. **Proposition 8.1**.: _The associated graded algebra of the Steenrod algebra is isomorphic to an exterior algebra_ \[\operatorname{gr}(\mathscr{A})\cong E[P_{j}^{i}:i\geq 0,j>0]\] _where the symbol \(P_{j}^{i}\) is the projection of_ \[\tilde{P}_{j}^{i}=P(0,\dots,0,2^{i},0,\dots)\] _which is dual to \(\xi_{j}^{2^{i}}\). The gradings of \(\operatorname{gr}(\mathscr{A})\) are given by_ \[\deg(P_{j}^{i})=2^{i}(2^{j}-1),\ |P_{j}^{i}|=2j-1.\] The reader can find a proof of this proposition in the proof of [20, Theorem 3.2.2]. _Remark 8.2_.: This associated graded algebra is actually the same as the associated homogeneous Koszul algebra (defined by Priddy [19]) of May's associated graded algebra of the Steenrod algebra (see [14]). _Remark 8.3_.: For a prime \(p>2\), there is a similar filtration for the Steenrod algebra at the prime \(p\) such that the associated graded algebra is a product of exterior algebras and truncated polynomial algebras of height \(p\) (see the remarks before [20, Lemma 3.2.4]). Therefore we can also apply our algorithms to the Steenrod algebras at odd primes. However, the author has not implemented these algorithms into computer programs yet. Although \(\operatorname{gr}(\mathscr{A})\) is not a finitely generated commutative algebra, the truncated graded algebra \(\mathscr{A}_{\leq n}\) is finitely generated and so is \(\operatorname{gr}(\mathscr{A}_{\leq n})\). When we want to compute a minimal resolution of a left \(\mathscr{A}\)-module up to degree \(n\), it suffices to use \(\mathscr{A}_{\leq n}\) instead of \(\mathscr{A}\). For our convenience, in the rest of this section, we implicitly consider everything truncated to some degree so that Grobner bases are still well-defined. When \(A=\mathscr{A}\), consider \[\operatorname{gr}(\mathscr{A})=k[P_{j}^{i}:i\geq 0,j>0]/I\] where the ideal is generated by the set of squares \[G=\{(P_{j}^{i})^{2}:i\geq 0,j>0\}.\] It is obvious that \(G\) is a Grobner basis of \(I\). We order the generators \(P^{i}_{j}\) by degree. 
In other words, we have \[P^{i}_{j}<P^{s}_{t}\Longleftrightarrow\deg(P^{i}_{j})<\deg(P^{s}_{t}).\] This is well defined since all \(P^{i}_{j}\) have different degrees. We order monomials in the \(P^{i}_{j}\) such that \[m<m^{\prime}\Longleftrightarrow|m|<|m^{\prime}|,\text{ or }|m|=|m^{\prime}|\text{ and }m\text{ is \emph{greater} than }m^{\prime}\text{ lexicographically.}\] The author uses this particular monomial ordering because the corresponding computer program is faster than other versions the author has tried so far. For left submodules of \(\mathscr{A}^{r}\) we use (5.1) as our monomial ordering. The set \[\mathscr{B}=\{\text{square-free monomials in variables }P^{i}_{j}\}\] is an \(\mathbb{F}_{2}\)-basis of \(\operatorname{gr}(\mathscr{A})\) while the set \[\tilde{\mathscr{B}}=\{\tilde{P}^{i_{1}}_{j_{1}}\cdots\tilde{P}^{i_{l}}_{j_{l}}:\tilde{P}^{i_{1}}_{j_{1}}<\cdots<\tilde{P}^{i_{l}}_{j_{l}},l\geq 0\}\] is an \(\mathbb{F}_{2}\)-basis of \(\mathscr{A}\). The basis \(\tilde{\mathscr{B}}\) is similar to the "\(P^{s}_{t}\) basis" in Monks [16] with a different ordering of the generators. ## 9. Computing the Ext Groups over the Steenrod Algebra In the previous section, we have shown that the truncated graded algebra \(\mathscr{A}_{\leq n}\) is a finitely generated filtered-commutative algebra. Therefore we can apply noncommutative Grobner bases to compute the Ext groups \(\operatorname{Ext}^{*,*}_{\mathscr{A}}(N,k)\) for a finitely generated graded left \(\mathscr{A}\)-module \(N\). The author has implemented the algorithms into computer programs to compute \(\operatorname{Ext}^{*,*}_{\mathscr{A}}(N,k)\) for various \(N\) up to some degree. The program code can be found on Github [11]. In addition to Algorithm 7.3, the computer programs incorporate the following optimizations from commutative algebra, which can be easily adapted to our noncommutative case. 1. Use Buchberger triples to reduce the number of steps needed to compute a Grobner basis. (See [9, Tutorial 25]) 2. Build the resolution degree by degree and use eliminations. (See [8, Theorem 4.8.16]) Since \(\mathscr{A}\) is a Hopf algebra, \(\operatorname{Ext}^{*,*}_{\mathscr{A}}(\mathbb{F}_{2},\mathbb{F}_{2})\) is actually a commutative algebra over \(\mathbb{F}_{2}\). After we obtain a resolution by the programs above, we can use methods described in [5, 6] to compute the products in \(\operatorname{Ext}^{*,*}_{\mathscr{A}}(\mathbb{F}_{2},\mathbb{F}_{2})\) and module structures of \(\operatorname{Ext}^{*,*}_{\mathscr{A}}(N,\mathbb{F}_{2})\) over \(\operatorname{Ext}^{*,*}_{\mathscr{A}}(\mathbb{F}_{2},\mathbb{F}_{2})\) for \(\mathscr{A}\)-modules \(N\). However, we always use Grobner bases to do linear algebra rather than enumerating \(\mathbb{F}_{2}\)-bases of free \(\mathscr{A}\)-modules. The elimination algorithm mentioned above is especially efficient for building maps between chain complexes, which is used to compute products in Ext. We have computed the cohomology of the Steenrod algebra \(\operatorname{Ext}^{*,*}_{\mathscr{A}}(\mathbb{F}_{2},\mathbb{F}_{2})\) with outcomes described in Theorem 1.3. The computed range is very large and contains \(h_{7}^{2}\) and \(h_{8}\). The reader can find a graphical diagram of \(\operatorname{Ext}^{*,*}_{\mathscr{A}}(\mathbb{F}_{2},\mathbb{F}_{2})\) on the webpage [12].
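To make the Adem relations of Section 8 concrete, the following is a minimal Python sketch (illustrative only, and not the program code referenced in [11]) that expands an inadmissible product \(Sq^{i}Sq^{j}\) with \(0<i<2j\) into a sum of admissible terms over \(\mathbb{F}_{2}\), directly from the displayed relation:

```python
from math import comb

def adem(i, j):
    """Expand Sq^i Sq^j (0 < i < 2j) via the Adem relation over F_2.

    Returns a set of pairs (a, b) standing for Sq^a Sq^b (read Sq^a when b == 0);
    every returned pair satisfies a >= 2b, so the terms are admissible.
    """
    assert 0 < i < 2 * j
    terms = set()
    for k in range(i // 2 + 1):
        if comb(j - k - 1, i - 2 * k) % 2 == 1:  # binomial coefficient reduced mod 2
            terms.add((i + j - k, k))
    return terms

# Small sanity checks: Sq^1 Sq^1 = 0 and Sq^2 Sq^2 = Sq^3 Sq^1.
assert adem(1, 1) == set()
assert adem(2, 2) == {(3, 1)}
```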
2306.01061
Reimagining Retrieval Augmented Language Models for Answering Queries
We present a reality check on large language models and inspect the promise of retrieval augmented language models in comparison. Such language models are semi-parametric, where models integrate model parameters and knowledge from external data sources to make their predictions, as opposed to the parametric nature of vanilla large language models. We give initial experimental findings that semi-parametric architectures can be enhanced with views, a query analyzer/planner, and provenance to make a significantly more powerful system for question answering in terms of accuracy and efficiency, and potentially for other NLP tasks.
Wang-Chiew Tan, Yuliang Li, Pedro Rodriguez, Richard James, Xi Victoria Lin, Alon Halevy, Scott Yih
2023-06-01T18:08:51Z
http://arxiv.org/abs/2306.01061v1
# Reimagining Retrieval Augmented Language Models ###### Abstract We present a reality check on large language models and inspect the promise of retrieval-augmented language models in comparison. Such language models are semi-parametric, where models integrate model parameters and knowledge from external data sources to make their predictions, as opposed to the parametric nature of vanilla large language models. We give initial experimental findings that semi-parametric architectures can be enhanced with views, a query analyzer/planner, and provenance to make a significantly more powerful system for question answering in terms of accuracy and efficiency, and potentially for other NLP tasks. ## 1 Introduction As language models have grown larger Kaplan et al. (2020); Hoffmann et al. (2022), they have fared better and better on question answering tasks Hendrycks et al. (2021) and have become the foundation of impressive demos like Chat-GPT Ouyang et al. (2022); ChatGPT3-OpenAI). Models like GPT-3 Brown et al. (2020) and Chat-GPT generate fluent, human-like text, which comes the potential for misuse as in high-stakes healthcare settings Dinan et al. (2021). Large language models (LLMs) also come with several significant issues Hoffmann et al. (2022); Bender et al. (2021). LLMs are costly to train, deploy, and maintain, both financially and in terms of environmental impact Bender et al. (2021). These models are also almost always the exclusive game of industrial companies with large budgets. Perhaps most importantly, the ability of LLMs to make predictions is not commensurate with their ability to obtain insights about their predictions. Such models can be prompted to generate false statements Wallace et al. (2019), often do so unprompted Asai et al. (2022) and when combined with its ability to easily fool humans, can lead to misuse Macaulay (2020). In recent years, we have seen the promise of retrieval-augmented language models partially addressing the aforementioned shortcomings Guu et al. (2020); Lewis et al. (2020); Borgeaud et al. (2021); Izacard et al. (2022); Yasunaga et al. (2022). The architecture of such models is _semi-parametric_, where the model integrates model parameters and knowledge from external data sources to make its predictions. The first step of performing a task in these architectures is to retrieve relevant knowledge from the external sources, and then perform finer-grained reasoning. Some of the benefits these architectures offer are that the external sources can be verified and updated easily, thereby reducing hallucinations Shuster et al. (2021) and making it easy to incorporate new knowledge and correct existing knowledge without needing to retrain the entire model Lewis et al. (2020). Models that follow semi-parametric architectures (SPA) are typically smaller than LLMs and they have been shown to outperform LLMs on several NLP tasks such as open domain question answering (see Table 1). Recent work that extends LLMs with modular reasoning and knowledge retrieval Karpas et al. (2022); LangChain) is also a type of SPA. In this paper we argue that building on the core ideas of SPA, we can potentially construct much more powerful question answering systems that also provide access to multi-modal data such as image, video and tabular data. We describe PostText, a class of systems that extend SPA in three important ways. First, PostText allows the external data to include _views_, a concept we borrow from database systems Garcia-Molina et al. (2008). 
A _view_ is a function over a number of data sources, \(V=f(D_{1},...,D_{n})\). In databases, SQL queries are used to define tabular views. For example, \(V\) can be a table of records of minors that is derived from a table of person records by selecting only those with age\(<\)18. In general, however, views need not be tabular. When a view is materialized (i.e., executed and stored), it may be useful for answering certain queries1 more effectively. In this paper, we adopt a more general notion of views, not limited to results of SQL queries, which can (compositionally) support a variety of user questions. Views are particularly important to support multi-modal data, because combinations of data from multiple modalities can be modeled as views. Second, PostText contains a question analyzer and planner module that decides on the best strategy to answer a question, which may involve first answering multiple subquestions in sequence or in parallel. This module bears similarity to query optimization techniques in database systems but will go significantly beyond the techniques established in database systems, since there are multiple different ways to answer a natural language question, especially with the availability of multi-modal data. Finally, PostText supports computing the provenance of answers to questions. The provenance-aware answer generator module can track the evidence (training data or external sources) that is used for the answers, even if views are used as intermediate results. Footnote 1: We use queries and questions interchangeably. We illustrate the power of PostText with examples in the next section and also give an overview of its architecture. In the remaining sections, we describe the different components of PostText. ## 2 Overview of PostText **Example 1**: Consider a setting where we answer questions over data that includes images of dishes and text with restaurant reviews. We can create a view that aligns these two data sets so we can answer more complex queries readily. The view, the table in the middle of Figure 1, aligns dishes with relevant reviews and the corresponding restaurants. Note that creating this view involves an intermediate step of identifying the name of the dish in an image. The view also stores the provenance links to the actual reviews from which the snippets were extracted. There are also provenance links for the images and the name of the dish (not shown). This view can be used to answer questions that would be more difficult without it. For example, if a person recalls a nice dish she had in the past but does not remember its name and is trying to figure out which restaurants serve the same dish and what the reviews are, she can pose the question, which includes both the question in text and an image of the dish. The answer states the name of the dish in question and lists restaurants with top reviews for that dish, along with images of the dish and snippets of those reviews and their provenance. **Example 2**: The same view can also be used to answer the question _"how many reviews raved about Shaking beef?"_. The answer requires counting the number of reviews that are synonymous with very positive reviews about Shaking beef. The view surfaces the reviews associated with Shaking beef immediately and reduces the amount of work that would otherwise be required to compute the answer. 
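To ground Example 2, the following is a minimal sketch of how such a view could be materialized and queried; the table layout, column names, and sentiment threshold are illustrative rather than the actual PostText schema.

```python
import sqlite3

# A toy materialization of the view in Figure 1; all names and values are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE dish_reviews
    (dish TEXT, restaurant TEXT, review_snippet TEXT, sentiment REAL, provenance TEXT)""")
conn.executemany(
    "INSERT INTO dish_reviews VALUES (?, ?, ?, ?, ?)",
    [("Shaking beef", "Tamarine", "The shaking beef was outstanding", 0.95, "review://123"),
     ("Shaking beef", "Tamarine", "Decent but a little salty", 0.40, "review://456")])

# Example 2: "how many reviews raved about Shaking beef?"
# "Raved" is approximated here with a sentiment cutoff.
count, = conn.execute(
    "SELECT COUNT(*) FROM dish_reviews WHERE dish = ? AND sentiment > 0.8",
    ("Shaking beef",)).fetchone()
print(count)  # -> 1
```

Because the view already co-locates dishes, restaurants, and review snippets (with provenance links), the aggregate reduces to a single query rather than a multi-step retrieval over raw sources.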
The examples show that some questions can be answered more easily if they are supported by views that surface useful associations between data. In fact, indices are a type of views to accelerate lookups between an item and its attributes. In database systems, views have been used extensively to enable more efficient query answering Halevy (2001); Goldstein and Larson (2001) with significant work on automatically materializing a set of indices for efficient query answering Jindal et al. (2018); Das et al. (2019). A set of views and indices are defined automatically or manually in anticipation of a set of frequently asked queries under a budget constraint, e.g., space, so that during runtime, most of the incoming queries can be answered immediately or after applying simple operations over the views. Otherwise, the system falls back to answering the queries using the actual data sources. In other words, PostText prefers to use views to answer the questions, which will likely to be more efficient and accurate in general but otherwise, the \begin{table} \begin{tabular}{c c c c} \hline \hline Model & \#Params & Outperformed LLM’s sizes & Tasks \\ \hline REALM Guu et al. (2020) & 330M & 11B (T5) & Open-QA \\ RETRO Borgeaud et al. (2021) & 7.5B & 178B (Jurassic-1), 280B (Gopher) & Language modeling \\ Atlas Izacard et al. (2022) & 11B & 175B (GPT-3), 540B (PaLM) & Multi-task NLU, Open-QA \\ RAG Lewis et al. (2020) & 400M & 11B (T5) & Open-QA \\ FiD Izacard and Grave (2021) & 770M & 11B (T5), 175B (GPT-3) & Open-QA \\ \hline \hline \end{tabular} \end{table} Table 1: The sizes of SPA models with those of comparable or outperformed LLMs. system falls back to the traditional question answering strategy. In addition to query answering, views have also been used to define content-based access control Bertino and Sandhu (2005), i.e., which parts of the data are accessible and by whom. The examples also show how provenance is provided as part of the answer. In these examples, it happened that provenance was easily determined through the provenance links that are already captured in the views. If actual data sources are accessed, the links to the data sources used (e.g., spans of text documents, parts of images, segments of videos) to derive the answer are provided as part of the answer. If the answer is generated by the language model, we trace how PostText derives the answer from parametric knowledge and retrieved data through analyzing its weights or determining "influential" parametric knowledge (Section 6) similarly to Akyurek et al. (2022). **PostText architecture** PostText enhances the core architecture of semi-parametric models with three components: views, a query analyzer & planner (QAP), and a provenance-aware answer generator (PAG). In addition, all components including the "traditional" knowledge retrievers are equipped to manage both structured and unstructured data of different modalities. Figure 2 shows the architecture of PostText. Views are synthesized from different types of external data sources (e.g., text, images, videos, and tabular data), which can be public or private. When a question is posed in natural language (NL), the QAP module interprets and decomposes the question into subquestions whose answers can be composed to obtain an answer to the input question. QAP coordinates with the knowledge retriever to derive the data needed to answer these questions. It also coordinates with the PAG module with its plan so that provenance-aware answers can be returned. 
Adding these components raises interesting challenges such as what views should we construct and how do we construct and maintain these views automatically as data sources changes? What is a good plan for deriving an answer and how do we choose among alternative plans? And how do we measure the "goodness" of an answer with provenance? In the remaining sections, we describe the challenges associated with each of these components ## 3 Data Sources and Views **Data Sources** Most existing work on retrieval augmented language models are focused on text. More recently, Chen et al. (2022); Yasunaga et al. (2022); Sheynin et al. (2022) has applied SPA models on image-text and text-only corpus. The data sources in PostText are multi-modal, unstructured or structured. They can be external public data sources or private ones. **Views** Views are results computed (not necessarily Figure 1: Multimodal question with multimodal answer. The view (middle) associates the dishes with its corresponding review snippets and images. The provenance links show where the snippets are extracted from. There are also provenance links for the images and name of the dish (not shown). through SQL queries) from data sources or other views. For example, a view can be a document involving data of different modalities (e.g., an image or a table). Views are powerful constructs for surfacing important and useful associations that are not obvious otherwise, whether they are associations from data within one data source or across multiple data sources. The table in Figure 1 is a view over restaurant reviews from Yelp, Google, and images provided by restaurants. This view makes it easier to compute the number of reviews associated with each dish in each restaurant or even across all restaurants. This view also makes it easier to determine the answer as to which dishes has more reviews than Shaking beef at Tamarine. Indexes are a special type of views. They associate an item with its attribute. Several implementations of retrieval augmented language models Guu et al. (2020); Lewis et al. (2020); Izacard et al. (2022) already construct indices that associate a document with its nearest neighbors. Recently, GPT-index (GPT-Index, 2022) developed a set of APIs for creating data structures that can be traversed using LLMs to answer queries. The data structures are structured indexes and can be used to determine an answer to a question. Relational views are extensively used in data warehouses for optimizing queries. Indexes and views are typically created by users or database administrators or they can be automatically selected Agrawal et al. (2000); Schnaitter et al. (2007); Jindal et al. (2018) and tuned Agrawal et al. (2006); Bruno and Chaudhuri (2008) to efficiently answer queries of a given workload Das et al. (2019), which are queries that are anticipated to be frequently occurring. In typical settings, a set of views are constructed, usually under a budget constraint such as space, to maximize the queries that can be answered (either directly or through applying a few simple operators on the views) in a given workload. When a new query arrives after the views are constructed, the query optimizer determines the best plan to adopt for computing the answer. Queries are directly executed over the views if possible. Otherwise, it falls back to old strategy of answering the query with the data sources. 
For example, early last year, in anticipation of frequent queries about statistics of past World Cups due to the World Cup 2022 event at the end of the year, a set of views about the different World Cup statistics could have been constructed a priori so that most World Cup related questions can be directly answered using the views. We hypothesize that views in PostText can bring similar benefits to question answering. The right views will make it easier for the QAP module and the knowledge retriever to discover and obtain relevant data and subsequently for the answer generator to derive the right answers. Existing SPAs Guu et al. (2020); Lewis et al. (2020); Izacard et al. (2022) are already leveraging dense-vector indices to accelerate the retrieval of document spans. In PostText with views being available, it is a natural extension to annotate each view with a description of its content (e.g., "_Restaurants and highly ranked dishes_"), which would make it even easier for the knowledge retriever to find the relevant data. The core challenges in developing views are how do we determine what is a "right" set of views to materialize automatically or semi-automatically? Figure 2: Semi-parametric architectures enhanced with views, a query analyzer & planner module, and a provenance-aware answer generator. The data sources may be public or private. How do we incrementally maintain such views as data sources are updated? These problems are extensively studied in the database community and it will be interesting to explore those ideas that transfer to the PostText. The architecture can also be instrumented in such a way that views are the only sources of data for the knowledge retriever (i.e., actual data sources are excluded). Hence, in this case, views act as a gateway that define which parts of the data sources are accessible by the knowledge retriever to answer queries. Finer-grained access control can also be instrumented through views as described in Bertino and Sandhu (2005). With views, it is also possible to enable a finer-grained public-private autoregressive information retrieval privacy system Arora et al. (2022). ## 4 Question Analyzer & Planner The question analyzer and planner (QAP) module examines the input question and generates a plan, i.e., a sequence of sub-questions whose answers can be combined to form an answer to the input question. For each subquestion in the plan, QAP first checks whether external knowledge is needed. If not, the language model can be used to derive the answer. Otherwise, the subquestion is passed to the knowledge retriever to discover and retrieve relevant data for the subquestion at hand. The results from the knowledge retriever and the plan are passed to PAG (i.e., the rightmost green box in Figure 2). It is still an open and challenging question to determine whether a language model can confidently answer a question Kamath et al. (2020); Si et al. (2022). Any solution to this problem will help improve the plan generator. An example plan from the QAP module for our running example is as follows: (1) find the name of the dish \(X\) in the input image, (2) find restaurants that serve \(X\), (3) find the top restaurant among the results from (2). This plan is viable because (a) there is an index associating embeddings of images with the name of the main entity of the image, (b) there exists a view as shown in Figure 1, which supports the search for restaurants that serve a particular dish. 
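For illustration, the three-step plan above could be represented as a simple sequence of typed sub-steps that the QAP hands to the knowledge retriever and to PAG; the operation names and fields below are hypothetical and not PostText's actual plan format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PlanStep:
    op: str                                           # e.g. "image_to_entity", "view_lookup", "rank"
    inputs: List[str] = field(default_factory=list)   # literals or ids of earlier steps
    source: Optional[str] = None                      # view or index the knowledge retriever should use

# Running example: dish image -> dish name -> restaurants serving it -> top-ranked answer.
plan = [
    PlanStep(op="image_to_entity", inputs=["<dish image>"], source="image_index"),
    PlanStep(op="view_lookup", inputs=["step_0"], source="dish_reviews_view"),
    PlanStep(op="rank", inputs=["step_1"]),
]
```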
Top answers can be derived by computing the scores of the reviews or approximating it based on the sentiment of the reviews and then ranking the results based on such scores. The information from (2) is passed to PAG which will compute the answer along with its provenance. This plan is based on the heuristic to push selection conditions early before joining/combining different data sources if needed. The conditions in the question are "good version" and "this dish". In this case, no joins are required as the view already combines the required information in one place. Hence, QAP seeks to first find the name of the dish to narrow down the reviews restricted to this dish. Alternatively, it could also retrieve all good reviews before conditioning on the name of the dish. Yet another plan could be to match the image directly to the images of the view to find the top reviews. Or, it may decide to directly retrieve only top reviews with images similar to the image in the question from the external data sources and condition the answer based on the name of the restaurant mentioned in the reviews. In all possible plans, the knowledge retriever is responsible for discovering and retrieving the relevant data for the QAP plan. In addition to the logic that may be needed for decomposing the question into subquestions, a plan is also needed for composing the subanswers obtained to form an answer to the input question. The plan is shared with the PAG module for deriving the associated provenance. A fundamental challenge in developing the QAP module is how to derive candidate plans and decide what is the "best" plan for answering the question when there are different ways to obtain an answer. Achieving this requires understanding how to compare amongst alternative plans for deriving an answer to the question. This problem bears similarity to query evaluation techniques for database systems (e.g., Graefe (1993)). It will be interesting to investigate whether database query planning techniques and ideas can synergize with question understanding and planning techniques (e.g., Wolfson et al. (2020); Dunietz et al. (2020); Zhao et al. (2021); Xiong et al. (2021) to develop a comprehensive query planner. Emerging work such as chain of thought reasoning Wei et al. (2022), where a sequence of prompts are engineered to elicit better answers, ReAct Yao et al. (2022), where reasoning and action techniques are applied for deriving an answer, and more recently, work that generates a plan which can call LMs for resolving subquestions Cheng et al. (2022) are also relevant. These techniques so far are restricted to text and does not compare among different plans. Another challenge in the context of NL questions is that while there is a single correct answer to an SQL query over a database, there are potentially many different correct answers to a NL question (Si et al., 2021; Min et al., 2020; Chen et al., 2020). Hence the space of possible plans to derive the "best" answer most efficiently is even more challenging in this case. We are advocating for a system that can reason and compare at least some viable strategies to arrive at a best plan for deriving a good answer efficiently. Naturally, one can also train a LM to create a plan. Our belief is that taking a more systematic route to planning can relief the need for the amount of training data required and will also aid provenance generation through its ability to describe the steps it took and the sources of data used in each step to generate an answer. 
As we shall explain in Section 5, the cost and accuracy of knowledge retrievers can also play a role in determining what is a better strategy for computing a good answer. ## 5 Knowledge Retriever The role of the knowledge retriever is to provide the information that the system lacks in order to fulfill the given task, typically at the inference time. More importantly, we envision that the knowledge retriever proposed in our framework has the ability to access knowledge stored in different sources and modalities, retrieve and integrate the relevant pieces of information, and present the output in a tabular data view. The structured output contains raw data items (e.g., text documents, images or videos) and and optionally different metadata, such as textual description of each data item. Such structured output allows downstream (neural) models to consume the retrieved knowledge efficiently and also allows developers and users to validate the provenance conveniently. Existing information retrieval models mostly focus on a single form of data. Below we first describe briefly how knowledge retrieval is done for unstructured and structured data. We then discuss the technical challenges for building a unified knowledge retriever, as well as recent research efforts towards this direction. Retrievers for unstructured dataFor unstructured data, such as a large collection of documents (i.e., text corpus) or images, knowledge retrieval is often reduced to a simple similarity search problem, where both queries and data in the knowledge source are represented as vectors in the same vector space (Turney and Pantel, 2010). Data points that are _close_ to the query are considered as _relevant_ and thus returned as the knowledge requested. Traditional information retrieval methods, whether relying on sparse vector representations, such as TFIDF (Salton et al., 1975) and BM25 (Robertson et al., 2009), or dense representations, such as LSA (Deerwester et al., 1990), DSSM (Huang et al., 2013), DPR (Karpukhin et al., 2020), are the canonical examples of this paradigm. Notice that the vector space model is not restricted to text but is also applicable to problems in other modalities, such as image tagging (Weston et al., 2011) and image retrieval (Gordo et al., 2016). Retrievers for structured dataWhen the knowledge source is semi-structured (e.g., tables) or structured (e.g., databases), the query can be structured and allows the information need to be defined in a more precise way. Because the data is typically stored in a highly optimized management system and sometimes only accessible through a set of predefined API calls, the key technical challenge in the knowledge retriever is to formulate the information need into a formal, structured query. To map natural language questions to structured queries, semantic parsing is the key technical component for building a knowledge retriever for structured data. Some early works propose mapping the natural language questions to a generic meaning representation, which is later translated to the formal language used by the target knowledge base through ontology matching (Kwiatkowski et al., 2013; Berant et al., 2013). Others advocate that the meaning representation should be closely tight to the target formal language (Yih et al., 2015), such as SPARQL for triple stores. Because of the success of deep learning, especially the large pre-trained language models, semantic parsing has mostly been reduced to a sequence generation problem (e.g., Text-to-SQL). 
For example, RASAT (Qi et al., 2022) and Picard (Scholak et al., 2021), which are generation models based on T5 (Raffel et al., 2020), give state-of-the-art results on benchmarks like Spider (Yu et al., 2018) and CoSQL (Yu et al., 2019). **Towards a unified knowledge retriever** As knowledge can exist in different forms, a unified knowledge retriever that can handle both structured and unstructured data in different modalities is more desirable. One possible solution for realizing a unified retriever is to leverage multiple single-source knowledge retrievers. When a query comes in, the QAP module first decomposes it into several smaller sub-queries, where each sub-query can be answered using one component knowledge retriever. The results from multiple knowledge retrievers can be integrated and then returned as the final output. However, several technical difficulties, including how to accurately decompose the question and how to join the retrieved results, often hinder the success of this approach. Alternatively, unifying multiple sources of information in a standard representation, using text as a denominator representation, has been promoted recently (Oguz et al., 2022; Zeng et al., 2022). If all data items have a corresponding textual description, it is possible for the knowledge retriever to use only text-based retrieval techniques to find relevant data items once all input entities of non-textual modality have been mapped to their corresponding textual descriptions. Such an approach circumvents the complexity of managing multiple knowledge stores in different formats. Moreover, with the success of large multilingual and multi-modal language models (Conneau and Lample, 2019; Aghajanyan et al., 2022), data of different structures or from different modalities can naturally share the same representation space. While unifying multiple sources of information through representation learning seems to be a promising direction, it should be noted that certain structured information may be lost in the process. For example, by flattening a knowledge graph into sequences of (subject, predicate, object) triples, the graph structure is then buried in the textual form. Whether the information loss limits the retriever's ability to handle certain highly relational queries remains to be seen. ## 6 Provenance-aware answer generators ### Semi-Parametric Engine Demonstrating the provenance of a QA model prediction should center on identifying the data (whether in training data, retrieval corpora, or input) that is most influential in causing the model to make a particular prediction. For example, given the question "_who was the first U.S. president?_", the system should return the correct answer "_George Washington_" and references to training or retrieval corpora that are, to the model, causally linked to the answer. If the training or retrieval data included Washington's Wikipedia page, a typical human would expect this to be included. However, the requirement we impose is causal and counterfactual: had the model not used that data, the prediction should change. If the prediction does not change, then from the causal perspective, there may be other data that is either more influential or duplicative (e.g., if whitehouse.gov is in the training data, it is duplicative). Next, we describe common semi-parametric models and sketch how this causally-based answer provenance could be obtained, along with the computational challenges to overcome. 
Provided an input prompt and retrieved text, semi-parametric models like ATLAS (Izacard et al., 2022) or passing documents as prompts to GPT-3 (Kasai et al., 2022) are adept at generating free-text, short answers. Likewise, parametric models with flexible input like GPT-3 can be combined with retrievers to achieve a similar goal; alternatively, transformer models can be retrofitted with layers so that passages can be integrated in embedding space (Borgeaud et al., 2021). While retrieval-augmentation is no catch-all panacea to model hallucination, it does mitigate the problem (Shuster et al., 2021). Additionally, models' explanations can make it easier to know when to trust models and when not to (Feng and Boyd-Graber, 2022). In the case of QA models that take question plus retrieved text as input, there are several options. First, the model could provide several alternative answers which provide insight into the distribution of model outputs, rather than just a point estimate. Second, the model could provide a combination of feature-based explanations such as token saliency maps and the model's confidence in a correct answer (Wallace et al., 2019). When combined, they can jointly influence the degree to which humans trust the model (Lai and Tan, 2019). However, to provide a complete account of model behavior, we must return to the training of model and the data used. In short, we endeavor to identify the combination of input, training data, and retrieved text that caused the model to produce the distribution of outputs (i.e., answer(s)). This is, of course, challenging due to scale of language model training data like C4 (Raffel et al., 2020) and the Pile (Gao et al., 2020) and that establishing causal--and therefore more faithful--explanations of model behavior is difficult. Training data attribution is one promising idea in this direction--it uses gradient and embedding based methods to attribute inference behavior to training data (Akyurek et al., 2022). For example, influence functions (Hampel, 1974; Han et al., 2020) and TracIn (Pruthi et al., 2020) link predictions to specific training examples, but are computationally expensive and are approximate rather than exact solutions. To firmly establish a causal connection, one could fully re-train the model without the identified training examples, but this is prohibitively expensive in practice. Future development of efficient training data attribution, combined with methods like interpretations of input plus retrieved data, is a promising direction towards more complete explanations of model predictions. ### Tabular Engine As described at the end of Section 4, the knowledge retriever will pass on the data obtained to PAG. The QAP module will pass information about its plan to PAG. If the data obtained is tabular and a SQL query is generated, the information is passed to the tabular engine of PAG to compute the required answer(s). The recent advances in Text-to-SQL Wang et al. (2020); Zhao et al. (2022) provide a good technical foundation for generating such SQL queries. In most cases, it is not difficult to understand the correspondence between the natural language question and the SQL query that is generated. Once the SQL query is obtained, provenance can be systematically derived. In databases, the notion of provenance is well-studied Cheney et al. 
(2009) for a large class of SQL queries; from explaining why a tuple is in the output (i.e., the set of tuples in the database that led to the answer), where a value in a tuple is copied from (i.e., which cell in the source table is the value copied from) Buneman et al. (2001) to how that tuple was derived, which is formalized as semirings Green et al. (2007), a polynomial that essentially describes conjunction/disjunction of records required materialize a record in the result. Database provenance has also been extended to aggregate queries Amsterdam et al. (2011). Since one can derive the mapping between the input question and the SQL query that is generated and also derive the provenance from the data sources based on the SQL query, it becomes possible to understand how the input question led to the answers given by PostText. Putting all together, PostText first explains that the name of the image (i.e., "_a good version of this dish_") referred in question is Shaking beef. It then shows the SQL query that is generated for the question "_Where can I find a good version of Shaking beef_" and the ranking function used for ranking the rows of restaurants with reviews for the dish Shaking beef. For our running example, the answer is obtained from the first row of the table in Figure 1. Specifically, the answer is summarized from the column _Dish_ and _Review snippets/embeddings_. The actual snippets are found following the provenance links captured in the column _Provenance_. A more direct relationship between the summary and the actual review snippets can also be established Carmeli et al. (2021). The success of this approach depends on how far we can push database provenance systematically as SQL queries can still be far more complex than what is investigated in past research (e.g., complex arithmetic and aggregate functions involving also negation, group filters, and functions over values of different modalities). As an alternative to executing the SQL query over the tables obtained, the tabular engine can also choose to deploy table question answering (tableQA) methods where a model directly searches the tabular data for answers based on the input question Sun et al. (2016). Tapas Herzig et al. (2020) and Tapex Liu et al. (2022) are two example solutions for tableQA that formulates tableQA as sequence understanding/generation tasks. Like other recent tableQA works Glass et al. (2021); Herzig et al. (2021), they consider the problem of computing the answer from a single input. It will be interesting to explore how to explain the results obtained using tableQA methods and how tableQA methods can be extended to handle multi-hop questions where the answer may span multiple tables or involve different types of aggregations, reasoning and modalities. ## 7 Preliminary Findings To test our hypothesis that views are valuable for answering queries, especially queries that involve counting or aggregation, we have implemented a first version of PostText2 and compared it against some QA baselines. Footnote 2: PostText source code will be made available soon. The current implementation of PostText assumes views over the underlying data are available in tabular format. The QAP module simply routes the query to a view-based engine (VBE) or a retrieval-based engine (RBE) to answer the query. VBE picks the best view and translates the natural language query into an SQLite query against the view using OpenAI's gpt-3.5-turbo/gpt-4 model. 
It then executes the SQLite query against the view to obtain a table result which is then translated into English as the final answer. VBE also analyzes the SQLite query to compute the provenance of the answers. At present, it does so by simply retrieving all tuples that contributed to every (nested) aggregated query that is a simple (select-from-where-groupby-having clause) and does not handle negations. An example of the VBE process is described in Appendix B. RBE is implemented with Langchain's RetrievalQAwithSources library. It first retrieves top-\(k\) documents that are relevant for the query and then conditions its answer based on the retrieval. The answer and the ids of the retrieved documents are returned. For our experiments, we use the 42 multihop queries over 3 synthetic personal timelines of different sizes from TimelineQA's benchmark Tan et al. (2023). The personal timelines model the daily activities (e.g., the trips made, things bought, people talked to) of a person over a period of time. We create a view around each type of activity (e.g., trips, shopping, daily_chats) for VBE. For further comparison, we also ran Langchain's SQL-DatabaseChain (DBChain) to perform QA over the same VBE views. Furthermore, we ran it over timelines loosely structured as a binary relation of (date,description) pairs (called DBChain (no views)). We compared the returned answers against the ground truth answers by grading them on a scale of 1-5, with a LLM, where 5 means the returned answer has the same meaning as the ground truth answer (the grading scheme is described in the Appendix C). Our results are shown in Tables 2 and 3. Across both tables, the results on DBChain vs. DBChain(no views) reveal that adding some structure (in this case adding views) is crucial for better performance. Although the benchmark is a relatively small dataset, the scale of the timelines already reveals an impact on the accuracy across all QA systems. For DBChain, the drop in accuracy as the size increases because it sometimes relies on generating SQL queries that return all relevant records and passing all the records to the language model to compute the aggregate. When the results returned are large, which tends to be the case for larger timelines, the token limit of the LLM is often exceeded. VBE has a similar downward trend. It tends to generate queries that push the aggregates to the SQL engine and hence, avoids the issue of exceeding the token limit of the language models for many cases encountered in DBChain. Still, as the timeline gets larger, the result returned by the generated SQL query tends to be bigger and when these results are passed to the verbalization component to compose an answer in English, this may sometimes exceed the token limit of the language model. We also found that on a handful of cases, it so happens that the SQL query generated for L is invalid compared with those generated for the sparse dataset. The scores of RBE is relatively stable across all data densities. But overall, it tends to score lower compared with VBE and DBChain. This is because RBE relies on retrieving the top \(k\) documents from an index to condition the answers upon, regardless of the size of the timeline. However, these retrieved documents may not contain all the necessary information for answering the question in general. Even though the grading scores may not reveal this, the answers tend to be "more wrong" for aggregate queries over a larger timeline. 
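For reference, the VBE path evaluated above can be sketched in a few lines; the prompt wording, the table schema, and the `llm` callable are placeholders rather than the exact implementation.

```python
import sqlite3

def vbe_answer(question, conn, llm):
    """Minimal view-based engine sketch: NL question -> SQLite query -> table result -> English answer.

    `llm` stands in for any text-generation call (e.g., gpt-3.5-turbo); its exact API is not assumed.
    """
    sql = llm("Write one SQLite query over trips(date, destination, companions) "
              f"answering: {question}")
    rows = conn.execute(sql).fetchall()      # table result computed over the view
    answer = llm(f"Question: {question}\nSQL result: {rows}\nAnswer in one English sentence:")
    return answer, rows                      # the contributing rows double as coarse provenance
```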
## 8 Conclusion PostText enhances the core ideas of semi-parametric architectures with views, a query analyzer & planner, and a provenance-aware answer generator. Our initial results indicate that PostText is more effective on queries involving counting/aggregation when we provide structured views to facilitate computation. We plan to further develop and investigate PostText to automatically determine what views to construct, how does one generate plans and compare amongst plans, and how can one measure the quality of answers with provenance. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & **VBE** & **RBE** & **DBChain** & **DBChain (no views)** \\ \hline \hline S & 3.45 & 2.81 & 3.37 & 2.72 \\ M & 3.79 & 2.69 & 3.28 & 2.61 \\ L & 3.11 & 2.44 & 2.95 & 1.95 \\ \hline \end{tabular} \end{table} Table 2: Results with GPT-3.5-turbo. Sizes of (S)mall, (M)edium, (L)arge are 1.1MB, 2.4MB, and 5.6MB respectively. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & **VBE** & **RBE** & **DBChain** & **DBChain (no views)** \\ \hline \hline S & 3.33\({}^{*}\) & 2.10\({}^{*}\) & 2.14\({}^{*}\) & 1.10\({}^{*}\) \\ M & 3.55 & 1.93 & 2.35 & 1.51\({}^{*}\) \\ L & 3.08 & 2 & 1.97 & 1.11\({}^{*}\) \\ \hline \end{tabular} \end{table} Table 3: Results with GPT-4. \({}^{*}\) indicates that timeouts or API errors were encountered during experimentation. ### Limitations and Ethical Considerations We point out the limitations of large language models (costly to train, deploy, maintain, hallucinate, opaque). The vision of PostText shows promise of less costly training, maintenance, and more explainability. However, no actual system is built yet to validate these claims and it is also not clear that a system with PostText architecture will be easier to deploy since it has more components.
2305.14672
Quantifying Character Similarity with Vision Transformers
Record linkage is a bedrock of quantitative social science, as analyses often require linking data from multiple, noisy sources. Off-the-shelf string matching methods are widely used, as they are straightforward and cheap to implement and scale. Not all character substitutions are equally probable, and for some settings there are widely used handcrafted lists denoting which string substitutions are more likely, that improve the accuracy of string matching. However, such lists do not exist for many settings, skewing research with linked datasets towards a few high-resource contexts that are not representative of the diversity of human societies. This study develops an extensible way to measure character substitution costs for OCR'ed documents, by employing large-scale self-supervised training of vision transformers (ViT) with augmented digital fonts. For each language written with the CJK script, we contrastively learn a metric space where different augmentations of the same character are represented nearby. In this space, homoglyphic characters - those with similar appearance such as ``O'' and ``0'' - have similar vector representations. Using the cosine distance between characters' representations as the substitution cost in an edit distance matching algorithm significantly improves record linkage compared to other widely used string matching methods, as OCR errors tend to be homoglyphic in nature. Homoglyphs can plausibly capture character visual similarity across any script, including low-resource settings. We illustrate this by creating homoglyph sets for 3,000 year old ancient Chinese characters, which are highly pictorial. Fascinatingly, a ViT is able to capture relationships in how different abstract concepts were conceptualized by ancient societies, that have been noted in the archaeological literature.
Xinmei Yang, Abhishek Arora, Shao-Yu Jheng, Melissa Dell
2023-05-24T03:25:33Z
http://arxiv.org/abs/2305.14672v1
# Quantifying Character Similarity with Vision Transformers ###### Abstract Record linkage is a bedrock of quantitative social science, as analyses often require linking data from multiple, noisy sources. Off-the-shelf string matching methods are widely used, as they are straightforward and cheap to implement and scale. Not all character substitutions are equally probable, and for some settings there are widely used handcrafted lists denoting which string substitutions are more likely, that improve the accuracy of string matching. However, such lists do not exist for many settings, skewing research with linked datasets towards a few high-resource contexts that are not representative of the diversity of human societies. This study develops an extensible way to measure character substitution costs for OCR'ed documents, by employing large-scale self-supervised training of vision transformers (ViT) with augmented digital fonts. For each language written with the CJK script, we contrastively learn a metric space where different augmentations of the same character are represented nearby. In this space, homoglyphic characters - those with similar appearance such as "O" and "0" - have similar vector representations. Using the cosine distance between characters' representations as the substitution cost in an edit distance matching algorithm significantly improves record linkage compared to other widely used string matching methods, as OCR errors tend to be homoglyphic in nature. Homoglyphs can plausibly capture character visual similarity across _any_ script, including low-resource settings. We illustrate this by creating homoglyph sets for 3,000 year old ancient Chinese characters, which are highly pictorial. Fascinatingly, a ViT is able to capture relationships in how different abstract concepts were conceptualized by ancient societies, that have been noted in the archaeological literature. ## 1 Introduction Many quantitative analyses in the social sciences - as well as government and business applications - require linking information from multiple datasets. For example, researchers and governments link historical censuses, match hand-written records from vaccination campaigns to administrative data, and de-duplicate voter rolls. The sources to be linked often contain noise, particularly when they were created with optical character recognition (OCR). String matching methods are widely used to link entities across datasets, as they are straightforward to implement off-the-shelf and can be scaled to massive datasets (Binette and Steorts, 2022; Abramitzky et al., 2021). Most simply, approximate string matching methods count the number of edits (insertions, deletions, and substitutions) to transform one string into another (Levenshtein et al., 1966). Another common approach computes the similarity between \(n\)-gram representations of strings, where \(n\)-grams are all substrings of length \(n\)(Okazaki and Tsujii, 2010). In practice, not all string substitutions are equally probable, and efforts to construct lists that vary their costs date back over a century. For example, in 1918 Russell and Odell patented Soundex, a sound standardization toolkit that accounts for the fact that census enumerators often misspelled names according to their sound. Together with the updated New York State Identification and Intelligence System (Silbert, 1970), it remains a bedrock for linking U.S. historical censuses (Abramitzky et al., 2021). 
Similarly, Novosad (2018) adjusts Levenshtein distance to impose smaller penalties for common alternative spellings in Hindi, and the FuzzyChinese package (znwang25, 2020) uses strokes as the unit for \(n\)-gram substring representations, where the strokes for a given character are drawn from an external database (kfcd, 2015) covering a subset of the CJK script. Characters sharing strokes are more likely to be matched. Such methods can perform well in the contexts for which they are tailored but are labor-intensive to extend to new settings, due to the use of handcrafted features. Low extensibility skews research with linked data - necessary to examine intergenerational mobility, the evolution of firm productivity, the persistence of poverty, and many other topics - towards a few higher resource settings that are not representative of the diversity of human societies. This study aims to preserve the advantages of string matching methods - simple off-the-shelf implementation and high scalability - while developing an extensible, self-supervised method for determining the relative costs of character substitutions in databases created with OCR. OCR often confuses characters with their homoglyphs, which have a similar visual appearance (_e.g._ "0" and "O"). Incorporating character visual similarity into string matching can thus plausibly improve record linkage. Homoglyphs can be constructed by hand for small script sets such as Latin, as in a psychology literature on literacy acquisition (Simpson et al., 2013), but for a script such as CJK, containing over 38,000 characters, this is infeasible. Following a literature on self-supervision through simple data augmentation for image encoders (Grill et al., 2020; Chen et al., 2021; Chen and He, 2021), this study uses augmented digital fonts to contrastively learn a metric space where different augmentations of a character have similar vector representations. The resulting space can be used, with a reference font, to measure the visual similarity of different characters. Figure 1 shows representative examples of how the same characters are rendered very differently across fonts. These different representations form positive examples for the contrastively trained HOMOGLYPH model. This purely self-supervised approach can be extended to any character set, but since creating evaluation data for record linkage is costly, the study focuses on languages written with CJK: Simplified and Traditional Chinese, Japanese, and Korean. We train on augmentations of the same character - rather than paired data across characters - because a self-supervised approach is more extensible. Paired character similarity data are limited. Unicode maintains a set of confusables - constructed with rule-based methods - but for CJK the only confusables are structurally identical characters with different Unicode codepoints. Despite a large post-OCR error correction literature (Lyu et al., 2021; Nguyen et al., 2021; van Strien et al., 2020), there is also limited ground truth data about the types of errors that OCR makes across architectures, languages, scripts, layouts, and document contexts. Using the cosine distance between two characters as the substitution cost within a Levenshtein edit distance framework (Levenshtein et al., 1966) improves record linkage with 1950s firm-level data about Japanese supply chains (Jinji Koshinjo, 1954; Teikoku Koshinjo, 1957), relative to other string matching methods. 
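As a concrete illustration of this idea, the following is a minimal sketch (not the HomoglyphsCJK package implementation) of an edit distance whose substitution cost is the cosine distance between character embeddings; it assumes a dictionary `emb` mapping each character to an L2-normalized HOMOGLYPH vector.

```python
import numpy as np

def homoglyph_edit_distance(s, t, emb, ins_del=1.0):
    """Levenshtein distance with substitution cost = cosine distance between character embeddings."""
    def sub(a, b):
        return 0.0 if a == b else 1.0 - float(np.dot(emb[a], emb[b]))  # vectors assumed unit-norm
    m, n = len(s), len(t)
    D = np.zeros((m + 1, n + 1))
    D[:, 0] = np.arange(m + 1) * ins_del
    D[0, :] = np.arange(n + 1) * ins_del
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i, j] = min(D[i - 1, j] + ins_del,                      # deletion
                          D[i, j - 1] + ins_del,                      # insertion
                          D[i - 1, j - 1] + sub(s[i - 1], t[j - 1]))  # visually weighted substitution
    return float(D[m, n])
```

Under this cost, homoglyphic substitutions (such as "0" for "O") are far cheaper than substitutions between visually unrelated characters, which is what makes the metric forgiving of OCR errors.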
The study also compares to end-to-end deep learning methods for record linkage. While these methods can outperform string matching, the data required for them are not always available and technical requirements for implementation are higher, explaining why string matching methods predominate in social science applications. Homoglyphic matching is a cheap and extensible way to improve these predominant methods. Because creating annotated ground truth data is costly, we provide additional evaluations using synthetically generated data. We augment image renders of place and firm names written with different fonts, for Simplified and Traditional Chinese, Japanese, and Korean character sets. We then OCR two different views of each entity with different OCR engines - EasyOCR and PaddleOCR - that use very different architectures. Figure 1: **Character variation across fonts.** This figure illustrates examples of the same character rendered with different fonts. Augmentations of these comprise positives in the HOMOGLYPH training data. The different augmentations and OCR engines lead to different text string views of the same entity with high frequency. We then link these using string matching methods. Homoglyphic matching outperforms other widely used string matching techniques for all four scripts. Our HomoglyphsCJK python package provides a simple, off-the-shelf implementation.1 Footnote 1: Package available at [https://pypi.org/project/HomoglyphsCJK/](https://pypi.org/project/HomoglyphsCJK/). Homoglyphs can be extended to any script. To explore this, we contrastively train a HOMOGLYPH model for ancient Chinese characters, using a database that provides views of the same character from different archaeological sites and time periods (Academia Sinica et al., 2023). Ancient characters are much more pictorial than their more abstract, modern equivalents. Fascinatingly, homoglyphs constructed with a ViT for the Shang Dynasty (1600 BC-1045 BC) capture ways in which ancient Chinese society related abstract concepts that have been noted in the archaeological literature (_e.g._ Wang (2003)). The rest of this study is organized as follows: Section 2 develops methods for learning character similarity and incorporating it into record linkage, and Section 3 describes the evaluation datasets. Section 4 compares the performance of homoglyphic edit distance to other string matching and neural methods for record linkage. Section 5 examines extensibility by constructing homoglyphs for ancient Chinese, Section 6 discusses the limitations of homoglyphs, and Section 7 concludes. ## 2 Methods ### The HOMOGLYPH model The HOMOGLYPH model contrastively learns a mapping between character crops and dense vector representations, such that crops of augmentations of the same character are nearby. HOMOGLYPH is trained purely on digital fonts. Figure 1 shows variations of the same characters rendered with different fonts, which form positive examples for training. Variations across fonts are non-trivial, forcing the model to learn character similarities at varying levels of abstraction. We use a DINO (Self-**D**istillation, **No** Labels) pre-trained ViT as the encoder (Caron et al., 2021). DINO ViT embeddings perform well as a nearest neighbor classifier, making them well-suited for homoglyphic matching. 
The model is trained using a Supervised Contrastive loss function (Khosla et al., 2020), a generalization of the InfoNCE loss (Oord et al., 2018) that allows for multiple positive and negative pairs for a given anchor: \[\sum_{i\in I}\frac{-1}{|P(i)|}\sum_{p\in P(i)}\log\frac{\exp\left(\mathbf{z}_{i}\cdot\mathbf{z}_{p}/\tau\right)}{\sum_{a\in A(i)}\exp\left(\mathbf{z}_{i}\cdot\mathbf{z}_{a}/\tau\right)} \tag{1}\] where \(\tau\) is a temperature parameter (equal to 0.1), \(i\) indexes a sample in a "multiviewed" batch (in this case multiple fonts/augmentations of characters with the same identity), \(P(i)\) is the set of indices of all positives in the multiviewed batch that are distinct from \(i\), \(A(i)\) is the set of all indices excluding \(i\), and \(z\) is an embedding of a sample in the batch. Training details are described in the supplementary materials. To compute characters' similarity, we embed their image crops, created with a reference font (Google Noto), and compute cosine similarity with a Facebook Artificial Intelligence Similarity Search (FAISS) backend (Johnson et al., 2019). Figure 2 shows representative examples of characters and their five nearest neighbors. Characters with similar vector representations have qualitatively similar appearances. Figure 2: **Homoglyphs. This figure illustrates the five nearest neighbors in the HOMOGLYPH embedding space for representative characters.** HOMOGLYPH shares common elements with EfficientOCR (Carlson et al., 2023), an OCR architecture that learns to recognize characters by contrastively training on character crops rendered with augmented digital fonts. Different augmentations of a character provide positive examples. At inference time, localized characters are OCR'ed by retrieving their nearest neighbor from an index of exemplary character embeddings. The OCR application of contrastive learning on character renders aims to retrieve the same character in an offline index, whereas HOMOGLYPH measures similarity across characters. While HOMOGLYPH shares the architecture of the EfficientOCR character recognizer, it does not use the same model weights or training data as EffOCR (which does not support Chinese or Korean and is also trained on labeled crops from historical documents). ### String Matching Methods Dunn (1946) - in one of the first treatments of record linkage - wrote: "Each person in the world creates a Book of Life. This Book starts with birth and ends with death. Its pages are made up of the records of the principal events in life. Record linkage is the name given to the process of assembling the pages of this Book into a volume." Edit distance metrics are widely used for this task (Levenshtein et al., 1966; Jaro, 1989; Winkler, 1990). Another common approach computes the cosine similarity between \(n\)-gram representations of strings (Okazaki and Tsujii, 2010). There are a variety of ways that character-level visual similarity could be incorporated into record linkage. We follow the literature modifying Levenshtein distance, e.g. Novosad (2018), by using cosine distance in the HOMOGLYPH space as the substitution cost. Insertion and deletion costs are set to one. It is straightforward to scale the insertion and deletion costs using parameters estimated on a validation set, but we focus on performance without any tuned parameters to maintain a purely off-the-shelf, self-supervised implementation. We compare matching with homoglyphic edit distance to a variety of other methods. 
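Before turning to the comparisons, the following is a minimal sketch, in Python, of the homoglyphic edit distance just described. It is illustrative rather than the HomoglyphsCJK implementation: the `sim` lookup (pairwise cosine similarities taken from the HOMOGLYPH embedding space) and the fallback cost for character pairs missing from it are assumptions made for the example.

```python
# Minimal sketch: Levenshtein distance with a homoglyphic substitution cost.
# `sim` maps a character pair to the cosine similarity of their HOMOGLYPH
# embeddings; pairs absent from the lookup fall back to the classic 0/1 cost.
def homoglyphic_distance(s, t, sim, ins_cost=1.0, del_cost=1.0):
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * del_cost
    for j in range(1, n + 1):
        d[0][j] = j * ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            a, b = s[i - 1], t[j - 1]
            base = 1.0 if a == b else 0.0          # fallback similarity
            sub = 1.0 - sim.get((a, b), base)      # cost shrinks as glyphs look more alike
            d[i][j] = min(d[i - 1][j] + del_cost,  # deletion
                          d[i][j - 1] + ins_cost,  # insertion
                          d[i - 1][j - 1] + sub)   # (homoglyphic) substitution
    return d[m][n]

# Hypothetical similarity entry for the homoglyphs "0" and "O".
sim = {("0", "O"): 0.92, ("O", "0"): 0.92}
print(homoglyphic_distance("B0X", "BOX", sim))  # 0.08, versus 1.0 for plain Levenshtein
```

Linkage then amounts to selecting, for each query string, the key string with the smallest distance.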
The first comparison is to classic Levenshtein distance (insertions, deletions, and substitutions are all equally costly), to isolate the effect of varying the substitution cost. We also compare to the popular Simstring package, which uses a variety of similarity metrics (Jaccard, cosine, and Dice similarity), computed with 2-gram substrings (Okazaki and Tsujii, 2010). The third comparison is to FuzzyChinese, a widely used package that uses strokes or characters as the fundamental unit for n-gram substring representations (we use the default 3-grams). These are compared using the TF-IDF vectors. The strokes in each character are drawn from an external database (kfcd, 2015) covering a subset of the CJK script. ## 3 Evaluation Datasets To our knowledge, there are not widely used benchmarks for evaluating record linkage for the CJK script. Hence, we develop evaluation data. First, we link a dataset on the customers and suppliers of major Japanese firms, drawn from a 1956 Japanese firm publication (Jinji Koshinjo, 1954), to a firm index of around 7,000 firms. The index is from the same publication but written in a different font. Supply chains are fundamental to the transmission of economic shocks (Acemoglu et al., 2016, 2012), agglomeration (Ellison et al., 2010), and economic development (Hirschman, 1958; Myrdal and Sithang, 1957; Rasmussen, 1956; Bartelme and Gorodnichenko, 2015; Lane, 2022). Supply chains are challenging to study historically, as they require accurate record linkage. This makes them a particularly relevant test case for downstream applications. Firm names are localized with LayoutParser (Shen et al., 2021) and then OCR'ed twice, to shed light on whether errors tend to be homoglyphic in popular vision-only OCR and vision-language sequence-to-sequence OCR. We employ two widely used, open-source OCR engines: PaddleOCR and EasyOCR. EasyOCR uses a convolutional recurrent neural network (CRNN) (Shi et al., 2016), with learned embeddings from a vision model serving as inputs to a learned language model. PaddleOCR abandons language modeling, dividing text images into small patches, using mixing blocks to perceive inter- and intra-character patterns, and recognizing text by linear prediction (Du et al., 2022). Neither engine localizes individual characters. In a second exercise, we use the dataset examined in Arora et al. (2023), which links the same customer-supplier list to a firm directory containing over 70,000 firms (Teikoku Koshinjo, 1957). Examining this dataset allows a comparison of string matching methods to the OCR-free vision only methods and multimodal methods from Arora et al. (2023). This dataset was created with EfficientOCR Carlson et al. (2023) and cannot be re-created with EasyOCR or PaddleOCR because the directory is written vertically, which these engines do not support. We would expect EfficientOCR's character retrieval framework to make homoglyphic errors. Performance across datasets created by three highly diverse OCR architectures is important to extensibility, since database collections have also been constructed with diverse OCR architectures. Because creating ground truth data for record linkage is costly, we use synthetically generated data for a third set of evaluations. We render place and firm names using different digital fonts and image augmentations, conducting separate experiments for Traditional Chinese, Simplified Chinese, Japanese, and Korean. 
For Simplified Chinese, Japanese, and Korean, we draw placenames from the Geonames database Geonames (2023). Because Traditional Chinese placenames in Geonames are rare, we instead draw from a list of Taiwanese firms, as Taiwan - unlike Mainland China - still uses Traditional Chinese Taiwan Ministry of Economic Affairs (2023). We randomly select two image crops of each entity name, and OCR them using EasyOCR and PaddleOCR. Anywhere from 40% (Simplified Chinese) to 88% (Traditional Chinese) of OCR'ed string pairs differ between the two OCR engines. We limit the evaluation dataset to pairs where the two string representations differ.2 Footnote 2: The sample size is 20,162 for Simplified Chinese, 66,943 for Traditional Chinese, 86,470 for Japanese, and 48,809 for Korean. ## 4 Results Homoglyphic edit distance outperforms the other string matching methods in all three evaluation exercises - across different OCR engines and languages - typically by an appreciable margin. This illustrates that homoglyphic errors in OCR are common and can be captured with self-supervised vision transformers. Our first evaluation exercise - with linked Japanese supply chain data - aims to elucidate whether homoglyphic matching is as helpful for linking datasets created with vision-language OCR as for linking datasets created with vision-only OCR, and whether it can similarly be useful for linking datasets created with different OCR architectures. We hence separately consider results linking PaddleOCR'ed customers and suppliers to the EasyOCR'ed firm index, vice versa, as well as linking when both are OCR'ed by either PaddleOCR or EasyOCR. Homoglyphic edit distance outperforms other string matching methods and does so by a similar margin (around 4 percentage points higher accuracy) regardless of the OCR architecture used. FuzzyChinese has the weakest performance, as expected, since many Japanese characters are not covered in their stroke dictionary. The primary objective of our second evaluation is to compare homoglyphic matching to the OCR-free vision-only and end-to-end multimodal frameworks developed in Arora et al. (2023), using customer-supplier data linked to an extensive index of 70K Japanese firms. Homoglyphic distance outperforms all other string matching methods, with a matching accuracy of 82%. The Arora et al. (2023) self-supervised multimodal record linkage model - which employs language-image contrastive pretraining on firm image crop-OCR text pairs (following Radford et al. (2021)) - outperforms homoglyphic distance, with 85% matching accuracy. The supervised multimodal model outperforms by a wider margin (95% accuracy). These methods avoid the OCR information bottleneck by using crops from the original document images. 
Moreover, the language model can understand different ways of writing the same firm name (_e.g._, using different terms for corporation).

| Method | Paddle to Easy | Easy to Paddle | Paddle to Paddle | Easy to Easy |
| --- | --- | --- | --- | --- |
| Homoglyphic distance | **0.808** | **0.753** | **0.844** | **0.728** |
| Levenshtein distance | 0.766 | 0.697 | 0.807 | 0.693 |
| Simstring (cosine) | 0.762 | 0.662 | 0.787 | 0.673 |
| Simstring (dice) | 0.763 | 0.663 | 0.788 | 0.673 |
| Simstring (jaccard) | 0.763 | 0.663 | 0.788 | 0.673 |
| FuzzyChinese (stroke) | 0.690 | 0.567 | 0.717 | 0.554 |
| FuzzyChinese (character) | 0.533 | 0.445 | 0.559 | 0.464 |

Table 1: **Baseline Matching Results: Historical Japanese Data**. This table reports accuracy using a variety of different methods for linking Japanese firms from supply chain records to a horizontally written firm directory. The four columns report results when (1) PaddleOCR is used to OCR the firm list and EasyOCR the directory, (2) EasyOCR is used to OCR the firm list and PaddleOCR the directory, (3) PaddleOCR is used to OCR both lists, and (4) EasyOCR is used to OCR both lists.

The Arora et al. (2023) supervised vision-only approach, which contrastively trains different views of the same firm's image crops to have similar representations, also outperforms homoglyphic matching (88% accuracy). While homoglyphs do not fully eliminate the OCR information bottleneck, they do significantly reduce it relative to widely used string matching methods (_e.g._ 75% accuracy with the Simstring package), with the advantage of not requiring labeled data or image crops. HOMOGLYPH complements end-to-end deep neural methods. The Arora et al. (2023) methods cannot be used when researchers lack access to the original document images. Moreover, researchers often lack the compute or technical requirements to work with image data, whereas the vast majority of quantitative social science researchers are comfortable processing strings. On the language side, there are many contexts where using a language model may contribute little. Person or location names - for instance - don't contain much natural language relative to firm names. String matching methods remain the most widely used because they are simple and cheap to use off-the-shelf, and there are contexts where more sophisticated methods may not be feasible or offer large incremental gains. HOMOGLYPH is an extensible way to improve string matching when linking OCR'ed datasets. Finally, Table 3 reports results with the synthetically generated record linkage dataset, to elucidate the performance of homoglyphic matching across languages using the CJK script. Homoglyphs outperform other string matching methods. The only case where the performance of another method is similar is Simplified Chinese, where the FuzzyChinese package using stroke-level \(n\)-grams performs similarly. The stroke dictionary that underlies FuzzyChinese was crafted for Simplified Chinese, yet homoglyphs can perform similarly with self-supervised methods. On Traditional Chinese, which proliferates in historical documents, homoglyphic edit distance offers a nine percentage point accuracy advantage over FuzzyChinese, illustrating the extensibility advantages of self-supervised methods. 
The accuracy rates are rather low, but this must be interpreted in the context of the dataset, which only includes paired records where the OCR differs. Figure 3 provides an error analysis for the synthetic record linkage exercise. The ground truth string is shown in the first column, PaddleOCR is used to OCR the query (column 2), and EasyOCR is used to OCR the key and provides the correct match (column 3). The matches selected from the key by different string matching methods are shown in columns (4) through (7). Panel A shows cases where homoglyphic edit distance selects an incorrect match. This typically occurs when the OCR'ed text has a similar visual appearance to another firm in the index, showing the limits of homoglyphs to fully close the OCR information bottleneck. Panel B shows cases where homoglyphic edit distance selects a correct match, avoiding the wrong strings chosen by other methods through exploiting character visual similarity.

| Method | Accuracy |
| --- | --- |
| _Panel A: String-Matching_ | |
| Homoglyphic distance | 0.824 |
| Levenshtein distance | 0.731 |
| Simstring (cosine) | 0.748 |
| Simstring (dice) | 0.752 |
| Simstring (jaccard) | 0.752 |
| FuzzyChinese (stroke) | 0.735 |
| FuzzyChinese (character) | 0.618 |
| _Panel B: Neural Methods_ | |
| Self-Supervised Multimodal Linking | 0.849 |
| Supervised Vision-Only Linking | 0.878 |
| Supervised Multimodal Linking | **0.945** |

Table 2: **Comparisons to fully neural record linkage methods.** This table links Japanese firms from supply chain records to an extensive firm directory. String matching methods are reported in Panel A. End-to-end neural methods are reported in Panel B.

| Method | Japanese | Korean | Simplified Chinese | Traditional Chinese |
| --- | --- | --- | --- | --- |
| Homoglyphic distance | **0.456** | **0.292** | **0.476** | **0.465** |
| Levenshtein distance | 0.396 | 0.188 | 0.375 | 0.407 |
| Simstring (cosine) | 0.376 | 0.247 | 0.425 | 0.383 |
| Simstring (jaccard) | 0.380 | 0.248 | 0.426 | 0.385 |
| FuzzyChinese (stroke) | 0.168 | 0.000 | 0.473 | 0.372 |
| FuzzyChinese (character) | 0.230 | 0.110 | 0.137 | 0.197 |

Table 3: **Matching Results: Synthetic Data.** This table reports accuracy linking synthetic paired data generated by OCR'ing location and firm names - rendered with augmented digital fonts - with two different OCR engines.

## 5 Extending Homoglyphs

While this study focuses on the modern CJK script, HOMOGLYPH can be extended to any character set. As a proof of concept, we explore its extensibility to ancient Chinese characters. Like other early forms of human writing, ancient Chinese scripts are highly pictorial relative to modern characters. Using an existing database of grouped ancient characters from different archaeological sites and periods that correspond to the same concept [1], we contrastively learn a metric space where the representations of different views of ancient characters denoting the same concept are nearby. We train on 25,984 character views, as well as the corresponding modern augmented fonts. 
The dataset includes characters from the Shang Dynasty (1600 BC-1045 BC), the Western Zhou (1045 BC-771 BC), the Spring and Autumn and Warring States Era (770 BC-221 BC), and the Qin-Han Dynasties (221 BC - circa third century).3 To illustrate homoglyphs, we create a reference set for the Shang Dynasty, randomly choosing one character for each concept. Footnote 3: We exclude images from the Shuowen Jiezi - a book on ancient characters - limiting to the most reliable character renders, which were drawn from archaeological sites. Figure 4 shows representative examples of homoglyphs, consisting of a character and its five nearest neighbors. The modern character descendant - taken from the database - as well as a short description of the ancient concept are provided. The description draws upon Li (2012) as well. Figure 4: **Ancient Homoglyphs. This figure shows homoglyph sets constructed for ancient Chinese, with the descendant modern Chinese character and a description of the character's ancient meaning.** Figure 3: **Error analysis. Panel A shows representative errors from homoglyphic matching. Panel B shows representative cases that homoglyphic matching gets correct. The ground truth string is shown in column (1). PaddleOCR is used to OCR the query images (column (2)) and EasyOCR is used to OCR their corresponding keys (column (3)). Columns (4) through (7) give the selected match to the query using different string matching methods, with the correct match shown in column (3). Bold characters differ from the query.** The homoglyph sets are able to capture related abstract concepts noted in the archaeological literature. The first line shows that the concepts of writing, law, learning, and morning ("recording the sun") are homoglyphs, and the second line shows that characters for different types of officials are homoglyphs, as are characters denoting "joining." The final line shows that history and government official are homoglyphs - underscoring the central role of the government in constituting history - as are characters denoting conquest, tying up, and city center (denoted by a prisoner to be executed by the government, which occurred in the city center). Not all concepts within each set are related, but many of the connections above have been noted in an archaeological literature examining how ancient peoples conceptualized the world (_e.g._ Wang (2003)). That these meanings can be captured using _vision_ transformers is a fascinating illustration of the relationship between images, written language, and meaning in ancient societies. ## 6 Limitations Using homoglyphs for string matching inherits the well-known limitations of string matching. In some cases, OCR destroys too much information for record linkage to be feasible with the resulting strings. Even with clean OCR, sometimes language understanding is necessary to determine the correct match. Homoglyphs do not address other types of string substitutions, like those that result from enumerator misspellings, although in principle a similar contrastive approach could also be developed to quantify other types of string substitutions. More sophisticated methods have been developed as alternatives to string matching. For example, Ventura et al. (2015) use a random forest classifier trained on labeled data to disambiguate authors of U.S. patents, applying clustering to the resulting dissimilarity scores to enforce transitivity. Arora et al. 
(2023) use multimodal methods that combine the image crops of entities and their OCR and also develop a vision-only OCR free linkage method. Bayesian methods have also been used, _e.g._ Sadinle (2014, 2017). They offer the advantage of uncertainty quantification - another well-known limitation of string matching - but do not scale well. While these methods can offer advantages, they are not always applicable. Researchers may lack access to the original document images, or may lack the compute or technical resources to process images, limiting the use of OCR-free or multimodal approaches. Language models are unlikely to be useful in linking individual names, a common application. Labeled data may be infeasibly costly to create at a sufficient scale for training supervised models. Finally, researchers in disciplines like social science often lack familiarity with machine learning methods, but most are familiar with off-the-shelf string matching packages. String matching methods are also cheap to scale to massive datasets. Simple string matching algorithms are often preferred by practitioners and can be the most suitable tool given the constraints. ## 7 Conclusion Homoglyphic edit distance significantly improves string matching accuracy on OCR'ed documents, by integrating information about character similarity from purely self-supervised vision transformers. It can be implemented using a simple, off-the-shelf string matching package.4 Learning homoglyphs through self-supervised vision transformers is highly extensible, including to low resource settings and settings with many characters. By improving record linkage in such settings - where handcrafted features used to improve record linkage are not available - research on important questions requiring linked data can become more representative of the diversity of human societies. Footnote 4: Package available at [https://pypi.org/project/HomoglyphsCJK/](https://pypi.org/project/HomoglyphsCJK/). ## Supplementary Materials ### Homoglyph Model Details ### Encoder For both of our applications, we use a DINO pretrained Caron et al. (2021) vision transformer (ViT) as the encoder. Our implementation of the ViT comes from the Pytorch Image Models library (timm) Wightman (2019). Specifically, we use the _vit_base_patch16_224.dino_ model that corresponds to the official DINO-pretrained ViT-base model with a patch size of 16 and with input resolution of \(224^{2}\). The pretrained checkpoint does not have a classification head. ### Loss function We use Supervised Contrastive loss Khosla et al. (2020) as our training objective, as implemented in the PyTorch Metric Learning library Musgrave et al. (2020), where the temperature parameter is set to 0.1. ### Data Augmentation We deploy several image augmentations, using transformations provided in the Torchvision library TorchVision (2016). These include Affine transformation (only slight translation and scaling allowed), Random Color Jitter, Random Autocontrast, Random Gaussian Blurring, and Random Grayscale. Additionally, we pad the character to make the image square while preserving the aspect ratio of the character render. We do not use common augmentations like Random Cropping or Center Cropping, to avoid destroying too much information. For augmenting the skeleton of the rendered character itself, we use a variety of digital fonts to render the images. 
We use 27 fonts for Simplified Chinese, 17 fonts for Traditional Chinese (for both string matching and ancient Chinese), 62 fonts for Korean, and 14 fonts for Japanese. ### Application-specific details #### S-2.1 Record Linkage #### S-2.1.1 Data For each script, the dataset consists of images of characters from the corresponding script rendered with different fonts and augmented during training. The number of characters for each script seen during training is given in Table S-1. Each character can be considered a "class" to which its digital renders belong. Characters do not need to be seen during training to be considered at inference time, an advantage if users wish to expand the homoglyph sets (_e.g._ because an OCR engine uses a different character set). We illustrate this empirically by expanding the character set to characters covered by the three OCR engines we explore that were not included in our character ranges used initially for training. #### S-2.1.2 Batching **Without hard-negative mining** Let \(\mathcal{B}\) denote the batch size. A batch consists of \(m\) views of \(\dfrac{\mathcal{B}}{m}\) classes sampled without replacement. When all the views for a class are utilized, all images are replaced and the sampling process without replacement starts again. "Views" of a character are augmented digital renders using the fonts and transformations described above. One training epoch is defined as seeing all characters and their m views exactly once. **With hard-negative mining** We find \(k\) nearest neighbors of each character (or class) on a checkpoint trained without hard negatives. We do this by rendering all characters with a "reference font - Noto Serif CJK font (Tc/Sc/Jp/Ko)" depending upon the script and finding \(k\) nearest neighbors using the above checkpoint. We create batches as before, but this time, randomly intersperse all hard negative sets (of size k) in the batches. One training epoch is now defined as seeing all characters and their m views and additionally, all characters and their hard negative sets (composed of \(k-1\) neighbors) and their m views exactly once. Table S-2 contains the number of epochs we trained each model for. #### s-2.1.3 Model Validation We split the characters into an 80-10-10 train-val-test set. We embed the validation images and find the nearest neighbor among the embeddings of digital renders of the universe of characters in the script, rendered with the reference font described above. The top-1 retrieval accuracy is used as the validation metric for the selection of the best checkpoint. We see a peak validation accuracy of 90% for Japanese, 98% for Korean, 91% for Traditional Chinese, and 91% for Simplified Chinese. #### s-2.1.4 Other training details CJK glyphs are similar across the scripts. To converge faster, for the rest of the languages, we initialize the weights of the encoder with the checkpoint used for the HOMGGLYPH encoder for Japanese - the script with the largest number of characters. We use AdamW (Loshchilov and Hutter, 2019) as the optimizer and Cosine Annealing with Warm Restarts (Loshchilov and Hutter, 2016) as the learning rate schedule. We use the standard Pytorch implementation for both. The relevant hyperparameters are listed in Table S-2. We stop training the models once the validation accuracy stagnates and the checkpoint with the best validation accuracy is chosen as the encoder for each script. 
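The training recipe described above can be summarized in a short sketch. This is illustrative rather than the authors' released training script: the batch of rendered character views and the learning rate are assumed for the example, while the named components (the timm DINO ViT-Base/16 encoder, the pytorch-metric-learning SupConLoss with temperature 0.1, AdamW, and cosine annealing with warm restarts) follow the description in the text.

```python
# Illustrative HOMOGLYPH training step (not the authors' exact code).
import timm
import torch
from pytorch_metric_learning.losses import SupConLoss

encoder = timm.create_model("vit_base_patch16_224.dino", pretrained=True, num_classes=0)
loss_fn = SupConLoss(temperature=0.1)
optimizer = torch.optim.AdamW(encoder.parameters(), lr=1e-5)  # lr is an assumed value
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=10)

def train_step(images, labels):
    """images: (B, 3, 224, 224) augmented character renders ("views");
    labels: (B,) integer ids of the underlying characters (the "classes")."""
    encoder.train()
    optimizer.zero_grad()
    embeddings = encoder(images)        # pooled ViT features, one vector per view
    loss = loss_fn(embeddings, labels)  # pulls views of the same character together
    loss.backward()
    optimizer.step()
    return loss.item()

# scheduler.step() would be called once per epoch (not shown), and the checkpoint with
# the best top-1 validation retrieval accuracy kept as the encoder for each script.
```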
### Homoglyph Sets We allow for the expansion of the character set beyond what is seen in training because different OCR engines use different character dictionaries (a list of characters supported by the engine). We take the union of characters from the character dictionaries of PaddleOCR, EasyOCR, and EfficientOCR. For each script, we render all its characters using the script's reference font and embed them using the script-specific HOMOGLYPH encoder. For each character, we then find the 800 nearest neighbors (measured by Cosine Similarity between the embeddings) among the set of all renders in the reference set. We store these as a look-up dictionary that contains, for each character in a script, its 800 neighbors and its Cosine Similarity with all of them. This look-up dictionary is used in our modified Levenshtein distance implementation to modify the substitution cost. The dictionaries are available in our GitHub repository (Yang et al., 2023). Table S-1 contains the number of characters that were used to prepare these sets for each script. ### Implementing the Modified Levenshtein distance We use a standard algorithm to calculate Levenshtein distance that uses dynamic programming (Wagner and Fischer, 1974). The space and time complexity of the algorithm is \(\mathcal{O}(mn)\), where \(m\) and \(n\) are the lengths of the two strings that are being compared. We modify this algorithm by switching the standard substitution cost \(\lambda\) between two characters \(a\) and \(b\) with \(\lambda\cdot(1-\mathrm{CosineSimilarity}(u(a),u(b)))\). Here \(u(a)\) and \(u(b)\) are the embeddings of the HOMOGLYPH encoder for the script to which \(a\) and \(b\) belong. \(\lambda\) is a tunable hyperparameter but for simplicity, we fix it as 1 for the results shown in the paper. We also fixed the addition and deletion cost as 1, but in the implementation provided in our package and our GitHub repository, the costs are tunable hyperparameters. ### Ancient Chinese Homoglyphs #### s-2.4.1 Data The source database (Academia Sinica et al., 2023) from which we collect the ancient Chinese character crops contains 5,024 concepts, comprised of 25,984 character renderings. Each of these concepts is mapped to a modern character. This enables us to insert digital renders of these modern characters using the same fonts as above (for traditional Chinese) to create more variation. A "class" in this case comprises a character cluster - with both ancient crops and modern digital renders forming the positive samples for a class. We slightly modify the data augmentation scheme for this application to account for the wide variation in writing styles across centuries. We allow for a slight (\(-10\) to \(+10\) degree) rotation and also add more transformations tailored to this use case - Random Equalize, Random Posterize, Random Solarize, Random Inversion and Random Erase (randomly erase 0-5% of the image). We apply all augmentations to the digital renders but only apply Random Affine transformation and Random Inversion to the ancient crops. #### s-2.4.2 Batching We use the same sampling and batching process as we did for the modern homoglyph models. The only difference is in how the hard-negative sets are defined. Instead of one nearest neighbor per concept, for each ancient crop within a concept cluster, we find \(k\) nearest neighbors. This gives us as many nearest neighbor sets (hard-negative sets) as ancient crops in our dataset. 
This allows us to account for the fact that the homoglyphs of a character may differ across different historical periods, spanning millennia. #### s-2.4.3 Model Validation We split the character clusters into train and validation sets (90-10). We then transfer modern renders of the characters from the validation set to the train set. After this, we randomly transfer 50% of validation images to training. Only ancient characters remain in the validation set. We then make a reference set by embedding all the modern renders of our character (using the reference font Noto Serif CJK Tc). We use top-1 accuracy as our validation metric which is defined as the proportion of correct retrievals of the corresponding modern render to each ancient image in the validation set. During training, the model reached a peak validation accuracy of 50% demonstrating the difficult nature of this task. We use this metric for selecting the best checkpoint for our encoder. #### s-2.4.4 Other training details We again use the AdamW optimizer and Cosine Annealing with Warm Restarts as the learning rate schedule. Relevant Hyperparameters are listed in Table S-2. We stop training when validation accuracy stagnates. #### s-2.4.5 Creation of Ancient Chinese Homoglyphs The creation of homoglyph sets is analogous to the case of modern characters. Instead of using digital renders from a particular font as the "reference set", we look at the five nearest neighbors of ancient characters within a period. We illustrate homoglyphs using The Shang Dynasty period (1600 BC-1045 BC), the most ancient.
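Across both the modern and ancient settings, the set-creation step just described reduces to embedding a reference set of character images and retrieving nearest neighbors by cosine similarity. The sketch below is an illustration under stated assumptions: `render_char` is a hypothetical helper that draws a character with the reference font, `k=5` mirrors the five neighbors shown in the figures, and the FAISS inner-product index over L2-normalized embeddings is a standard way to realize cosine similarity.

```python
# Sketch of homoglyph-set construction: embed reference renders of every
# character, then retrieve nearest neighbors by cosine similarity with FAISS.
import faiss
import torch

@torch.no_grad()
def build_homoglyph_sets(chars, render_char, encoder, k=5):
    # render_char(c) -> (3, 224, 224) tensor of character c in the reference font
    images = torch.stack([render_char(c) for c in chars])
    emb = encoder(images).cpu().numpy().astype("float32")
    faiss.normalize_L2(emb)               # cosine similarity == inner product on unit vectors
    index = faiss.IndexFlatIP(emb.shape[1])
    index.add(emb)
    sims, idx = index.search(emb, k + 1)  # the first hit is the character itself
    return {c: [(chars[j], float(s)) for j, s in zip(idx[i][1:], sims[i][1:])]
            for i, c in enumerate(chars)}
```

The returned dictionary has the same shape as the look-up table used to set substitution costs, only truncated to the top `k` neighbors instead of 800.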
2310.01977
Experiences with Research Processes in an Undergraduate Theory of Computing Course
Theory of computing (ToC) courses are a staple in many undergraduate CS curricula as they lay the foundation of why CS is important to students. Although not a stated goal, an inevitable outcome of the course is enhancing the students' technical reading and writing abilities as it often contains formal reasoning and proof writing. Separately, many undergraduate students are interested in performing research, but often lack these abilities. Based on this observation, we emulated a common research environment within our ToC course by creating a mock conference assignment, where students (in groups) both wrote a technical paper solving an assigned problem and (individually) anonymously refereed other groups' papers. In this paper we discuss the details of this assignment and our experiences, and conclude with reflections and future work about similar courses.
Ryan E. Dougherty
2023-10-03T11:37:06Z
http://arxiv.org/abs/2310.01977v1
# Experiences with Research Processes in an Undergraduate Theory of Computing Course ###### Abstract. Theory of computing (ToC) courses are a staple in many undergraduate CS curricula as they lay the foundation of why CS is important to students. Although not a stated goal, an inevitable outcome of the course is enhancing the students' technical reading and writing abilities as it often contains formal reasoning and proof writing. Separately, many undergraduate students are interested in performing research, but often lack these abilities. Based on this observation, we emulated a common research environment within our ToC course by creating a mock conference assignment, where students (in groups) both wrote a technical paper solving an assigned problem and (individually) anonymously refereed other groups' papers. In this paper we discuss the details of this assignment and our experiences, and conclude with reflections and future work about similar courses. theory of computing, CS course design, CS pedagogy, technical CS course + Footnote †: journal: SIGCS '24, August 20-23, 2023, Portland, OR, USA None of the above works include the results of refereeing submissions from other students. Jones et al. (Jones et al., 2017) had undergraduate students referee submissions from a "real" undergraduate journal in neuroscience, but did not include submitting works (whether to a "real" or "mock" conference). ## 3. Course Context In Spring 2023 we had three sections of our ToC course, titled Intro Theoretical Computer Science, all taught by the author, with a total of 41 students. Our ToC course is primarily taken during the students' junior year, with prerequisites being discrete mathematics and digital logic, and a corequisite of algorithms; ToC is only offered once per year. Importantly, students for this particular offering had some training in LaTeX during their discrete math course, specifically within Overleaf. There are no formal post-requisites, but students very often take a year-long capstone course and an Operating Systems course in the next semester; see Section 7 for details. Most ToC courses are divided into four large sections: (1) regular languages, (2) context-free languages, (3) Turing Machines and decidability, and ending with (4) undecidability. The order of the vast majority of such offerings is \((1)\rightarrow(2)\rightarrow(3)\rightarrow(4)\), and our course follows the same roadmap other than including NP-completeness after (4). The first two sections contain a formal model to both define and analyze, and some amount of proofs to show that each model cannot solve "all" problems; the fourth solely contains proofs that Turing Machines - computationally equivalent to "real" computers - cannot solve every problem, or even "reasonable" problems. There are two goals of dividing the course into sections like this: to build upon previous knowledge in terms of the model's formal definition and its capabilities, and to showcase the classic trade-off of increased computational power vs. tractability of asking questions about the model. For example, it is algorithmically undecidable to determine if two machines in (2) have the same behavior, whereas it is possible for two machines in (1). 
Student assessment for our course was divided up as follows: * 5 In-Class Group Presentations: 200 points total * 10 In-Class Quizzes: 150 points total * Paper Writing (Part 1 of conference): 75 points * Paper Referecing (Part 2 of conference): 75 points * Formal Group Presentation (NP-Completeness): 150 points * Final Exam: 250 points * Lesson Preparedness: 100 points These choices were to support the conference with a nonnegligible amount of the final grade while also not dissuading students from not being prepared for lessons, studying for the final exam, or preparing for the formal presentations. ## 4. Assignment Design This section contains our process in designing the paper writing and refereeing assignments. Students were put into groups of two or three (potentially spanning across the three sections) randomly by the instructor. The assignment naturally is broken into three parts: paper writing (Part 1), paper refereeing (Part 2), and the conference itself (Part 3). The first two parts were conducted entirely through EasyChair.1 Footnote 1: [https://easychair.org/](https://easychair.org/) ### Paper Writing (Part 1) For Part 1, groups were required to use the ACM SIG Proceedings (FIEX) template for standardization purposes; a link to the Overleaf template was provided for ease of student use. Each group is assigned a unique problem that was manually created by the instructor that has some real-world application in mind. For example, one problem was to create a context-free grammar for edit distance of strings, as that problem has applications in computational biology. Each problem potentially had several sub-problems; the problem categories and what was required for each are given next. * - construction of some formal object, along with a formal definition of the object and proof that the object's language is correct. An example problem could be "Construct a non-deterministic finite automaton for the language \(L=\{...\}\)" or "Show that \(L=\{...\}\) is context-free by creating a context-free grammar for it." * - showing that a language operation is correct. Students need to consider an arbitrary object with that language, how to change that object for the operation, and to show that the resulting object has the desired language. An example problem could be "Let \(L\) be a regular language, and let \(s(L)\) be the set of strings in \(L\) with last character removed (if there is one). Show that if \(L\) is regular, then \(s(L)\) is also regular." * - showing that a language is not regular using the Pumping Lemma (PL) for Regular Languages. We provided students the steps for any proof using the PL during previous lessons, such as picking an arbitrary string in the language, performing all decompositions of the string according to the PL's rules, and finding a repetition that leaves the language. Table 1 shows the number of groups assigned for each problem category. Note that some groups have sub-problems that span several categories. The instructor made an effort to equally distribute paper categories across groups. The grade breakdown for Part 1 is as follows, with a total of 75 points (plus 10 bonus points for accepted papers): * Paper is anonymized; if not done, the paper is not graded. * Appropriate title. * Appropriate abstract (both length and content). * Inclusion of relevant background material. * Inclusion of any necessary notation and definitions. * Accuracy of proof/argument/construction. * Appropriate discussion/future work/conclusion sections. 
* Professional writing style and formatting. * References are properly formatted. As nearly all students have not performed research at this point, instructions were provided on how to appropriately structure a research paper: how to write an abstract (at most 200 words); introduction section; background and related work section that includes applications and motivation; a section for any necessary formal definitions; theorems within provided proof environments; and sections for discussion, future work, and concluding the paper. For background and related work, students were advised to search Google Scholar with appropriate keywords and read abstracts of "similar" papers; if any are sufficiently similar, then they were to read any relevant theorems and proofs within it for inclusion and comparison in their own paper. For the discussion section, students were not required but encouraged to think more deeply about the implications of their results, and if they can be extended or generalized. A rubric was provided that assigned many of the points to correctness of proofs, but a nontrivial amount for the other sections, anonymization, and properly formatted references. Additionally, an incentive of bonus points was provided for papers that get accepted to the conference. Groups were given approximately a month and a half to complete the paper, and was assigned roughly three weeks into the course. The problems had a difficulty that was appropriate for a regular (individual) homework spanning two or three weeks, and thus students were expected to easily complete solving the problem within the allotted time. Although there was no scaffolding to Part 1, the instructor provided a recommended schedule for when each part of the paper should be done. ### Paper Refereeing (Part 2) For Part 2 students worked individually for (anonymously) refereing other papers from the conference. Instructions were provided for how to write an appropriate review for a paper, as all students have not reviewed for a conference before. All students were assigned two papers to review (assigned randomly by the instructor), and most of the points are for accuracy of the evaluation. The overall evaluation (a numerical score) students assigned to a paper was graded on how similar it was to the instructor's score of the same paper. The instructor told students that the accuracy and honesty of their reviews are what primarily counted for their grade. Because most students have not read a technical paper before, the instructor give a link to the paper of Keshav (Keshav, 2016, Section 2). Students had approximately three weeks to complete this part. While students were refereeing, the instructor graded each of the submitted papers, and made a decision for each as to whether it should be accepted into the conference. At the end of the review process, the instructor's grade and feedback, along with the anonymous student reviews, were provided to each of the groups. The grade breakdown for Part 2 is as follows; the same rubric is used for both reviewed papers, for a total of 75 points. * Overall evaluation: a score from -3 (strong reject) to +3 (strong accept). * Brief paper summary. * Evaluation summary. * Evaluation. * Reviewer's confidence: a score from 1 to 5 about how well the reviewer understood the paper. ### Conference (Part 3) For groups of accepted papers, the instructor notified them that they need to incorporate changes given by the instructor and reviewers within two weeks (along with de-anonymizing their paper). 
Once the accepted paper revisions were received, the instructor assembled them into one PDF to share with the rest of the course (with permission of the students in these groups). The conference was held in the last class period of the semester, during which students of accepted papers shared their story about how they approached the problem, what worked and what did not, and their recommendations for the rest of the students in tackling similar problems. ## 5. Issues and Challenges This section contains the issues and challenges we faced developing and running our mock conference. Since this project took the large majority of the semester, any assigned projects had to only contain course concepts from the first few sections of the course, namely regular languages (1) and context-free languages (2). The section on undecidability (4) is entirely proof-based, and thus the assigned problems have a bias in that there are fewer proofs than if a different ordering on course concepts were imposed. Additionally, it was the instructor's burden to design problems that are not only unique to groups, but are not readily available online or elsewhere; these constraints were to avoid potential plagiarism. In Part 1, there was no scaffolding. As a result, three groups did not submit their paper on time, and thus were not eligible to be accepted to the conference nor to be reviewed by other students; their work was nonetheless graded with the same rubric and feedback by the instructor. ## 6. Reflections This section contains our reflections after running the conference. We did not create any student surveys, but did store the following data: individual student IDs (made anonymous by a faculty member not part of the analysis), a mapping between student IDs and group identifiers, scores for each group for each of the items in Section 4 (both Parts 1 and 2), conference acceptance scores, and all text for reviews in Part 2. All data has been anonymized by the same faculty member. ### What Worked Well Having real-world applications as a requirement for generating the problems helped students see the purpose of such problems. Additionally, since students had to perform background research, they were largely able to contextualize what is fundamentally different about their problem compared to previous work. The tools used made coordination between the instructor and groups very simple and efficient. Having a LaTeX template along with a platform for distributing and compiling the document repository was instrumental in making this assignment not only possible, but realistic enough to feel like a "real" conference. Taking the time to generate different categories of problems based on real-world applications made the conference proceedings more realistic, as that emulates "real" conference proceedings with a wide body of research topics.

| Problem Type | P1 | P2 | P3 |
| --- | --- | --- | --- |
| Number of Groups | 10 | 9 | 6 |

Table 1. Number of groups assigned per problem category for Part 1.

Anecdotally during the conference (Part 3), we found that students were curious about how other groups approached their problems, as they were often quite different from their own. Part 2 of the conference showed that students were in general honest about their peers and themselves. 
However, some lower-performing students often would praise a paper and then give a low confidence score in their ability to understand the paper (or vice versa). ### What Did Not Work Well One of the issues with EasyChair's free version is that it is limited in how many submissions are allowed. Even if a group accidentally made a submission that they want removed (e.g., forgetting a group member), or the instructor makes a test submission and subsequently removes it, these count against the total number of submissions anyway. We did not seek funding from our department for using a paid version, nor looked for a free alternative. We return to this as a recommendation in Section 7. Part 1's length was a major issue, as students are often tempted to not start an assignment until close to its due date. As performing nearly any kind of research takes more time than predicted, nearly all groups made their first submission (before any modifications) within a day of the due date. EasyChair by default allows groups to submit their work as many times as they want before the deadline, but most groups did not take advantage of this. Students are known to only care about assignments if they are worth a significant number of points (Brandt, 2017). The total number of points for the two parts was 150, which is 15% of the total student grade. Even though this was a semester-long project, the fact that it was worth only this many points most likely negatively influenced how much dedication students had to writing the paper. Even though the instructor encouraged students not to spend significant time on the paper's visual aesthetics (and to instead save this until everything else is done), students ended up not following this advice. All groups with a problem of type P1 (creating a formal object) used some kind of visualization tool for showing the formal object to the reader, and then later proving its correctness. Students were allowed to use any tool that they wanted, either hand-drawn or on a computer. One tool the instructor suggested (among several) was tikz-automata, which involves LaTeX code to produce automata figures, as was done for the course's lecture notes; see Figures 1 and 2 for an example and provided sample code.2 Many groups opted to use this tool, but ended up spending significantly more time than expected learning how to use it. Footnote 2: We provided students a LaTeX header that contains defined commands for automatonCenter to maintain consistency. The paper acceptance criteria of other reviewers did not necessarily match those of the instructor. The reviewing process used a -3 (strong reject) to +3 (strong accept) scale for papers; this is the default on EasyChair. Group paper scores had an average of slightly above 1.0. On average, there was a 1.65 point difference between the average student score and that of the instructor for each submitted paper. Additionally, the instructor gave the anonymized papers to a colleague3 for determining acceptance; there was a 1.55 point difference there (including the three papers submitted late). Table 2 contains the rating results for the ten accepted papers from the instructor, the student average rating for that paper, and that of the colleague. The difference between the highest and lowest student ratings is also given. Note that many of the papers have a very large student delta, especially group #17 with a delta of 5. 
This was primarily due to students' reviews containing praise for the quality of writing rather than assessments of proof/argument accuracy. Footnote 3: This colleague has never taught ToC, but is aware of some of the concepts and proofs. Understandably, some students did not follow some of the instructor's recommendations. For LaTeX, many groups did not use theorem and figure environments to structure their proofs, even though the instructor provided examples of how to use them. In several cases, groups would directly copy and paste some lecture notes for the definition and notation section (with appropriate citation). ### Limitations It is difficult for an instructor grade based on the accuracy of an argument to capture all of the merits of a student paper. Published research papers, even within technical venues, are often worth more than just their technical arguments. Especially with student writing, the quality of exposition, figures, flow, and adequate background research are also important to the success of a paper. Therefore it is possible that a paper was not accepted to the conference that should have been (or vice versa) based on the grade breakdown the instructor chose. Certain biases of the instructor, if they existed, could have altered both of the parts, within the instructions and grading. For example, the increased stressing of proof accuracy by the instructor could have led some groups to overemphasize that section, or led lower-performing groups to ignore the proof section altogether (so that they could instead obtain points more reasonably within reach). It is nearly impossible to give an assignment like this in an earlier semester (at least at our institution), nor within a course that is not technical in nature. Figure 1. A tikz-automata example within our ToC course. Figure 2. Code for the automaton in Figure 1. We believe it is possible to modify Part 1's writing component to be more expository or survey-based and include this assignment in, for example, a CS ethics course. Due to lack of time, Part 3 (i.e., the conference itself) during the last class period did not include full-length formal presentations. We did not feel this to be necessary as all students had five formal presentations throughout the semester already. However, having a true research-style talk would be the final part of a standard conference and research experience. ## 7. Future Work & Recommendations This section contains future work already in progress, and recommendations for practitioners. The CS major stresses not only technical reading and writing, but also communication to audiences, both technical and not. To this end, we created another set of assignments in the same course and semester in which students gave five oral presentations on course concepts throughout the semester. Another motivation for having technical presentations is a follow-on capstone course, which takes place across two semesters. In that capstone course, students are placed into larger groups (between 4 and 6 students each) and are assigned a computing problem to solve for a client. Throughout the two semesters, students must make several technical presentations; for juniors who took our ToC course, we will perform a statistical analysis comparing how well those students give oral presentations against students who did not take our course with its paper assignment. 
Another senior-level course at our institution is Operating Systems, and we are currently designing a paper writing assignment based on performance analysis of writing an OS component in C. Based on the previous paragraph and the work in this paper, we created research questions that we are in the process of addressing in future work: * RQ1: Does writing a research-style paper on a topic in an undergraduate ToC course (i.e., Part 1) improve performance on that topic (e.g., exam scores) more than for other topics? Additionally, if they scored a B or better in Part 1, is there a stronger correlation? * RQ2: Same as RQ1, but for reviewing a paper. * RQ3: Same as RQ2, but for reviewing a paper in the same paper category as that of their paper in Part 1. * RQ4: Do undergraduate students reviewing fellow students' papers review primarily on merit or readability? * RQ5: How do undergraduate students, who have participated in our paper writing assignment, perform in follow-up courses that have technical presentations or paper writing assignments, compared to students who did not? * RQ6: Same as RQ5, but instead comparing presentations within our ToC course and those in the follow-up courses. In the spirit of partially addressing some of these questions (particularly RQ2 and RQ3), observe Figure 3, a scatter plot of Part 1 vs. Part 2 overall scores; and Figure 4, a scatter plot of only the correctness of the paper's main argument and the review evaluation quality. Apart from one student, there does not appear to be any predictive relationship between how well students can write a technical paper and how well they can review one; nor is there a correlation between the "main" components of both parts of the assignment. However, more statistical analysis of the grades and empirical analysis of the text in paper/review submissions need to be performed.

| Group ID | 3 | 7 | 8 | 15 | 10 | 11 | 4 | 12 | 17 | 16 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Instructor Rating | 3 | 3 | 3 | 1 | 2 | 2 | 2 | 2 | 1 | 1 |
| Student Average Rating | 2.5 | 1.2 | 0.0 | 2.0 | 0.0 | 1.2 | 1.4 | 0.2 | 1.4 | 0.5 |
| Student Max - Min | 1 | 4 | 3 | 2 | 3 | 1 | 4 | 3 | 5 | 3 |
| Colleague Rating | 1 | 2 | -1 | 2 | 0 | 2 | 2 | -1 | 0 | 2 |

Table 2. Instructor rating, student average rating, and colleague rating for the 10 accepted papers to the conference (given by group ID). Groups are sorted left-to-right in decreasing order of total score given by the instructor. Additionally, the difference between the highest and lowest student ratings is also given.

Figure 3. Overall Part 1 vs. Part 2 Scores (as percentages).

For practitioners looking to implement a similar assignment, we recommend using a publicly available LaTeX guide, as LaTeX carries a steep learning curve. If the course is not entirely technical, then we do not see any reason to use LaTeX within an undergraduate classroom setting. Additionally, we recommend that practitioners scaffold Part 1, as that in practice takes much more time and effort to complete than Part 2. We recommend using a free alternative to EasyChair, especially for larger courses than ours, as its limitations for the free version influenced how we designed our assignment. 
One possibility is HotCRP4, which is used by many "real" CS conferences.

Footnote 4: [https://hotcrp.com/](https://hotcrp.com/)

Finally, several students had performed research before the start of this course (including one who has presented at a "real" CS conference), which may bias the data: their writing is likely of higher quality, and so their papers are more likely to be accepted. We propose extending some or all of the research questions above to compare students who have performed research previously vs. those who have not. We believe the effect of such students would be small, as the number of undergraduate students who have presented research by the middle of their junior year is generally small.

## 8. Conclusion

In this paper we presented our experience in creating and running an assignment within a ToC course that emulates a CS conference. Students were grouped to solve a unique assigned problem and asked to write a technical paper in LaTeX. Further, they individually read and anonymously provided feedback on some other groups' submissions. We discussed some of the issues and challenges in creating such an assignment. We hope that this paper gives CS educators inspiration to develop a similar assignment for other CS courses, and to determine the effectiveness of having research-style assignments within them.

###### Acknowledgements.

We would like to thank the CS474 students in 2023 for participating in our study and supporting CS research. We also thank Dr. Maria Ebling for guidance in designing the future work research questions and studies, and for aiding in the IRB process. The opinions in this work are solely those of the author, and do not necessarily reflect those of the U.S. Army, U.S. Army Research Labs, the U.S. Military Academy, or the Department of Defense.
2304.03027
Electron Paramagnetic Resonance of $V_{N}-V_{Ga}$ complex in $BGaN$
A metastable photoinduced Electron Paramagnetic Resonance (EPR) signal at low temperatures is reported in GaN alloyed with boron ($B_{x}Ga_{1-x}N$) epitaxial layers grown at temperatures ranging from 840 {\deg}C to 1090 {\deg}C. An isotropic EPR line with g = 2.004 is observed for all samples with boron content between 0.73% and 2.51%, with an intensity that depends on the growth temperature. The temperature dependence of the EPR intensities is compared with the results of High-Resolution Photoinduced Transient Spectroscopy (HRPITS). This allows us to link particular traps to the EPR signal. The activation energies of these traps are consistent with the theoretical position of the $V_{N}-V_{Ga}$ complex. Thermal annihilation of the EPR signal with a 30 meV activation energy corresponds to shallow donor ionization. A model explaining the light-induced EPR signal, involving a redistribution of electrons between deep and shallow donors mediated by photoionization to the conduction band, is proposed.
Jakub Kierdaszuk, Ewelina B. Możdżyńska, Aneta Drabińska, Andrzej Wysmołek, Jacek M. Baranowski
2023-04-06T12:28:10Z
http://arxiv.org/abs/2304.03027v1
**Electron Paramagnetic Resonance of V\({}_{\rm N}\)-V\({}_{\rm Ga}\) complex in BGaN**

###### Abstract

A metastable photoinduced Electron Paramagnetic Resonance (EPR) signal at low temperatures is reported in GaN alloyed with boron (B\({}_{x}\)Ga\({}_{1-x}\)N) epitaxial layers grown at temperatures ranging from 840 \({}^{\circ}\)C to 1090 \({}^{\circ}\)C. An isotropic EPR line with g = 2.004 is observed for all samples with boron content between 0.73% and 2.51%, with an intensity that depends on the growth temperature. The temperature dependence of the EPR intensities is compared with the results of High-Resolution Photoinduced Transient Spectroscopy (HRPITS). This allows us to link particular traps to the EPR signal. The activation energies of these traps are consistent with the theoretical position of the V\({}_{\rm N}\)-V\({}_{\rm Ga}\) complex. Thermal annihilation of the EPR signal with a 30 meV activation energy corresponds to shallow donor ionization. A model explaining the light-induced EPR signal, involving a redistribution of electrons between deep and shallow donors mediated by photoionization to the conduction band, is proposed.

Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland Lukasiewicz Research Network-Institute of Microelectronics and Photonics, Al. Lotnikow 32/46, 02-668 Warsaw, Poland

## 1 Introduction

Electron paramagnetic resonance (EPR) has proved to be a powerful technique for investigating and identifying defects in solids. The first work concerning EPR measurements in as-grown n-type GaN identified shallow donors [1]. Another example was the EPR study of radiation-induced defects in n-type GaN irradiated by 2 MeV electrons [2]. In this case, four defects were detected by EPR [2, 3]. The well-studied defect was assigned to the oxygen gallium vacancy pair (V\({}_{\rm Ga}\)O\({}_{\rm N}\))\({}^{\rm-}\). There are also works that combine Optically Detected Electron Paramagnetic Resonance (ODEPR) with EPR, as well as studies concerning magnetic dopants [4, 5, 6]. To the best of our knowledge, no reports on EPR studies of BGaN are available. In this work, we present the results of EPR measurements of B\({}_{x}\)Ga\({}_{1-x}\)N. The epitaxial layers measured in this
2307.15315
Weak log-majorization and inequalities of power means
As non-commutative versions of the quasi-arithmetic mean, we consider the Lim-P\'{a}lfia's power mean, R\'{e}nyi right mean and R\'{e}nyi power means. We prove that the Lim-P\'{a}lfia's power mean of order $t \in [-1,0)$ is weakly log-majorized by the log-Euclidean mean and fulfills the Ando-Hiai inequality. We establish the log-majorization relationship between the R\'{e}nyi relative entropy and the product of square roots of given variables. Furthermore, we show the norm inequalities among power means and provide the boundedness of R\'{e}nyi power mean in terms of the quasi-arithmetic mean.
Miran Jeong, Sejong Kim
2023-07-28T05:27:16Z
http://arxiv.org/abs/2307.15315v1
# Weak log-majorization and inequalities of power means ###### Abstract. As non-commutative versions of the quasi-arithmetic mean, we consider the Lim-Palfia's power mean, Renyi right mean and Renyi power means. We prove that the Lim-Palfia's power mean of order \(t\in[-1,0)\) is weakly log-majorized by the log-Euclidean mean and fulfills the Ando-Hiai inequality. We establish the log-majorization relationship between the Renyi relative entropy and the product of square roots of given variables. Furthermore, we show the norm inequalities among power means and provide the boundedness of Renyi power mean in terms of the quasi-arithmetic mean. _2020 Mathematics Subject Classification_. 15A45, 15B48 _Keywords and phrases._ Log-majorization, Cartan mean, log-Euclidean mean, Lim-Palfia's power mean, Renyi right mean, Renyi power mean ## 1. Introduction Throughout the paper, \(\mathbb{C}_{m\times m}\) is the set of all \(m\times m\) complex matrices, \(\mathbb{H}_{m}\) is the real vector space of \(m\times m\) Hermitian matrices, and \(\mathbb{P}_{m}\subset\mathbb{H}_{m}\) is the open convex cone of \(m\times m\) positive definite matrices. For \(A,B\in\mathbb{H}_{m}\), the Loewner order \(A\geq(>)B\) means that \(A-B\) is positive semi-definite (resp. positive definite). We denote by \(s(X)\) the \(m\)-tuple of all singular values of a complex matrix \(X\), and denote by \(\lambda(X)\) the \(m\)-tuple of all eigenvalues of a Hermitian matrix \(X\) in decreasing order: \(\lambda_{1}(X)\geq\lambda_{2}(X)\geq\cdots\geq\lambda_{m}(X)\). Let \(x,y\) be two \(m\)-tuples of positive real numbers. We denote by \(x^{\downarrow}=(x_{1}^{\downarrow},\ldots,x_{m}^{\downarrow})\) the rearrangement of \(x\) in decreasing order. The notation \(x\prec_{\log}y\) represents that \(x\) is _log-majorized_ by \(y\), that is, \[\prod_{i=1}^{k}x_{i}^{\downarrow}\leq\prod_{i=1}^{k}y_{i}^{\downarrow} \tag{1.1}\] for \(1\leq k\leq m-1\) and the equality holds for \(k=m\). We say that \(x\) is _weakly log-majorized_ by \(y\), denoted by \(x\prec_{w\log}y\), if (1.1) is true for \(k=1,2,\ldots,m\). For simplicity, given \(A,B\in\mathbb{P}_{m}\), we write \(A\prec_{\log}B\) if \(\lambda(A)\prec_{\log}\lambda(B)\), and \(A\prec_{w\log}B\) if \(\lambda(A)\prec_{w\log}\lambda(B)\). For given \(A_{1},\ldots,A_{n}\in\mathbb{P}_{m}\) the _quasi-arithmetic mean_ (_generalized_ or _power mean_) of order \(t(\neq 0)\in\mathbb{R}\) is defined by \[\mathcal{Q}_{t}(\omega;A_{1},\ldots,A_{n}):=\left(\sum_{j=1}^{n}w_{j}A_{j}^{t} \right)^{\frac{1}{t}}\] where \(\omega=(w_{1},\ldots,w_{n})\) is a positive probability vector. Note that \[\lim_{t\to 0}\mathcal{Q}_{t}(\omega;A_{1},\ldots,A_{n})=\exp\left(\sum_{j=1}^{n} w_{j}\log A_{j}\right),\] where the right-hand side is called the _log-Euclidean mean_ of \(A_{1},\ldots,A_{n}\). Log-majorization properties and operator inequalities of the quasi-arithmetic mean have been studied [7, 9, 23]. As non-commutative versions of the quasi-arithmetic mean, we investigate in this paper the Lim-Palfia's power mean, Renyi right mean and Renyi power mean. (I) The _Lim-Palfia's power mean_\(P_{t}(\omega;A_{1},\ldots,A_{n})\) of order \(t\in(0,1]\) is defined as the unique positive definite solution of \[X=\sum_{i=1}^{n}w_{i}(X\#_{t}A_{i}),\] where \(A\#_{t}B=A^{1/2}(A^{-1/2}BA^{-1/2})^{t}A^{1/2}\) is known as the weighted geometric mean of \(A,B\in\mathbb{P}_{m}\). For \(t\in[-1,0)\) we define \(P_{t}(\omega;A_{1},\ldots,A_{n})=P_{-t}(\omega;A_{1}^{-1},\ldots,A_{n}^{-1})^{-1}\). 
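These definitions are straightforward to prototype numerically. The sketch below is our own illustration (not part of the original text): it draws random positive definite matrices, forms the quasi-arithmetic mean \(\mathcal{Q}_t\) and the log-Euclidean mean, and tests the weak log-majorization condition (1.1) by comparing cumulative products of ordered eigenvalues; helper names such as `spd_fun` are ours. Whether a relation observed this way holds in general is, of course, exactly what the results below address.

```python
# Minimal numerical sketch (ours): quasi-arithmetic mean, log-Euclidean mean,
# and a direct check of the weak log-majorization condition (1.1).
import numpy as np

rng = np.random.default_rng(1)

def random_spd(m):
    X = rng.standard_normal((m, m))
    return X @ X.T + m * np.eye(m)          # well-conditioned SPD matrix

def spd_fun(A, f):
    """Apply a scalar function to a symmetric positive definite matrix spectrally."""
    vals, vecs = np.linalg.eigh(A)
    return (vecs * f(vals)) @ vecs.T

def quasi_arithmetic(weights, mats, t):
    """Q_t(w; A_1, ..., A_n) = (sum_j w_j A_j^t)^(1/t), t != 0."""
    S = sum(w * spd_fun(A, lambda x: x**t) for w, A in zip(weights, mats))
    return spd_fun(S, lambda x: x**(1.0 / t))

def log_euclidean(weights, mats):
    S = sum(w * spd_fun(A, np.log) for w, A in zip(weights, mats))
    return spd_fun(S, np.exp)

def weakly_log_majorized(A, B, tol=1e-9):
    """Test whether lambda(A) is weakly log-majorized by lambda(B), cf. (1.1)."""
    a = np.sort(np.linalg.eigvalsh(A))[::-1]
    b = np.sort(np.linalg.eigvalsh(B))[::-1]
    return bool(np.all(np.cumprod(a) <= np.cumprod(b) * (1.0 + tol)))

m, n = 4, 3
mats = [random_spd(m) for _ in range(n)]
weights = np.full(n, 1.0 / n)
LE = log_euclidean(weights, mats)
for t in (-1.0, -0.5, 0.5, 1.0):
    Q = quasi_arithmetic(weights, mats, t)
    print(f"t = {t:+.1f}:  Q_t weakly log-majorized by LE? {weakly_log_majorized(Q, LE)}   "
          f"LE weakly log-majorized by Q_t? {weakly_log_majorized(LE, Q)}")
```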
See [22] for more information. We show in Section 3 that the sequence \(P_{t}(\omega;A_{1}^{p},\ldots,A_{n}^{p})^{1/p}\) for \(t\in[-1,0)\) is weakly log-majorized by the log-Euclidean mean for any \(p>0\): \[P_{t}(\omega;A_{1}^{p},\ldots,A_{n}^{p})^{1/p}\prec_{w\log}\exp\left(\sum_{j=1 }^{n}w_{j}\log A_{j}\right),\] and the power mean \(P_{t}\) satisfies the Ando-Hiai inequality: \(P_{t}(\omega;A_{1},\ldots,A_{n})\leq I\) implies \(P_{t}(\omega;A_{1}^{p},\ldots,A_{n}^{p})^{1/p}\leq I\). This provides an affirmative answer for the monotone convergence of Lim-Palfia's power means in terms of the weak log-majorization, but it is an open question: \[P_{t}(\omega;A_{1}^{p},\ldots,A_{n}^{p})^{1/p}\nearrow_{\prec_{w\log}}\exp \left(\sum_{j=1}^{n}w_{j}\log A_{j}\right)\quad\text{as}\quad p\searrow 0.\] (II) Recently, a new barycenter minimizing the weighted sum of quantum divergences, called the \(t\)_-\(z\) Renyi right mean_, has been introduced in [10]. Indeed, for \(0<t\leq z<1\) \[\Omega_{t,z}(\omega;A_{1},\ldots,A_{n}):=\operatorname*{arg\,min}_{X\in\mathbb{ P}_{m}}\sum_{j=1}^{n}w_{j}\Phi_{t,z}(A_{j},X),\] where \(\Phi_{t,z}(A,B)=\operatorname{tr}((1-t)A+tB)-\operatorname{tr}\left(A^{\frac {1-t}{2z}}B^{\frac{t}{z}}A^{\frac{1-t}{2z}}\right)^{z}\) is the \(t\)-\(z\) Bures-Wasserstein quantum divergence of \(A,B\in\mathbb{P}_{m}\). Here, \(Q_{t,z}(A,B)=\left(A^{\frac{1-t}{2z}}B^{\frac{t}{z}}A^{\frac{1-t}{2z}}\right) ^{z}\) is known as the \(t\)-\(z\) Renyi relative entropy of \(A,B\). The \(t\)-\(z\) Renyi right mean coincides with the unique positive definite solution of the equation \[X=\sum_{j=1}^{n}w_{j}\left(X^{\frac{t}{2z}}A_{j}^{\frac{1-t}{z}}X^{\frac{t}{2 z}}\right)^{z},\] which obtained by vanishing the gradient of objective function. For \(t=z=1/2\), the \(t\)-\(z\) Renyi right mean \(\Omega_{t,z}\) coincides with the Wasserstein mean: see [1, 2, 8, 17] for more information. We show in Section 4 the log-majorization relationship between the \(t\)-\(z\) Renyi relative entropy \(Q_{t,z}(A,B)\) and \(A^{1/2}B^{1/2}\), and establish norm inequalities among the power means. (III) Dumitru and Franco [12] have defined the _Renyi power mean_\(\mathcal{R}_{t,z}(\omega;A_{1},\ldots,A_{n})\) as the unique positive definite solution of the equation \[X=\sum_{j=1}^{n}w_{j}\left(A_{j}^{\frac{1-t}{2z}}X^{\frac{t}{z}}A_{j}^{\frac{ 1-t}{2z}}\right)^{z},\] and proved the norm inequality between \(\mathcal{R}_{t,z}\) and \(\mathcal{Q}_{1-t}\) with respect to the \(p\)-norm for \(p\geq 2\). Note that for commuting variables \[\mathcal{R}_{t,z}=\Omega_{t,z}=P_{1-t}=\mathcal{Q}_{1-t}.\] We show in Section 5 the boundedness of Renyi power mean \(\mathcal{R}_{t,z}\) in terms of the quasi-arithmetic mean. ## 2. Antisymmetric tensor power and homogeneous matrix means A crucial tool in the theory of log-majorization is the antisymmetric tensor power (or the compound matrix). Note that for \(A\geq 0\) and \(1\leq k\leq m\), \[\prod_{i=1}^{k}\lambda_{i}(A)=\lambda_{1}(\Lambda^{k}A), \tag{2.2}\] where \(\Lambda^{k}A\) denotes the \(k\)th antisymmetric tensor power of \(A\). By the definition of log-majorization, \(A\prec_{\log}B\) for \(A,B>0\) if and only if \(\lambda_{1}(\Lambda^{k}A)\leq\lambda_{1}(\Lambda^{k}B)\) for \(1\leq k\leq m-1\), and \(\det A=\det B\). We give a list of fundamental properties of the antisymmetric tensor powers by [6] and [14]. **Lemma 2.1**.: _Let \(A,B\in\mathbb{P}_{m}\), and \(I\) the identity matrix with certain dimension._ 1. \(\Lambda^{k}(cI)=c^{k}I\) _for any constant_ \(c\)__ 2. 
\(\Lambda^{k}(XY)=\Lambda^{k}(X)\Lambda^{k}(Y)\) _for any_ \(X,Y\in\mathbb{C}_{m\times m}\)__ 3. \((\Lambda^{k}(A))^{r}=\Lambda^{k}(A^{r})\) _for any_ \(r\in\mathbb{R}\)__ 4. \(\Lambda^{k}A\leq\Lambda^{k}B\) _whenever_ \(A\leq B\)_._ Another interesting property is that the weak log-majorization implies the weak majorization. More precisely, \(A\prec_{w\log}B\) implies \(A\prec_{w}B\), where \(A\prec_{w}B\) means that \[\sum_{i=1}^{k}\lambda_{i}(A)\leq\sum_{i=1}^{k}\lambda_{i}(B),\quad 1\leq k\leq m.\] Note that \(A\prec_{w}B\) if and only if \(|||A|||\leq|||B|||\) for any unitarily invariant norm \(|||\cdot|||\). One can easily see from Lemma 2.1 (4) and (2.2) that \(A\leq B\) for \(A,B\in\mathbb{P}_{m}\) implies \(A\prec_{w\log}B\), so \(A\prec_{w}B\). Let \(\Delta_{n}\) be the simplex of all positive probability vectors in \(\mathbb{R}^{n}\). A (multi-variable) matrix mean on the open convex cone \(\mathbb{P}_{m}\) is the map \(G:\Delta_{n}\times\mathbb{P}_{m}^{n}\to\mathbb{P}_{m}\) satisfying the idempotency: \(G(\omega;A,\ldots,A)=A\) for any \(\omega\in\Delta_{n}\) and \(A\in\mathbb{P}_{m}\). The matrix mean is said to be homogeneous if \(G(\omega;c\mathbb{A})=cG(\omega;\mathbb{A})\) for any \(c>0\), where \(\mathbb{A}=(A_{1},\ldots,A_{n})\in\mathbb{P}_{m}^{n}\). **Lemma 2.2**.: _Let \(G_{1},G_{2}:\Delta_{n}\times\mathbb{P}_{m}^{n}\to\mathbb{P}_{m}\) be homogeneous matrix means satisfying_ \[G_{2}(\omega;\mathbb{A})\leq I\quad\text{implies}\quad G_{1}(\omega;\mathbb{ A})\leq I \tag{2.3}\] _for any \(\omega\in\Delta_{n}\) and \(\mathbb{A}=(A_{1},\ldots,A_{n})\in\mathbb{P}_{m}^{n}\). Then \(\|G_{1}(\omega;\mathbb{A})\|\leq\|G_{2}(\omega;\mathbb{A})\|\), where \(\|\cdot\|\) denotes the operator norm. In addition, if such homogeneous matrix means \(G_{i}\) for \(i=1,2\) _are preserved by the antisymmetric tensor power:_ \[\Lambda^{k}G_{i}(\omega;\mathbb{A})=G_{i}(\omega;\Lambda^{k}\mathbb{A})\] _where \(\Lambda^{k}\mathbb{A}=(\Lambda^{k}A_{1},\ldots,\Lambda^{k}A_{n})\), then \(G_{1}(\omega;\mathbb{A})\prec_{w\log}G_{2}(\omega;\mathbb{A})\)._ Proof.: Let \(\kappa=\|G_{2}(\omega;\mathbb{A})\|\). Then \(G_{2}(\omega;\mathbb{A})\leq\kappa I\), and \[G_{2}\left(\omega;\frac{1}{\kappa}\mathbb{A}\right)=\frac{1}{\kappa}G_{2}( \omega;\mathbb{A})\leq I\] since \(G_{2}\) is homogeneous. By (2.3) and the homogeneity of \(G_{1}\) \[\frac{1}{\kappa}G_{1}(\omega;\mathbb{A})=G_{1}\left(\omega;\frac{1}{\kappa} \mathbb{A}\right)\leq I.\] Thus, \(G_{1}(\omega;\mathbb{A})\leq\kappa I\), which implies \(\|G_{1}(\omega;\mathbb{A})\|\leq\|G_{2}(\omega;\mathbb{A})\|\). Additionally, assume that \(G_{i}\) for \(i=1,2\) are preserved by the antisymmetric tensor power. Then using fundamental properties of the antisymmetric tensor powers in Lemma 2.1, (2.3) yields \[\Lambda^{k}G_{2}(\omega;\mathbb{A})\leq I\quad\Longrightarrow\quad\Lambda^{k }G_{1}(\omega;\mathbb{A})\leq I.\] So \(\lambda_{1}(\Lambda^{k}G_{1}(\omega;\mathbb{A}))\leq\lambda_{1}(\Lambda^{k}G_ {2}(\omega;\mathbb{A}))\), equivalently \(G_{1}(\omega;\mathbb{A})\prec_{w\log}G_{2}(\omega;\mathbb{A})\). ## 3. Log-majorization of the Lim-Palfia's power mean Let \(\mathbb{A}=(A_{1},\ldots,A_{n})\in\mathbb{P}_{m}^{n}\). For convenience, we denote \[\mathbb{A}^{p}:=(A_{1}^{p},\ldots,A_{n}^{p})\in\mathbb{P}_{m}^{n}\] for any \(p\in\mathbb{R}\). 
For \(t\in(0,1]\) we denote by \(P_{t}(\omega;\mathbb{A})\) the unique positive definite solution of \[X=\sum_{i=1}^{n}w_{i}(X\#_{t}A_{i}).\] For \(t\in[-1,0)\) we define \(P_{t}(\omega;\mathbb{A})=P_{-t}(\omega;\mathbb{A}^{-1})^{-1}\). We call \(P_{t}(\omega;\mathbb{A})\) the _Lim-Palfia's power mean_ of order \(t\) for \(A_{1},\ldots,A_{n}\). Note that \[P_{1}(\omega;\mathbb{A})=\sum_{j=1}^{n}w_{j}A_{j}=\mathcal{A}(\omega;\mathbb{ A})\quad\text{ and }\quad P_{-1}(\omega;\mathbb{A})=\left(\sum_{j=1}^{n}w_{j}A_{j}^{-1} \right)^{-1}=\mathcal{H}(\omega;\mathbb{A}),\] where \(\mathcal{A}\) and \(\mathcal{H}\) denote the arithmetic and harmonic means respectively. One can easily see that for commuting \(A_{1},\ldots,A_{n}\) \[P_{t}(\omega;\mathbb{A})=\left(\sum_{i=1}^{n}w_{i}A_{i}^{t}\right)^{1/t}= \mathcal{Q}_{t}(\omega;\mathbb{A}),\] where \(\mathcal{Q}_{t}\) denotes the quasi-arithmetic mean of order \(t\); it can be defined for all \(t\in\mathbb{R}\), and \[\lim_{t\to 0}\mathcal{Q}_{t}(\omega;\mathbb{A})=\exp\left(\sum_{i=1}^{n}w_{i} \log A_{i}\right).\] The remarkable consequence of power means appeared in [21, 22] is that \(P_{t}\) converges monotonically to the Cartan mean \(\Lambda\) as \(t\to 0\) such that \[P_{-t}\leq P_{-s}\leq\cdots\leq\Lambda=\lim_{t\to 0}P_{t}\leq\cdots\leq P_{s}\leq P_{t} \tag{3.4}\] for \(0<s\leq t\leq 1\), where the Cartan mean \(\Lambda\) is the least squares mean for the Riemannian trace metric \(d_{R}\): \[\Lambda(\omega;A_{1},\ldots,A_{n}):=\operatorname*{arg\,min}_{X\in\mathbb{P}_ {m}}\sum_{j=1}^{n}w_{j}d_{R}^{2}(A_{j},X),\] and \(d_{R}(A,B)=\|\log A^{-1/2}BA^{-1/2}\|_{2}\). **Remark 3.1**.: Note that Lim-Palfia's power mean and Cartan mean are homogeneous. So applying Lemma 2.2 with the monotonicity (3.4) of Lim-Palfia's power means yields that \[P_{t}(\omega;\mathbb{A}) \searrow_{\omega\log} \Lambda(\omega;\mathbb{A})\qquad\text{as}\qquad t\searrow 0,\] \[P_{t}(\omega;\mathbb{A}) \nearrow_{\omega\log} \Lambda(\omega;\mathbb{A})\qquad\text{as}\qquad t\nearrow 0.\] **Theorem 3.2**.: _[_26_, Theorem 1]_ _Let \(\mathbb{A}=(A_{1},\ldots,A_{n})\in\mathbb{P}_{m}^{n}\) and \(\omega=(w_{1},\ldots,w_{n})\in\Delta_{n}\). Then_ \[\sum_{j=1}^{n}w_{j}\log A_{j}\leq 0\qquad\text{ implies}\qquad\Lambda(\omega;\mathbb{A})\leq I.\] **Proposition 3.3**.: _Let \(\mathbb{A}=(A_{1},\ldots,A_{n})\in\mathbb{P}_{m}^{n}\), \(\omega=(w_{1},\ldots,w_{n})\in\Delta_{n}\), and \(0<t\leq 1\). Then for any \(p>0\)_ \[\|P_{-t}(\omega;\mathbb{A}^{p})^{1/p}\|\leq\left\|\exp\left(\sum_{j=1}^{n}w_{ j}\log A_{j}\right)\right\|\leq\|P_{t}(\omega;\mathbb{A}^{p})^{1/p}\|. \tag{3.5}\] _Furthermore,_ \[P_{-t}(\omega;\mathbb{A}^{p})^{1/p}\prec_{w\log}\exp\left(\sum_{j=1}^{n}w_{j}\log A _{j}\right). \tag{3.6}\] Proof.: Let \(p>0\). Since the Lim-Palfia's power mean and log-Euclidean mean are homogeneous, by Lemma 2.2 it is enough for the second inequality of (3.5) to show that for \(0<t\leq 1\) \[P_{t}(\omega;\mathbb{A}^{p})^{1/p}\leq I\qquad\text{ implies }\qquad\exp\left(\sum_{j=1}^{n}w_{j}\log A_{j}\right)\leq I.\] Assume that \(P_{t}(\omega;\mathbb{A}^{p})\leq I\) for \(0<t\leq 1\). By (3.4) \(\Lambda(\omega;\mathbb{A}^{p})\leq I\), and \(\Lambda(\omega;\mathbb{A}^{p})^{1/p}\leq I\). Taking the limit as \(p\to 0^{+}\) implies that \[\exp\left(\sum_{j=1}^{n}w_{j}\log A_{j}\right)\leq I.\] Now assume that \(\exp\left(\sum_{j=1}^{n}w_{j}\log A_{j}\right)\leq I\). Since the logarithmic map is operator monotone, we have \(\sum_{j=1}^{n}w_{j}\log A_{j}\leq 0\). 
Then \(\sum_{j=1}^{n}w_{j}\log A_{j}^{p}=p\sum_{j=1}^{n}w_{j}\log A_{j}\leq 0\) for any \(p>0\). By Theorem 3.2 \[\Lambda(\omega;\mathbb{A}^{p})\leq I,\] and by (3.4) \(P_{-t}(\omega;\mathbb{A}^{p})\leq I\) for \(0<t\leq 1\). This completes the proof of (3.5). Furthermore, by (3.4) \(\Lambda^{k}P_{-t}(\omega;\mathbb{A}^{p})\leq\Lambda^{k}\Lambda(\omega;\mathbb{ A}^{p})\) for the \(k\)th antisymmetric tensor power \(\Lambda^{k}\). So \(\lambda_{1}(\Lambda^{k}P_{-t}(\omega;\mathbb{A}^{p}))\leq\lambda_{1}(\Lambda^ {k}\Lambda(\omega;\mathbb{A}^{p}))\), and by Lemma 2.1 (3) \[\lambda_{1}(\Lambda^{k}P_{-t}(\omega;\mathbb{A}^{p})^{1/p})=\lambda_{1}( \Lambda^{k}P_{-t}(\omega;\mathbb{A}^{p}))^{1/p}\leq\lambda_{1}(\Lambda^{k} \Lambda(\omega;\mathbb{A}^{p}))^{1/p}=\lambda_{1}(\Lambda^{k}\Lambda(\omega; \mathbb{A}^{p})^{1/p}).\] Since \(\Lambda(\omega;\mathbb{A})\prec_{\log}\exp\left(\sum_{j=1}^{n}w_{j}\log A_{j}\right)\) by [8, Theorem 1], we conclude that \[P_{-t}(\omega;\mathbb{A}^{p})^{1/p}\prec_{w\log}\Lambda(\omega;\mathbb{A}^{p} )^{1/p}\prec_{\log}\exp\left(\sum_{j=1}^{n}w_{j}\log A_{j}\right).\] **Remark 3.4**.: Note from [22, Proposition 3.5] that for \(t\in(0,1]\) \[\det P_{-t}(\omega;\mathbb{A})\leq\prod_{j=1}^{n}(\det A_{j})^{w_{j}},\] so (3.6) must be the weak log-majorization. A variant of Ando-Hiai inequality for power means has been shown in [23, Corollary 3.2]: for \(t\in(0,1]\) \[P_{t}(\omega;\mathbb{A})\leq I\quad\mbox{ implies }\quad P_{\frac{t}{p}}( \omega;\mathbb{A}^{p})\leq I\quad\mbox{ for all }p\geq 1.\] We provide different types of Ando-Hiai inequality for power means using Jensen inequalities [13]. Let \(X\) be a contraction. For any \(A>0\) we have \[(XAX^{*})^{p}\leq XA^{p}X^{*}\quad\mbox{ if }\ 1\leq p\leq 2, \tag{3.7}\] and \[(XAX^{*})^{p}\geq XA^{p}X^{*}\quad\mbox{ if }\ 0\leq p\leq 1. \tag{3.8}\] **Theorem 3.5**.: _Let \(p\geq 1\). Then_ * _if_ \(P_{t}(\omega;\mathbb{A})\geq I\) _then_ \(P_{t}(\omega;\mathbb{A})\leq P_{t}(\omega;\mathbb{A}^{p})\) _for_ \(0<t\leq 1\)_, and_ * _if_ \(P_{t}(\omega;\mathbb{A})\leq I\) _then_ \(P_{t}(\omega;\mathbb{A})\geq P_{t}(\omega;\mathbb{A}^{p})\) _for_ \(-1\leq t<0\)_._ Proof.: We first consider \(1\leq p\leq 2\). Assume that \(X:=P_{t}(\omega;\mathbb{A})\geq I\) for \(0<t\leq 1\). Then by taking the congruence transformation \[I=\sum_{j=1}^{n}w_{j}(X^{-1/2}A_{j}X^{-1/2})^{t}=\sum_{j=1}^{n}w_{j}\left[(X^{ -1/2}A_{j}X^{-1/2})^{p}\right]^{t/p}.\] Since \(0<t/p\leq 1\), the above identity reduces to \[I=P_{t/p}(\omega;(X^{-1/2}A_{1}X^{-1/2})^{p},\ldots,(X^{-1/2}A_{n}X^{-1/2})^{p }).\] Since \(X^{-1/2}\leq I\), Hansen's inequality (3.7) and the monotonicity of power means yield \[I\leq P_{t/p}(\omega;X^{-1/2}A_{1}^{p}X^{-1/2},\ldots,X^{-1/2}A_{n}^{p}X^{-1/2 }).\] Taking the congruence transformation by \(X^{1/2}\) implies that \(X\leq P_{t/p}(\omega;\mathbb{A}^{p})\). Since \(0<t/p\leq t\leq 1\) we obtain from (3.4) \[X\leq P_{t/p}(\omega;\mathbb{A}^{p})\leq P_{t}(\omega;\mathbb{A}^{p}).\] Replacing \(A_{j}\) by \(A_{j}^{2}\) we can extend the interval \([2,4]\), and successfully for all \(p\geq 1\). Assume that \(X:=P_{t}(\omega;\mathbb{A})\leq I\) for \(-1\leq t<0\). Then \(X^{-1}=P_{-t}(\omega;\mathbb{A}^{-1})\geq I\). By (i) with \(0<-t\leq 1\) \[X^{-1}\leq P_{-t}(\omega;\mathbb{A}^{-p}),\] equivalently, \(X\geq P_{-t}(\omega;\mathbb{A}^{-p})^{-1}=P_{t}(\omega;\mathbb{A}^{p})\). **Remark 3.6**.: We give another proof for Theorem 3.5 (i). Let \(1\leq p\leq 2\). Assume that \(X=P_{t}(\omega;\mathbb{A})\geq I\) for \(0<t\leq 1\). 
Since the map \(A\in\mathbb{P}_{m}\mapsto A^{p}\) is operator convex, \[I=\left[\sum_{j=1}^{n}w_{j}(X^{-1/2}A_{j}X^{-1/2})^{t}\right]^{p}\leq\sum_{j= 1}^{n}w_{j}(X^{-1/2}A_{j}X^{-1/2})^{pt}.\] By (3.7) and the monotonicity of the power map \(A\in\mathbb{P}_{m}\mapsto A^{t}\), \[I\leq\sum_{j=1}^{n}w_{j}(X^{-1/2}A_{j}^{p}X^{-1/2})^{t}.\] Taking congruence transformation by \(X^{1/2}\) implies \[X\leq\sum_{j=1}^{n}w_{j}X^{1/2}(X^{-1/2}A_{j}^{p}X^{-1/2})^{t}X^{1/2}=\sum_{j= 1}^{n}w_{j}X\#_{t}A_{j}^{p}=:f(X).\] Since the map \(f\) is operator monotone on \(\mathbb{P}_{m}\), we have \(X\leq f(X)\leq f^{2}(X)\leq\cdots\leq f^{k}(X)\) for all \(k\geq 1\). Taking the limit as \(k\to\infty\) yields \(X\leq P_{t}(\omega;\mathbb{A}^{p})\) for \(1\leq p\leq 2\). Replacing \(A_{j}\) by \(A_{j}^{2}\) we can extend the interval \([2,4]\), and successfully for all \(p\geq 1\). Applying Lemma 2.2 to Theorem 3.5 (ii) we obtain **Corollary 3.7**.: _Let \(-1\leq t<0\). Then_ \[\|P_{t}(\omega;\mathbb{A}^{p})^{1/p}\|\leq\|P_{t}(\omega;\mathbb{A})\|\] _for \(p\geq 1\), where \(\|\cdot\|\) denotes the operator norm._ **Remark 3.8**.: The following is the unique characterization of the Cartan mean among other multi-variable geometric means satisfying the Ando-Li-Mathias axioms: \[\Lambda(\omega;\mathbb{A})\leq I\qquad\text{implies}\qquad\Lambda(\omega; \mathbb{A}^{p})\leq I \tag{3.9}\] for all \(p\geq 1\). This is known as the Ando-Hiai inequality; see [26, Theorem 3, Corollary 6]. We can derive it by using Theorem 3.5 (ii). Indeed, assume that \(\Lambda(\omega;\mathbb{A})\leq I\). Then by (3.4) \(P_{t}(\omega;\mathbb{A})\leq I\) for \(-1\leq t<0\), and by Theorem 3.5 (ii) \(P_{t}(\omega;\mathbb{A}^{p})\leq I\). Taking the limit as \(t\to 0^{-}\) yields \(\Lambda(\omega;\mathbb{A}^{p})\leq I\). **Theorem 3.9**.: _Let \(\mathbb{A}=(A_{1},\ldots,A_{n})\in\mathbb{P}_{m}^{n}\), and \(\omega=(w_{1},\ldots,w_{n})\in\Delta_{n}\). Then_ \[\Lambda(\omega;\mathbb{A}^{p})^{1/p}\nearrow_{\l_{\log}}\exp\left(\sum_{j=1}^{n }w_{j}\log A_{j}\right)\quad\text{as}\quad p\searrow 0.\] Proof.: Note from [8] that \[\lim_{p\to 0}\Lambda(\omega;\mathbb{A}^{p})^{1/p}=\exp\left(\sum_{j=1}^{n}w_{j} \log A_{j}\right),\] and \[\Lambda(\omega;\mathbb{A}^{p})^{1/p}\prec_{\log}\exp\left(\sum_{j=1}^{n}w_{j} \log A_{j}\right).\] So it is enough to show that \(\Lambda(\omega;\mathbb{A}^{q})^{1/q}\prec_{\log}\Lambda(\omega;\mathbb{A}^{p}) ^{1/p}\) for \(0<p\leq q\). By (3.9), if \(\Lambda(\omega;\mathbb{A})\leq I\) then \(\Lambda(\omega;\mathbb{A}^{r})\leq I\) for any \(r\geq 1\) so \(\Lambda(\omega;\mathbb{A}^{r})^{1/r}\leq I\). Since the Cartan mean and \(\Lambda(\omega;\mathbb{A}^{r})^{1/r}\) are preserved by the antisymmetric tensor power and homogeneous, from Lemma 2.2 we have \(\Lambda(\omega;\mathbb{A}^{r})^{1/r}\prec_{\log}\Lambda(\omega;\mathbb{A})\) for all \(r\geq 1\). Letting \(r=q/p\) for \(0<p\leq q\) and replacing \(A_{j}\) by \(A_{j}^{p}\) we obtain \(\Lambda(\omega;\mathbb{A}^{q})^{1/q}\prec_{\log}\Lambda(\omega;\mathbb{A}^{p}) ^{1/p}\). **Remark 3.10**.: Ando and Hiai [3, 4] have shown that \((A^{p}\#_{t}B^{p})^{1/p}\) converges increasingly to the log-Euclidean mean as \(p\to 0^{+}\) with respect to the log-majorization: \[(A^{p}\#_{t}B^{p})^{1/p}\nearrow_{\l_{\log}}\exp((1-t)\log A+t\log B)\quad \text{as}\quad p\searrow 0.\] Theorem 3.9 is a generalization of the Ando-Hiai's log-majorization result to multi-variable geometric mean, which is the Cartan mean. 
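The two-matrix statement recalled in Remark 3.10 is easy to check numerically, since \(A\#_{t}B\) has a closed form. The following sketch (ours, for illustration only) compares the cumulative eigenvalue products of \((A^{p}\#_{t}B^{p})^{1/p}\) with those of \(\exp((1-t)\log A+t\log B)\) for decreasing \(p\).

```python
# Standalone numerical illustration (ours) of the Ando-Hiai log-majorization
#   (A^p #_t B^p)^(1/p)  prec_log  exp((1-t) log A + t log B),  p > 0.
import numpy as np

rng = np.random.default_rng(2)

def spd_fun(A, f):
    vals, vecs = np.linalg.eigh(A)
    return (vecs * f(vals)) @ vecs.T

def spd_pow(A, r):
    return spd_fun(A, lambda x: x**r)

def geom_mean(A, B, t):
    """Weighted geometric mean A #_t B = A^(1/2) (A^(-1/2) B A^(-1/2))^t A^(1/2)."""
    Ah, Aih = spd_pow(A, 0.5), spd_pow(A, -0.5)
    return Ah @ spd_pow(Aih @ B @ Aih, t) @ Ah

def cum_products(A):
    lam = np.sort(np.linalg.eigvalsh(A))[::-1]
    return np.cumprod(lam)

def random_spd(m):
    X = rng.standard_normal((m, m))
    return X @ X.T + np.eye(m)

m, t = 5, 0.3
A, B = random_spd(m), random_spd(m)
LE = spd_fun((1.0 - t) * spd_fun(A, np.log) + t * spd_fun(B, np.log), np.exp)
ref = cum_products(LE)
for p in (2.0, 1.0, 0.5, 0.1, 0.01):
    G = spd_pow(geom_mean(spd_pow(A, p), spd_pow(B, p), t), 1.0 / p)
    ratios = cum_products(G) / ref
    print(f"p = {p:5.2f}:  max cumulative-product ratio = {ratios.max():.6f} "
          f"(log-majorization predicts <= 1, with equality at k = m)")
```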
**Remark 3.11**.: Since the Lim-Palfia's power mean satisfies the arithmetic-power-harmonic mean inequalities: \[\mathcal{H}(\omega;\mathbb{A})=\left(\sum_{j=1}^{n}w_{j}A_{j}^{-1}\right)^{-1 }\leq P_{t}(\omega;\mathbb{A})\leq\sum_{j=1}^{n}w_{j}A_{j}=\mathcal{A}(\omega ;\mathbb{A})\] for any nonzero \(t\in[-1,1]\), it satisfies from [16, Theorem 4.2] \[\lim_{p\to 0}P_{t}(\omega;\mathbb{A}^{p})^{1/p}=\exp\left(\sum_{j=1}^{n}w_{j} \log A_{j}\right).\] Moreover, \(P_{t}(\omega;\mathbb{A}^{p})^{1/p}\prec_{w\log}\exp\left(\sum_{j=1}^{n}w_{j}\log A _{j}\right)\) for \(t\in[-1,0)\) by Proposition 3.3. One can naturally ask that the Lim-Palfia's power mean \(P_{t}(\omega;\mathbb{A}^{p})^{1/p}\) for \(t\in[-1,0)\) converges increasingly to the log-Euclidean mean as \(p\to 0^{+}\) with respect to the weak log-majorization. In order to show this, it remains an open problem as follows: for \(0<p\leq q\) \[P_{t}(\omega;\mathbb{A}^{q})^{1/q}\prec_{w\log}P_{t}(\omega;\mathbb{A}^{p})^{1 /p}.\] ## 4. Log-majorization of the \(t\)-\(z\) Renyi right mean Let \(A,B\in\mathbb{P}_{m}\). For \(0\leq t\leq 1\) and \(z>0\) \[Q_{t,z}(A,B)=\left(A^{\frac{1-t}{2z}}B^{\frac{t}{z}}A^{\frac{1-t}{2z}}\right) ^{z}\] is the matrix version of the \(t\)-\(z\) Renyi relative entropy [5, 24]. Especially, \(Q_{t,t}(A,B)\) is known as the sandwiched Renyi relative entropy [25]. This can be considered as a non-commutative version of geometric mean in the sense that \(Q_{t,z}(A,B)=A^{1-t}B^{t}\) for commuting \(A\) and \(B\). From this point of view it is interesting to find a log-majorization relation between \(Q_{t,z}(A,B)\) and \(A^{1/2}B^{1/2}\). **Theorem 4.1**.: _Let \(A,B\in\mathbb{P}_{m}\). For \(0\leq t\leq 1/2\) and \(t\leq z\leq 1\),_ * \(\lambda(Q_{t,z}(A,B))\prec_{\log}s(A^{1/2}B^{1/2})\)_, and_ * \(s(A^{t-\frac{1}{2}}Q_{t,z}(A,B)B^{\frac{1}{2}-t})\prec_{\log}s(A^{1/2}B^{1/2})\)_._ Proof.: Note that \(s(A^{1/2}B^{1/2})=\lambda((A^{1/2}BA^{1/2})^{1/2})\). Since \(Q_{t,z}(A,B)\) and \((A^{1/2}BA^{1/2})^{1/2}\) are invariant under the antisymmetric tensor product and homogeneous, it is enough from Lemma 2.2 to show that * \(A^{1/2}BA^{1/2}\leq I\) implies \(Q_{t,z}(A,B)\leq I\), * \(A^{1/2}BA^{1/2}\leq I\) implies \(A^{t-\frac{1}{2}}Q_{t,z}(A,B)B^{1-2t}Q_{t,z}(A,B)A^{t-\frac{1}{2}}\leq I\). Let \(0\leq t\leq 1/2\) and \(t\leq z\leq 1\). (i) We first prove it when \(B\geq I\). Assuming that \(A^{1/2}BA^{1/2}\leq I\), we have \(B\leq A^{-1}\) so \(B^{\frac{t}{z}}\leq A^{-\frac{t}{z}}\) by the Loewner-Heinz inequality with \(0<t\leq z\leq 1\). Then \[Q_{t,z}(A,B)\leq\left(A^{\frac{1-t}{2z}}A^{\frac{t}{z}}A^{\frac{1-t}{2z}} \right)^{z}=A^{1-2t}\leq I\] since \(A\leq B^{-1}\leq I\) and \(1-2t\geq 0\). So (i) holds when \(B\geq I\). Let \(\lambda_{m}:=\min\{\lambda_{i}(B):1\leq i\leq m\}\). Then \(\lambda_{m}^{-1}B\geq I\). By the preceding argument \[\lambda_{m}^{-1}Q_{t,z}(A,B)=Q_{t,z}(\lambda_{m}^{-1}A,\lambda_{m}^ {-1}B)\] \[\prec_{\log}((\lambda_{m}^{-1}A)^{1/2}(\lambda_{m}^{-1}B)(\lambda_ {m}^{-1}A)^{1/2})^{1/2}=\lambda_{m}^{-1}(A^{1/2}BA^{1/2})^{1/2},\] which completes the proof of (i). (ii) Assume that \(A^{1/2}BA^{1/2}\leq I\). Then \(B\leq A^{-1}\), and \(B^{1-2t}\leq A^{2t-1}\) by the Loewner-Heinz inequality since \(2t\in[0,1]\). 
Therefore we have \[A^{t-\frac{1}{2}}Q_{t,z}(A,B)B^{1-2t}Q_{t,z}(A,B)A^{t-\frac{1}{2 }} \leq A^{t-\frac{1}{2}}Q_{t,z}(A,B)A^{2t-1}Q_{t,z}(A,B)A^{t-\frac{1}{2}}\] \[=\left(A^{t-\frac{1}{2}}Q_{t,z}(A,B)A^{t-\frac{1}{2}}\right)^{2}.\] Since \(B\leq A^{-1}\), we obtain \(Q_{t,z}(A,B)\leq A^{1-2t}\) by the Loewner-Heinz inequality since \(0\leq t\leq z\leq 1\). So \[A^{t-\frac{1}{2}}Q_{t,z}(A,B)A^{t-\frac{1}{2}}\leq I,\] and thus, \(A^{t-\frac{1}{2}}Q_{t,z}(A,B)B^{1-2t}Q_{t,z}(A,B)A^{t-\frac{1}{2}}\leq I\). Moreover, \[\det\left[A^{t-\frac{1}{2}}Q_{t,z}(A,B)B^{1-2t}Q_{t,z}(A,B)A^{t-\frac{1}{2}} \right]=\det(AB)=\det(A^{1/2}BA^{1/2}),\] and hence, (ii) holds for \(t\in[0,1/2]\). The \(t\)-\(z\) Renyi right mean \(\Omega_{t,z}\) is defined as \[\Omega_{t,z}(\omega;\mathbb{A})=\operatorname*{arg\,min}_{X\in\mathbb{P}_{m}} \sum_{j=1}^{n}w_{j}\Phi_{t,z}(A_{j},X).\] Since the map \(A\in\mathbb{P}_{m}\mapsto\operatorname{tr}A^{t}\) for \(t\in(0,1)\) is strictly concave, the map \(X\in\mathbb{P}_{m}\mapsto\operatorname{tr}\Phi_{t,z}(A,X)\) is strictly concave for \(0<t\leq z<1\). So one can see that \(\Omega_{t,z}(\omega;\mathbb{A})\) coincides with the unique positive definite solution of the matrix nonlinear equation \[X=\sum_{j=1}^{n}w_{j}Q_{1-t,z}(X,A_{j}). \tag{4.10}\] Note that (4.10) is equivalent to \[X^{1-\frac{t}{z}}=\sum_{j=1}^{n}w_{j}X^{-\frac{t}{z}}\#_{z}A_{j}^{\frac{1-t}{z }}. \tag{4.11}\] See [10, 15, 18] for more details. **Theorem 4.2**.: _[_18_, Theorem 3.2]_ _Let \(0<t\leq z<1\). If \(\Omega_{t,z}(\omega;\mathbb{A})\leq I\) then_ \[\Omega_{t,z}(\omega;\mathbb{A})^{1-\frac{t}{z}}\geq\mathcal{A}(\omega;\mathbb{A }^{1-t}).\] _If \(\Omega_{t,z}(\omega;\mathbb{A})\geq I\) then the reverse inequality holds._ **Theorem 4.3**.: _[_11_, Theorem 13]_ _Let \(0<t\leq z<1\). Then we have_ \[\frac{1+z-t}{1-t}I-\frac{z}{1-t}\sum_{j=1}^{n}w_{j}A_{j}^{-\frac{1-t}{z}}\leq \Omega_{t,z}(\omega;\mathbb{A})\leq\left(\frac{1+z-t}{1-t}I-\frac{z}{1-t}\sum_ {j=1}^{n}w_{j}A_{j}^{\frac{1-t}{z}}\right)^{-1},\] _where the second inequality holds when \((1+z-t)I-z\sum_{j=1}^{n}w_{j}A_{j}^{\frac{1-t}{z}}\) is invertible._ **Theorem 4.4**.: _For \(0<t\leq z<1\),_ \[\|P_{1-t}(\omega;\mathbb{A})\|\leq\|\Omega_{t,z}(\omega;\mathbb{A})\|\leq\| \mathcal{Q}_{\frac{1-t}{z}}(\omega;\mathbb{A})\|.\] _Furthermore, \(\|\mathcal{Q}_{\frac{t-1}{z}}(\omega;\mathbb{A})\|\leq\|\Omega_{t,z}(\omega; \mathbb{A})\|\)._ Proof.: Let \(0<t\leq z<1\). Since the Renyi right mean \(\Omega_{t,z}\), power mean \(P_{1-t}\) and quasi-arithmetic mean \(\mathcal{Q}_{\frac{1-t}{z}}\) are all homogeneous, it is enough from Lemma 2.2 to show that for each cases \[\Omega_{t,z}(\omega;\mathbb{A})\leq I \quad\text{implies}\quad\ P_{1-t}(\omega;\mathbb{A})\leq I,\] \[\mathcal{Q}_{\frac{1-t}{z}}(\omega;\mathbb{A})\leq I \quad\text{implies}\quad\ \Omega_{t,z}(\omega;\mathbb{A})\leq I.\] By Theorem 4.2, \(\Omega_{t,z}(\omega;\mathbb{A})\leq I\) implies that \[\sum_{j=1}^{n}w_{j}A_{j}^{1-t}\leq\Omega_{t,z}(\omega;\mathbb{A})^{1-\frac{t} {z}}\leq I,\] and hence, \(\mathcal{Q}_{1-t}(\omega;\mathbb{A})=\left(\sum_{j=1}^{n}w_{j}A_{j}^{1-t} \right)^{\frac{1}{1-t}}\leq I\). By [23, Theorem 3.1]\(P_{1-t}(\omega;\mathbb{A})\leq I\). Next, we assume \(\mathcal{Q}_{\frac{1-t}{z}}(\omega;\mathbb{A})\leq I\). 
Then \(\sum_{j=1}^{n}w_{j}A_{j}^{\frac{1-t}{z}}\leq I\) so one can see that \[\frac{1+z-t}{1-t}I-\frac{z}{1-t}\sum_{j=1}^{n}w_{j}A_{j}^{\frac{1-t}{z}}\geq I.\] By assumption, the second inequality in Theorem 4.3 holds, and hence, we have \[\Omega_{t,z}(\omega;\mathbb{A})^{\frac{1-t}{z}}\leq\left(\frac{1+z-t}{1-t}I-\frac {z}{1-t}\sum_{j=1}^{n}w_{j}A_{j}^{\frac{1-t}{z}}\right)^{-1}\leq I.\] Moreover, assuming that \(\Omega_{t,z}(\omega;\mathbb{A})\leq I\) yields \[\frac{1+z-t}{1-t}I-\frac{z}{1-t}\sum_{j=1}^{n}w_{j}A_{j}^{-\frac{1-t}{z}}\leq I\] by Theorem 4.3. Then \(\sum_{j=1}^{n}w_{j}A_{j}^{\frac{t-1}{z}}\geq I\), and hence, \(\mathcal{Q}_{\frac{t-1}{z}}(\omega;\mathbb{A})\leq I\) since \(t\in(0,1)\). This completes the proof. ## 5. Boundedness of the Renyi power mean Another type of the Renyi power mean has been introduced in [12], as a unique positive definite solution of the equation \[X=\sum_{j=1}^{n}w_{j}Q_{t,z}(A_{j},X)=\sum_{j=1}^{n}w_{j}\left(A_{j}^{\frac{1- t}{2z}}X^{\frac{t}{z}}A_{j}^{\frac{1-t}{2z}}\right)^{z}. \tag{5.12}\] We denote it as \(\mathcal{R}_{t,z}(\omega;\mathbb{A})\). We see the inequalities between the Renyi power mean and quasi-arithmetic mean by using Jensen type inequalities. **Theorem 5.1**.: _Let \(0<t\leq z<1\). If \(\mathcal{R}_{t,z}(\omega;\mathbb{A})\leq I\) then_ \[\mathcal{R}_{t,z}(\omega;\mathbb{A})\leq\mathcal{Q}_{\frac{1}{p}}(\omega; \mathbb{A}^{1-t})=\left(\sum_{j=1}^{n}w_{j}A_{j}^{\frac{1-t}{p}}\right)^{p}\] _for all \(p\) such that \(p\leq z\)._ Proof.: Let \(X=\mathcal{R}_{t,z}(\omega;\mathbb{A})\leq I\) for \(0<t\leq z<1\). Since \(X^{\frac{t}{z}}\leq I\), we have \(A_{j}^{\frac{1-t}{2z}}X^{\frac{t}{z}}A_{j}^{\frac{1-t}{2z}}\leq A_{j}^{\frac{ 1-t}{z}}\) for each \(j=1,\ldots,n\). Then from the equation (5.12) \[X=\sum_{j=1}^{n}w_{j}\left(A_{j}^{\frac{1-t}{2z}}X^{\frac{t}{z}}A_{j}^{\frac{ 1-t}{2z}}\right)^{z}\leq\sum_{j=1}^{n}w_{j}A_{j}^{1-t}.\] Since the map \(\mathbb{P}_{m}\ni A\mapsto A^{z}\) is concave, we obtain \[X\leq\sum_{j=1}^{n}w_{j}A_{j}^{1-t}\leq\left[\sum_{j=1}^{n}w_{j}A_{j}^{\frac{ 1-t}{z}}\right]^{z}=\mathcal{Q}_{\frac{1}{z}}(\omega;\mathbb{A}^{1-t}).\] Moreover, \(\mathcal{Q}_{p}\) is monotone on \(p\in(-\infty,-1]\cup[1,\infty)\) from [19, Theorem 5.1] so \[\mathcal{Q}_{\frac{1}{z}}(\omega;\mathbb{A}^{1-t})\leq\mathcal{Q}_{\frac{1}{p}} (\omega;\mathbb{A}^{1-t}),\] for \(0<p\leq z<1\). Hence, we completes the proof. **Lemma 5.2**.: _Let \(0<t\leq z<1\)._ 1. _If_ \(A_{j}\leq I\) _for all_ \(j\)_, then_ \(\mathcal{R}_{t,z}(\omega;\mathbb{A})\leq I\)_._ 2. _If_ \(A_{j}\geq I\) _for all_ \(j\)_, then_ \(\mathcal{R}_{t,z}(\omega;\mathbb{A})\geq I\)_._ Proof.: Assume that \(A_{j}\leq I\) for all \(j\). Let \(X=\mathcal{R}_{t,z}(\omega;\mathbb{A})\) for \(0<t\leq z<1\). Suppose that \(\lambda_{1}(X)>1\). Since \(X\leq\lambda_{1}(X)I\), \[X=\sum_{j=1}^{n}w_{j}\left(A_{j}^{\frac{1-t}{2z}}X^{\frac{t}{z}}A_{j}^{\frac{1 -t}{2z}}\right)^{z}\leq\lambda_{1}(X)^{t}\sum_{j=1}^{n}w_{j}A_{j}^{1-t}\leq \lambda_{1}(X)^{t}I.\] This inequality implies that \(\lambda_{1}(X)\leq\lambda_{1}(X)^{t}\), which is a contradiction because \(\lambda_{1}(X)>1\) and \(0<t<1\). So \(\lambda_{1}(X)\leq 1\), equivalently \(X\leq I\). In order to prove (2), suppose that \(\lambda_{m}(X)<1\). Since \(X\geq\lambda_{m}(X)I\), the similar argument as above yields \(\lambda_{m}(X)\geq\lambda_{m}(X)^{t}\), but it is a contradiction. Thus, \(\lambda_{m}(X)\geq 1\), equivalently \(X\geq I\). 
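The fixed-point characterization (5.12) suggests a simple way to compute \(\mathcal{R}_{t,z}\) numerically and to test the bounds above on random data. The sketch below is our own exploratory code: it iterates the defining equation directly (convergence of this plain Picard iteration is assumed here, not proven) and then checks Lemma 5.2 (1) and the bound of Theorem 5.1.

```python
# Exploratory sketch (ours): the Renyi power mean R_{t,z} of (5.12) via naive
# fixed-point iteration, plus numerical checks of Lemma 5.2(1) and Theorem 5.1.
import numpy as np

rng = np.random.default_rng(3)

def spd_fun(A, f):
    vals, vecs = np.linalg.eigh(A)
    return (vecs * f(vals)) @ vecs.T

def spd_pow(A, r):
    return spd_fun(A, lambda x: x**r)

def renyi_power_mean(weights, mats, t, z, iters=500, tol=1e-12):
    X = sum(w * A for w, A in zip(weights, mats))      # start at the arithmetic mean
    for _ in range(iters):
        X_new = sum(
            w * spd_pow(spd_pow(A, (1 - t) / (2 * z)) @ spd_pow(X, t / z)
                        @ spd_pow(A, (1 - t) / (2 * z)), z)
            for w, A in zip(weights, mats))
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X

def loewner_leq(A, B, tol=1e-9):
    return bool(np.min(np.linalg.eigvalsh(B - A)) >= -tol)

m, n, t, z = 4, 3, 0.4, 0.6
mats = []
for _ in range(n):
    Y = rng.standard_normal((m, m))
    S = Y @ Y.T + np.eye(m)
    mats.append(S / (np.linalg.eigvalsh(S).max() + 0.1))  # scale so that A_j <= I
w = np.full(n, 1.0 / n)

X = renyi_power_mean(w, mats, t, z)
print("Lemma 5.2(1): R <= I ?", loewner_leq(X, np.eye(m)))

p = z / 2                                                 # any p <= z
Q = spd_pow(sum(wi * spd_pow(A, (1 - t) / p) for wi, A in zip(w, mats)), p)
print("Theorem 5.1 bound: R <= Q_{1/p}(w; A^{1-t}) ?".replace("{", "(").replace("}", ")"),
      loewner_leq(X, Q))
```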
In the following we denote as \(\lambda_{M}:=\max\{\lambda_{1}(A_{j}):1\leq j\leq n\}\). **Corollary 5.3**.: _Let \(0<t\leq z<1\). Then for all \(p\) such that \(p\leq z\)_ \[\mathcal{R}_{t,z}(\omega;\mathbb{A})\leq\lambda_{M}^{t}\mathcal{Q}_{\frac{1}{p }}(\omega;\mathbb{A}^{1-t}).\] Proof.: Since \(\lambda_{M}^{-1}A_{j}\leq I\) for all \(j\), we have \(\mathcal{R}_{t,z}\left(\omega;\lambda_{M}^{-1}\mathbb{A}\right)\leq I\) by Lemma 5.2 (1). From Theorem 5.1 together with the homogeneity of the Renyi power mean, \[\lambda_{M}^{-1}\mathcal{R}_{t,z}(\omega;\mathbb{A})=\mathcal{R}_{t,z}\left( \omega;\lambda_{M}^{-1}\mathbb{A}\right)\leq\left(\sum_{j=1}^{n}w_{j}(\lambda_ {M}^{-1}A_{j})^{\frac{1-t}{p}}\right)^{p}=\lambda_{M}^{t-1}\left(\sum_{j=1}^{n }w_{j}A_{j}^{\frac{1-t}{p}}\right)^{p}.\] By simplifying the terms of \(\lambda_{M}\) we complete the proof. **Theorem 5.4**.: _Let \(0<t\leq z<1\). Then_ \[\mathcal{R}_{t,z}(\omega;\mathbb{A})^{\frac{1-t}{2}}\geq\lambda_{M}^{-\frac{( 1-t)(1-z)}{2z}}\sum_{j=1}^{n}w_{j}A_{j}^{\frac{1-t}{2z}}.\] Proof.: We first assume that \(A_{j}\leq I\) for all \(j\). Let \(X=\mathcal{R}_{t,z}(\omega;\mathbb{A})\) for \(0<t\leq z<1\). By (3.8) \[X=\sum_{j=1}^{n}w_{j}\left(A_{j}^{\frac{1-t}{2z}}X^{\frac{t}{z}}A_{j}^{\frac{1-t }{2z}}\right)^{z}\geq\sum_{j=1}^{n}w_{j}A_{j}^{\frac{1-t}{2z}}X^{t}A_{j}^{ \frac{1-t}{2z}}.\] Taking the congruence transformation by \(X^{\frac{t}{2}}\) and applying the convexity of a square map yield \[X^{1+t}\geq\sum_{j=1}^{n}w_{j}\left(X^{\frac{t}{2}}A_{j}^{\frac{1-t}{2z}}X^{ \frac{t}{2}}\right)^{2}\geq\left(\sum_{j=1}^{n}w_{j}X^{\frac{t}{2}}A_{j}^{ \frac{1-t}{2z}}X^{\frac{t}{2}}\right)^{2}.\] Since the square root map is operator monotone, we have \(X^{\frac{1+t}{2}}\geq\sum_{j=1}^{n}w_{j}X^{\frac{t}{2}}A_{j}^{\frac{1-t}{2z}} X^{\frac{t}{2}}\). Taking the congruence transformation by \(X^{-t/2}\) we obtain \[X^{\frac{1-t}{2}}\geq\sum_{j=1}^{n}w_{j}A_{j}^{\frac{1-t}{2z}}. \tag{5.13}\] Now, replacing \(A_{j}\) by \(\lambda_{M}^{-1}A_{j}(\leq I)\) for all \(j\) in (5.13) we have \[\mathcal{R}_{t,z}\left(\omega;\lambda_{M}^{-1}\mathbb{A}\right)^{\frac{1-t}{2 }}\geq\sum_{j=1}^{n}w_{j}\left(\lambda_{M}^{-1}A_{j}\right)^{\frac{1-t}{2z}}.\] Since the Renyi power mean \(\mathcal{R}_{t,z}\) is homogeneous, it reduces to \[\lambda_{M}^{\frac{t-1}{2}}\mathcal{R}_{t,z}\left(\omega;\mathbb{A}\right)^{ \frac{1-t}{2}}\geq\lambda_{M}^{\frac{t-1}{2z}}\sum_{j=1}^{n}w_{j}A_{j}^{\frac{ 1-t}{2z}}.\] By simplifying the terms of \(\lambda_{M}\) we obtain the desired inequality. **Remark 5.5**.: The multi-variable matrix mean on the open convex cone \(\mathbb{P}_{m}\) can be defined as a map \(G:\Delta_{n}\times\mathbb{P}_{m}^{n}\rightarrow\mathbb{P}_{m}\) satisfying the idempotency: \(G(\omega;A,\ldots,A)=A\) for any \(\omega\in\Delta_{n}\) and \(A\in\mathbb{P}_{m}\). Boundedness of the multi-variable matrix mean plays an important role in operator inequality and majorization. Especially, the multi-variable matrix mean \(G\) satisfying the arithmetic-\(G\)-harmonic mean inequalities \[\left(\sum_{j=1}^{n}w_{j}A_{j}^{-1}\right)^{-1}\leq G(\omega;A_{1},\ldots,A_{n })\leq\sum_{j=1}^{n}w_{j}A_{j}\] fulfills the extended version of Lie-Trotter formula [16]: \[\lim_{s\to 0}G(\omega;A_{1}^{s},\ldots,A_{n}^{s})^{1/s}=\exp\left(\sum_{j=1}^{n}w_{ j}\log A_{j}\right). \tag{5.14}\] See [17, 20] for more information. We here have established boundedness of the Renyi power mean, but it is still open whether (5.14) holds for the Renyi power mean. 
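The extended Lie-Trotter formula (5.14) can at least be probed numerically. The sketch below (ours) checks it for the arithmetic and harmonic means, which have closed forms; the open question for \(\mathcal{R}_{t,z}\) could be probed in the same way by substituting the fixed-point iteration sketched after Lemma 5.2, although such experiments would of course not settle the question.

```python
# Sketch (ours): numerical check of the Lie-Trotter-type limit (5.14) for the
# arithmetic and harmonic means of positive definite matrices.
import numpy as np

rng = np.random.default_rng(4)

def spd_fun(A, f):
    vals, vecs = np.linalg.eigh(A)
    return (vecs * f(vals)) @ vecs.T

def random_spd(m):
    X = rng.standard_normal((m, m))
    return X @ X.T + np.eye(m)

m, n = 4, 3
mats = [random_spd(m) for _ in range(n)]
wts = np.full(n, 1.0 / n)
LE = spd_fun(sum(w * spd_fun(A, np.log) for w, A in zip(wts, mats)), np.exp)

def arithmetic(mats_s):
    return sum(w * A for w, A in zip(wts, mats_s))

def harmonic(mats_s):
    return np.linalg.inv(sum(w * np.linalg.inv(A) for w, A in zip(wts, mats_s)))

for s in (1.0, 0.1, 0.01, 0.001):
    for name, G in (("arithmetic", arithmetic), ("harmonic", harmonic)):
        Gs = spd_fun(G([spd_fun(A, lambda x: x**s) for A in mats]),
                     lambda x: x**(1.0 / s))
        err = np.linalg.norm(Gs - LE) / np.linalg.norm(LE)
        print(f"s = {s:7.3f}  {name:10s}  relative deviation from log-Euclidean: {err:.2e}")
```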
**Acknowledgement** No potential competing interest was reported by the authors. The work of S. Kim was supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2022R1A2C4001306).
2308.05783
Toward Globally Optimal State Estimation Using Automatically Tightened Semidefinite Relaxations
In recent years, semidefinite relaxations of common optimization problems in robotics have attracted growing attention due to their ability to provide globally optimal solutions. In many cases, it was shown that specific handcrafted redundant constraints are required to obtain tight relaxations and thus global optimality. These constraints are formulation-dependent and typically identified through a lengthy manual process. Instead, the present paper suggests an automatic method to find a set of sufficient redundant constraints to obtain tightness, if they exist. We first propose an efficient feasibility check to determine if a given set of variables can lead to a tight formulation. Secondly, we show how to scale the method to problems of bigger size. At no point of the process do we have to find redundant constraints manually. We showcase the effectiveness of the approach, in simulation and on real datasets, for range-based localization and stereo-based pose estimation. Finally, we reproduce semidefinite relaxations presented in recent literature and show that our automatic method always finds a smaller set of constraints sufficient for tightness than previously considered.
Frederike DΓΌmbgen, Connor Holmes, Ben Agro, Timothy D. Barfoot
2023-08-10T16:30:42Z
http://arxiv.org/abs/2308.05783v5
# Toward Globally Optimal State Estimation Using Automatically Tightened Semidefinite Relaxations ###### Abstract In recent years, semidefinite relaxations of common optimization problems in robotics have attracted growing attention due to their ability to provide globally optimal solutions. In many cases, it was shown that specific handcrafted redundant constraints are required to obtain tight relaxations and thus global optimality. These constraints are formulation-dependent and typically require a lengthy manual process to find. Instead, the present paper suggests an automatic method to find a set of sufficient redundant constraints to obtain tightness, if they exist. We first propose an efficient feasibility check to determine if a given set of variables can lead to a tight formulation. Secondly, we show how to scale the method to problems of bigger size. At no point of the process do we have to manually find redundant constraints. We showcase the effectiveness of the approach, in simulation and on real datasets, for range-based localization and stereo-based pose estimation. Finally, we reproduce semidefinite relaxations presented in recent literature and show that our automatic method finds a smaller set of constraints sufficient for tightness than previously considered. Optimization and optimal control, Localization, Robot Safety, Global Optimality ## I Introduction Many problems encountered in robotic state estimation, such as calibration and simultaneous localization and mapping (SLAM), are typically posed as nonlinear least-squares optimization problems [1, 2]. Widely adopted solvers used to tackle these problems, such as Gauss-Newton (GN) and Levenberg-Marquardt (LM), have only local, if any, convergence guarantees and may terminate in suboptimal solutions [3]. Over the past years, there has been a growing effort to exploit semidefinite relaxations of these optimization problems. Semidefinite relaxations open the door to global optimality in at least two different ways: in certain cases, a (convex) semidefinite program (SDP) (or a sequence thereof) may be solved instead of the original nonconvex problem to find the globally optimal solution [4, 5, 6, 7]. In other cases, the Lagrangian dual of the SDP offers the possibility to construct so-called 'optimality certificates' [8, 9] to determine the global optimality of the solutions obtained by local solvers. The performance and feasibility of the aforementioned methods greatly depends on whether the SDP relaxation is tight. For example, the globally optimal solution to the original problem can only be extracted from the SDP solution when it is rank one, in which case the relaxation is tight [8, 5, 10]. Similarly, certifiable algorithms work only when strong duality obtains [11], _i.e._, when the cost the relaxed problem solution equals the cost of the original problem [8, 5]. Tightness can also be a computational advantage; some state-of-the-art SDP solvers, for instance, work only for problems with low-rank optimal solutions [12, 6]. One important enabler for tight relaxations has been a mathematical framework called _Lasserre's hierarchy_[13]. Put simply, the hierarchy consists of a sequence of semidefinite relaxations where polynomial substitutions of increasing order are added to the original problem. Calling the original variable dimension \(d\) and the hierarchy order \(k\), each level results in a \(N_{k}\)-dimensional SDP, with \(N_{k}:=\binom{d+k}{k}\). 
Astonishingly, under weak technical assumptions, any problem that can be written as a polynomial optimization problem (POP) can be 'lifted' to a high enough order \(k\) to allow for a tight relaxation. In theory, the required order may be infinite, but many follow-up works have shown that tightness is obtained within a few hierarchy levels only [14, 8, 15, 16]. More recently, it has been shown that many problems admit a _sparse Lasserre's hierarchy_, meaning that only some of the \(N_{k}\) terms may be required at each level [17, 7]. As SDPs scale poorly with problem dimension, it is desirable to achieve tightness with as few additional higher-order substitutions as possible (ideally, with none). For this matter, it has been shown that so-called redundant constraints are paramount [8, 10, 14]. However, to this date, these constraints are usually the result of a lengthy manual search process and it is often hard to retrace how the constraints were discovered [14]. In [7], a method to find all 'trivially satisfied' constraints is provided, but not all of these constraints may Fig. 1: The proposed method in a nutshell: we circumvent the lengthy process of finding redundant constraints to tighten a given semidefinite relaxation, using instead a sampling-based approach to automatically find all possible constraints. This allows for the quick evaluation of different formulations and substitutions of a given optimization problem, hopefully lowering the barrier for SDPs to be more widely adopted for finding globally optimal solutions to optimization problems in robotics. be necessary, and important constraints might be missed. To make matters worse, using different formulations may lead to entirely different forms and numbers of required redundant constraints. Due to the lack of a systematic method of finding the right formulation and sufficient redundant constraints, practitioners often have to spend great effort in trial-and-error reformulations. This adds significant overhead as opposed to easy-to-use local solvers, and thus may hinder the wide adoption of SDP methods in robotics. In this paper, we provide tools that help automate the search for redundant constraints required for tightness. In particular, the proposed methods allow us to 1. determine, in only a few lines of code, if a problem in a given form can be tightened by adding enough redundant constraints. Notably, no manual steps for guessing the redundant constraints are required. This step is purposefully kept simple, allowing for quick evaluation of any problem in a given formulation (AutoTight). 2. automatically determine a set of 'constraint templates' that can be generalized to any number of variables, requiring again no need to explicitly model or interpret the found constraints (AutoTemplate). The focus of AutoTight is feasibility and it should be performed on a small example problem. The focus of AutoTemplate is scalability, enabling to generalize the findings from AutoTight to problems of any size, which is a hard requirement for typically high-dimensional problems encountered in robotics. The only prerequisite for using the provided tools is a method of randomly generating many problem setups (also called a'sampling oracle' in the literature [18]). We believe that most roboticists generate such a method as part of their standard development process, and if not, can do so fairly easily in only a few lines of code. This paper is structured as follows. We put the proposed method in context with related work in Section II. 
Then, we introduce mathematical preliminaries for relaxing a non-linear least-squares problem to an SDP in Section III. In Section IV, we present our method to determine the feasibility of tightening a given problem. Building on this method, in Section V we propose a scalable method for determining constraint templates that can be applied to any number of variables. We use the methods to provide novel insights on two state-estimation problems in Sections VI-B and VI-C, and on previously studied relaxations in VI-D. Finally, we test the method on real-world datasets for range-only (RO) and stereo-camera localization in Section VII and conclude in Section VIII. ## II Related Work The list of problems in robotics and computer vision that have been solved using semidefinite relaxations is long and continues to grow. In vision-based state estimation, semidefinite relaxations have been widely explored, for example to solve rotation averaging [19, 20, 21] or to perform camera pose estimation from pixel measurements [10, 22]. The first theoretical guarantees on tightness of these and other problems were given in [23, 9]. A set of analytical redundant constraints that successfully tightens many problem instances involving rotations has been proposed in [24, 10] and used successfully in follow-up works to certify, for instance, hand-eye calibration [25] and generalized-essential-matrix estimation [26]. Follow-up works have shown that tight relaxations can be achieved for robust cost functions, too, which account for outliers [16, 8]. A great overview of many successfully tightened problems and robust cost functions is given in [7], in which a recipe for constructing trivially satisfied redundant constraints is also provided. Robotics planning and control problems have recently also seen a surge of relaxation-based methods [15, 27, 28]. Notably, specific redundant constraints (again, analytically specified) were found to be paramount for tightness in [28]. For some problems, no redundant constraints are required for tightness. For these problems, methods based on the _Burer Monteiro_ approach [29] and the _Riemannian staircase_[30] have been shown to be extremely effective at finding the optimal solution with speeds competitive with efficient local solvers [4, 5, 31, 32]. Other methods have explored fast global optimality certificates of solutions of local solvers [33, 34]. To date, whenever redundant constraints are required for tightness, SDP solvers are generally too slow for real-time performance [7]. However, recent advances have shown that solvers can be significantly sped up when the optimal solution is of low rank [35, 6, 7]. More progress in finding faster SDP solvers for these convex relaxations is a requirement to enable the large-scale adoption of SDPs for robotics; another requirement is finding the necessary redundant constraints for a larger class of problems. We hope that the method proposed in this paper contributes to the latter. Recently, a sampling paradigm has been explored in the sums-of-squares (SOS) literature to overcome some of the limitations of SDP solvers [18].1 The authors suggest to sample feasible points of an SOS program and to solve an SDP including only a minimally required number of samples. The method thus implicitly exploits coordinate ring structure of the variety without the use of advanced concepts such as Grobner bases [36]. This solution has been picked up and shown great promise on small problems [37]. 
We use a similar paradigm in this paper, but instead of solving a sampling-based SDP, we use the samples to find generalizable constraints. Not only does this provide more insight into the kind of redundant constraints required to tighten standard SDP problems of a wide range of problems, it also allows us to generalize to novel, higher-dimensional problems. Footnote 1: There is a tight connection between the SOS relaxation and Lasserre’s hierarchy (also called moment relaxation in this context); a clear description of this connection is given in [21]. ## III Preliminaries ### _Notation_ We denote vectors and matrices by bold-face lowercase and uppercase letters, respectively. The transpose of matrix \(\mathbf{A}\) is written as \(\mathbf{A}^{\top}\). The identity matrix in \(d\) dimensions is \(\mathbf{I}_{d}\), and vector \(\mathbf{e}_{d}\) is the \(d\)-th standard basis vector (the \(d\)-th column of the identity matrix). A positive-semidefinite (PSD) matrix is written as \(\mathbf{X}\succeq 0\), and we denote the space of \(N\times N\) PSD matrices by \(\mathbb{S}_{+}^{N}\). The inner product is denoted by \(\langle\cdot,\cdot\rangle\), and the matrix inner product is defined as \(\langle\mathbf{A},\mathbf{B}\rangle=\operatorname{tr}\left(\mathbf{A}^{\top}\mathbf{B}\right)\) where \(\operatorname{tr}\left(\cdot\right)\) is the trace operator. We introduce \(\operatorname{vech}(\cdot)\) which extracts the elements of the upper-triangular part of a matrix, and divides the diagonal elements by \(\sqrt{2}\). This ensures that \(\langle\mathbf{A},\mathbf{B}\rangle=\operatorname{vech}\left(\mathbf{A}\right)^{\top} \operatorname{vech}\left(\mathbf{B}\right)\), and is commonly used in SDP solvers [38]. We denote the inverse operation by \(\operatorname{vech}^{-1}(\cdot)\). \(\mathbf{x}[k]\) denotes the \(k\)-th element of vector \(\mathbf{x}\). For shorter notation, we use \([N]\) for the index set \(\{1,\dots,N\}\). ### _Semi-definite Relaxations_ In the remainder of this section, we provide theoretical background on semidefinite relaxations and duality theory necessary to understand this paper for the nonexpert reader. For an in-depth introduction to these topics we refer to [1, 3]. Most generally speaking, the subject of this paper is optimization problems of the form \[\min_{\mathbf{\theta}\in\mathbb{R}^{d}}\{c(\mathbf{\theta})\,\big{|}\,h_{i}(\mathbf{ \theta})=0,i\in[N_{h}]\}, \tag{1}\] where \(\mathbf{\theta}\) is a decision variable, \(c(\cdot)\) is the cost, and \(h_{i}(\cdot)\) are equality constraints.2 In robotics, the cost is most commonly a (robust) least-squares cost function, and the constraints may enforce the nature of the decision variables, such as \(SO(3)\) for rotations or \(SE(3)\) for poses [2]. Footnote 2: We focus on equality constraints here for the sake of clarity. Note that inequality constraints can be added as long as they can also be written as quadratic constraints in the lifted vector and thus carried forward as quadratic inequality constraints in the relaxations. We include one example of inequality constraints in Section V–D. The problems in which we are interested can be 'lifted' to a quadratically constrained quadratic program (QCQP). This includes, for instance, any POP; we show examples of a quartic and a rational cost function in Section VI. 
For such problems, we can rewrite (1) as \[\min_{\mathbf{x}\in\mathbb{R}^{N}}\{f(\mathbf{x})\,\big{|}\,g_{i}(\mathbf{x})=0,l_{j}(\mathbf{ x})=0,i\in[N_{h}],j\in[N_{l}]\}, \tag{2}\] where \(f\) and \(g_{i}\) are now quadratic in the lifted vector \(\mathbf{x}\). The lifted vector is given by \[\mathbf{x}^{\top}=\begin{bmatrix}1&\mathbf{\theta}^{\top}&z_{1}&\cdots&z_{N_{l}}\end{bmatrix}, \tag{3}\] where we have introduced \(z_{l}:=\ell_{l}(\mathbf{\theta})\), higher-order lifting functions of \(\mathbf{\theta}\). By choosing enough of these substitutions, we can enforce that each substitution can itself be written as a quadratic constraint: \(l_{j}(\mathbf{x})=0\). We have also added \(h\) in (3) as a homogenization variable, allowing constant and linear functions to be written as quadratic functions. We illustrate these concepts in the following example: **Example** (stereo-1D).: _Inspired by stereo-based localization problems, which typically involve rational cost functions, we propose the following pedagogical example problem:_ \[\min_{\theta}\sum_{i=1}^{N}\left(\frac{1}{(\theta-a_{i})}\right)^{2}, \tag{4}\] _where \(\theta\in\mathbb{R}\) is the decision variable, and \(a_{i}\in\mathbb{R}\) are known. Using the lifted vector_ \[\mathbf{x}^{\top}=\begin{bmatrix}h&\theta&z_{1}&\cdots&z_{N}\end{bmatrix},\quad z _{i}=\ell_{i}(\mathbf{\theta}):=\frac{1}{\theta-a_{i}}, \tag{5}\] _we can rewrite (24) in the form (2), with \(f(\mathbf{x})=\sum_{i=1}^{N}\mathbf{x}[2+i]^{2}\), and \(l_{i}(\mathbf{x})=\mathbf{x}[2+i]\mathbf{x}[1]-\mathbf{x}[2+i]a_{i}=0\), which are both quadratic functions in the lifted variable \(\mathbf{x}^{\top}=\begin{bmatrix}1&\theta&z_{1}&\cdots&z_{N}\end{bmatrix}\)._ Since all functions in (2) are quadratic in the lifted vector, we can now rewrite (2) as \[\min_{\mathbf{x}\in\mathbb{R}^{N}}\{\mathbf{x}^{\top}\mathbf{Qx}\,\big{|}\,\mathbf{x}^{\top} \mathbf{A}_{0}\mathbf{x}=1,\mathbf{x}^{\top}\mathbf{A}_{i}\mathbf{x}=0,i\in[N_{A}]\}, \tag{6}\] where \(\mathbf{Q}\) and \(\mathbf{A}_{i},i\in[N_{A}]\) are the cost and constraint matrices, respectively, and \(N_{A}=N_{h}+N_{l}\). The matrix \(\mathbf{A}_{0}\) enforces the homogenization variable through the constraint \(\mathbf{x}[1]^{2}=1\).3 We call the constraints in (6) the 'primary constraints'. Footnote 3: Technically, the first element of \(\mathbf{x}\) may thus take the value \(-1\), but this does not pose a problem as the whole vector can then be simply negated. **Example** (stereo-1D, cont'd).: _The cost and constraints matrices for the toy stereo problem are zero except for \(\mathbf{Q}[i,i]=1\) for \(i=3\dots N+2\) and \(\mathbf{A}_{i}[1,2+i]=\mathbf{A}_{i}[2+i,1]=-a_{i}\) and \(\mathbf{A}_{i}[2,2+i]=\mathbf{A}_{i}[2+i,2]=1\)._ Problem (6) is a QCQP. Its solution space, defined by a set of polynomial equality constraints, defines a real algebraic variety, which is a central object of the field of algebraic geometry. This is by itself an active area of research, with methods existing for finding, for example, the minimal set of constraints to uniquely define a variety [36]. For the proposed paper, no knowledge of these advanced concepts is required as we take a numerical approach rather than an algebraic approach to describe the varieties. For the interested reader, we do include some references to the algebraic geometry perspective in footnotes. 
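The lifting in the stereo-1D example can be sanity-checked in a few lines. The sketch below is our own illustration (not the paper's code); to stay agnostic about the exact parameterization of the constraint matrices \(\mathbf{A}_i\), it verifies the lifting identities in their scalar form \(z_i(\theta-a_i)=1\) and confirms that the rational cost (4) equals the quadratic cost \(\mathbf{x}^{\top}\mathbf{Q}\mathbf{x}\).

```python
# Minimal sketch (ours) of the stereo-1D lifting: build the lifted vector (5),
# confirm that the rational cost (4) coincides with the quadratic cost x^T Q x,
# and that each lifting identity z_i (theta - a_i) = 1 holds on feasible points.
import numpy as np

rng = np.random.default_rng(5)
N = 4
a = rng.uniform(-2.0, 2.0, size=N)             # known parameters a_i

def lift(theta):
    z = 1.0 / (theta - a)                      # z_i = 1 / (theta - a_i)
    return np.concatenate(([1.0, theta], z))   # x = [h, theta, z_1, ..., z_N]

# Q picks out the squared z_i entries: cost = sum_i z_i^2.
Q = np.zeros((N + 2, N + 2))
Q[2:, 2:] = np.eye(N)

for _ in range(3):
    theta = rng.uniform(2.5, 5.0)              # keep theta away from the a_i
    x = lift(theta)
    rational_cost = np.sum(1.0 / (theta - a) ** 2)
    quadratic_cost = x @ Q @ x
    lifting_residual = np.max(np.abs(x[2:] * (theta - a) - 1.0))
    print(f"theta = {theta:.3f}  |cost difference| = {abs(rational_cost - quadratic_cost):.2e}"
          f"  max lifting residual = {lifting_residual:.2e}")
```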
Because (6) is, in general, NP-hard to solve, a common strategy is to relax the problem to a SDP by introducing \(\mathbf{X}:=\mathbf{x}\mathbf{x}^{\top}\), which can be enforced using \(\mathbf{X}\succeq 0,\operatorname{rank}\left(\mathbf{X}\right)=1\), where the semidefinite constraint is convex while the rank constraint is not. We can solve the following standard SDP, also called the primal relaxation of (6): \[\min_{\mathbf{X}\in\mathbb{S}_{+}^{N}}\{\left\langle\mathbf{Q},\mathbf{X}\right\rangle|\, \langle\mathbf{A}_{0},\mathbf{X}\rangle=1,\langle\mathbf{A}_{i},\mathbf{X}\rangle=0,i\in[N_{A}]\}, \tag{7}\] which is the rank-relaxation of (6) (_i.e._, we relax the \(\operatorname{rank}(\mathbf{X})=1\) constraint). ### _Duality Theory and Global Optimality_ The SDP problem can be used in several ways to make claims about the global optimality of candidate solutions. Let us denote by \(\mathbf{X}^{*}\) the solution of (7) and its associated cost by \(p^{*}:=\langle\mathbf{Q},\mathbf{X}^{*}\rangle\). If \(\mathbf{X}^{*}\) has rank one, then it can be factored as \(\mathbf{X}^{*}=\mathbf{x}^{*}\mathbf{x}^{*\top}\) and \(\mathbf{x}^{*}\) is the optimal solution to (6) with \(q^{*}:=f(\mathbf{x}^{*})=p^{*}\). This leads us to the first form of tightness used in this paper. **Definition 1** (Rank-tightness of the SDP relaxation).: _We call the SDP relaxation (7) rank-tight if its optimal solution \(\mathbf{X}^{\star}\) has rank one._ SDPs also enjoy a well-understood duality theory, which makes them great candidates for so-called 'optimality certificates'. The Lagrangian dual problem of (7) is given by \[d^{\star}=\max_{\rho,\mathbf{\lambda}}\{-\rho\,\big{|}\,\mathbf{H}(\rho,\mathbf{\lambda}): =\mathbf{Q}+\rho\mathbf{A}_{0}+\sum_{i=1}^{N_{A}}\lambda_{i}\mathbf{A}_{i}\succeq 0\}, \tag{8}\] where \(\rho\), \(\mathbf{\lambda}\in\mathbb{R}^{N_{A}}\) are the Lagrangian dual variables corresponding to \(\mathbf{A}_{0}\) and \(\mathbf{A}_{i},i\in[N_{A}]\), respectively. It is well known that we always have \(d^{\star}\leq p^{\star}\leq q^{\star}\) (see left graph of Figure 1 for a graphical depiction). In what follows, we will also make the assumption that \(d^{\star}=p^{\star}\) which holds under common constraint qualifications such as _Slater's condition_[11]. We can use the dual problem to, instead of solving the primal SDP and checking the rank of the solution, certify a local candidate solution \(\hat{\mathbf{x}}\). Indeed, using the Karush-Kuhn-Tucker (KKT) conditions of (8), it is well-known (see _e.g._, [39]) that a solution candidate \(\hat{\mathbf{x}}\) is globally optimal if there exist \(\hat{\rho},\hat{\mathbf{\lambda}}\) such that \[\mathbf{H}(\rho,\hat{\mathbf{\lambda}})\hat{\mathbf{x}}=\mathbf{0}, \tag{9a}\] \[\mathbf{H}(\rho,\hat{\mathbf{\lambda}})\succeq 0. \tag{9b}\] If these two conditions hold, we have _strong duality_, meaning that \(d^{\star}=p^{\star}=q^{\star}\) (right plot of Figure 1). If we do not have strong duality, the above conditions cannot be jointly satisfied and we cannot make claims about the global optimality of a candidate solution. Therefore, we introduce the notion of _cost-tightness_, a weaker form of tightness than rank-tightness,4 which allows for candidate solutions to be certified: Footnote 4: It is straightforward to see that rank-tightness implies cost-tightness. 
**Definition 2** (Cost-tightness of the SDP relaxation).: _We call the SDP relaxation (7) cost-tight if \(d^{\star}=p^{\star}=q^{\star}\)._ Both forms of tightness may be useful in practice: when we have rank-tightness, we can solve the SDP and derive the optimal value of the QCQP from it. When the SDP is prohibitively large, or when only cost-tightness is attained, one may instead resort to a local solver and certify the solution candidate using Lagrangian duality. For completeness, we also mention that in some cases, one may extract a solution estimate from a higher-rank solution of the SDP in a procedure called 'rounding', see _e.g._, [5]. This typically consists of extracting the dominant eigenvector from \(\mathbf{X}^{\star}\), and projecting it to the feasible set of (1). Note that in this case there are no guarantees on the quality of the solution and cases have been reported where the obtained estimate is far from the global optimum [40]. We have seen that rank- or cost-tightness is necessary for efficiently obtaining or certifying globally optimal solutions, respectively. The remaining question is how one may increase the tightness of a given problem. This leads to the notion of redundant constraints, as explained next.

### _Redundant Constraints_

Redundant constraints can be added to (2) without changing its feasible set (thus the name 'redundant').5 Footnote 5: Speaking in terms of algebraic geometry, the redundant constraints do not change the algebraic variety that is defined as the solution space. While the constraints are redundant for the QCQP, they may, however, change the feasible region of the SDP. In particular, redundant constraints typically reimpose structure on \(\mathbf{X}\) that is lost when relaxing the rank-one constraint. For example, if the lifted vector is \(\mathbf{x}^{\top}=\begin{bmatrix}1&\theta&\theta^{2}&\theta^{3}\end{bmatrix}\), then \[\mathbf{X}=\mathbf{x}\mathbf{x}^{\top}=\begin{bmatrix}1&\theta&\theta^{2}&\theta^{3}\\ \star&\theta^{2}&\theta^{3}&\theta^{4}\\ \star&\star&\theta^{4}&\theta^{5}\\ \star&\star&\star&\theta^{6}\end{bmatrix}, \tag{10}\] which has a very clear structure (it is a Hankel matrix, as is always the case for semidefinite relaxations of scalar polynomial problems [41]) that might be lost in the relaxation. The lifting constraints (in this case, \(\mathbf{x}[3]=\mathbf{x}[2]^{2}\) and \(\mathbf{x}[4]=\mathbf{x}[3]\mathbf{x}[2]\)) and symmetry of the solution take care of constraining all terms of degree 0 to 3 as well as \(\theta^{5}\), but nothing directly enforces that the elements corresponding to \(\theta^{4}\) in the variable \(\mathbf{X}\) are equal. In this case, we can add the redundant constraint corresponding to (\(\mathbf{x}[3]^{2}=\mathbf{x}[2]\mathbf{x}[4]\)) to enforce exactly that. Redundant constraints can often be hard to find -- as our continued example illustrates.

**Example** (stereo-1D, cont'd).: _A simple computation shows that_ \[z_{i}-z_{j}=\frac{1}{\theta-a_{i}}-\frac{1}{\theta-a_{j}}=(a_{i}-a_{j})z_{i}z_ {j}, \tag{11}\] _which holds for any \(i\neq j\) and \(z_{i}\), \(z_{j}\) constructed using the lifting functions \(\ell_{i}(\theta)\) introduced in (5). This shows that equation (11), which is quadratic in the elements of \(\mathbf{x}\), is a redundant constraint for the QCQP (2), but not for its SDP relaxation (7).
It can be added to the QCQP with matrices \(\mathbf{A}_{ij}\); \(\mathbf{A}_{ij}[1,2+i]=\mathbf{A}_{ij}[2+i,1]=1\), \(\mathbf{A}_{ij}[1,2+j]=\mathbf{A}_{ij}[2+j,1]=-1\), \(\mathbf{A}_{ij}[2+i,2+j]=\mathbf{A}_{ij}[2+j,2+i]=-(a_{i}-a_{j})\), for all \(i,j\in[N],i\neq j\)._

Because they impose more structure on \(\mathbf{X}\), redundant constraints may have the effect of reducing the rank of \(\mathbf{X}\), and thus improve the tightness of the relaxation. However, finding the right form and number of redundant constraints can be a tedious process, especially as the dimension of the problem increases. The present paper circumvents this process by proposing a numerical method to find all available redundant constraints, as we explain next.

## IV Determining Feasibility of Tightening (AutoTight)

In this Section, we present our method to determine whether a problem in a given form can be tightened, adding all possible redundant constraints without having to manually find or interpret them.

### _Setting up the Nullspace Problem_

At the core of the presented method is the idea that all of the constraint matrices \(\mathbf{A}_{i}\) lie in the orthogonal complement of the linear subspace spanned by the (lifted) feasible points. Indeed, assume we can generate feasible samples \(\mathbf{\theta}^{(s)}\), and therefore also a set of lifted samples \(\mathcal{X}=\{\mathbf{x}^{(1)},\dots,\mathbf{x}^{(N_{s})}\}\) with \(\mathbf{x}^{(s)}\) constructed using the _known_ lifting functions \(\ell\).6 Then, for any valid constraint \(\mathbf{A}_{i}\) (whether primary or redundant), we must have \[\langle\mathbf{A}_{i},\mathbf{X}^{(s)}\rangle=\operatorname{vech}\left(\mathbf{A}_{i} \right)^{\top}\operatorname{vech}\left(\mathbf{X}^{(s)}\right)=0, \tag{12}\] with \(\mathbf{X}^{(s)}:=\mathbf{x}^{(s)}\mathbf{x}^{(s)}{}^{\top}\). This must hold for all samples \(\mathbf{x}^{(s)}\). Footnote 6: Note that we can also allow for unknown or numerical lifting functions, as long as a sampler of \(\mathbf{x}\) is available. Defining the data matrix \(\mathbf{Y}=\begin{bmatrix}\operatorname{vech}\left(\mathbf{X}^{(1)}\right)&\cdots& \operatorname{vech}\left(\mathbf{X}^{(N_{s})}\right)\end{bmatrix}\in\mathbb{R}^{n \times N_{s}}\), the set of 'learned' constraints, \(\mathcal{A}_{l}\), is the left nullspace basis of \(\mathbf{Y}\): \[\mathcal{A}_{l}=\{\mathbf{A}_{1},\dots,\mathbf{A}_{N_{n}}\}=\{\operatorname{vech}^{-1 }\left(\mathbf{a}_{i}\right)\,\big{|}\,\mathbf{a}_{i}^{\top}\mathbf{Y}=\mathbf{0}\}. \tag{13}\] In other words, each nullspace basis vector corresponds to one (vectorized) constraint matrix. Therefore, finding all possible constraints is a standard nullspace problem. The dimension of the nullspace, \(N_{n}\), corresponds to the total number of constraints. Note that we have exploited the fact that \(\mathbf{X}^{(s)}\) and \(\mathbf{A}_{i}\) are symmetric by using the half-vectorization operator, which reduces the problem size to \(n:=\frac{N(N+1)}{2}\). By definition, all the constraints found through (13) are linearly independent when operating in matrix form. When using the constraints in (6), however, the constraints may become dependent; in other words, the method finds both primary and redundant constraints. Sometimes, it may be desirable to enforce some of the basis vectors to be known, for example to enforce the primary constraints. We denote the set of constraints to be enforced by \(\mathcal{A}_{k}=\{\mathbf{\tilde{A}}_{1},\dots,\mathbf{\tilde{A}}_{N_{k}}\}\).
Completing the nullspace basis is as simple as appending the known constraints to the data matrix \(\mathbf{Y}\): \[\mathbf{Y}=\big{[}\operatorname{vech}\left(\mathbf{X}^{(1)}\right) \,\cdots\,\operatorname{vech}\left(\mathbf{X}^{(N_{s})}\right) \tag{14}\] \[\operatorname{vech}\left(\mathbf{\tilde{A}}_{1}\right)\,\cdots\, \operatorname{vech}\left(\mathbf{\tilde{A}}_{N_{k}}\right)\big{]}.\] By definition, the left nullspace vectors of \(\mathbf{Y}\) will then be orthogonal to the known constraints. To find a valid nullspace basis, we need to have at least \(r=n-N_{n}\) samples, with \(n\) the number of rows of \(\mathbf{Y}\), \(N_{n}\) the nullspace dimension, and \(r\) the rank of \(\mathbf{Y}\). However, since \(r\) is not known a priori, a viable strategy is to randomly generate \(N_{s}>n\) samples.7 This ensures that the data matrix is rank-deficient, and the nullspace basis can be calculated using the permuted QR decomposition, as we explain next. Footnote 7: The emphasis here is on random; this ensures that all samples are linearly independent with probability one, to yield what is also called β€˜generic’ samples [18]. Intuitively speaking, when using generic samples one can ensure that properties derived from the samples hold for the entire variety. ### _Sparse Basis Retrieval_ We know that the constraint matrices are typically sparse, since they usually involve a subset of variables. Can we ensure that the learned constraints are expressed in a basis that encourages sparsity? Sparsity is good not only for lower runtime and memory consumption of SDP solvers, but also because sparser matrices are more easily interpretable, should we want to determine what the algebraic expression of the constraints is.8 Unfortunately, finding the sparsest nullspace basis is a NP-hard problem [42]. However, we can use a pivoted, or rank-revealing, QR decomposition [43] to find the left nullspace of the data matrix and to induce sparsity in the resulting basis vectors. We found the resulting constraints to be sufficiently sparse for downstream operations, and in both applications covered in Section VI, some basis vectors are even as sparse as analytically constructed constraints. Other matrix decomposition alternatives, such as the singular value decomposition (SVD), were empirically found to exhibit less sparsity. Footnote 8: We reiterate that interpretability is not necessary, but it may be beneficial for scalability and for gaining a better understanding of a given problem. The pivoted QR decomposition returns a decomposition of the form [43] \[\mathbf{Y}^{\top}\mathbf{P}=\mathbf{Q}\mathbf{R}=\mathbf{Q}\begin{bmatrix}\mathbf{R}_{1}&\mathbf{R}_{2}\\ \mathbf{0}&\mathbf{0}\end{bmatrix}, \tag{15}\] where \(\mathbf{P}\) is a \(n\times n\) permutation matrix ensuring that the diagonal of \(\mathbf{R}\) is non-increasing, \(\mathbf{Q}\) is \(N_{s}\times N_{s}\) and orthogonal, \(\mathbf{R}_{1}\) is upper-diagonal with dimensions \(r\times r\), and \(\mathbf{R}_{2}\) is of size \(r\times N_{n}\). The nullspace basis vectors \(\mathbf{a}_{i}\) are then given by \[\begin{bmatrix}\mathbf{a}_{1}&\cdots&\mathbf{a}_{N_{n}}\end{bmatrix}=\mathbf{P}\begin{bmatrix} \mathbf{R}_{1}^{-1}\mathbf{R}_{2}\\ -\mathbf{I}_{N_{n}}\end{bmatrix}. \tag{16}\] Note that when using the permuted QR decomposition, the obtained basis vectors are linearly independent, but not necessarily orthogonal to each other, as would be the case with an SVD, for example. 
However, we found that the increased sparsity was of higher importance, both for computational speed and interpretability, than orthogonality. ### _Determining Tightness_ All considerations so far are independent of the cost function and only depend on the chosen substitutions and primary constraints. To determine if the relaxation is tight, we need to define the cost, _i.e._, form the matrix \(\mathbf{Q}\) in (6). In general, tightness may be a function of both the noise magnitude [39] and the sparsity of the measurement graph [9]. The proposed method takes both into account as we fix \(\mathbf{Q}\) and then determine the tightness (and required redundant constraints) for this particular choice. We determine cost-tightness by comparing the cost of the dual problem with the cost of a candidate global solution. The candidate global solution is found by running an off-the-shelf local solver initialized at the ground-truth state, which we expect to be close to the optimal solution for low-enough noise. Indeed, this strategy allowed us to find the global minimum almost always for the noise regimes considered in Section VI. If not, we regenerate a new random setup and start again.9 We compute the relative duality gap (RDG) between the cost of this local solution, called \(\hat{q}\), and the optimal dual cost \(d^{\star}\) through \((\hat{q}-d^{\star})/q^{\star}\), and report cost-tightness if the RDG is below a fixed threshold (see Section VI-A). To determine rank-tightness, we calculate the eigenvalues of the solution \(\mathbf{X}\), and take the ratio of the first to the second-largest eigenvalue, called the singular-value ratio (SVR) in what follows. If the ratio is larger than a fixed value (see Section VI-A) we report that the solution is rank one. ### _Summary_ We conceptualize the algorithm AutoTight, which is defined by the successive application of IV-A to IV-C, by the gray boxes in Figure 2. In summary, we randomly generate \(N_{s}>n\) samples of the half-vectorized feasible points and compute a nullspace basis of the samples, which gives us all possible constraints. We then determine if the SDP relaxation is rank- or cost-tight when using all found constraints. There are three possible outcomes of this method: 1. The problem cannot be tightened. Knowing this, no additional effort has to be spent in trying to find redundant constraints for this formulation. Either a new formulation can be tried -- adding for instance (a subset of) higher-order Lasserre terms [13] -- or the SDP can be used in conjunction with rounding, for example as an initialization for a local solver. 2. The problem can be tightened without any redundant constraints, or with few redundant constraints that are interpretable. By interpretable we mean that the algebraic form can be derived directly from each matrix -- we will see such examples in Section VI, Figure 4. In this case, constraints matrices can be efficiently created analytically, as in classical methods [7]. 3. The problem can be tightened, but with many redundant constraints. In this case, the method presented so far would have to be reapplied to any new problem instance in order to find the required redundant constraints, which does not scale to the large problem sizes typically encountered in robotics. We will revisit the first two outcomes in the experiments in Section VI. 
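Before moving on, here is a minimal end-to-end sketch of the AutoTight steps just described, again for the stereo-1D toy problem: sample feasible lifted vectors, form the data matrix \(\mathbf{Y}\), extract a nullspace basis with a pivoted QR decomposition as in (15)-(16), and check the singular-value ratio of the resulting SDP. This is our own illustration, not the authors' released code; to keep the toy SDP bounded we add synthetic measurements \(u_i\) and use the cost \(\sum_i (u_i - z_i)^2\) instead of (4), a variant of our own choosing. Solver selection and tolerances are left to cvxpy defaults.

```python
import numpy as np
import scipy.linalg
import cvxpy as cp

def vech(S):
    """Half-vectorization with sqrt(2) on the off-diagonals, so that
    <A, B> = vech(A) @ vech(B) for symmetric A, B."""
    iu = np.triu_indices(S.shape[0])
    scale = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return scale * S[iu]

def unvech(v, n):
    """Inverse of vech (undo the sqrt(2) scaling)."""
    A = np.zeros((n, n))
    iu = np.triu_indices(n)
    scale = np.where(iu[0] == iu[1], 1.0, 1.0 / np.sqrt(2.0))
    A[iu] = scale * v
    return A + np.triu(A, 1).T

def learn_constraints(X_samples, tol=1e-7):
    """Left-nullspace basis of the data matrix Y via pivoted QR, cf. (15)-(16)."""
    Y = np.column_stack([vech(np.outer(x, x)) for x in X_samples])   # n x N_s
    Q, R, piv = scipy.linalg.qr(Y.T, pivoting=True)                   # Y^T P = Q R
    diag = np.abs(np.diag(R))
    r = int(np.sum(diag > tol * diag[0]))                             # numerical rank
    n = Y.shape[0]
    R1, R2 = R[:r, :r], R[:r, r:]
    Nb = np.vstack([np.linalg.solve(R1, R2), -np.eye(n - r)])         # unpermuted basis
    basis = np.zeros_like(Nb)
    basis[piv, :] = Nb                                                # apply permutation P
    return basis                                                      # columns = vech(A_i)

# --- stereo-1D: sample feasible lifted vectors and learn all constraints ---
rng = np.random.default_rng(1)
a = rng.normal(size=3)
N, n_x = len(a), len(a) + 2
n_vech = n_x * (n_x + 1) // 2
samples = []
for _ in range(2 * n_vech + 10):                      # oversample the data matrix
    theta = 5.0 + rng.uniform(-2.0, 2.0)
    samples.append(np.concatenate(([1.0, theta], 1.0 / (theta - a))))
basis = learn_constraints(samples)
A_learned = [unvech(basis[:, k], n_x) for k in range(basis.shape[1])]

# --- tightness check: primal SDP (7) with the learned constraints ---
u = 1.0 / (3.0 - a) + rng.normal(scale=1e-3, size=N)  # synthetic measurements (our variant)
Q = np.zeros((n_x, n_x))
for i, u_i in enumerate(u):
    b = np.zeros(n_x); b[0], b[2 + i] = u_i, -1.0     # residual u_i - z_i is linear in x
    Q += np.outer(b, b)
X = cp.Variable((n_x, n_x), PSD=True)
constraints = [X[0, 0] == 1] + [cp.trace(A @ X) == 0 for A in A_learned]
prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)), constraints)
prob.solve()
eigs = np.sort(np.linalg.eigvalsh(X.value))[::-1]
print("SVR (first/second eigenvalue):", eigs[0] / max(eigs[1], 1e-12))
```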
The next section deals with the third outcome: We present a method that finds what we call constraint _templates_ -- particular patterns that can be applied to any number and combination of variables of particular types. ## V Generating Scalable Constraints (AutoTemplate) The method AutoTight finds, for a given problem instance, whether the problem can be tightened. However, tightness will most likely be lost as we increase the problem size if we do not add the redundant constraints corresponding to new variables. Applying AutoTight is prohibitively expensive as the dimensionality of the problem increases, due to the cubic complexity of the QR decomposition. We thus present AutoTemplate, an extended version of AutoTight that is more scalable. Using the lifted parameters, we modify each feasible sample to include the parameter dependencies, leading to the 'augmented' feasible sample \(\bar{\mathbf{z}}^{(s)}\in\mathbb{R}^{\bar{n}}\) of size \(\bar{n}:=nK\): \[\bar{\mathbf{z}}^{(s)}:=\mathrm{vech}\left(\mathbf{x}^{(s)}{\mathbf{x}^{(s)}}^{\top}\right) \otimes\kappa\left(\mathbf{p}\right). \tag{18}\] The augmented data matrix \(\bar{\mathbf{Y}}\in\mathbb{R}^{\bar{n}\times\bar{N}_{s}}\) is given by \[\bar{\mathbf{Y}}=\left[\bar{\mathbf{z}}_{1}\quad\cdots\quad\bar{\mathbf{z}}_{N_{s}}\right], \tag{19}\] where we note that the number of samples \(\bar{N}_{s}\) now has to be chosen as to ensure that \(\bar{\mathbf{Y}}\) is rank-deficient. We denote the left nullspace basis vectors of (19) by \(\bar{\mathbf{a}}_{l}\in\mathbb{R}^{nK}\), with \(l\in[\bar{N}_{n}]\). We call these basis vectors 'templates' because we will apply them to new variable sets, and in particular, scale them to any required problem size, as we explain next. ### _Applying Templates_ Conceptually speaking, applying the templates means repeating each constraint for each possible combination of the variables that it involves. For example, if one constraint matrix involves one position and two different landmarks, then we repeat the constraint for each position and each possible pair of landmarks per position. To facilitate this operation programmatically, we have created an easy-to-use tool to generate sparse matrices using variable names for indexing.10 That way, applying constraints to all possible variables simply means creating duplicates of a given constraint, and then renaming the variables that it touches. Footnote 10: The code is available as an open-source package at [https://github.com/utiasASRL/poly_matrix](https://github.com/utiasASRL/poly_matrix). If parameters were factored out as explained in V-B, then they need to be factored back in before solving the SDP, using the current parameter realization. We introduce the operator \(\mathrm{mat}\left(\cdot\right)\), which folds the augmented basis vector \(\bar{\mathbf{a}}_{l}\) (which we recall has \(nK\) dimensions, with \(K\) the number of lifted parameters) column-wise into a \(n\times K\) matrix. Then, _factoring in_ the parameters of a specific parameter realization \(\mathbf{p}^{(s)}\) can be written as \[\mathbf{a}_{l}=\mathrm{mat}\left(\bar{\mathbf{a}}_{l}\right)\kappa\left(\mathbf{p}^{(s)} \right), \tag{20}\] where the output \(\mathbf{a}_{l}\in\mathbb{R}^{n}\) is now a problem-specific vectorized constraint that can be converted to the corresponding constraint matrix \(\mathbf{A}_{l}=\mathrm{vech}^{-1}\left(\mathbf{a}_{l}\right)\). 
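For orientation, the parameter lifting \(\kappa(\mathbf{p})\) used in (18) collects the problem parameters (and, where needed, their products) together with a leading constant; in the stereo-1D example that follows it is simply \(\begin{bmatrix}1&a_{1}&\cdots&a_{N}\end{bmatrix}^{\top}\). The sketch below shows the two mechanical steps of AutoTemplate in NumPy: building augmented samples as in (18), learning templates from their left nullspace, and factoring a new parameter realization back in as in (20). It reuses the `vech` and `unvech` helpers from the earlier sketch and is our own illustration, not the authors' implementation.

```python
import numpy as np

def augment(x, kappa_p):
    """Augmented sample (18): vech(x x^T) kron kappa(p)."""
    return np.kron(vech(np.outer(x, x)), kappa_p)

def factor_in(a_bar, kappa_p, n):
    """Template application (20): reshape the augmented template into an
    (n(n+1)/2) x K matrix (consistent with the kron ordering in `augment`)
    and contract it with the parameter lifting."""
    m = n * (n + 1) // 2
    return a_bar.reshape(m, len(kappa_p)) @ kappa_p

# stereo-1D with N = 2 landmarks: sample over both states and parameters
rng = np.random.default_rng(2)
N, n_x = 2, 4
def sample(a):
    theta = rng.uniform(2.0, 6.0)                    # keep 1/(theta - a_i) well conditioned
    return np.concatenate(([1.0, theta], 1.0 / (theta - a)))

n_aug = (n_x * (n_x + 1) // 2) * (N + 1)
Y_bar = []
for _ in range(2 * n_aug + 10):
    a = rng.uniform(-1.0, 1.0, size=N)               # new parameters for every sample
    Y_bar.append(augment(sample(a), np.concatenate(([1.0], a))))
Y_bar = np.column_stack(Y_bar)

U, s, _ = np.linalg.svd(Y_bar)                       # left nullspace = templates
templates = U[:, s < 1e-7 * s[0]]

# apply each template to an unseen parameter realization and verify feasibility
a_new = rng.uniform(-1.0, 1.0, size=N)
kappa_new = np.concatenate(([1.0], a_new))
x_new = sample(a_new)
for k in range(templates.shape[1]):
    A_k = unvech(factor_in(templates[:, k], kappa_new, n_x), n_x)
    assert abs(x_new @ A_k @ x_new) < 1e-6           # constraint holds for the new setup
```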
We return to the stereo-1D example to illustrate these concepts:

**Example** (stereo-1D, cont'd).: _Looking at (11), we see that the redundant constraints depend on the problem parameters \(a_{i}\): we have \(\mathbf{p}^{\top}:=\begin{bmatrix}a_{1}\cdots a_{N}\end{bmatrix}\in\mathbb{R}^{N}\) and \(K:=N+1\). Because the lifting constraints \(l_{i}(\mathbf{x})\) are linear in the parameters \(a_{i}\), we introduce the lifting function \(\kappa\left(\mathbf{p}\right):=\begin{bmatrix}1&a_{1}&a_{2}&\ldots a_{N}\end{bmatrix}^ {\top}\). We define the following set of variable groups: \(\{(1,\theta,z_{1},a_{1}),(1,\theta,z_{1},a_{1},z_{2},a_{2}),\cdots\}\). When imposing the known substitution constraints, we would not find any additional constraints at the first level. At the second level, however, each augmented sample would be of the form:_ \[\bar{\mathbf{z}}^{\top}:= \begin{bmatrix}1&\theta&z_{1}&z_{2}&\theta^{2}&\theta z_{1}& \theta z_{2}&z_{1}^{2}&z_{1}z_{2}&z_{2}^{2}\end{bmatrix} \tag{21}\] \[\otimes\begin{bmatrix}1&a_{1}&a_{2}\end{bmatrix},\] _where we have dropped superscript \((s)\) for better readability. Clearly, the redundant constraints would now be in the nullspace of the augmented data matrix, as we have:_ \[0=\bar{\mathbf{a}}_{1}^{\top}\bar{\mathbf{z}}=\begin{bmatrix}\mathbf{\alpha}_{1}^{\top}& \mathbf{\alpha}_{2}^{\top}&\mathbf{\alpha}_{3}^{\top}\end{bmatrix}\bar{\mathbf{z}}, \tag{22}\] _for any \(\bar{\mathbf{z}}\), where we have introduced_ \[\mathbf{\alpha}_{1}^{\top}= \begin{bmatrix}0&0&1&\text{-1}&0&0&0&0&0&0\end{bmatrix}, \tag{23}\] \[\mathbf{\alpha}_{2}^{\top}= \begin{bmatrix}0&0&0&0&0&0&0&0&\text{-1}&0\end{bmatrix},\] \[\mathbf{\alpha}_{3}^{\top}= \begin{bmatrix}0&0&0&0&0&0&0&0&1&0\end{bmatrix}.\] _Note that the template \(\bar{\mathbf{a}}_{1}\) does not depend on the landmarks anymore. This template can be applied to any variables by changing the labels as explained in Section V-C. Given also new realizations of parameters \(\mathbf{p}^{(t)}\) corresponding to the new variables, we can create the corresponding constraint matrix:_ \[\mathbf{a}_{1}^{(t)} =\mathrm{mat}\left(\bar{\mathbf{a}}_{1}\right)\kappa\left(\mathbf{p}^{(t)}\right)=\begin{bmatrix} \mathbf{\alpha}_{1}&\mathbf{\alpha}_{2}&\mathbf{\alpha}_{3}\end{bmatrix}\begin{bmatrix} 1\\ a_{1}^{(t)}\\ a_{2}^{(t)}\end{bmatrix}, \tag{24}\] \[\mathbf{A}_{1}^{(t)} =\mathrm{vech}^{-1}\left(\mathbf{a}_{1}^{(t)}\right).\]

Fig. 2: Overview of proposed algorithm to automatically find constraints or templates. Highlighted in gray and white, respectively, are the components of AutoTight and AutoTemplate. The two stages where (minor) user input is required are shown in the bottom.

### _Reducing the Number of Constraints_

Even when using the efficient sparse representation, applying the templates to all other possible combinations of variables can become the computational bottleneck of the problem. However, in practice not all of the found templates are actually necessary for tightness. Therefore, we suggest to prune the found templates before applying them to large problem sizes. In order to do that, we proceed as follows. Assume we have found a set of learned constraints \(\mathcal{A}_{l}\) for which the problem is (at least) cost-tight. Then, we can solve the following optimization problem in an attempt to sort the constraints by their importance for tightness: \[\min_{\mathbf{\lambda},\rho}\;\lVert\mathbf{\lambda}\rVert_{1}\quad\text{s.t.}\quad\mathbf{H}(\rho,\mathbf{\lambda})\succeq 0,\quad\mathbf{H}(\rho,\mathbf{\lambda})\hat{\mathbf{x}}=\mathbf{0}, \tag{25}\] where \(\left\lVert\cdot\right\rVert_{1}\) denotes the \(L_{1}\)-norm, \(\mathbf{H}\) is defined as in (8) (with the learned matrices substituted for the \(\mathbf{A}_{i}\)), and \(\hat{\mathbf{x}}\) is the optimal solution of (6), found as explained in IV-C. Intuitively, Problem (25) finds a sparse set of dual variables required for cost-tightness, as the \(L_{1}\)-norm promotes sparsity. By ordering the learned constraints by decreasing magnitude of \(\mathbf{\lambda}\) and adding them one by one, we find which subset of constraints is sufficient for cost-tightness. This problem naturally lends itself to a bisection-like algorithm, where we first check tightness using all and then using no redundant constraints, and then keep halving the considered interval of constraint counts. We terminate when the considered interval is of size one. At that point, we use only these constraints as templates, which significantly reduces the computation cost of all downstream operations, as shown in Section VI. As another pruning step, we also make sure that all constraints are linearly independent after applying templates to other variables. For this purpose, we use the same rank-revealing QR decomposition as in IV-B but keep only the valid range-space basis vectors. Because of the sparsity of the constraints, this adds no significant cost.

### _Summary_

To summarize, AutoTemplate generates scalable templates by iteratively finding the nullspace basis of smaller subsets of variable groups. We stop when the templates lead to a tight relaxation after applying them to all variable groups in a given example problem. Then, we find a subset of constraints sufficient for tightness, which can be used as templates for any new problem of the same type. When constraints depend on problem parameters, such as landmark coordinates, we also suggest a method to factor out this dependency and learn 'augmented' templates instead.

## VI Simulation Results

We show the effectiveness of the proposed method on a variety of robotics problems encountered in real-world applications. An overview of all problems considered in this Section is given in Table I. First, we perform an in-depth analysis of two example applications, providing new insights on the tightness of their relaxations. The first application is range-only localization with fixed and known landmarks, as encountered in ultra-wideband (UWB)-based localization [44, 45] or WiFi- or Bluetooth-based indoor localization [46]. We evaluate two different formulations, one of which requires redundant constraints while the other one does not. In this example, we find that the constraints are interpretable and we can derive their algebraic expressions. The second application is the estimation of the pose of a stereo camera by minimizing the reprojection error of known landmarks, which we refer to as stereo localization. The reprojection error can be used to model Gaussian noise on pixel measurements [47]. To the best of our knowledge, this problem has not been successfully relaxed to a tight SDP before, with common solutions typically resorting to the back-projection error [48, 16] (_i.e._, the error is assumed Gaussian in Euclidean space). Closest to our solution is [19], where a branch-and-bound method in combination with a (non-tight) semidefinite relaxation is used to minimize the reprojection cost.
Instead, we use the proposed methods to 1) find a new formulation of the problem that can be tightened using automatically-determined constraints obtained by AutoTight, and 2) use AutoTemplate to generate templates that can be scaled to new problem instances. Finally, we select representative examples from multimodal registration [10] and robust estimation [7], and verify their tightness results using our method.

### _Hyperparameters_

Throughout the experiments, we keep the following parameters fixed. When learning the constraints, we oversample the data matrix \(\mathbf{Y}\) by 20% to improve conditioning of the nullspace problem. For the SDP solver, we use MOSEK [38] interfaced through cvxpy [49, 50], fixing the tolerances of primal and dual feasibility, as well as the relative complementary gap, to \(10^{-10}\) and the tolerance of infeasibility to \(10^{-12}\). For finding the minimal set of constraints (Section V-D), we set the relative gap termination to \(10^{-1}\) to allow even for inaccurate solutions to be returned (as the output is only used for ordering the constraints). In terms of local solvers, we use the off-the-shelf pymanopt [51] solver for all problems in Section VI-D, using the conjugate-gradient optimizer with stopping criteria of \(10^{-6}\) in gradient norm and \(10^{-10}\) in step size. When inequality constraints are present in the QCQP, we use the log-sum-exp function described in [52, §4.1] with \(\rho=10\) and \(u=10^{-3}\). For RO localization, we use the scipy implementation of the BFGS solver, and our custom GN implementation, respectively, with the same stopping criteria as for pymanopt. A problem is considered cost-tight when its RDG is below \(0.1\%\). It is considered rank-tight when the SVR is above \(10^{7}\). Parameters that change for each problem, such as the considered noise levels, variable groups, and toy problem sizes, are summarized in Table II. We use fully connected measurement graphs for all considered problems.11 Footnote 11: This means that we assume that at each pose, all landmarks are observed.

| Problem | lifting function | redundant constr. | cost-tight | rank-tight |
|---|---|---|---|---|
| range-only localization | \(z_{n}\) (27) | no | yes | yes |
| | \(\mathbf{y}_{n}\) (28) | yes | yes | yes |
| stereo pose estimation | \(\mathbf{u}_{k}\) (32) | yes | no | no |
| | \(\mathbf{u}_{k}\), \(\mathbf{u}_{k}\otimes\mathbf{t}\) | yes | yes | no |
| point-point registration [10] (PPR) | none | no | yes | yes |
| robust point-cloud registration [7] (rPPR) | \(\mathbf{\theta}\otimes\mathbf{\theta}\) | yes | no | no |
| robust absolute pose estimation [7] (rPLR) | \(\mathbf{\theta}\otimes\mathbf{\theta}\) | yes | no | no |

TABLE I: Overview of the considered problems, their tightness and whether there are redundant constraints. Highlighted in red are formulations that were found to be non-tight.

### _Range-Only Localization_

#### VI-B1 Problem Statement

The goal of RO localization is to estimate the position of a moving device over time, given range measurements to fixed and known anchors. We call the anchor points \(\mathbf{m}_{k}\in\mathbb{R}^{d}\) with \(k\in[N_{m}]\) and the position at time \(t_{n}\) is denoted \(\mathbf{\theta}_{n}\in\mathbb{R}^{d}\), with \(n\in[N]\). We use \(d=3\) in all of the experiments.
We use the following common formulation of the problem [40] \[\min_{\{\mathbf{\theta}_{n}\}_{n=1}^{N}}\sum_{n,k\in\mathcal{E}}\left(d_{nk}^{2}- \left\|\mathbf{m}_{k}-\mathbf{\theta}_{n}\right\|^{2}\right)^{2}, \tag{26}\] where \(\mathcal{E}\) is the edge set of a measurement graph, with an edge between position \(n\) and anchor \(k\) if their distance \(d_{nk}\) has been measured. 12 Footnote 12: Note that it is straightforward to include a motion prior in (26), such as a constant-velocity prior, as shown in [34]. Such priors are typically up to quadratic in the unknowns, thus not requiring any special treatment when it comes to constraints, and are omitted for simplicity.

Problem (26) is quartic in the unknowns, and thus may contain multiple local minima [34]. However, by introducing substitutions that are quadratic in \(\mathbf{\theta}_{n}\), it can be lifted to a QCQP, making it a candidate for SDP relaxation. We study two such substitutions. First, looking at the cost of (26), we see that the substitution \[z_{n}:=\left\|\mathbf{\theta}_{n}\right\|^{2}\in\mathbb{R} \tag{27}\] is enough to make the problem quadratic in the lifted vector \(\mathbf{x}^{\top}=\begin{bmatrix}h&\mathbf{\theta}_{n}^{\top}&z_{n}\end{bmatrix}\). The same substitution was used in [34] and was shown to require no redundant constraints for tightness. This substitution (27) is also called a _sparse_ Lasserre substitution [17]. Here, we also study the more systematic (dense Lasserre) substitution that introduces all quadratic terms of \(\mathbf{\theta}_{n}\), or in other words: \[\mathbf{y}_{n}:=\mathrm{vech}\left(\mathbf{\theta}_{n}\mathbf{\theta}_{n}^{\top}\right) \in\mathbb{R}^{d(d+1)/2}. \tag{28}\]

#### VI-B2 Determining Feasibility of Tightening (AutoTight)

We start by using AutoTight to evaluate the two different substitutions, on a small example problem, defined in Table II. The data matrix \(\mathbf{Y}\) introduced in Section IV-A exhibits a well-separated nullspace for both substitutions, as can be seen in Figure 3. We can see immediately that the \(z_{n}\) substitution leads to a small nullspace (\(N_{n}=3=N\)), corresponding exactly to the number of substitutions. The substitution \(\mathbf{y}_{n}\), on the other hand, leads to a nullspace that includes more than just the substitution variables (\(N_{n}=60=20N\)), which shows the existence of redundant constraints. We show the three found constraint matrices for the \(z_{n}\) substitution in the first row of Figure 4. Interestingly, the three automatically found matrices correspond exactly to the three substitution formulas (shown below each matrix).13 The second row of Figure 4 shows three example matrices for the \(\mathbf{y}_{n}\) substitution. The first one is an example of a substitution constraint found by the algorithm, while the other two matrices are examples of discovered redundant constraints. Our method finds the \(d(d+1)/2=6\) substitution constraints, and 14 redundant constraints similar to the two shown examples. For completeness, Figure 5 shows a compact representation of all the constraints for the \(\mathbf{y}_{n}\) substitution, where each row corresponds to one found basis vector (_i.e._, the constraint matrix in half-vectorized form). We can see that all learned matrices are sparse and quantized, with nonzero values in \(\{-1,\frac{1}{\sqrt{2}},1\}\). Footnote 13: Here, we chose not to enforce the known constraint matrices using (14), to highlight the interpretability of the found constraints.
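To make the \(z_{n}\) substitution (27) concrete: each residual in (26) is linear in the lifted vector \(\begin{bmatrix}h&\mathbf{\theta}_{n}^{\top}&z_{n}\end{bmatrix}\), since \(d_{nk}^{2}-\lVert\mathbf{m}_{k}-\mathbf{\theta}_{n}\rVert^{2}=(d_{nk}^{2}-\lVert\mathbf{m}_{k}\rVert^{2})+2\mathbf{m}_{k}^{\top}\mathbf{\theta}_{n}-z_{n}\), so the cost matrix for one position is a sum of rank-one terms. The sketch below (our own illustrative construction, not the authors' code) builds \(\mathbf{Q}\) this way and checks it against (26):

```python
import numpy as np

def build_ro_cost(anchors, distances):
    """Cost matrix Q for one position using the z_n substitution (27).

    Lifted vector x = [h, theta (d values), z], z = ||theta||^2. Each residual
    d_k^2 - ||m_k - theta||^2 is linear in x, so Q is a sum of outer products.
    """
    d = anchors.shape[1]
    Q = np.zeros((d + 2, d + 2))
    for m_k, d_k in zip(anchors, distances):
        b = np.concatenate(([d_k**2 - m_k @ m_k], 2.0 * m_k, [-1.0]))
        Q += np.outer(b, b)
    return Q

# consistency check against the original cost (26) for one random position
rng = np.random.default_rng(3)
d, N_m = 3, 10
anchors = rng.uniform(-5, 5, size=(N_m, d))
theta = rng.uniform(-2, 2, size=d)
distances = np.linalg.norm(anchors - theta, axis=1) + rng.normal(scale=1e-2, size=N_m)

Q = build_ro_cost(anchors, distances)
x = np.concatenate(([1.0], theta, [theta @ theta]))
cost_original = np.sum((distances**2 - np.linalg.norm(anchors - theta, axis=1)**2) ** 2)
assert np.isclose(x @ Q @ x, cost_original)
```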
We find that both substitutions lead to cost-tight and rank-tight relaxations when all constraints are imposed, with SVR above \(10^{9}\) and RDG below \(10^{-4}\). For the \(\mathbf{y}_{n}\) substitution, redundant constraints are required. Exactly which constraints are required and how they scale is investigated next.

Fig. 3: Singular value spectrum of the data matrix for RO localization. The singular values below the threshold (in orange) correspond to the nullspace basis vectors. For the substitution \(z_{n}\) (27) (left plot), we find 3 basis vectors, however, for the substitution \(\mathbf{y}_{n}\) (28) (right plot) we find 20 basis vectors.

Fig. 4: Examples of learned constraint matrices for \(z_{n}\) substitution (top) and the \(\mathbf{y}_{n}\) substitution (bottom) of RO localization. Shown below each matrix are the algebraic identities that the matrices enforce. For simplicity, we call \(\mathbf{\theta}_{1}^{\top}=\begin{bmatrix}a&b&c\end{bmatrix}\).

#### VI-B3 Generating Scalable Constraints (AutoTemplate)

We have shown that the formulation with substitution \(\mathbf{y}_{n}\) of the RO localization problem can be tightened, at least for a small problem. In this section, we show that the method can be generalized to problems of larger size. In this particular example, the learned constraints are interpretable, therefore we could infer the mathematical expression of all constraints, as we saw in Figure 4, and apply them to new setups (outcome 2 of Section IV-D). Instead, we show here that the algorithmic way of scaling up, which does not require any intermediate manual steps, is also tractable for larger problem sizes. To generate scalable templates, we use AutoTemplate with the variable ordering given in Table II. The algorithm terminates after including variables \(\{h,\mathbf{\theta}_{0},\mathbf{y}_{0}\}\), at which point the found templates lead to a tight relaxation (in both cost and rank) when applied to all \(N=3\) positions.14 Footnote 14: Note that we do not need to consider any combinations of positions (or substitutions), which is a consequence of the problem being separable. This could have been observed from (26), but we did not exploit this structure here to facilitate the extension to regularized problems (_i.e._, with motion prior).

Before applying the templates to new problems of increasing size, AutoTemplate reduces them to a sufficient subset of constraints using (25). Figure 6 visualizes this process, showing rank- and cost-tightness for different subsets of constraints used. First, we confirm that the substitution \(z_{n}\) leads to rank- and cost-tightness after adding the substitution constraints only. For \(\mathbf{y}_{n}\), when adding constraints one-by-one in the order dictated by (25), we find that 55 out of the 61 constraints are enough for rank-tightness. Cost-tightness, on the other hand, is achieved already after adding 4 constraints only.15 Footnote 15: It is expected that the reordering of constraints works for achieving cost-tightness faster, as this is what Problem (25) is primarily designed to do. Faster rank-tightness is (if existent) merely a side-product of this process.
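The constraint-ordering problem (25) itself can be prototyped in a few lines of cvxpy. The sketch below assumes that the cost matrix `Q`, the homogenization matrix `A_0`, a list `A_known` of primary constraints, a list `A_red` of learned redundant constraints, and a locally optimized candidate `x_hat` are already available (e.g., from the earlier sketches); as a slight variant of (25), it penalizes only the multipliers of the redundant constraints, since the primary ones are always kept. This is our own minimal re-implementation for illustration, not the authors' code.

```python
import numpy as np
import cvxpy as cp

def order_redundant_constraints(Q, A_0, A_known, A_red, x_hat):
    """Solve the L1-regularized dual feasibility problem, cf. (25), and return
    the redundant constraints ordered by decreasing |lambda_i| (most important first)."""
    rho = cp.Variable()
    lam_known = cp.Variable(len(A_known))
    lam_red = cp.Variable(len(A_red))

    H = Q + rho * A_0
    for i, A in enumerate(A_known):
        H = H + lam_known[i] * A
    for i, A in enumerate(A_red):
        H = H + lam_red[i] * A

    constraints = [H >> 0, H @ x_hat == 0]          # PSD and stationarity, as in (25)
    prob = cp.Problem(cp.Minimize(cp.norm1(lam_red)), constraints)
    prob.solve()

    order = np.argsort(-np.abs(lam_red.value))
    return [A_red[i] for i in order], lam_red.value[order]
```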
We generalize the templates required for rank-tightness to problems with up to 30 positions. Figure 7 shows the time required for creating the constraints and solving the SDP for each problem size. We compare the processing times of learning the constraints for each problem from scratch using AutoTight, with using templates computed from AutoTemplate, using either all or only those sufficient for rank-tightness. Naturally, when using the substitution \(z_{n}\), applying templates and checking for tightness remain relatively cheap as the problem size grows, because the number of total constraints grows only linearly in the number of variables. Even learning the constraints from scratch is reasonably fast for this case. For the substitution \(\mathbf{y}_{n}\), however, AutoTight becomes prohibitively expensive beyond \(N=15\) positions. On the other hand, when using AutoTemplate, the cost of generating the constraints stays close to the cost of solving the SDP for all problem sizes. Ordering the constraints according to (25) did not have a significant effect in this case, and there is little difference between using all _vs_. only the sufficient constraints. Note that learning the templates and determining the sufficient subset constitute a fixed cost and are listed separately in Table III.

| Problem | Parameters | Variables | Inlier noise (Outlier noise) |
|---|---|---|---|
| range-only localization | \(d=3,N_{m}=10,N=3\) | \(\{\{h,\mathbf{\theta}_{0}\},\,\{h,\mathbf{z}_{0}\},\,\{h,\mathbf{\theta}_{0},\mathbf{z}_{1}\},\,\cdots\}\) | \(10^{-2}\) |
| stereo-localization | \(d\in\{2,3\},N=d+1\) | \(\{\{h,\mathbf{\theta}\},\,\{h,\mathbf{z}_{0}\},\,\{h,\mathbf{\theta},\mathbf{z}_{0}\},\,\{h,\mathbf{z}_{0},\mathbf{z}_{1}\},\,\cdots\}\) | \(10^{0}\) |
| point-point registration [10] | \(d=3,N=3\) | \(\{\{h,\mathbf{\theta}\}\}\) | \(10^{-2}\) |
| point-line registration [10] | \(d=3,N=5\) | \(\{\{h,\mathbf{\theta}\}\}\) | \(10^{-3}\) |
| robust point-cloud registration [7] | \(d=3,N=4,N_{\mathrm{out}}=1\) | \(\{\{h,\mathbf{\theta}\},\{h,\mathbf{\theta},w_{0}\},\{h,\mathbf{\theta},\mathbf{z}_{0}\},\{h,\mathbf{\theta},w_{0},w_{1}\},\{h,\mathbf{\theta},w_{0},\mathbf{z}_{0}\},\{h,\mathbf{\theta},\mathbf{z}_{0},\mathbf{z}_{1}\},\cdots\}\) | \(10^{-2}\) (\(10^{0}\)) |
| robust absolute pose estimation [7] | \(d=3,N=6,N_{\mathrm{out}}=1\) | same as above | \(10^{-3}\) (\(10^{-1}\)) |

TABLE II: Overview of the tightened problems, including the variable groups, problem dimensions, and noise parameters. For simplicity, all substitutions are called \(\mathbf{z}_{i}\). \(N_{\mathrm{out}}\) denotes the number of outliers, and noise levels correspond to the standard deviation of zero-mean Gaussian noise.

Fig. 5: Compact visualization of the learned constraint matrices for the \(\mathbf{y}_{n}\) substitution of RO localization. Each row contains one basis vector, equivalent to the half-vectorized constraint matrix. Highlighted in dark are the sufficient constraints for rank-tightness.

Fig. 6: Rank-tightness study for RO localization, using \(z_{n}\) substitution (left) vs. \(\mathbf{y}_{n}\) substitution (right). We compare the spectra with different numbers of added constraints (gray lines), highlighting the points where cost-tightness (C) and rank-tightness (R) are obtained in red and black, respectively.
### _Stereo Localization_

#### VI-C1 Problem Statement

In stereo localization, the goal is to estimate a stereo camera's pose given the image coordinates, in both left and right frames, of a number of known landmarks. We call the known, homogenized landmarks \(\mathbf{m}_{k}\) with \(k\in[N]\).16 For simplicity, we focus on one measurement time only, and call the unknown pose at that time \(\mathbf{T}\in SE(d)\), which contains both the rotation matrix from world to camera frame, \(\mathbf{C}\in SO(d)\), and the associated translation \(\mathbf{t}\in\mathbb{R}^{d}\). We collect the pixel measurements of landmark \(k\) in \(\mathbf{y}_{k}^{\top}:=\begin{bmatrix}u_{k}^{(l)}&v_{k}^{(l)}&u_{k}^{(r)}&v_{k}^{ (r)}\end{bmatrix}\), where \(u\) and \(v\) denote the \(x\) and \(y\) coordinates in pixel space, and superscripts \((l)\) and \((r)\) correspond to the left and right frame, respectively. We call the intrinsic stereo camera matrix in \(d\) dimensions \(\mathbf{M}_{d}\). For \(d=3\), we have: Footnote 16: We use \(N\) and not \(N_{m}\) because in stereo localization, the number of landmarks determines the problem size \(N\) (since the number of poses is fixed to one). \[\mathbf{M}_{d}=\begin{bmatrix}f_{u}&0&c_{u}&f_{u}\frac{b}{2}\\ 0&f_{v}&c_{v}&0\\ f_{u}&0&c_{u}&-f_{u}\frac{b}{2}\\ 0&f_{v}&c_{v}&0\end{bmatrix}, \tag{29}\] where \(f_{u}\), \(f_{v}\) are the focal lengths, \(c_{u}\), \(c_{v}\) the principal-point coordinates, and \(b\) the baseline. The forward measurement model is given by: \[\mathbf{y}_{k}=\mathbf{M}_{d}\left(\mathbf{e}_{d}^{\top}\mathbf{T}\mathbf{m}_{k}\right)^{-1}\mathbf{T} \mathbf{m}_{k}, \tag{30}\] where \(\mathbf{e}_{d}\) is the \(d\)-th standard basis vector. Given a number of pixel measurements from \(N\) landmarks, the pose can be estimated as the solution of the optimization problem \[\min_{\mathbf{T}\in SE(d)}\sum_{k\in[N]}\|\mathbf{y}_{k}-\mathbf{M}_{d}\left(\mathbf{e}_{d}^{ \top}\mathbf{T}\mathbf{m}_{k}\right)^{-1}\mathbf{T}\mathbf{m}_{k}\|^{2}. \tag{31}\] Due to the \(SE(d)\) constraint and the rational cost function, Problem (31) is hard to solve globally. However, the problem can again be lifted to a QCQP by introducing a series of relaxations and substitutions. First, we relax the \(SO(d)\) to an \(O(d)\) constraint, which essentially drops the \(\det\left(\mathbf{C}\right)=1\) constraint. As discussed in [5], this relaxation is often tight without additional constraints, and if not, handedness constraints can be added [10]. As we are automatically finding all redundant constraints, these constraints will be added later if required. Secondly, we can introduce the substitutions \[\mathbf{v}_{k} :=\left(\mathbf{e}_{d}^{\top}\mathbf{T}\mathbf{m}_{k}\right)^{-1}\mathbf{T}\mathbf{m}_{ k}, \tag{32a}\] \[\mathbf{u}_{k}^{\top} :=\left[\mathbf{v}_{k}[1]\quad\cdots\quad\mathbf{v}_{k}[d-1]\quad\mathbf{v}_{k }[d+1]\right], \tag{32b}\] where in \(\mathbf{u}_{k}\) we have removed the \(d\)-th element of \(\mathbf{v}_{k}\), which is always one by definition. Using \(\mathbf{v}_{k}\), we obtain the following QCQP: \[\min_{\mathbf{C},\mathbf{t}}\; \sum_{k\in[N]}\|\mathbf{y}_{k}-\mathbf{M}_{d}\mathbf{v}_{k}\|^{2}\quad\text{s.t.}\quad \left(\mathbf{I}_{d+1}-\mathbf{v}_{k}\mathbf{e}_{d}^{\top}\right)\mathbf{T}\mathbf{m}_{k }=\mathbf{0},\,k\in[N],\quad\mathbf{C}^{\top}\mathbf{C}=\mathbf{I}_{d}, \tag{33}\] which we write as a homogeneous QCQP using the following lifted vector: \[\mathbf{x}^{\top}=\begin{bmatrix}h&\mathbf{t}^{\top}&\operatorname{vec}\left(\mathbf{C} \right)^{\top}&\mathbf{u}_{1}^{\top}&\cdots&\mathbf{u}_{N}^{\top}\end{bmatrix}. \tag{34}\]

#### VI-C2 Determining Feasibility of Tightening (AutoTight)

As before, we first use AutoTight to investigate whether Problem (33) can be tightened. We use a small toy problem, choosing \(N=d+1\) landmarks. The left plots of Figure 9 show the cost-tightness study for the 3D problem. Even when adding all 45 found constraints, the problem cannot be tightened in the present form. Note how quickly we came to this conclusion: no manual search for redundant constraints had to be performed, a process that can be very time consuming. We resort to the (sparse) Lasserre hierarchy [13] to tighten the problem. We try different higher-order lifting functions and retest for tightness after adding all possible redundant constraints. We individually test additions such as \(\mathbf{u}_{k}\otimes\mathbf{u}_{k}\), \(\mathbf{t}\otimes\mathbf{t}\), _etc._ and find that by adding \((\mathbf{u}_{k}\otimes\mathbf{t})\) for each landmark, we achieve tightness. For simplicity, we call the combined substitution \(\mathbf{z}_{k}^{\top}:=\begin{bmatrix}\mathbf{u}_{k}^{\top}&(\mathbf{u}_{k}\otimes\mathbf{t})^ {\top}\end{bmatrix}\in\mathbb{R}^{d+d^{2}}\). Figure 9 on the right shows the cost-tightness test in 3D, which now passes. Since cost-tightness is achieved, we can solve (25) to determine a significantly smaller subset of sufficient constraints: we reduce the number from 639 to 80 constraints, as shown in Figure 9 and Table III. In all considered cases, rank-tightness is not attained (see Figure 8) and may require additional lifting functions of even higher order. As we are already approaching what is computationally feasible for the SDP solver, we settle for cost-tightness for now.

Fig. 7: Timing study for RO localization, using the \(z_{n}\) substitution (top) vs. the \(\mathbf{y}_{n}\) substitution (bottom). We compare using only the sufficient (solid line) or all templates (dashed line) output by AutoTemplate, which are very close for this particular problem. They compare favorably to learning constraints from scratch for each problem using AutoTight.

#### VI-C3 Generating Scalable Constraints (AutoTemplate)

To scale this problem, it is crucial to use AutoTemplate, for two reasons. Firstly, the problem dimension is large, in particular after adding the additional lifting functions required for tightness. Secondly, an investigation of the learned constraints, shown in Figure 10, suggests that many matrices actually depend on the (known) landmark coordinates and therefore do not generalize to other setups. As explained in Section V-B, we use parameters that are (up to) quadratic polynomials of each landmark's coordinates. The succession of variable groups is listed in Table II. For each variable selection, we consider only the parameters touched by the considered variables. We achieve tightness after including all groups up to \(\{h,\mathbf{z}_{0},\mathbf{z}_{1}\}\). Figure 11 shows the output of the method (in compressed format), for the 2D example: a set of templates over not only the original variables, but also their products with the parameters.
Most importantly, note that thanks to factoring out the parameters, the matrix is now more quantized, with all nonzero elements in \(\{2,\sqrt{2},\pm 1,\pm\frac{1}{\sqrt{2}},\pm\frac{1}{2}\}\). We have thus eliminated the landmark dependencies and the obtained templates can be applied to any setup. The amount of constraints may seem unmanageable at first; but the templates can be significantly reduced by solving (25): only 48 of the 170 constraints (highlighted in dark in Figure 11) are sufficient for tightness. We successfully apply the patterns for up to 30 landmarks, as shown in Figure 12. We show how the times to create constraints and solve the SDP scale with \(N\), and report the one-time cost of finding the sufficient set of templates in Table III.

Fig. 8: Study of the singular value spectra of stereo localization using original order of constraints (left) and the sorted order using (25) (right). Even after adding the higher-order substitutions and all found redundant constraints, a significant number of eigenvalues are nonzero. More higher-order Lasserre variables may be required to achieve rank-tightness. See Figure 6 for a detailed description of the labels.

Fig. 9: Tightness study for stereo localization problem, using the original substitutions only (left) vs. the higher-order substitutions (right). We use a bisection algorithm on the number of added constraints, which immediately terminates when using the original substitutions as the problem is not cost-tight even when adding all possible constraints. When adding higher-order substitutions, tightness is achieved after a few steps.

Fig. 10: Three learned constraint matrices for the 2D stereo localization problem. Many of the matrices are less sparse than in the range-only localization example and contain non-quantized numbers which suggests a dependency on landmark coordinates. Only few matrices, such as the one shown on the left, are interpretable (the identity is shown below the plot, where for simplicity, we call \(\mathbf{t}=(t_{x},t_{y})^{\top}\), \(\mathbf{T}\mathbf{m}_{2}=(x,y,1)^{\top}\), thus \(\mathbf{u}=\frac{1}{y}(x,1)^{\top}\) and \(\mathbf{z}_{2}=\frac{1}{y}(x,1,t_{x}x,t_{y}x,t_{x},t_{y})^{\top}\)).

Fig. 11: Subset of the constraint templates learned for stereo-localization in 2D after factoring out parameters. The red bars delimit different parameter dependencies, with the left-most block corresponding to the original variables. Highlighted in dark are some of the sufficient templates for tightness (in total, 48 out of 170).

As for RO localization, learning templates from scratch for each new setup does not scale beyond \(N=15\) landmarks, while applying the reduced templates comes at a reasonable cost, comparable to the cost of solving the SDP itself. This is a considerable improvement compared to existing approaches: inputting the same problem formulation to the (sparse) Lasserre hierarchy tool provided by [7] leads to unmanageable numbers of variables and constraints, even for small problem sizes. For \(d=3\) and only \(N=3\) landmarks, a total of \(27,692\) trivially satisfied constraints are generated, which is far beyond what SDP solvers can currently handle in reasonable time. In contrast, we can go to as many as \(N=30\) landmarks, at which point we compute \(4,733\) sufficient constraints for tightness.
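For reference, the stereo measurement model (29)-(30) and the substitution (32) are straightforward to reproduce numerically. The sketch below generates the pixel measurements for one landmark seen from a random pose and forms the corresponding \(\mathbf{v}_k\) and \(\mathbf{u}_k\); the camera parameters and pose construction are our own choices for illustration, not values from the paper or its datasets.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def stereo_matrix(f_u, f_v, c_u, c_v, b):
    """Intrinsic stereo camera matrix M_d for d = 3, cf. (29)."""
    return np.array([
        [f_u, 0.0, c_u,  f_u * b / 2.0],
        [0.0, f_v, c_v,  0.0],
        [f_u, 0.0, c_u, -f_u * b / 2.0],
        [0.0, f_v, c_v,  0.0],
    ])

def measure(T, m_k, M):
    """Forward model (30): y_k = M (e_d^T T m_k)^{-1} T m_k."""
    p = T @ m_k                  # homogeneous landmark in the camera frame
    v = p / p[2]                 # substitution v_k, cf. (32a)
    return M @ v, v

# random world-to-camera pose and a landmark in front of the camera
rng = np.random.default_rng(4)
C = Rotation.random(random_state=4).as_matrix()
t = rng.uniform(-1, 1, size=3)
T = np.eye(4); T[:3, :3] = C; T[:3, 3] = t
M = stereo_matrix(f_u=400.0, f_v=400.0, c_u=320.0, c_v=240.0, b=0.24)

m = np.linalg.inv(T) @ np.array([0.5, -0.2, 3.0, 1.0])   # homogeneous landmark (world frame)
y, v = measure(T, m, M)
u = np.concatenate((v[:2], v[3:]))                        # u_k, cf. (32b): drop v[d] = 1
print("pixel measurement [u_l, v_l, u_r, v_r]:", y)
```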
### _Other Problems_ We conclude the simulation study by applying the proposed method to a number of problems from the literature whose semidefinite relaxations have been shown to be tight using certain redundant constraints. As a starting point, we consider two multimodal registration problems that have been treated by Briales _et al._[10]: point-point registration (PPR) and point-line registration (PLR). #### Iv-D1 PPR and PLR [10] In multimodal registration, the goal is to find an object's translation \(\mathbf{t}\in\mathbb{R}^{d}\) and orientation \(\mathbf{C}\in SO(d)\) w.r.t. a world frame, given measurements of points lying on the object. The object is assumed to be represented by a set of known geometric primitives of either points, lines, or planes. The problem is posed as the following minimization problem [10]: \[\min_{\mathbf{C}\in SO(d),\mathbf{t}\in\mathbb{R}^{d}}\sum_{i=1}^{N}\lVert\mathbf{C}\mathbf{p }_{i}+\mathbf{t}-\mathbf{y}_{i}\rVert_{\mathbf{W}_{i}}^{2}, \tag{35}\] with \(\mathbf{p}_{i}\in\mathbb{R}^{d}\) the measured point and \(\mathbf{y}_{i}\) an arbitrary point on the associated primitive \(P_{i}\) (note that data association is assumed known). The matrix \(\mathbf{W}_{i}\in\mathbb{R}^{d\times d}\) is chosen depending on the type of primitive \(P_{i}\): \[\mathbf{W}_{i} =\mathbf{I}_{d}\] (point), (36a) \[\mathbf{W}_{i} =\mathbf{I}_{d}-\mathbf{v}_{i}\mathbf{v}_{i}^{\top}\] (line with unit direction \[\mathbf{v}_{i}\] ), (36b) \[\mathbf{W}_{i} =\mathbf{n}_{i}\mathbf{n}_{i}^{\top}\] (plane with normal \[\mathbf{n}_{i}\] ). (36c) Manual method [10]Problem (35) can be relaxed to a QCQP by dropping the determinant constraint from \(SO(d)\) as explained in Section VI-C, and introducing \(\mathbf{x}^{\top}=\begin{bmatrix}h&\mathbf{t}^{\top}&\operatorname{vec}\left(\mathbf{C} \right)^{\top}\end{bmatrix}\). The rank relaxation of this QCQP was shown to be always tight when using the following set of constraints [10]: \[h^{2}=1\] (prim., homogenization), (37a) \[\mathbf{I}_{d}=\mathbf{C}^{\top}\mathbf{C}\] (prim., orthonormal rows), (37b) \[\mathbf{I}_{d}=\mathbf{C}\mathbf{C}^{\top}\] (red., orthonormal columns), (37c) \[\mathbf{c}_{i|3}\times\mathbf{c}_{i+1|3}=\mathbf{c}_{i+2|3},i\in[3]\] (red., handedness), (37d) where "prim." and "red." are short for primary and redundant, \(|\) is the modulo operator and \(\mathbf{c}_{i}\) is the \(i\)-th column of \(\mathbf{C}\). This leads to a total of \(1+2\cdot 6+3\cdot 3=22\) constraints in 3D, accounting for the symmetry of the optimization variable. Proposed methodOur method finds the required redundant constraints outlined above, but without any manual steps. Figure 13 shows the discovered constraint matrices (in compressed form) by AutoTight. We find a total of 21 independent constraints, including the homogenization, suggesting that at least one of the 22 constraints presented by [10] are linearly dependent. Indeed, looking at the orthonormality constraints, we observe that out of the \(3+3\) constraints that touch the diagonal in (37b) and (37c), respectively, only \(5\) are linearly independent. To see this, let us call \(h_{i}(\mathbf{C})=1\) the constraints touching the diagonal with \(i\in\{1,2,3\}\) for (37b) and \(i\in\{4,5,6\}\) for (37c). Then, it is easy to see that \[\sum_{i=1}^{3}h_{i}(\mathbf{C})=\sum_{i=4}^{6}h_{i}(\mathbf{C}), \tag{38}\] so any of these six constraints can be written as a linear combination of the five others. 
While these 21 constraints have been shown to be _sufficient_ for tightness [10], they have not been shown to be _necessary_. In fact, we found that, for the considered noise level, none of the redundant constraints are required for PPR to be both cost- and rank-tight, as shown in Figure 14. For PLR, Figure 14 shows that the solution is in fact exactly rank two with only the seven primary constraints, but it becomes rank one after adding as few as two of the 12 available redundant constraints. Note that for this problem, we observed that the order of adding constraints did not make a difference.

Fig. 12: Timing study of the stereo-localization problem in 3D as we increase the number of landmarks \(N\). The labels are the same as in Figure 7. Learning constraints from scratch using AutoTight is prohibitively expensive even for \(N=10\). On the other hand, AutoTemplate scales reasonably up to \(N=30\).

#### VI-D2 Robust estimation [7]

Next, we consider two example problems treated by Yang _et al._ [7]: robust point-cloud registration and robust absolute-pose estimation. These two problems can in fact be seen as 'robust' variations of PPR and PLR, respectively, which is why we call them rPPR and rPLR, respectively. Both rPPR and rPLR can be written in the form \[\min_{\mathbf{\theta}\in\mathcal{D}}\sum_{i=1}^{N}\rho\left(r(\mathbf{\theta},\mathbf{y}_{i })\right), \tag{39}\] where \(\mathcal{D}\) is the domain of \(\mathbf{\theta}\), \(\rho\) is a robust cost function and \(r\) the residual function. It is shown in [7] that for a vast selection of robust cost functions, residual functions, and domains, Problem (39) can be written as a QCQP. As an example, we focus on the truncated least-squares (TLS) cost function in what follows. The residual functions are given by \[\text{rPPR:}\quad r(\mathbf{\theta},\mathbf{y}_{i}) =\|\mathbf{C}\mathbf{p}_{i}+\mathbf{t}-\mathbf{y}_{i}\|^{2}, \tag{40}\] \[\text{rPLR:}\quad r(\mathbf{\theta},\mathbf{y}_{i}) =\|\mathbf{C}\mathbf{p}_{i}+\mathbf{t}\|_{\mathbf{I}_{d}-\mathbf{v}_{i}\mathbf{v}_{i}^{ \top}}^{2}. \tag{41}\] In rPPR, \(\mathbf{p}_{i}\) and \(\mathbf{y}_{i}\) are matched measurements of a point-cloud observed from two different poses, while in rPLR, we assume \(\mathbf{p}_{i}\) to be known landmark coordinates, and \(\mathbf{v}_{i}\) unit vector measurements thereof, obtained for instance from a calibrated camera. The unknown state \(\mathbf{\theta}\) is again the pose \(\mathbf{t}\in\mathbb{R}^{d},\mathbf{C}\in SO(d)\). In order to satisfy the Archimedean condition, the authors further restrict the domain \(\mathcal{D}\) so that \(\mathbf{t}\in\mathbb{R}^{d}\) is contained in the ball of radius \(T\).17 For the robust pose estimation problem, \(\mathbf{t}\) is also chosen so that the landmarks are in the field of view of the camera, characterized by aperture angle \(\alpha\). These two problems are thus examples with primary inequality constraints in (1). Footnote 17: The Archimedean condition is a stronger form of compactness [41].

_Manual method [7]_: For the TLS cost, it has been shown that solving (39) is equivalent to solving [53] \[\min_{\mathbf{\theta}\in\mathcal{D},\mathbf{w}\in\{\pm 1\}^{N}}\frac{1}{2}\sum_{i=1}^{N} \frac{1+w_{i}}{\beta_{i}^{2}}r^{2}(\mathbf{\theta},\mathbf{y}_{i})+1-w_{i}, \tag{42}\] where \(\mathbf{y}_{i}\) are measurements, \(\mathbf{w}\) is the vector of decision variables (for outliers, \(w_{i}=-1\) and for inliers \(w_{i}=1\)) and \(\beta_{i}>0\) are user-defined parameters determining the truncation threshold.
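Before lifting (42), its equivalence with the truncated cost is easy to verify numerically: for a fixed residual, minimizing the per-term cost of (42) over \(w_i\in\{\pm 1\}\) recovers \(\min(r^2/\beta_i^2, 1)\). A quick, purely illustrative check of our own:

```python
import numpy as np

def tls_cost(r, beta):
    """Truncated least-squares cost: quadratic for inliers, constant 1 for outliers."""
    return np.minimum(r**2 / beta**2, 1.0)

def tls_cost_via_w(r, beta):
    """Per-residual cost of (42), minimized over the binary variable w in {-1, +1}."""
    cost_inlier = r**2 / beta**2          # w = +1: 0.5 * (2 r^2 / beta^2 + 0)
    cost_outlier = np.ones_like(r)        # w = -1: 0.5 * (0 + 2)
    return np.minimum(cost_inlier, cost_outlier)

rng = np.random.default_rng(5)
r = rng.normal(scale=2.0, size=1000)
assert np.allclose(tls_cost(r, beta=1.0), tls_cost_via_w(r, beta=1.0))
```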
Problem (42) can be written as a QCQP in the lifted vector \[\mathbf{x}^{\top}=\begin{bmatrix}h&\mathbf{\theta}^{\top}&\mathbf{w}^{\top}&\mathbf{z}^{\top} \end{bmatrix}, \tag{43}\] with \(\mathbf{\theta}^{\top}=[\mathbf{t}^{\top}\ \mathrm{vec}\left(\mathbf{C}\right)^{\top}]\). The variable \(\mathbf{z}\) contains additional substitutions that are required to make Problem (42) quadratic in \(\mathbf{x}\) (the cost is cubic because the residual functions \(r\) are linear in \(\mathbf{\theta}\)). The authors propose to add the (sparse) Lasserre lifting function \(\mathbf{z}=\mathbf{\theta}\otimes\mathbf{w}\), which leads to a tight relaxation after adding a list of (trivially satisfied) constraints. The authors also mention in passing that other lifting functions, such as \(\mathbf{z}=\mathbf{\theta}\otimes\mathbf{\theta}\), which allow to write (42) as a QCQP, do not lead to a tight relaxation [7]. Proposed methodWe study both lifting functions and come to the same conclusions as in [7]: both formulations allow for a large number of redundant constraints (which we found automatically), but only the second formulation becomes tight. Because of the large number of variables in the lifted state vector, we resort directly to AutoTemplate. The variable ordering used (for both problems) can be found in Table II. When using the lifting function \(\mathbf{z}:=\mathbf{\theta}\otimes\mathbf{w}\), the method terminates with cost-tightness after considering variables \(\{l,\mathbf{\theta},\mathbf{z}_{0},\mathbf{z}_{1}\}\). For \(\mathbf{z}:=\mathbf{\theta}\otimes\mathbf{\theta}\), the method returns that no tightness can be achieved. The number of found and sufficient constraint templates can be found in Table III. We note that the number of required constraints is already very high when considering only \(N=4\) and \(N=6\) for rPPR and rPLR, respectively. Nevertheless, we can apply the templates to problems up to size \(N=15\), as shown in Figure 15. For both problems, learning constraints from scratch is prohibitively expensive. Thanks to AutoTemplate, we can use the templates instead, Fig. 14: Rank-tightness study for PPR (left) and PLR (right) problems [10]. Both problems are cost-tight without redundant constraints, and for PLR, only 2 redundant constraints are required for tightness; a small subset of the 12 available redundant constraints [10]. Fig. 13: Learned constraint templates for the multimodal registration problems [10]. The labels \(l\) and \(c_{i}\) correspond to the homogenization variable and the \(i\)-th element of \(\mathrm{vec}\left(\mathbf{C}\right)\), respectively. We find that only 21 of the constraints suggested in [10] are actually linearly independent, see (38), and none of the redundant constraints are required for cost- and rank-tightness of point-point registration, while only a total of 10 (7 primary and 3 redundant) constraints, highlighted in dark, are required the point-line registration. Fig. 15: Timing results of scaling to \(N\) landmarks for rPPR (top) and rPLR (bottom). Thanks AutoTemplate, we can automatically create the constraints of problems up to \(N=15\) landmarks. and we obtain cost-tightness for all considered problems. Note that, just as in stereo localization, rank-tightness is not achieved and seems to not be computationally tractable since we already need many constraints for cost-tightness. As a final study, we compare the number of constraints we find with the number of constraints found in [7] in Table IV. 
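Before turning to that comparison, the following snippet (our own bookkeeping for \(d=3\) and \(N=4\); all values are random placeholders) makes the two liftings concrete and illustrates why \(\mathbf{z}=\mathbf{\theta}\otimes\mathbf{w}\) turns the cubic terms of (42) into terms that are quadratic in \(\mathbf{x}\):

```python
import numpy as np

# Sketch (our own bookkeeping, d = 3, N = 4 measurements) of the lifted vector (43)
# for the two substitutions discussed above.
rng = np.random.default_rng(0)
d, N = 3, 4
theta = rng.standard_normal(d + d * d)               # [t; vec(C)], 12 entries
w = rng.choice([-1.0, 1.0], size=N)
h = np.array([1.0])

x_tight = np.concatenate([h, theta, w, np.kron(theta, w)])      # z = theta (x) w
x_loose = np.concatenate([h, theta, w, np.kron(theta, theta)])  # z = theta (x) theta
print(x_tight.size, x_loose.size)                    # 65 and 161 entries

# Why z = theta (x) w makes (42) quadratic in x: the cubic monomial w_i * theta^T M theta
# equals the bilinear term theta^T M z_i, where z_i is the i-th column of z reshaped
# into a 12 x N matrix (which equals outer(theta, w)).
M = rng.standard_normal((12, 12)); M = M + M.T
i = 2
Z = x_tight[1 + 12 + N:].reshape(12, N)
print(np.isclose(w[i] * theta @ M @ theta, theta @ M @ Z[:, i]))   # True
```

We now return to the comparison with [7].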
The results suggest that we find a significantly smaller subset of constraints, but without compromising tightness. One possible explanation is that we find more than only the "trivially satisfied" redundant constraints, thus we can chose from a larger pool when tightening the problem. We plan to further investigate this finding, taking a closer look at the nature of the found constraints. ## VII Real-world Experiments To conclude, we showcase the performance of the proposed framework on real-world datasets for RO localization and stereo-camera localization, respectively. The purpose of these experiments is to 1. clarify how to interface the simulation-based tightening methods with real-world datasets, and 2. investigate how the constraints, determined using a specific sampling function and noise level, generalize to real data with different characteristics. ### _Method_ We first use AutoTemplate with random landmark placements to generate constraint templates. No knowledge of the actual measurement setup is required at this point. We then apply the templates to generate constraints, in each case using the actual landmark locations at each considered pose. One could also use the known landmarks for AutoTemplate, but we show that even when using a generic sampler, the learned constraints generalize well. ### _Experimental Setups_ We test our methods on two different experimental setups. The first dataset, called _starrynight_[54], includes stereo-camera images of Vicon markers scattered on the floor. The second dataset, called _STAR-loc_[55], includes stereo-camera images of _Apriltag_[56] landmarks scattered around a room at different heights and orientations. The _STAR-loc_ dataset also includes UWB-based distance measurements to eight fixed landmarks, called anchors. Depictions of the two datasets are provided in Figure 17. For RO localization, we always randomly select 4 out of the 8 available anchors to investigate the local minima that typically arise when anchors are almost co-planar. We report results on three example runs: _zigzag_s3_, _loop-2d_s4_, and _eight_s3_. For stereo localization, we only consider poses where more than 4 landmarks are observed, and we cap at maximum 8 landmarks, to limit the computation time. ### _Results_ First, we investigate the tightness of the relaxations when evaluated on real data. Figure 18 shows the SVR and RDG for both RO and stereo localization, for randomly picked poses from both datasets (see Figure 17 for plots of the selected poses). We plot the respective tightness measures against the maximum residual error, which is a good proxy for the noise level and has been shown to affect the tightness of semidefinite relaxations [9, 39]. As expected, the relaxation of RO localization is mostly rank-tight across all considered datasets and poses, with an SVR of more than \(10^{6}\) for most poses. On the other hand, the stereo-localization relaxation is only reliably cost-tight for poses with a sub-pixel maximum residual error, which is a characteristic found in the _starrynight_ dataset but in none of the runs from the _STAR-loc_ dataset. Next, we study the occurrence of local _vs._ global minima found in both problems. We certify a local solution by trying to find dual variables that satisfy (9) via a feasibility SDP. To account for numerical errors, we change (9a) to \(|\mathbf{H}(\rho,\mathbf{\lambda})|\leq\epsilon\mathbf{1}\) and minimize \(\epsilon\) as objective function. 
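A minimal sketch of this certificate is given below, assuming that (9) denotes the usual first-order conditions \(\mathbf{H}(\rho,\mathbf{\lambda})\hat{\mathbf{x}}=\mathbf{0}\) (9a) and \(\mathbf{H}(\rho,\mathbf{\lambda})\succeq 0\) (9b), with \(\mathbf{H}=\mathbf{Q}+\sum_{i}\lambda_{i}\mathbf{A}_{i}\) and the homogenization multiplier \(\rho\) folded into \(\mathbf{\lambda}\); this is only an outline, not the exact implementation used in our experiments:

```python
import numpy as np
import cvxpy as cp

# Hedged sketch of the epsilon-relaxed certificate described above.  Q and the
# constraint matrices A[i] are assumed symmetric; x_hat is the candidate solution.
def certify(Q, A, x_hat, tol=1e-3):
    n = Q.shape[0]
    lam = cp.Variable(len(A))
    eps = cp.Variable(nonneg=True)
    H = cp.Variable((n, n), symmetric=True)
    constraints = [
        H == Q + sum(lam[i] * A[i] for i in range(len(A))),
        H >> 0,                          # (9b): certificate matrix is PSD
        cp.abs(H @ x_hat) <= eps,        # relaxed (9a): |H x_hat| <= eps * 1
    ]
    prob = cp.Problem(cp.Minimize(eps), constraints)
    prob.solve()
    feasible = prob.status in ("optimal", "optimal_inaccurate")
    return feasible and eps.value is not None and float(eps.value) <= tol
```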
We claim a candidate solution \(\hat{\mathbf{x}}\) is certified if we find a feasible solution with \(\epsilon\leq 10^{-3}\). Figure 19 shows the distribution of certified and uncertified solutions of RO and stereo localization, as a function of the maximum residual error. First, we note that local minima are ubiquitous for both problems, across all noise levels. Second, it can be observed that, as the noise increases, the global solution candidates from the stereo-localization problem are not all certified anymore, because of the lack of tightness at higher noise levels. However, local minima occur even at lower levels, and the relaxation can correctly identify them. For RO localization, all global solutions are correctly certified. All local solutions, which, compared to stereo localization, are harder to classify based on only their cost, are detected correctly. Note that since this relaxation is rank-tight, solving \begin{table} \begin{tabular}{l l l l l} N & \multicolumn{2}{c}{rPPR} & \multicolumn{2}{c}{rPLR} \\ \hline & our method & [7] & our method & [7] \\ \hline 10 & 4,508 & 6,257 & 5,330 & 7,379 \\ 11 & 5,293 & 7,398 & 6,279 & 8,724 \\ 12 & 6,139 & 8,633 & 7,304 & 10,180 \\ 13 & 7,046 & 9,962 & 8,405 & 11,747 \\ 14 & 8,014 & 11,385 & 9,582 & 13,425 \\ 15 & 9,043 & 12,902 & 10,835 & 15,214 \\ \end{tabular} \end{table} TABLE IV: Comparison of the number of constraints found for rPPR and rPLR, respectively, using our method and the method proposed by [7], as a function of the number of measurements \(N\). Fig. 16: Rank-tightness study for rPPR (left) and rPLR (right). We obtain cost-tightness, but not rank-tightness, for both problems. the primal SDP and extracting \(\mathbf{x}^{\star}\) from the rank-1 \(\mathbf{X}^{\star}\) would also be a viable solution method. Finally, we show examples of local minima in Figure 20. For range-only localization, local minima typically occur when the anchors are in a degenerate configuration, such as almost co-planar. Intuitively speaking, the cost landscape nearly exhibits a symmetry in these situations and the local solver gets stuck in the wrong half when initialized there. For stereo localization, local minima occur more frequently and are typically completely off in terms of orientation, and were found to typically occur when initializing close to the wrong orientation. To summarize, in both applications, initializing close to ground truth leads to the globally optimal solution, but without prior knowledge, random initializations and landmark placements are prone to yield bad, locally optimal solutions. For low-enough noise levels, we can certify globally optimal solutions since our formulation is tight when the automatically found redundant constraints are used. ## VIII Conclusion and Future Work We have presented new tools to find all possible redundant constraints for a given QCQP, which is paramount to tighten the semidefinite relaxations of many problems encountered in robotics. The first tool, AutoTight, allows for the fast evaluation of different problem formulations. We have successfully used this tool to evaluate different substitutions for range-only localization and find a novel tight formulation for stereo-based localization. The second tool, AutoTemplate, can then be employed to create scalable templates to tighten new Fig. 19: Comparison of occurrence of local solutions and global solutions as a function of the maximum residual error. 
Each data point corresponds to one pose from any of the considered datasets, and we distinguish between true positives (tp, certified global minimum), true negatives (tn, uncertified local minimum), false negatives (tn, uncertified global minimum). Crucially, no false positives occurred. Note that all local solutions for stereo localization exhibit a high cost, and as the maximum residual error increases, some of what seem to be globally optimal solutions (judging from their low cost) are not certified. This is a consequence for tightness breaking for higher noise levels (compare with Figure 18). For RO localization, globally optimal solutions are certified for all noise levels. Fig. 17: Experimental setups of real-world datasets. In the starynight dataset [54], the visual landmarks are Yicon markers placed on the dark floor, and are captured with a stereo camera circling above them [54]. In the STAR-loc dataset [55], the visual landmarks are Apriltags placed at different heights and orientations, and are captured with a stereo camera moved along different trajectories through the room. The rig is also equipped with UWB tags that measure distances to 8 fixed anchors. The plots show the ground-truth poses at which stereo (top) and UWB (bottom) measurements are processed. The observed landmarks are depicted with black crosses. Fig. 18: Tightness study for RO localization (left plots) and stereo localization (right plots). Each data point corresponds to one estimated pose. Cost-tightness (top) and rank-tightness (bottom) are compared with the maximum residual error. The tightness thresholds are fixed to common values. We see that RO localization is mostly cost- and rank-tight, while stereo localization is only cost-tight for the lowest residual error levels in the starynight dataset. setups and larger problem sizes. To show the wide applicability of these tools, we have also evaluated their performance on example problems from the literature [10, 7], showing that we find tight relaxations with even fewer redundant constraints than previously considered. As SDPs scale poorly with the number of constraints, this is an important step to make semidefinite relaxations scale to general problems encountered in robotics. A number of follow-up questions deserve further attention. First, it has been shown that both the measurement graph and the noise level can have an effect on tightness [39, 33, 4]. In future work, we plan to investigate these characteristics using the given tool, and in particular understand to what level the additional redundant constraints may push the boundaries of tightness. Along the same lines, a given measurement graph may in fact help in finding the variable substitutions and parameters that are most likely to succeed, a component of the proposed method that is currently defined by user input. Finally, the full potential of the proposed method will be unlocked when faster SDP solvers are developed for problems that require redundant constraints. First steps into this direction have shown promising results [6, 7, 12], but more work remains to be done. In parallel, there lies potential in further pushing the efficiency of optimality certificates of fast local solvers, for example using sampling-based approaches as in [18] or sparsity-exploiting approaches as in [33, 34].
2306.03886
Systematic performance of the ASKAP Fast Radio Burst search algorithm
Detecting fast radio bursts (FRBs) requires software pipelines to search for dispersed single pulses of emission in radio telescope data. In order to enable an unbiased estimation of the underlying FRB population, it is important to understand the algorithm efficiency with respect to the search parameter space and thus the survey completeness. The Fast Real-time Engine for Dedispersing Amplitudes (FREDDA) search pipeline is a single pulse detection pipeline designed to identify radio pulses over a large range of dispersion measures (DM) with low latency. It is used on the Australian Square Kilometre Array Pathfinder (ASKAP) for the Commensal Real-time ASKAP Fast Transients (CRAFT) project. We utilise simulated single pulses in the low- and high-frequency observation bands of ASKAP to analyse the performance of the pipeline and infer the underlying FRB population. The simulation explores the Signal-to-Noise Ratio (S/N) recovery as a function of DM and the temporal duration of FRB pulses in comparison to injected values. The effects of intra-channel broadening caused by dispersion are also carefully studied in this work using control datasets. Our results show that for Gaussian-like single pulses, $> 85 \%$ of the injected signal is recovered by pipelines such as FREDDA at DM < 3000 $\mathrm{pc\ cm^{-3}}$ using standard boxcar filters compared to an ideal incoherent dedispersion match filter. Further sensitivity calculations imply that at least $\sim 10\%$ of FRBs in a Euclidean universe at target sensitivity will be missed by FREDDA and HEIMDALL, another common pipeline, in ideal radio environments at 1.1 GHz.
Hao Qiu, Evan F. Keane, Keith W. Bannister, Clancy W. James, Ryan M. Shannon
2023-06-06T17:45:31Z
http://arxiv.org/abs/2306.03886v1
# Systematic performance of the ASKAP Fast Radio Burst search algorithm ###### Abstract Detecting fast radio bursts (FRBs) requires software pipelines to search for dispersed single pulses of emission in radio telescope data. In order to enable an unbiased estimation of the underlying FRB population, it is important to understand the algorithm efficiency with respect to the search parameter space and thus the survey completeness. The Fast Real-time Engine for Dedispersing Amplitudes (fredda) search pipeline is a single pulse detection pipeline designed to identify radio pulses over a large range of dispersion measures (DM) with low latency. It is used on the Australian Square Kilometre Array Pathfinder (ASKAP) for the Commensal Real-time ASKAP Fast Transients (CRAFT) project. We utilise simulated single pulses in the low- and high-frequency observation bands of ASKAP to analyse the performance of the pipeline and infer the underlying FRB population. The simulation explores the Signal-to-Noise Ratio (S/N) recovery as a function of DM and the temporal duration of FRB pulses in comparison to injected values. The effects of intra-channel broadening caused by dispersion are also carefully studied in this work using control datasets. Our results show that for Gaussian-like single pulses, \(>85\%\) of the injected signal is recovered by pipelines such as fredda at DM < 3000 \(\,\mathrm{pc}\,\mathrm{cm}^{-3}\)using standard boxcar filters compared to an ideal incoherent dedispersion match filter. Further calculations with sensitivity implies at least \(\sim 10\%\) of FRBs in a Euclidean universe at target sensitivity will be missed by fredda and heindmall, another common pipeline, in ideal radio environments at 1.1 GHz. keywords: fast radio bursts - methods: data analysis - software: simulations ## 1 Introduction Fast Radio Bursts (FRBs; Lorimer et al.2007) are dispersed bright single pulses of extragalactic origin (Chatterjee et al.2017; Bannister et al.2019). Like pulses observed from pulsars, FRBs are subject to propagation effects: for FRBs this arises when passing through the extragalactic and interstellar media. Their most distinctive feature is a frequency-dependent time delay caused by dispersion when travelling through cold ionised media. The magnitude of this effect is given by the integral of the free electron column density along the line of sight, known as the dispersion measure (DM). A key feature of FRBs, and perhaps their working observational definition (Thornton et al.2013; Keane2016), is that their DMs exceed the maximum contribution from the Milky Way, indicating an extragalactic origin. In support of this, there is an strong observed correlation between the extragalactic DM and luminosity distance (Macquart et al.2020). Additional propagation effects such as scatter broadening and Faraday rotation caused by turbulent or clumping plasma structures and magnetic fields along the line of sight can have a significant effect on the pulse morphology of FRBs (Macquart and Kozy2013; Prochaska et al.2019; Chitidi et al.2021). These propagation effects have shown to be useful probes of the extragalactic medium (Simha et al.2020) and, for example, can be used as an independent measurement to reveal the density and magnetic fields of foreground galaxy haloes (Prochaska et al.2019). 
The search for FRBs is performed using single pulse search pipelines for both real-time observations and post-facto offline processing; typically there is a trade-off between processing speed and the degree of thoroughness in terms of data cleaning (Keane et al.2018). The core algorithm of single pulse detection is dedispersing the data to a large number of trial DM values, and then identifying high signal-to-noise ratio (S/N) pulses within these data for a range of pulse durations. Many algorithms have been used to detect FRBs, including but not limited to seek (Lorimer et al.2007), pestrov (Keane et al.2012), dedisperse-all (Burke-Spolaor and Bannister2014), presto (Spitler et al.2016), heindmall (Barsdell et al.2012), bonsai (CHIME/FRB Collaboration et al.2018), freedda (Bannister et al.2017, 2019), amber (Sclocco et al.2020) and astroaccelerate (Armour et al.2011; Carels et al.2019; Adamek and Armour2020). For the typical case where rapid follow-up is needed, it is the real-time pipelines that make the discoveries. These real-time pipelines are often optimised for low latency while maintaining as complete a search sensitivity as affordable. The ideal pipeline should have a stable uniform characterized performance across the parameter space during interference-free observations. The characteristic response of the algorithm or pipeline, across its searched parameter space, impacts how one interprets the output results. Pipeline performance can be understood through the analysis of detection rates and also the S/N reported by the algorithm. Incomplete signal recovery of pulses and other systematic biases during single pulse searches will cause lower S/N and fewer pulses to be detected above the threshold; these systematic errors are often related to the DM and pulse width of the pulse. The underlying population distribution estimated from the observed sample is crucial to cosmological parameter estimations using FRBs (Connor, 2019; Luo et al., 2020; James et al., 2022). This paper focuses on the identification of this S/N response function by using simulated pulses injected in observational format data for two FRB search pipelines: freeda and heimdall. Mock FRB injections systems have been deployed in major FRB observing facilities such as CHIME (CHIME/FRB Collaboration et al., 2018), UTMOST (Farah et al., 2019) and the GBT (Agarwal et al., 2020). The greenburst system for GBT developed in Agarwal et al. (2020) measured the system recall curve, measuring a 100% recovery of injected bursts at \(\rm{S/N}\gtrsim 12\). Recent systematic injection tests by Gupta et al. (2021) showed that heimdall used on UTMOST was able to recover over 90% of the synthetic injections above a S/N threshold of 9. The Fast Real-time Engine for Dedispersing Amplitudes (freda; Bannister et al., 2017, 2019) is the search pipeline used on the Australian Square Kilometre Array Pathfinder (ASKAP) for the Commensal Real-time ASKAP Fast Transients (CRAFT) project. It is a GPU-based implementation of the Fast Dispersion Measure Transform (FDMT; Zackay and Ofek, 2017), a rapid dedispersion trial algorithm to detect dispersed single pulses. freeda aims to detect FRBs from the incoherent data stream of ASKAP in low latency to trigger the download of baseband ring buffer data for interferometry. In this paper, we examine the performance of freeda using dispersed pulses simulated over a large range of dispersion measures and pulse widths. 
The aim is to understand the signal recovery fraction from the detection pipeline to infer the real sensitivity threshold of ASKAP FRBs and other major FRB surveys. We use heimdall, a widely-used effective search pipeline, to perform a comparison search on the simulated data. This allows us to verify the simulated bursts and compare the S/N algorithms between these two software. The simulation methods are described in SS 2. The data processing setup and results are presented in SS 3. We then, in SS 4, analyse and interpret these results, discussing possible implications for how to interpret the observed FRB samples. ## 2 Modelling ### Simulation data format For most detection pipelines acting in a blind survey mode, the input data are the Stokes I dynamic spectra, i.e. sigproc filterbank files1. The data are usually stored with millisecond time resolution with no coherent dedispersion applied. As a result, the microsecond fine structure seen in some FRBs (e.g Farah et al., 2018; Day et al., 2020; Michilli et al., 2018; Nimmo et al., 2020) cannot be resolved in the initial detection. If a pulse is detected with low latency and a higher resolution data product is in temporary storage then analysis at higher resolution is possible post-facto. Footnote 1: [https://sigproc.sourceforge.net/](https://sigproc.sourceforge.net/) In this work, we use single Gaussian pulses to mimic the smeared pulse appearance of most observed FRBs (Pleunis et al., 2021). The simulated pulses have a constant injected S/N of 50. The data are generated at time and frequency resolution of 0.1 ms and 0.1 MHz respectively. The data are then down-scaled by a factor of 10 in both dimensions. This is done so as to introduce various smearing effects and create a precise pulse profile. The reduced resolutions aim to match closely those of ASKAP data (see Table 1). The pulses are injected into a background of Gaussian white noise. Together these steps simulate ideal observation conditions. ### Simulation format The data recorded for ASKAP FRB searches are also filterbanks. ASKAP observes at radio frequencies between 0.7 and 1.8 GHz. The ASKAP beamformers create 36 Stokes I beams per dish, i.e. \(36\times N_{\rm dish}\) data streams (Clarke et al., 2014). The CRAFT backend adds these incoherently to create \(36\times 1\) data streams. The CRAFT pipeline is designed to search such filterbank data generated with a time resolution typical between 0.7-1.7 ms and with a bandwidth of 336 MHz. In this work we use the two standard frequency bands employed by CRAFT (see Table 1). The ASKAP data streams are typically channelized to \(336\times 1\)MHz channels with a time resolution of 1.26 ms (before 2019, later changed to 1.73 ms for lower frequency band searches). The resolution of the simulated dynamic spectrum in this work is intended to be similar to the resolution of the current incoherent sum data stream during CRAFT observations. The format is \(336\times 1\) MHz channels and a time resolution of 1 ms recorded in 8-bit data(see Table 1). ### Model of injected pulses We assume a single Gaussian as the underlying profile of each pulse, i.e. in 0.1-MHz 0.1-ms resolution with the following equation. For channel \(i\) the flux of the pulse is: \[S_{i}(t)=\frac{A}{\sqrt{2\pi\sigma_{i}^{2}}}\exp\left[\frac{-(t-t_{0}-t_{\rm DM }(r_{i})^{2}}{2\sigma_{i}^{2}}\right], \tag{1}\] where \(A\) is an amplitude scale factor, \(t_{0}\) is the time reference of the burst at the reference frequency which we take to be the top of the band, i.e. 
the centre of the highest frequency channel. The standard deviation of the intrinsic Gaussian is \(\sigma_{1}\) and the dispersion delay time, \(t_{\rm DM}(r_{i})\) relative to the reference frequency and is proportional to the DM according to: \[t_{\rm DM}(r_{i})=4.15\ {\rm DM}(r_{\rm top}^{-2}-r_{i}^{-2})\ {\rm ms}, \tag{2}\] \begin{table} \begin{tabular}{l l l} \hline Dataset & Gaussian & Scattering \\ \hline DM ( pc cm\({}^{-3}\)) & 0–3000 & 0–3000 \\ DM step & 50 & 500 \\ Intrinsic Width (ms) & 0.5–11.0 & 1 \& 5 \\ \(\sigma_{\rm intrinsic}\) step (ms) & 0.5 – & – \\ Scattering time \(\tau_{\rm scat}\) (ms) & 0 & 0.5–10 \\ \(\tau_{\rm scat}\) step & – & 0.5 \\ \hline Nbits & 8 \\ Nchan & 336 \\ tamp (ms) & 1 \\ \(\Delta r\) (ms) & 1 \\ High Freq Band (GHz) & 1.1–1.436 \\ Low Freq Band (GHz) & 0.764–1.1 \\ Injected S/N & 50 \\ Number of pulses & 50 \\ \hline \end{tabular} \end{table} Table 1: Simulated Dataset properties where \(v_{\rm i}\) is the centre frequency of the \(i^{\rm th}\) channel and \(v_{\rm top}\) is the centre frequency of the top channel. The intra-channel dispersion smearing is the dispersion delay time within one channel, causing the pulse to be broadened within the channel (Clarke et al., 2013). The dispersion smearing in the \(i^{\rm th}\) channel is: \[\Delta t_{\rm DM}=(8.3\times 10^{-3})\ \Delta v\ {\rm DM}\ v_{i}^{-3}{\rm ms}. \tag{3}\] Here \(\Delta v\) is the channel bandwidth in units of MHz and \(v_{\rm i}\) is the channel frequency in GHz. We interpret \(\Delta t_{\rm DM}\) as the full-width half maximum (FWHM) of a Gaussian with width \(\sigma_{\rm DM}=\Delta t_{\rm DM}/(2\sqrt{2\ln 2})\). For our simulated pulses we define the standard deviation in Eq. 1 as the quadrature sum: \[\sigma_{\rm i}=(\sigma_{\rm intrinsic}^{2}+\sigma_{\rm DM}^{2})^{1/2}. \tag{4}\] This generates a Gaussian pulse profile with a typically very small amount of dispersion smearing across the 0.1 MHz channels. Then reducing the resolution in both frequency and time by a factor of 10 in each produces dispersion smearing in each channel that is 10 times higher accounting for the majority of the effect. An example of a resultant pulse is shown in Figure 1. For this simulation the software we have developed allows for the addition of scatter broadening to the pulse profile. This is achieved by the convolution with a one-sided frequency-dependent exponential decay function. The scattering index is set as \(\alpha=4\) based on measurements from scattering in pulsar observations (Bhat et al., 2004): \[S_{i}(t)=\begin{cases}\exp\left[-\frac{(t-t_{p})}{\tau\left(v_{\rm i}/10{\rm GHz }\right)^{-\alpha}}\right],&(t\geq t_{p}),\\ 0,&(t<t_{p}).\end{cases} \tag{5}\] where 1 GHz is the reference frequency and \(t_{\rm p}=t_{0}+t_{\rm DM}(v_{\rm i})\). ### S/N rescaling The pulse is independently modelled in each channel based on equation 1. Down-sampling causes a DM smearing effect on the pulse profile, broadening its width. The S/N of an injected pulse is calculated using a match filter on the dedispersed time series in units of the standard deviation of the noise background. All of the simulation steps are done using 32-bit floats; the very last step in our pipeline is to write out 8-bit files (again to match the ASKAP output). This calculation standardizes the S/N of the simulated burst so that we can compare with the S/N measured from the pipelines. 
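Before defining the match-filter S/N, the per-channel model of Eqs. (1)-(4) and the factor-of-10 down-scaling of Section 2.1 can be summarised in a few lines (a sketch with illustrative grid sizes, DM and time window; the full simulation code is linked in the Data Availability section):

```python
import numpy as np

# Sketch of the per-channel pulse model of Eqs. (1)-(4); grid sizes are illustrative.
def dispersed_gaussian(t_ms, freqs_ghz, dm, sigma_int_ms, t0_ms, chan_bw_mhz=0.1, A=1.0):
    nu_top = freqs_ghz.max()
    # Eq. (2): dispersion delay w.r.t. the top of the band (lower channels arrive later)
    t_dm = 4.15 * dm * (freqs_ghz**-2 - nu_top**-2)                    # ms
    # Eq. (3): intra-channel smearing, interpreted as the FWHM of a Gaussian
    sigma_dm = 8.3e-3 * chan_bw_mhz * dm * freqs_ghz**-3 / (2 * np.sqrt(2 * np.log(2)))
    # Eq. (4): total per-channel width
    sigma = np.sqrt(sigma_int_ms**2 + sigma_dm**2)
    # Eq. (1): one Gaussian per channel (rows: channels, columns: time samples)
    arg = (t_ms[None, :] - t0_ms - t_dm[:, None]) / sigma[:, None]
    return A / np.sqrt(2 * np.pi * sigma[:, None]**2) * np.exp(-0.5 * arg**2)

t_fine = np.arange(0.0, 200.0, 0.1)              # 0.1-ms samples before down-scaling
nu_fine = np.linspace(1.1, 1.436, 3360)          # high band, 0.1-MHz channels
dynspec = dispersed_gaussian(t_fine, nu_fine, dm=100.0, sigma_int_ms=0.5, t0_ms=20.0)
# Down-scale by a factor of 10 in both axes: 336 x 1-MHz channels, 1-ms samples
coarse = dynspec.reshape(336, 10, -1, 10).mean(axis=(1, 3))
```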
We define the match-filter S/N in the time series as: \[{\rm S/N_{filter}}=\sqrt{\sum{\rm S_{i}^{2}}}, \tag{6}\] where \({\rm S_{i}}\) is the signal intensity of each sample in units of the standard deviation. After being time averaged, the pulse fluence (in units of Jy ms) remains constant, but S/N is not conserved. This is shown in Figure 2: for narrow Gaussian pulses with FWHM less than 1 coarse time sample (the final data time resolution, here 1 ms), the S/N decreases with burst width. The pulse S/N is again re-adjusted, by scaling, to the intended value so that all injected pulses have consistent S/N for the purpose of searching. Gaussian background noise with a standard deviation of 1 is then added to the array and the data are written to disk in sigproc filterbank format. Using the match filter equation (Eq.6), it is known that the final readout S/N of the pulse under the added noise background will have an approximate uncertainty of 1 standard deviation unit, our output results also match this result. The bursts are chosen to be scaled to S/N=50 during this process. The response of the algorithm is independent of this choice. This exact value is arbitrary, we choose to inject bright bursts to guarantee we do not miss candidates below the minimum S/N threshold, as would occur if using injections at S/N=10. (See SS A) ### Dataset setup The parameters describing the datasets are shown in Table 1. The experiment is conducted using two frequency bands that are commonly observed by ASKAP. We refer to the two frequency bands in this work as the high frequency band: 1100 MHz - 1436 MHz and the low frequency band: 764 MHz - 1100 MHz. For each band, we generated Gaussian single pulses over a large range in DM and intrinsic Figure 1: A simulated Gaussian pulse with an intrinsic width of \(\sigma\)=0.5 ms, a dispersion time delay of \(\rm DM=100\) pc cm\({}^{-3}\), a scattering time of \(\tau_{\rm 1GHz}=10\) ms, scattering index \(\alpha=4\) and flat spectral index. The time resolution of the filterbank is 1 ms, with no background noise. The slight variability in the intensity of the pulse shows the pulse energy distributed over different number of time samples in each channel caused by intra-channel dispersion and different pulse arrival time. Figure 2: The theoretical S/N scaling after resolution downscaling measured by boxcar matchfilters for pulses in the high frequency band. pulse width. For each sample in DM and width space, 50 pulses were injected to effect 50 trial iterations to sample the noise. Scattered pulses were created as a separate dataset for additional comparison studies (see SS 8). An incoherently dedispersed 'zero-DM' dataset was also created as a control sample for the Gaussian pulses. The pulses in this dataset are the exact same pulses as the original dataset except that the pulse time is aligned across the frequency band. This dataset was created to avoid the effects of incorrect dedispersion that is unavoidable for blind searches with finite numbers of DM trials. Thus it examined only the basic boxcar S/N retrieval efficiency of the pipelines. ## 3 Pipeline Setup ### Pipeline Settings For fredda, we use a basic setup only defining two parameters beyond default values. These are the block size of the data stream to ingest at any one time, and the number of DM trials (these are the _-t_ and _-d_ input flags for fredda). 
We use a block size of 8192 (16384) samples for the high (low) frequency bands to ensure that the widest pulses at the highest DMs are not split across blocks. By default fredda is configured so that boxcar widths searches range from 1 up to a maximum of 32 samples in steps of 1 (Bannister et al., 2017). For heimdall we also use mostly default parameters. We set the baseline length parameter (_-baseline_length_ flag) to be 20 s, 10 times the default as we would need to do when observing a strong pulsar, so as to avoid incorrect statistical estimates for the noise. We further turned down a setting to the friends-of-friends algorithm (the _-cand_sep_ flag) to ensure adjacent pulses did not get erroneously associated. In the case of our zero-DM pulses, we remove (using the _-rf_no_broad_ flag) the usual zero-DM filter (Eatough et al., 2009). Finally we set the DM range and set the maximum boxcar width to 32 samples, however in the case of heimdall this parameter is searched in logarithmic steps of a factor of 2. One parameter for heimdall can drastically change its output response; this is the DM tolerance parameter (_-dm_tof_ flag). This parameter defines the DM step size, the lower the tolerance the smaller the step size. A range of tolerances are used at different observatories (see Table 2); the default value is 1.25. redda uses a constant (frequency-dependent) DM step size for the full DM range, and this feature is not configurable. We perform analyses using 1.25 and 1.01 tolerances. The former represents a typical value for searches which have discovered FRBs; the latter is in some sense a fairer comparison to fredda. We show results for a DM tolerance of 1.01 for heimdall. Coarser tolerance results are worse and can be examined in the supplementary online material for this paper. ### Known boxcar efficiency and scallop responses To measure the dispersion of the candidates, fredda uses the fast dispersion measure transform (FDMT) algorithm (Zackay and Ofek, 2017), while heimdall uses a brute force dedispersion tree. In order to understand the performance of the pipeline, we investigate the single pulse search algorithms in isolation from the dedispersion algorithms. Both pipelines use boxcar filters. We first calculate the theoretical response on single channel boxcar pulses, where the search filter perfectly matches the shape of the signal. For the results shown here we ensured that the boxcars were aligned in phase with the time samples, but verified that the S/N fall-off when pulses are out-of-phase with the sampling is as expected. We use filter templates with widths of 1, 2, 4, 8 samples to measure the S/N of boxcar signals (injected S/N of 50, no noise) with boxcar width between \(1-10\) ms. The result in Figure 3 shows how using a set of fixed-width match filters, the S/N between the exact filter widths dips. The response is sharply peaked at perfect recovery; the falls of are \((W/B)^{-1/2}\) (\((B/W)^{-1/2}\)) when the boxcar width \(B\) is greater (less) than the injected pulse width \(W\). The overall response shape taken from the maximum achivable S/N drops between matching widths and is commonly known as'scalloping'. The response is different when we inject Gaussian pulses but again use the boxcar filters of the same widths, as deployed by heimdall and fredda. The peak response is not perfect but reaches the theoretical maximum of 94.3% recovery (Keane and Petroff, 2015; Morello et al., 2020), with a smoother falloff than for boxcar pulses. 
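The fall-off described above is straightforward to reproduce numerically. The sketch below (our own illustration, in the same spirit as Figures 3-5; noiseless pulses on a unit-variance background are assumed) injects top-hat pulses normalised to a matched-filter S/N of 50 and compares a power-of-two filter bank with a linearly spaced one:

```python
import numpy as np

def best_boxcar_snr(width, boxcar_widths, injected_snr=50.0, nsamp=128):
    # noiseless top-hat pulse of `width` samples, normalised so that the exactly
    # matched boxcar returns `injected_snr`
    amp = injected_snr / np.sqrt(width)
    pulse = np.zeros(nsamp)
    pulse[:width] = amp
    best = 0.0
    for b in boxcar_widths:
        # sliding boxcar sum; dividing by sqrt(b) keeps the noise at unit variance
        snr = np.convolve(pulse, np.ones(b), mode="full") / np.sqrt(b)
        best = max(best, snr.max())
    return best

log_spaced = [1, 2, 4, 8, 16, 32]        # factor-of-two widths (heimdall-style)
linear = list(range(1, 33))              # consecutive widths (fredda-style)
for w in range(1, 11):
    print(w, round(best_boxcar_snr(w, log_spaced), 1), round(best_boxcar_snr(w, linear), 1))
# the recovered fraction is sqrt(min(W, B)/max(W, B)) for the closest width B, so the
# log-spaced bank dips between its widths while the linear bank stays at 50 here
```

The dips of the log-spaced bank at intermediate widths are the 'scalloping' referred to above; consecutive widths avoid them at the cost of more filter trials.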
Using a set of boxcar filters with a power of 2 step size, we can observe that heimdall suffers between boxcar widths from the scalloping in Figure 4. Due to the low time resolution of the data ingested by the CRAFT pipeline, fredda is designed to search consecutive sample widths as this gives a more optimal smooth response curve for wider pulses as shown in Figure 5. It should be noted that both fredda and heimdall are specifically designed to lower the computation costs and latency when searching for pulses across a large range of width. For Parkes radio telescope data, where heimdall was first applied, the time resolution could be as high as 64 \(\mu\)s. Hence a wider boxcar width range is needed to search for wide pulses and heimdall enables a faster search across the range of DM and width in real-time for these observations. On the other hand, fredda was designed specifically for ASKAP data at coarse time resolution, the FDMT algorithm also optimised processing for a large ranges of dedispersion trials. \begin{table} \begin{tabular}{l l l} \hline Survey/Telescope & dm\_tol & Reference \\ \hline Parkes-SUPERB & 1.20 & Keane et al. (2018) \\ UTMOST & 1.20 & Gupta et al. (2021) \\ DSA-10 & 1.15 & Kocz et al. (2019) \\ STARE2 & 1.25 & Bochenek et al. (2020) \\ Sardinia (Targeted) & 1.01 & Pilia et al. (2020) \\ \hline \end{tabular} \end{table} Table 2: Example heimdall set up for the _-dm_tof_ input flag parameter in different real-time blind surveys and targeted searches Figure 3: Theoretical S/N recovery of S/N=50 boxcar signals using boxcar match filters with a width of 1, 2, 3, 4 and 8 samples. ## 4 Results ### Reported properties of simulated single pulses We first process the pulses in their original dispersed form. The average S/N values reported over our 50 iterations as a function of \(\sigma_{\textrm{intrinsic}}\) and DM are shown in Figure 6. The results confirm that freeda maintains a high S/N recovery across the entire parameter range. This is especially important for high DM searches where a drop in S/N is expected due to broadening caused by DM smearing. The low SN at higher DM and widths of FREDDA low band are caused by the maximum limit of 32 sample boxscars. The scalloping effect can be seen across the parameter space for heimdall in Figure 6. The response is the one shown in Figure 4, but with the peaks/troughs of the scallops at constant observed pulse width, i.e. ever lower \(\sigma_{1}\) as DM increases. The average boxcar values reported over 50 iterations are displayed in Figure 7. This shows that heimdall, as expected, is using less steps of boxscars compared to freeda. The total pulse width which consists of a contribution from both the intrinsic pulse width and the smearing width (which correlates with DM). This produces the ripple-like S/N response pattern of heimdall as it applies the same fixed boxcar over wide width ranges. ### Zero-DM pulses For comparison, we have a dataset that is incoherently dedispersed to the exact DM (intrachannel smearing remains). The results of processing these data are shown in Figure 8. The results are a direct examination of the S/N boxcar algorithm applied by the software when the pulse is correctly dedispersed but smeared, hence this is the theoretical best S/N achievable for each pipeline. It can be seen that the response is consistent to the simulated estimates in Figure 4 and Figure 5 as DM and \(\sigma_{\textrm{intrinsic}}\) correlates with the pulse FWHM. 
### FREDDA high S/N at intermediate DM We find that for some parts of the parameter space the S/N reported by FREDDA from dispersed pulses is nominally better than the theoretical maximum, and even exceeds 100% of that injected. For example, in the high frequency range, the maximum S/N reported by redda at around DM \(\sim 750\) pc cm\({}^{-3}\) reaches 102% of the injected S/N. For the low frequency dataset, the S/N bump shifts to a lower range at DM \(\sim 350\) pc cm\({}^{-3}\) and is less emphatic. This result greatly affects the statistics of FRBs discovered by freeda. For these data there are no islands in the parameter space where excessively high S/N values are obtained. We deem it to be important that this issue only appears for dispersed pulses, i.e. not for the perfectly dedispersed pulses. This would indicate that the underlying issue is related to the dedispersion process, not the pulse search part of the algorithm. We note that the channel smearing width at the top of the band for a burst at these DM ranges in each frequency band is between \(1-2\) ms (corresponding to \(1-2\) time sample in this work) as shown in Figure 10. This indicates that the S/N reported is affected by how the algorithm collects the dispersed signal for the S/N calculation, e.g. if the samples occupied by the dispersive sweep were under-estimated so too would the rms noise, resulting in an over-estimated S/N. We also examine how freeda calculates the noise across different DMs by processing 20 seconds of Gaussian white noise data under the same settings. We show in Figure 11 the number of noise candidates Figure 4: The match filter S/N of Gaussian pulses in 1-ms time resolution using Log-spaced boxcar widths. The upper figure shows the response of each respective match filter while the lower figure shows the resulting response curve by taking the maxima at each width. Figure 5: Match filter S/N of Gaussian pulses in 1-ms time resolution using linearly spaced boxcar filters in comparison to Figure 4. By using a set of uniformly spaced and consecutive width boxcar match filters up to 32 samples, it is possible to achieve a more uniform response towards wider pulses. above \(3\sigma\) in comparison to the average S/N in high frequency band data. The number of noise candidates is expected to drop slightly as DM increases, as the larger DM increases the minimum width and time delay of pulses which reduces the effective search length of the data, which will be more significant in shorter length data (20 seconds). We however see significant fluctuations in the figure: a large increase in the number of noise candidates at around \(700-800\) pc cm\({}^{-3}\), which correlates with the rise of reported S/N in fredda. We also speculate that the lower-DM S/N dips may be correlated with the dip in the number of noise candidates at \(400\) pc cm\({}^{-3}\)and \(1400\) pc cm\({}^{-3}\)respectively. This indicates that there is an error with the noise estimation function that may be the cause of the S/N inconsistency. Further investigation of the algorithm is being conducted to explain the phenomena and remove this software error in future searches. But this feature exists in all searches to date and so the response function for the ASKAP FRB sample, for instance, should include it as it is calculated here. ### Search Completeness The sensitivity of the pipelines are well characterised from this work. 
For statistical studies on ASKAP FRBs and the FRB searches conducted with heimball, we can utilise the reported S/N results to estimate the real S/N/fluence of these FRBs. We assess the search completeness at different frequencies for both pipelines based on the \(\log N-\log S\) relation of a uniformly distributed source population in a Euclidean Universe with a slope of -1.5. We relate the search flux density S\({}_{\rm search}\) with the target search sensitivity S\({}_{\rm target}\) with S\({}_{\rm search}={\rm S}_{\rm target}\times\mathcal{R}\). Here \(\mathcal{R}\) is the ratio between the reported and the injected S/N (e.g Figure 6) scaled by the S/N down-scaling response function for constant S/N pulses (see SS 2.4 and Figure 2). When the reported S/N is higher than 100% of that injected, the recovery is considered as 100% (e.g parts of the fredda results), as it has reached target sensitivity. The real flux density of the population actually detected is lower than the target sensitivity of the observations. The search completeness is therefore expressed in the terms of the cumulative N\({}_{\rm search}/{\rm N}_{\rm target}\). Our calculations are shown in Table 3. The reported S/N maps show that pipelines are only probing \(\sim 90\%\) of the designated parameter space in this work due to not reaching the real target sensitivity. Figure 6: FREDDA and HEIMDALL reported/injected S/N of single pulses in the high and low frequency band. At detection thresholds or for real time searches, the search completeness will be further lowered, according to the noise statistics. ## 5 Conclusion In this work, we utilise simulated single pulses to test the performance of the FRB search pipeline freedda and heimdall. We simulated pulses in Gaussian white noise with no interference across a parameter space of DM < 3000 \(\,\mathrm{pc\ cm^{-3}}\)and intrinsic pulse standard deviation below < 11 ms. Our results show that both heimdall and freedda perform effectively in detecting single dispersed pulses under ideal conditions. When a very fine DM tolerance of heimdall is taken, it matches the performance of freedda, with both pipelines sensitive to \(\sim 90\%\) of the search volume. However, in exchange for computation cost and low latency, if one reduces the DM tolerance for heimdall(as is typically done) the performance is worse at \(\sim 86\%\). We identify an issue in freedda where the reported S/N is higher than \(100\%\) of that injected likely due to dispersion smearing dominating pulse width. We reconfirm results from previous tests (Keane and Petroff, 2015) that pipelines using a non-consecutive set of boxcar match filters such as heimdall will receive a signal to noise penalty on pulses in between the boxcar widths. We further broadened the response calculation to include DM and a number of other additional subtle effects. We demonstrate that using consecutive boxcars (e.g freedda) is more sensitive to search for pulses in coarse time resolution real-time data. We also calculate a theoretical search completeness using the S/N response distribution over the DM and pulse width parameter space obtained in this work for heimdall and freedda. We show that Figure 7: Average reported boxcars of injected single pulses in the high and low frequency band from FREDDA and HEIMDALL. FREDDA benefits from consecutive boxcar filters and provides a very smooth boxcar measurement. 
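One way to turn the recovery maps into completeness figures like those in Table 3 (our reading of the prescription above; the exact weighting over the DM-width grid is not spelled out here) is to average \(\mathcal{R}^{3/2}\) over the searched parameter space:

```python
import numpy as np

def completeness(R_grid):
    # R_grid: reported/injected S/N ratio over the (DM, width) grid (e.g. Figure 6),
    # already rescaled by the down-scaling response of Section 2.4; values above 1
    # are treated as full recovery.  Under N(>S) ~ S^-1.5, a cell whose effective
    # flux threshold rises to S_target / R retains a fraction R**1.5 of the
    # target population.
    R = np.clip(R_grid, 0.0, 1.0)
    return float(np.mean(R**1.5))

# e.g. a uniform 95 per cent S/N recovery over the searched grid:
print(completeness(np.full((61, 22), 0.95)))   # ~0.926
```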
\begin{table} \begin{tabular}{c c c c} Pipeline & Band & dm\_tol & N\({}_{\mathrm{search}}/\mathrm{N_{target}}\) \\ \hline \multirow{2}{*}{freeda} & High & – & 92.6\% \\ & Low & – & 87.5\% \\ \hline \multirow{2}{*}{heimdall} & High & 1.01 & 90.2\% \\ & Low & 1.01 & 90.0\% \\ \hline \multirow{2}{*}{heimdall} & High & 1.25 & 86.7\% \\ & Low & 1.25 & 85.1\% \\ \hline \end{tabular} \end{table} Table 3: Search completeness compared to target sensitivity for both pipelines in high and low frequency bands. heimdall shows consistency over frequency, while freedda performs better at high frequency. fredda achieves a search completness of 92.6% at 1.1-1.4 GHz. Depending on the settings the search completeness of heimdall in that frequency band is between 86.7% to 90.2%. We believe this result demonstrates the importance of understanding the pipeline search completeness not only for initial detection but also for FRB population and statistical studies. It is essential to the FRB community that current and future large FRB surveys such as CHIME/FRB and DSA-2000 to investigate such underlying effects. The data and software for this work has been made publicly available for further simulation and analysis use on other pipelines. ## Acknowledgements HQ thanks David McKenna, Vivek Gupta, Wael Farah, Andrew Jameson and Adam Deller for useful comments on FREDDA and HEIMDALL for this work. HQ and EFK acknowledge support from Fondation MERAC (Project: SUPERHeRO, PI: Keane) and are grateful to Aaron Golden for support when working in Galway. This work was performed on the OzSTAR national facility at Swinburne University of Technology and the SKA Observatory Science Computing facilities. The OzSTAR program receives funding in part from the Astronomy National Collaborative Research Infrastructure Strategy (NCRIS) allocation provided by the Australian Government. ## Data Availability The source code to simulate FRBs for this work is publicly available at the following repository: [https://github.com/hqiu-nju/simfred/](https://github.com/hqiu-nju/simfred/). The data results in this work, supplement material and script to reproduce the simulations are available in the following repository : [https://github.com/hqiu-nju/CRATF-ICS-pipeline](https://github.com/hqiu-nju/CRATF-ICS-pipeline)
2307.13764
Phase transition in the Galam's majority-rule model with information-mediated independence
We study Galam's majority-rule model in the presence of independent behavior that can be driven intrinsically or can be mediated by information regarding the collective opinion of the whole population. We first apply the mean-field approach, obtaining an explicit time-dependent solution for the order parameter of the model. We complement our results with Monte Carlo simulations, whose findings indicate that independent opinion leads to continuous nonequilibrium order-disorder phase transitions. Finite-size scaling analysis shows that the model belongs to the mean-field Ising model universality class. Moreover, results from an approach based on the Kramers-Moyal coefficients provide insights into the social volatility.
André L. Oestereich, Marcelo A. Pires, Silvio M. Duarte Queirós, Nuno Crokidakis
2023-07-25T18:49:05Z
http://arxiv.org/abs/2307.13764v2
# Phase transition in the Galam's majority-rule model with information-mediated independence ###### Abstract We study the Galam's majority-rule model in the presence of an independent behavior that can be driven intrinsically or can be mediated by information regarding the collective opinion of the whole population. We first apply the mean-field approach where we obtained an explicit time-dependent solution for the order parameter of the model. We complement our results with Monte Carlo simulations where our findings indicate that independent opinion leads to order-disorder continuous nonequilibrium phase transitions. Finite-size scaling analysis show that the model belongs to the mean-field Ising model universality class. Moreover, results from an approach with the Kramers-Moyal coefficients provide insights about the social volatility. S 2023 1 2023 1 ###### Contents * 1 Introduction * 2 Model and methods * 2.1 Model * 2.2 Simulation details * 3 Results and discussion * 3.1 Analytical results * 3.2 Probabilistic approach * 3.3 Monte Carlo simulation and finite-size scaling * 4 Conclusions ## 1 Introduction Opinion dynamics is one of the hottest topics in Sociophysics. This recent research area uses tools and concepts of statistical physics to describe some aspects of social and political behavior [1; 2; 3; 4]. From the theoretical point of view, opinion models are interesting to physicists because they can present order-disorder transitions, hysteresis, scaling, and universality, among other typical features of physical systems, which have attracted the attention of many groups throughout the world [5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. Concerning sociologists, these methods are useful to improve forecasting by means of controlled toy-models that can be run multiple times and help fine tune field studies as well [15]. In addition to the interesting properties of opinion dynamics models, per se, such dynamics have also been applied in various fields such finance and business [16], epidemic dynamics with the presence of conflicting opinions [17; 18; 19; 20; 21; 22; 23], among others [3]. Among the most studied models, we can highlight the voter model [24; 25], the Sznajd model [26], the Deffuant model [27], the kinetic exchange opinion models [28] and the majority rule model [29; 30; 31; 32]. All the mentioned models are build based on distinct microscopic rules that control the dynamics of interactions among agents. The Sznajd model considers a two-state (up/down spins) outflow dynamics, where a group of agents sharing a common opinion influence the groups neighbors to follow group's opinion. The model presents a phase transition between the positive and the negative consensus: initial densities of spins up smaller than \(1/2\) lead eventually to all spins down, and densities greater than \(1/2\) to all spins up, i.e., consensus absorbing states where the system cannot escape [26]. On the other hand, the Deffuant model considers the opinions as continuous variables, and the interactions depend on the "distance" among pairs of opinions, which defines the concept of bounded confidence. Depending on the value of such bounded confidence, the population can evolve to consensus (all equal opinions) or to polarization (population divided in two distinct opinions). No phase transition is observed [27]. The majority rule model considers groups of \(g\) agents, that interaction through a simple rule: all agents in the group follow the local majority. 
In case of even values of \(g\), a probability \(k\) defines which opinion will win the debate inside the group. The results of the model, regarding consensus and phase transitions, are similar to the observed in the Sznajd model [30]. Some application of the majority rule model are mentioned in the following. Finally, the kinetic exchange opinion models are based on dynamics of wealth exchange. Interactions are pairwise and considers continuous opinions originally, or discrete three-state opinions (\(+1,-1\) or \(0\) states) [28]. Both formulations lead the population to undergoes order-disorder phase transitions, similar to which occurs in spin models. Absorbing states, where all agents are in the neutral state (all opinions \(0\) in the population) are observed. Observe that such absorbing states are distinct to the ones observed in the previous models, where all agents share opinion \(+1\) or \(-1\). Such kind of consensus states are observed in kinetic exchange opinion models only in very specific situations [28]. We are especially interested in the majority rule model, proposed by Serge Galam [29]. In this model, random groups of agents are chosen and after the interaction of such agents all of them assume the initial majority opinion. The model was studied by many groups [33; 34; 35; 36; 37; 38; 39; 40], and it was applied to a series of practical problems, like antivax movement [41], USA [42] and French [43] presidential elections, terrorism [44], among many others. Independence in opinion making and the failure of group influence was considered in several opinion dynamics models [45; 46; 47; 48; 49; 50; 51; 52; 53]. A recent extension of Galam's model in Ref. [33] considered the impact of independence in social dynamics. In that case, with probability \(q\), an individual acts independently of the majority opinion of their group and chooses at random one of the two possible opinions. The introduction of that condition, quantified by the parameter paves the way to the occurrence of an order-disorder nonequilibrium phase transition that does not occur in the original majority-rule model [29]. In this work, we go farther afield than the independence mechanism considered in Ref. [33], and we take into account the overall global opinion of the population when an agent decides to act independently of the group's opinion. With this we can paint a more detailed picture of the process of independence, since now agents can take global opinion into account when they ignore in-group majority. This change manages to incorporate the concept of "impersonal influence" [54] established within political science. The goal of which is to quantify the influence of the anonymous mass of individuals outside her small-world composed of family, (close) friends and acquaintances. That impersonal influence encompasses polls, reader's comments on news on digital media and the individual's general perception by consulting social networks that can have an effect on her decision making process. The strength of this new effect can be controlled by a new parameter \(g\) that gauges the impact of the global population opinion, which is the macrostate, on the individuals. This impact can be of a contrarian nature, for negative values of \(g\) where agents tend to to take opposite opinion from the population or it can be positive and reinforce the predominant opinion, thus helping the building of consensus. 
With that we go along the lines of canonical considerations over complex systems for which microscopic and macroscopic features influence one another. We develop an analytical framework in order to understand the results from numerical simulations. All results suggest the occurrence of order-disorder transitions, and the estimates of the critical exponents indicate that the model is in the mean-field Ising model universality class. ## 2 Model and methods Herein, we analyze a majority-rule model with independence; however, differently to Ref. [33] we assume a density-dependent probability \(f_{t}\) for changing the current opinion independently of the interaction group. ### Model Let us consider a population of \(N\) individuals, \(i\), with opinions \(A\) or \(B\), with respect to a given issue, that map into a stochastic variable, \(o_{i}\), such that \(o_{i}(A,t)=+1\) and \(o_{i}(B,t)=-1\). Macroscopically, we compute the density of agents with opinion A, \[\eta_{A}(t)\equiv\frac{1}{N}\sum_{i=1}^{N}\delta_{o_{i}(t),+1}, \tag{1}\] and the density of agents with opinion B, \[\eta_{B}(t)\equiv\frac{1}{N}\sum_{i=1}^{N}\delta_{o_{i}(t),-1}=1-\eta_{A}(t). \tag{2}\] The mean opinion, from which we establish the macroscopic state of the system reads, \[m(t)\equiv\frac{1}{N}\sum_{i=1}^{N}o_{i}(t)=\eta_{A}(t)-\eta_{B}(t). \tag{3}\] The dynamics of each individual is governed at each time step, \(t\), by the following set of rules: * An individual with opinion \(A\) can change to opinion \(B\) through two mechanisms: * with probability \(q\) the individual acts independently of their group. In that case, they change their opinion with probability \(f^{AB}(t)=f(1-g\,m(t))\); * otherwise the individual does not act on their own, then there is a probability \(1-q\) that they change their opinion according to a local majority-rule, \(A+2B\to 3B\). * On the other hand, an individual with opinion \(B\) can flip to opinion \(A\) through 2 mechanisms: * with probability \(q\) they decide to whether act independently of their group or not. In that case, the agent will change their opinion with probability \(f^{BA}(t)=f(1+g\,m(t))\); * else if the individual does not act on their own, then there is a probability \(1-q\) that they change their opinion according to a local majority-rule, \(B+2A\to 3A\). The rules above are translated into the transition matrix, \[W(t)\equiv\begin{bmatrix}w_{1}&w_{2}\\ w_{3}&w_{4}\end{bmatrix}=\begin{bmatrix}q\,f^{AB}(t)&1-q\\ 1-q&q\,f^{BA}(t)\end{bmatrix}. \tag{4}\] Note that the definitions \(f(t)^{AB}=f(1-g\,m(t))\) and \(f(t)^{BA}=f(1+g\,m(t))\) imply that if \[\eta_{A}(t)>\eta_{B}(t)\Rightarrow m(t)>0\Rightarrow f(t)^{BA}>f(t)^{AB}\] as expected. Let us have a closer look at the parameters involved in the model: the parameter \(q\) is related to the backbone of our approach establishing the relative weight of the local peer-pressure, \(p=1-q\), leading to a decision-making process wherein the individual either submits to the local majority (a conformist behavior) or the decision-making dynamics is carried out on her own. The probability \(f^{XY}(t)\) - related to the latter case - is naturally shaped by the assessment of the state of affairs provided by the global state, \(m(t)\), so that a standard propensity to change opinion through reflection, \(f\), is either boosted or mitigated. Epistemologically, the shaping of the probability is equivalent to the process of risk-taking versus risk-aversion described within prospect theory [55]. 
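As a concrete illustration of the rules and of the transition matrix \(W(t)\), the following minimal Python sketch performs one synchronous update of the whole population. It is our own illustration (the function and variable names are not from the paper), it assumes \(f(1+|g|)\leq 1\) so that the flip probabilities remain well defined, and the interaction group of three is formed by the focal agent plus two randomly drawn partners.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_step(opinions, q, f, g):
    """One synchronous Monte Carlo step of the model (our own sketch).
    opinions is an array of +1 (opinion A) and -1 (opinion B) entries;
    we assume f*(1+|g|) <= 1 so that all flip probabilities are valid."""
    N = opinions.size
    m = opinions.mean()                       # global opinion m(t), Eq. (3)
    p_A_to_B = f * (1.0 - g * m)              # f^{AB}(t)
    p_B_to_A = f * (1.0 + g * m)              # f^{BA}(t)
    new = opinions.copy()
    for i in range(N):
        if rng.random() < q:                  # independent behaviour
            p = p_A_to_B if opinions[i] == 1 else p_B_to_A
            if rng.random() < p:
                new[i] = -opinions[i]
        else:                                 # majority rule in a group of three
            # two random partners (occasional self-inclusion is negligible for large N)
            j, k = rng.integers(N), rng.integers(N)
            s = opinions[i] + opinions[j] + opinions[k]
            new[i] = 1 if s > 0 else -1       # the sum of three +-1 values is never zero
    return new

# usage: relax the population and monitor the collective opinion
N, q, f, g = 1000, 0.2, 0.5, 0.2
opinions = rng.choice(np.array([-1, 1]), size=N)
for _ in range(400):
    opinions = mc_step(opinions, q, f, g)
print("stationary |m| ~", abs(opinions.mean()))
```

For the parameter values used above, with \(q\) well below the critical value derived in Section 3, the population relaxes to a state with a well-defined majority.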
Herein, we assume a linearized form \(f^{XY}(m(t))=f+v\,m(t)+\mathcal{O}(m(t)^{2})\), with \(v=\mp f\,g\) for \(XY=AB\) and \(BA\), respectively; depending on the sign of \(v\) we have either a follower or a contrarian impact. If \(g=0\), then \(f^{XY}(t)=f\) and we recover the results of [33].

### Simulation details

Our Monte Carlo simulations are structured within an agent-based framework, as individuals constitute the underlying object of study in social theories [56]. In our algorithm we consider a computational array of size \(N\) to store the opinion of each agent. At each time \(t\) we apply a Monte Carlo step (MCS), which represents a complete iteration through all agents. During each interaction, the simulation chooses a group of 3 agents at random, considering their current opinions and applying specific rules. These rules are summarized in Table 1 and define how an agent's opinion may change based on various conditions and probabilities. After each MCS we implement a simultaneous-parallel update. This means that the updated opinions are applied to all agents at the same time, ensuring that the changes in opinions are synchronized across the entire population.

\begin{table}
\begin{tabular}{|l l|} \hline \multicolumn{2}{|c|}{Agent-based rules of our model} \\ \hline Each agent with opinion \(A\) can flip to opinion \(B\) through two mechanisms: \\ \hline **1.**\(A\to B\) & \(p_{A\to B}^{(1)}=qf^{AB}(t)\) \\ \hline **2.**\(A+2B\to 3B\) & \(p_{A\to B}^{(2)}=(1-q)\,\eta_{B}(t)^{2}\) \\ \hline Each agent with opinion \(B\) can flip to opinion \(A\) through two mechanisms: \\ \hline **1.**\(B\to A\) & \(p_{B\to A}^{(3)}=qf^{BA}(t)\) \\ \hline **2.**\(B+2A\to 3A\) & \(p_{B\to A}^{(4)}=(1-q)\,\eta_{A}(t)^{2}\) \\ \hline \end{tabular}
\end{table} Table 1: Agent-based rules of the model and the corresponding flip probabilities.

## 3 Results and discussion

### Analytical results

Using the mean-field approach we can obtain a set of ordinary differential equations that describes the time evolution of the competing opinions in the population. To derive the rate of change of opinions \(A\) and \(B\) at time \(t\) we need to consider that each opinion is influenced by the intrinsic independent behavior (controlled by the parameter \(f\)), the information-driven independence (modulated by the parameter \(g\)), and the local interactions. Thus, based on the rules summarized in Table 1 we obtain the following mean-field equations:

\[\frac{d\eta_{A}(t)}{dt} =q\,f^{BA}(t)\,\eta_{B}(t)+(1-q)\,\eta_{A}(t)^{2}\,\eta_{B}(t)-q\,f^ {AB}(t)\,\eta_{A}(t)-(1-q)\,\eta_{A}(t)\,\eta_{B}(t)^{2}, \tag{5}\] \[\frac{d\eta_{B}(t)}{dt} =q\,f^{AB}(t)\,\eta_{A}(t)+(1-q)\,\eta_{B}(t)^{2}\,\eta_{A}(t)-q\, f^{BA}(t)\,\eta_{B}(t)-(1-q)\,\eta_{B}(t)\,\eta_{A}(t)^{2},\] (6) \[f^{AB}(t) =f(1-g\,m(t)),\] (7) \[f^{BA}(t) =f(1+g\,m(t)). \tag{8}\]

From Eqs. (1) - (3), namely that

\[\eta_{A}(t)=\frac{1}{2}(1+m(t)),\quad\eta_{B}(t)=\frac{1}{2}(1-m(t)),\quad\eta _{A}(t)\,\eta_{B}(t)=\frac{1}{4}\left(1-m(t)^{2}\right), \tag{9}\]

the set of differential equations yields the ordinary differential equation for the macroscopic variable, \(m(t)\),

\[\frac{dm(t)}{dt}=-2\,q\,f\,(1-g)m(t)+(1-q)\,m(t)\frac{1-m(t)^{2}}{2}. \tag{10}\]

In other words, starting from a given condition, \(m(0)=m_{0}\), the macroscopic state evolves and eventually reaches a stationary state \(dm/dt=0\); that state is bounded below by the maximal state of disagreement, \(m=0\), and above by unanimity, \(|m|=1\).
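Equation (10) is simple enough to be integrated directly. The following short sketch (our own illustration, not the authors' code; the parameter values are arbitrary) integrates it with a forward-Euler scheme and compares the long-time value of \(m\) with the stationary solution given below in Eq. (14).

```python
import numpy as np

def dm_dt(m, q, f, g):
    """Right-hand side of the mean-field equation (10)."""
    return -2.0 * q * f * (1.0 - g) * m + (1.0 - q) * m * (1.0 - m**2) / 2.0

def integrate(m0, q, f, g, dt=1e-3, t_max=200.0):
    """Forward-Euler integration of dm/dt; a minimal sketch, not the authors' code."""
    m = m0
    for _ in range(int(t_max / dt)):
        m += dt * dm_dt(m, q, f, g)
    return m

q, f, g, m0 = 0.2, 0.5, 0.2, 0.1
m_inf = integrate(m0, q, f, g)
m_c = np.sqrt(max(0.0, 1.0 - 4.0 * f * q * (1.0 - g) / (1.0 - q)))   # stationary value, Eq. (14)
print(f"numerical m(t -> inf) = {m_inf:.4f}, analytical m_c = {m_c:.4f}")
```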
We thus expect that, for certain conditions dictated by the parameters of the problem, the system can evade the final stationary state of disagreement and end up in a situation for which \(|m|\neq 0\), i.e., a majority of individuals favoring A(B). Physically, \(m(t)\) is thus defined as an order parameter. That becomes clearer when we consider that the population adjusts its macrostate \(m\) aiming at minimizing its so-called Hamiltonian function. That is best understood when we recast the previous equation into

\[\frac{dm(t)}{dt}=-\frac{\partial\mathcal{H}}{\partial m}=r\,m(t)+u\,m(t)^{3} \tag{11}\]

where

\[r=\frac{1}{2}\left\{1-q[4f(1-g)+1]\right\},\qquad u=\frac{1}{2}(q-1). \tag{12}\]

Therefore, the analytical form of \(\mathcal{H}\),

\[\mathcal{H}(m)=-\frac{1}{2}r\,m^{2}-\frac{1}{4}u\,m^{4}, \tag{13}\]

dictates not only the dynamics of the parameter \(m\), but also its stable outcome. First, since \(q\leq 1\) and thus \(u\leq 0\), the stability of the process is assured, as the fourth-order term of \(\mathcal{H}\) is non-negative. In the limit \(q\to 1\), the agents act independently from the local group - and rely entirely on their assessment of the position of the whole population - so that \(\lim_{q\to 1}u=0^{-}\) while \(r\) becomes negative (for \(g<1\)), and only the disordered minimum at \(m=0\) survives. For sufficiently strong peer pressure (small \(q\)), however, \(r>0\), which opens the door to non-trivial minima of \(\mathcal{H}\) at \(m_{c}\neq 0\). Since \(u<0\), the emergence of those \(m\neq 0\) minima is related to the change of convexity of \(\mathcal{H}\) at \(m=0\) from \(\frac{d^{2}\mathcal{H}}{dm^{2}}|_{m=0}>0\) to a concave profile \(\frac{d^{2}\mathcal{H}}{dm^{2}}|_{m=0}<0\). The fulfillment of the concave condition implies

\[|m|=m_{c}=\sqrt{-\frac{r}{u}}=\sqrt{1-\frac{4fq(1-g)}{1-q}},\qquad|m|\leq 1, \tag{14}\]

the graphical representation of which can be seen in Figure 1. After plugging in the relations of Eq. (12), near the transition it reads

\[m\sim(q_{c}-q)^{\beta} \tag{15}\]

where \(\beta=1/2\) and

\[q_{c}=\frac{1}{1+4f(1-g)}, \tag{16}\]

which defines the critical peer-pressure relative weight, \(p_{c}\equiv 1-q_{c}\). Note that instances where \(g<0\) imply a smaller value of \(q_{c}\) and therefore a larger \(p_{c}\), in what we regard as a freethinker-prone behavior; on the other hand, when \(g>0\) we regard it as a conformist-prone case. Eq. (15) with \(\beta=1/2\) suggests a phase transition in the same universality class as the mean-field Ising model. We will discuss this point in more detail in the following, when we exhibit the results of Monte Carlo simulations of the model. The stationary value of Eq. (14) corresponds to the limit \(t\to\infty\) of the solution of Eq. (11), which reads

\[m(t)=\left[\exp(-2rt)\left(\frac{1}{m_{0}^{2}}+\frac{u}{r}\right)-\frac{u}{r} \right]^{-1/2}=m_{c}\left[\exp(-2rt)\left(\frac{m_{c}^{2}}{m_{0}^{2}}-1\right) +1\right]^{-1/2}, \tag{17}\]

where \(m_{0}\) is the macroscopic initial condition of the system. We can further explore the dynamical behavior of the system, especially when the parameters are set at their critical values and one lets the system evolve. In that case, two situations deserve particular attention: when the initial state corresponds to unanimity, \(m_{0}=1\), the factor given by \(\exp[-2rt]\left(\frac{m_{c}^{2}}{m_{0}^{2}}-1\right)\) in Eq. (17) can be seen as a perturbation, whereas the same factor dominates Eq. (17) when the initial condition is that of full disagreement (\(m_{0}\to 0\)). That results in two quite different behaviors of \(m(t)\) in the short term.

Figure 1: Stationary state solution \(|m|\) in the plane \(f\) vs \(q\), for typical values of \(g\).
The panels are graphical representations of Equation (14). As negative values of \(g\) imply a contrarian effect and positive a follower we notice an increase in the ordered region with an increase in \(g\). ### Probabilistic approach The previous deterministic approach can be further seasoned when fluctuations are taken into account. Recalling for a population of \(N\) individuals the macroscopic state, \(m\), changes by \(\mu=\pm 2/N\) every time an individual switches their opinion with each opinion fraction varying by \(1/N\), if we focusing on the time evolution of the fraction of individuals with opinion \(A\) at time \(t+1\) it reads, \[\eta_{A}(t+1)-\eta_{A}(t)=\frac{1}{N}p^{\dagger}(t)-\frac{1}{N}p(t), \tag{18}\] where \[p^{\dagger}(m,t)=w_{1}\,\eta_{A}+w_{2}\,\eta_{A}\,\eta_{B}^{2}, \tag{19}\] corresponds to the probability that the number of people with opinion \(A\) increases by one individual whereas \[p(m,t)=w_{3}\,\eta_{B}\,\eta_{A}^{2}+w_{4}\,\eta_{B}, \tag{20}\] gives one the probability that the number of people with opinion \(A\) diminishes by one individual. These quantities are identified as operators of creation and destruction in the probability space [57]. Taking into consideration that \(p^{\dagger}\) and \(p\) correspond to an increment and a reduction of the macroscopic state by \(\mu=2/N\), respectively, we can establish the following master equation for the evolution of \(m\) for a time step \(\epsilon=1/N\), \[\eta(m,t+\epsilon)=p^{\dagger}(m-\mu,t)\,\eta(m-\mu,t)+p(m+\mu,t)\,\eta(m+\mu, t)+\bar{p}(m,t)\eta(m,t), \tag{21}\] with \(\bar{p}\equiv 1-p^{\dagger}-p\) quantifying to the maintenance of the macroscopic state. Formally, Eq. (21) fits within the (normalized) one-step class of stochastic processes and thus, \[\eta(m,t)=\exp[\mathbf{L}_{KM}(m,t)]\eta(m_{0},0)\qquad\qquad\eta(m,0)=\delta( m-m_{0}). \tag{22}\] where, bearing in mind we are computing a normalized quantity and not simply \(N_{A}-N_{B}\), the Kramers-Moyal operator reads \[\mathbf{L}_{KM}(m,t)=\sum_{n=1}^{\infty}\frac{(-\mu)^{-n}}{n!}\frac{\partial ^{n}}{\partial m^{n}}\Big{[}p_{m,t}+(-1)^{n}\,p_{m,t}^{\dagger}\Big{]}. \tag{23}\] In considering \(\mu\to 0\) so that the variance of \(m_{t}\) is kept fixed and equal to \(\sigma_{m}^{2}(t)\), we neglect the terms of order \(n>2\) and the formal solution gets the form of a Fokker-Planck Equation, \[\frac{\partial\eta(m,t)}{\partial t}=-\mu^{-1}\frac{\partial}{\partial m}[D_{1 }(m,t)\,\eta(m,t)]+\frac{\mu^{-2}}{2}\frac{\partial^{2}}{\partial m^{2}}[D_{2 }(m,t)\,\eta(m,t)]. \tag{24}\] Therefrom we identify, \[D_{1}(m,t)\propto p(m,t)-\,p^{\dagger}(m,t) \tag{25}\] that defines the shape of the effective potential wherein the macroscopic dynamics of the order parameter evolves in time; on the other hand, \[D_{2}(m,t)\propto p(m,t)+\,p^{\dagger}(m,t) \tag{26}\] characterizes the magnitude of the fluctuations, which in the present social system we associate to the concept of social volatility [58]. Plugging the previous relations for the probability creation/annihilation operators into Eqs. (25)-(26) we finally get, \[D_{1}(m,t)\propto r\,m(t)+u\,m(t)^{3} \tag{27}\] as given by the effective Hamiltonian Landau approach. 
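Both coefficients introduced in Eqs. (25)-(26) can be estimated directly from simulated trajectories of \(m(t)\). The sketch below - our own illustration, not code from the paper - simulates the one-step process defined by \(p^{\dagger}\) and \(p\), compares the measured conditional drift of \(m\) with the Landau form \(r\,m+u\,m^{3}\) of Eq. (27) (with \(r\) and \(u\) as in Eq. (12)), and records the variance of the increments as a proxy for the volatility associated with \(D_{2}\).

```python
import numpy as np

rng = np.random.default_rng(1)

def step_probs(m, q, f, g):
    """Creation/annihilation probabilities of Eqs. (19)-(20) (our notation)."""
    eta_A, eta_B = (1.0 + m) / 2.0, (1.0 - m) / 2.0
    p_up = q * f * (1.0 + g * m) * eta_B + (1.0 - q) * eta_A**2 * eta_B  # m -> m + 2/N
    p_dn = q * f * (1.0 - g * m) * eta_A + (1.0 - q) * eta_A * eta_B**2  # m -> m - 2/N
    return p_up, p_dn

def sweep(m, N, q, f, g):
    """One Monte Carlo sweep (N micro-steps) of the one-step process for m."""
    for _ in range(N):
        p_up, p_dn = step_probs(m, q, f, g)
        x = rng.random()
        if x < p_up:
            m += 2.0 / N
        elif x < p_up + p_dn:
            m -= 2.0 / N
    return m

# collect increments of m over single sweeps, starting from several initial states
N, q, f, g = 200, 0.2, 0.5, 0.2
ms, dms = [], []
for _ in range(5):                             # independent repetitions
    for m0 in np.linspace(-0.9, 0.9, 13):
        m = m0
        for _ in range(200):
            m_new = sweep(m, N, q, f, g)
            ms.append(m)
            dms.append(m_new - m)
            m = m_new
ms, dms = np.array(ms), np.array(dms)

# conditional drift vs. the Landau form r*m + u*m^3 of Eq. (27)
r, u = 0.5 * (1.0 - q * (4.0 * f * (1.0 - g) + 1.0)), 0.5 * (q - 1.0)
for m_ref in (-0.5, 0.0, 0.5):
    sel = np.abs(ms - m_ref) < 0.1
    if sel.any():
        m_loc = ms[sel].mean()
        print(f"m ~ {m_loc:+.2f}: measured drift {dms[sel].mean():+.4f}, "
              f"theory {r * m_loc + u * m_loc**3:+.4f}, "
              f"increment variance {dms[sel].var():.5f}")
```

Consistently with the discussion that follows, the measured increment variance decreases as \(|m|\) grows toward unanimity.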
Regarding the second order term, \[D_{2}(m,t)\propto 1+q\,(4f-1)-\left[1+q(4f\,g-1)\right]m(t)^{2} \tag{28}\] Equation (28) indicates a macroscopic feature of this model that is worth noting: the magnitude of the fluctuations - i.e., the social volatility - exhibited by the system depends on its state in such a form that as \(m\) increases and approaches \(m=1\) (unanimity), the volatility decreases. On the contrary, when the group shows strong disagreement, \(m\approx 0\), they approach the sate of maximal volatility. That behavior contrasts with what is measured in quantitative finance where the realized volatility is directly proportional to price variations [59]. If we heed that within a physical context \(D_{2}\) is related to the (local) temperature of a physical system, we assert that our model is able to capture the cooling down and the heating up of a social system as it approaches or departs from consensus. Alternatively, the fluctuations given by \(D_{2}(m,t)\) can be understood from another perspective: bearing in mind that \(p^{\dagger}\) and \(p\) respectively correspond to an increment and a reduction of the macroscopic state by \(\mu=2/N\), we then interpret \(D_{1}\) as the imbalance between the likelihood of increment and reduction of \(p(m)\) whereas \(D_{2}\) is related to the average over increment and reduction, which is clearly non-vanishing. That is related to the microscopic change of opinion that each individual can make and which corresponds to a source of the macroscopic fluctuations that end up being expressed by the social volatility. Complementary, those fluctuations yield an entropy production that can be associated with the total information output due to the microscopic interaction between agents. Therefore, around a consensus we measure a less volatile state as it is less entropic and vice-versa. ### Monte Carlo simulation and finite-size scaling In Fig.2 (a) we provide a comparison between the analytical solution, as elucidated in the preceding section, and the Monte Carlo simulation for Galam's model with information-mediated independence. We plot the stationary values of the macrostate obtained from simulations for a population size \(N=10^{4}\) and from Eq. (14) for typical values of \(g\) and fixed \(f=0.5\). We see a good agreement between the results obtained in both ways. We observe clear order-disorder phase transitions as well, which denotes a collective change in the population behavior. These transitions denote a macroscopic change from the so-called ordered state characterized by the presence of a well-defined majority (\(|m|>0\)) to the disordered-state characterized by the absence of a clear majority (\(|m|\sim 0\)). When \(q=0\), we recover the usual result obtained from Galam's model., i.e., a consensus in the population (all agents sharing opinion A or B). In order to verify the universality class of the model, we have performed numerical simulations for distinct population sizes and applied a so-called scaling analysis. In addition to the order parameter, \(m\), we have also computed the fluctuations \(\chi\) of the order parameter (or "susceptibility"), defined as \[\chi\equiv N\,(\langle m^{2}\rangle-\langle m\rangle^{2}) \tag{29}\] and the Binder cumulant \(U\), defined as [60] \[U\equiv 1-\frac{\langle m^{4}\rangle}{3\,\langle m^{2}\rangle^{2}}\,. \tag{30}\] As an example, we exhibit in Fig. 
2 the finite-size scaling (FSS) analysis of the order parameter, the susceptibility and the Binder cumulant for four lattice sizes, for \(f=0.5\) and \(g=0.2\). We have identified the critical value \(q_{c}\) by the crossing of the Binder cumulant curves, as can be seen in the main panel of Fig. 2 (b). We have obtained \(q_{c}\approx 0.385\), in excellent agreement with the analytical result of Eq. (16), which gives \(q_{c}\approx 0.3846\). The critical exponents \(\beta\), \(\gamma\) and \(\nu\) were found by the best collapse of data. The FSS analysis was based on the standard relations,

\[m(q,N) \sim N^{-\beta/\nu} \tag{31}\] \[\chi(q,N) \sim N^{\gamma/\nu}\] (32) \[U(q,N) \sim constant\] (33) \[q_{c}(N)-q_{c} \sim N^{-1/\nu} \tag{34}\]

Considering the above equations, we obtained \(\beta\approx 0.5\), \(\gamma\approx 1.0\) and \(\nu\approx 2.0\). The data collapses are exhibited in Fig. 2, panels (b) - (d). We also verified that the same exponents are obtained for other values of \(f\) and \(g\). The results suggest that the model belongs to the mean-field Ising model universality class, and that it is in the same universality class as the Sznajd model and the kinetic exchange opinion models in the presence of independence [45; 48; 51; 61]. The above results, namely Eq. (34), bridge with the dynamical analysis, since Eq. (17) sets up a relaxation time scale \(\tau\) of the macroscopic parameter that is inversely proportional to \(r\). Comparing the exponential factor in Eq. (17) with the usual relaxation term \(e^{-t/\tau}\) we obtain the relaxation time \(\tau=1/(2r)\). Then, plugging Eq. (16) into the definition of \(r\) we get \(r=(q_{c}-q)/(2q_{c})\) and finally

\[\tau\sim(q_{c}-q)^{-1}. \tag{35}\]

Explicitly, approaching criticality we have a relaxation time scale of the order parameter \(m(t)\) that diverges with the same scale-invariant functional form as the correlation length. Because the propagator given by the Fokker-Planck equation rules all relaxation quantities of \(m(t)\), the same slowing down near the transition is found for the self-correlation function of \(m(t)\), \(\langle m(t^{\prime})\,m(t)\rangle\sim\exp[-|t^{\prime}-t|/\tau]\).

Figure 2: Results of Monte Carlo simulations of the model for \(f=0.5\). (a) Stationary order parameter \(m\) (collective opinion) as a function of \(q\). The symbols come from simulations for \(N=10^{4}\) and typical values of \(g\), and the lines are obtained from the analytical result, Eq. (14). We also show the results for fixed \(g=0.2\) and distinct population sizes \(N\), and the corresponding finite-size scaling analysis for the Binder cumulant \(U\) (panel (b)), the order parameter \(m\) (panel (c)) and the susceptibility \(\chi\) (panel (d)). We obtained \(q_{c}\approx 0.385\), \(\beta\approx 0.50\), \(\gamma\approx 1.00\) and \(\nu\approx 2.00\). Data are averaged over 100 simulations.

## 4 Conclusions

In this work we have studied an extension of Galam's majority-rule model. For this purpose, we introduced the mechanism of independence, considering that individuals can act independently of their interaction groups with a given probability \(q\) that is complementary to the peer-pressure weight, \(p=1-q\). In addition, the individual inspects the global population opinion, and such opinion affects the probability with which they change their opinion when acting independently. When an individual does not act independently of the group, they follow the local majority opinion, as in the original Galam model.
We have observed that the independence mechanism leads the population to undergo a critical change of behavior at \(q=q_{c}\), in which a minimal consensus \(m\neq 0\) - where \(m\) is the order parameter of the model - optimizes the overall state of the population better than the case of complete disagreement. Within that phase-transition context, we have derived an expression for the order parameter \(m(t)\). From its stationary solution, we have obtained the critical behavior \(m\sim(q_{c}-q)^{\beta}\) with \(\beta=1/2\). We have also seen that, as one approaches the critical transition, the relaxation of the overall state becomes ever slower, with its typical time scale \(\tau\sim(q_{c}-q)^{-1}\). The other canonical critical exponents \(\gamma\) and \(\nu\) were obtained through Monte Carlo simulations. From the set of critical exponents, we have verified that the model is in the ubiquitous universality class of the mean-field Ising model. That result is expected since, as opinions are mapped into random variables, \(o_{i}=\pm 1\), the phase transition corresponds to a \(\mathbb{Z}_{2}\) symmetry breaking, of which the Ising model is the quintessential case. We mention that, while our model is not defined by a physical Hamiltonian, the identification with the Ising universality class arises from a series of results coming from three methods: the mean-field approach, Monte Carlo simulations, and finite-size scaling analysis. From the microscopic dynamics, we have derived the probabilistic evolution of \(m\). Those results allowed us to confirm the critical behavior of \(m\) from the first Kramers-Moyal coefficient and, from the second, the nature of the fluctuations, which can be coined social volatility. In respect of the latter, we have learned that the magnitude of the volatility depends on the state of the population in an inverse-proportion fashion, so that herding in opinion tends to induce less agitation in the population. Further insights into this subject matter will be discussed in future work.
2306.04131
A chemotaxis-Navier-Stokes system with dynamical boundary conditions
A chemotaxis-Navier-Stokes system is studied under dynamical boundary conditions in a bounded convex domain with smooth boundary. This models the interaction of populations of swimming bacteria with the surrounding fluid. The existence of a global weak solution is proved using multiple layers of approximations and Rothe's method for the time discretization.
Baili Chen
2023-06-07T04:01:24Z
http://arxiv.org/abs/2306.04131v1
# A chemotaxis-Navier-Stokes system with dynamical boundary conditions ###### Abstract. A chemotaxis-Navier-Stokes system is studied under dynamical boundary conditions in a bounded convex domain \(\Omega\in\mathbb{R}^{3}\) with smooth boundary. This models the interaction of populations of swimming bacteria with the surrounding fluid. The existence of a global weak solution is proved using multiple layers of approximations and Rothe's method for the time discretization. Key words and phrases:Weak solution; Dynamical boundary conditions; Chemotaxis; Navier-Stokes; Rothe's method. 2020 Mathematics Subject Classification: 35A01, 35D30, 35Q30, 35Q92, 35A35 ## 1. Introduction The Chemotaxis-Navier-Stokes system \[\left\{\begin{array}{ll}\partial_{t}c-\alpha\Delta c+u\cdot\nabla c+nf(c)=0& \text{a.e. in}&\Omega\times(0,T),\\ \partial_{t}n-\nabla\cdot(\beta\nabla n-g(n,\ c)\nabla c)+u\cdot\nabla n=0& \text{a.e. in}&\Omega\times(0,T),\\ \partial_{t}u-\nabla\cdot(\xi\nabla u)+(u\cdot\nabla)u+\nabla p=n\nabla\sigma& \text{a.e. in}&\Omega\times(0,T),\\ \nabla\cdot u=0&\text{a.e. in}&\Omega\times(0,T).\end{array}\right. \tag{1.1}\] in a bounded convex domain with smooth boundary describes an oxygen-driven bacteria suspension swimming in an incompressible fluid like water. The above system, which was first introduced by Tuval et al. [10], consists of three coupled equations: an equation for the concentration of oxygen \(c\), an equation for the population density \(n\) of the bacteria, and the Navier-Stokes equation describing the water flow \(u\). Equation (1.1) has been studied by several authors (e.g. [1], [2], [3]). In their works, the equations are endowed with Neumann or Robin boundary conditions for both \(c\) and \(n\). In this work, we extend the model by assuming dynamical boundary condition for oxygen concentration \(c\), which is given by \[\partial_{t}c=\Delta_{\tau}c-b\partial_{\eta}c\quad\text{on}\quad\partial\Omega \tag{1.2}\] Here we assume there is an oxygen source acting on the boundary, which depends on the oxygen flux \(\partial_{\eta}c\) across the boundary. The Laplace-Beltrami operator \(\Delta_{\tau}\) on the boundary describes the diffusion of oxygen along the boundary. The derivation of dynamical boundary conditions is introduced in [4]. To the best of our knowledge, Chemotaxis-Navier-Stokes system with the above dynamical boundary condition has not been addressed in the existing literature. To complete the system, we introduce the following boundary conditions for bacterial and fluid field. \[\beta\partial_{\eta}n=g(n,\ c)\partial_{\eta}c\quad\text{on}\ \partial\Omega \tag{1.3}\] \[u=0\quad\text{on}\quad\partial\Omega \tag{1.4}\] together with the initial conditions \[c(x,\ 0)=c_{0}(x),\ n(x,\ 0)=n_{0}(x),\ u(x,\ 0)=u_{0}(x) \tag{1.5}\] The aim of this paper is to derive the existence of a global weak solution of the system (1.1) - (1.5), which describes the interaction between the bacteria density \(n\), the oxygen concentration \(c\), the fluid velocity field \(u\) and the associated pressure \(p\) in a bounded convex domain \(\Omega\) with smooth boundary. We set the gradient of gravitational potential \(\sigma\) to be constant (i.e. \(\nabla\sigma\equiv\text{const}\)). The existence of a global weak solution is proved by the discretization in time (Rothe's method). This technique was also used in previous papers (e.g. [5], [7], [8], [9]) to solve other types of problems. 
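As a point of orientation for readers unfamiliar with Rothe's method: the time derivative is replaced by a backward difference, so that at each discrete time level one solves a stationary (elliptic) problem. The following Python sketch is purely illustrative and is not the scheme analyzed in this paper; it applies the idea to the one-dimensional heat equation \(u_t=u_{xx}\) on \((0,1)\) with homogeneous Dirichlet data, assuming a standard centered finite-difference discretization in space, so that each time level requires one linear solve \((I+kA)u^{m}=u^{m-1}\).

```python
import numpy as np

def rothe_heat_1d(u0, k=1e-3, h=1e-2, n_steps=200):
    """Rothe time discretization (implicit Euler) for u_t = u_xx on (0, 1)
    with u(0) = u(1) = 0; u0 holds the values at the interior grid points.
    A toy illustration of the method only, not the scheme of this paper."""
    n = u0.size
    # standard second-difference matrix approximating -d^2/dx^2
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    M = np.eye(n) + k * A                 # from (u^m - u^{m-1})/k + A u^m = 0
    u = u0.copy()
    for _ in range(n_steps):
        u = np.linalg.solve(M, u)         # one stationary (linear) solve per time level
    return u

x = np.linspace(0.01, 0.99, 99)           # interior nodes for h = 0.01
u0 = np.sin(np.pi * x)                    # eigenmode: exact solution decays as exp(-pi^2 t)
uT = rothe_heat_1d(u0)
t_final = 200 * 1e-3
print("max |numerical - exact| =", np.max(np.abs(uT - np.exp(-np.pi**2 * t_final) * u0)))
```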
To fix the notation, we denote by \(H^{m}(\Omega)\) the standard Sobolev space in \(L^{2}(\Omega)\) with derivative of order less than or equal to \(m\) in \(L^{2}(\Omega)\). Let \(D(\Omega)\) be the space of \(C^{\infty}\) function with compact support contained in \(\Omega\). The closure of \(D(\Omega)\) in \(H^{m}(\Omega)\) is denoted by \(H^{m}_{0}(\Omega)\). Let \(\Upsilon\) be the space \[\Upsilon=\{u\in D(\Omega),\nabla\cdot u=0\}\] The closure of \(\Upsilon\) in \(L^{2}(\Omega)\) and in \(H^{1}_{0}(\Omega)\) are denoted by \(H\) and \(V\) respectively. We denote by \(L^{r}(0,\ T;\ X)\) the Banach space of all measurable functions \[v:\ [0,\ T]\to X\] with norm \[\begin{array}{c}\|v\|_{L^{r}(0,\ T;\ X)}=\left(\int_{0}^{T}\|v\|_{X}^{r}dt \right)^{\frac{1}{r}},\ \mbox{for}\ 1\leq r<\infty\\ \mbox{or}\ \ \ \|v\|_{L^{\infty}(0,\ T;\ X)}=\mbox{ess sup}_{0\leq t\leq T}\|v\|_{X}, \ \mbox{for}\ r=\infty.\end{array}\] The trace of a function is denoted by the subscript \(\tau.\) For example, \(c_{\tau}\) denotes the trace of function \(c.\) Throughout this paper, we denote by \(M\) and \(C\) the constants whose values may be different at each occurrence. Before stating the main result, we make the following assumptions throughout this paper: \[\begin{array}{ll}(H_{1})&f(\cdot)\ in\ C^{0}(R)\qquad\mbox{ with }f_{0}\leq f(\cdot)\leq f_{1};\ f_{0},\ f_{1}\in R^{+}.\\ (H_{2})&g(\cdot,\ \cdot)\ in\ C^{1}(R^{2})\quad\mbox{with }|g(\cdot,\ \cdot)|\leq g _{1};\ g_{1}\in R^{+}.\\ (H_{3})&\alpha,\ \beta,\ \xi,\ b\in R^{+}.\end{array}\] The main result of this paper is: **Theorem 1.1**.: _Suppose \((H_{1})-(H_{3})\) hold, \((c_{0},\ n_{0},\ u_{0})\in(L^{2}(\Omega))^{2}\times H.\) Then there exists functions \((c,\ n)\in\left(L^{\infty}(0,\ T,\ L^{2}(\Omega))\bigcap L^{2}(0,\ T,\ H^{1}( \Omega))\right)^{2}\) and \(u\in L^{\infty}(0,\ T,\ H)\bigcap L^{2}(0,\ T,\ V)\) such that \((c(0),\ n(0),\ u(0))=(c_{0},\ n_{0},\ u_{0})\) and_ \[\begin{cases}&\int_{\Omega}\partial_{t}c(t)\phi_{1}dx+\alpha\int_{\Omega} \nabla c(t)\nabla\phi_{1}dx+\frac{\alpha}{b}\int_{\partial\Omega}\partial_{t} c_{\tau}(t)\phi_{1\tau}d\sigma+\frac{\alpha}{b}\int_{\partial\Omega}\nabla_{ \tau}c(t)\nabla_{\tau}\phi_{1}d\sigma\\ &\qquad+\int_{\Omega}u\nabla c(t)\phi_{1}dx=\int_{\Omega}-n(t)f(c(t))\phi_{1} dx,\\ &\int_{\Omega}\partial_{t}n(t)\phi_{2}dx+\int_{\Omega}\left(\beta\nabla n(t)-g(n (t),\ c(t))\nabla c(t)\right)\nabla\phi_{2}dx\\ &\qquad\qquad\qquad\qquad+\int_{\Omega}u(t)\nabla n(t)\phi_{2}dx=0,\\ &\int_{\Omega}\partial_{t}u(t)\phi_{3}dx+\int_{\Omega}\xi\nabla u(t)\nabla\phi_ {3}dx+\int_{\Omega}(u(t)\cdot\nabla)u(t)\phi_{3}dx=\\ &\int_{\Omega}n(t)\nabla\sigma\phi_{3}dx.\end{cases} \tag{1.6}\] _for all \((\phi_{1},\ \phi_{2},\ \phi_{3})\in(H^{1}(\Omega))^{2}\times V\)._ The paper is organized as follows. In the next section, we introduce some preliminary lemmas and time-discretization scheme (Rothe's method). We also outline the approaches to prove the main result. Section 3 is devoted to proving Theorem 2.3. We use regularization technique and fixed-point theorem to prove the existence of solution for an auxiliary problem, then use Galerkin method to show existence of solutions to the discretized scheme. In section 4, we derive several a priori estimates which will allow us to pass limits in the discretization scheme and thereby verify Theorem 1.1. ## 2. Preliminaries In this paper, we will use the following Gronwall's lemma in the discrete form. 
**Lemma 2.1**.: _Let \(0<k<1,\ (a_{i})_{i\geq 1},\) and \((A_{i})_{i\geq 1}\) be sequence of real, non-negative numbers. Assuming that \((A_{i})_{i\geq 1}\) is non-decreasing and that_ \[a_{i}\leq A_{i}+k\sum_{j=0}^{i}a_{j},\quad\text{for }i=0,1,2,\cdots\] _then_ \[a_{i}\leq\frac{1}{1-k}A_{i}exp\left((i-1)\frac{k}{1-k}\right),\quad\text{for }i=0,1,2,\cdots\] We also need the following relation: \[2\int_{\Omega}a(a-b)dx=\|a\|_{L^{2}(\Omega)}^{2}-\|b\|_{L^{2}(\Omega)}^{2}+\|a -b\|_{L^{2}(\Omega)}^{2}. \tag{2.1}\] We will frequently use Young's inequality: \[ab\leq\delta a^{2}+\frac{1}{4\delta}b^{2}.\] The compactness result presented in the next lemma will allow us to pass the limit in the Rothe approximation. **Lemma 2.2**.: _Let \(X,\ Y\) be two Banach spaces, such that \(Y\subset X\), the injection being compact._ _Assume that \(G\) is a family of functions in \(L^{2}(0,\ T;\ Y)\bigcap L^{p}(0,\ T;\ X)\) for some \(T>0\) and \(p>1\), such that_ \[\begin{split}& G\text{ is bounded in }L^{2}(0,\ T;\ Y)\text{ and }L^{p}(0,\ T;\ X);\\ &\sup_{g\in G}\int_{0}^{T-a}\|g(a+s)-g(s)\|_{X}^{p}ds\to 0\quad \text{as }a\to 0,\ a>0\end{split} \tag{2.2}\] _Then the family \(G\) is relatively compact in \(L^{p}(0,\ T;\ X)\)._ The proof the main result stated in section 1 is based on the Rothe's method of time discretization. We divide the time interval \([0,\ T]\) into \(N\) subintervals \([t_{m-1},\ t_{m}],\)\(t_{m}=km,\)\(k=T/N,\)\(m=1,\ 2,\ \cdots,\ N.\) The time discretized variational formulation reads as \[\left\{\begin{array}{ll}\delta_{t}c^{m}-\alpha\Delta c^{m}+u^{m}\cdot\nabla c ^{m}+n^{m}f(c^{m})=0&\text{a.e. in}\quad\Omega\times(0,T),\\ \delta_{t}n^{m}-\nabla\cdot(\beta\nabla n^{m}-g(n^{m},\ c^{m})\nabla c^{m})+u^ {m}\cdot\nabla n^{m}=0&\text{a.e. in}\quad\Omega\times(0,T),\\ \delta_{t}u^{m}-\nabla\cdot(\xi\nabla u^{m})+(u^{m}\cdot\nabla)u^{m}+\nabla p^ {m}=n^{m}\nabla\sigma&\text{a.e. in}\quad\Omega\times(0,T),\\ \nabla\cdot u^{m}=0&\text{a.e. in}\quad\Omega\times(0,T),\\ \delta_{t}c^{m}=\Delta_{\tau}c^{m}-b\partial_{\eta}c^{m}&\text{a.e. on}\quad \partial\Omega,\\ \beta\partial_{\eta}n^{m}=g(n^{m},\ c^{m})\partial_{\eta}c^{m}&\text{a.e. on}\quad \partial\Omega,\\ u^{m}=0&\text{a.e. on}\quad\partial\Omega,\\ c^{0}=c_{0}(x),\ n^{0}=n_{0}(x),\ u^{0}=u_{0}(x).&\end{array}\right. \tag{2.3}\] Here, we use the notation: \(\delta_{t}c^{m}=\frac{c^{m}-c^{m-1}}{k},\ \delta_{t}n^{m}=\frac{n^{m}-n^{m-1} }{k},\delta_{t}u^{m}=\frac{u^{m}-u^{m-1}}{k},\) where \(c^{m},\ n^{m},\ u^{m},\ m=1,2,\cdots,N\) are the approximations of \(c(x,t_{m}),\ n(x,t_{m}),\\) and \(u(x,t_{m})\) respectively. The existence result for the discrete scheme is stated in the following theorem, which will be proved in section 3. **Theorem 2.3**.: _Suppose \((H_{1})-(H_{3})\) holds, \((c^{0},\ n^{0},\ u^{0})\in(L^{2}(\Omega))^{2}\times H.\) Then there exists \((c^{m},\ n^{m},\ u^{m})\in(L^{2}(\Omega))^{2}\times H\) solving the discrete problem (2.3) for time step \(k\) small enough._ With this result, we introduce the Rothe functions: \[\left\{\begin{array}{ll}&\tilde{c}_{k}(t)=c^{m-1}+(t-t_{m-1})\delta_{t}c^{m },\\ &\tilde{c}_{k\tau}(t)=c_{\tau}^{m-1}+(t-t_{m-1})\delta_{t}c_{\tau}^{m},\\ &\tilde{n}_{k}(t)=n^{m-1}+(t-t_{m-1})\delta_{t}n^{m},\\ &\tilde{u}_{k}(t)=u^{m-1}+(t-t_{m-1})\delta_{t}u^{m},\quad\text{for $t_{m-1} \leq t\leq t_{m},\ 1\leq m\leq N$}\end{array}\right. 
\tag{2.4}\] and step functions: \[\left\{\begin{array}{ll}&(c_{k}(t),\ c_{k\tau}(t),\ n_{k}(t),\ u_{k}(t))=(c ^{m},\ c_{\tau}^{m},\ n^{m},\ u^{m}),\\ &(c_{k}(0),\ c_{k\tau}(0),\ n_{k}(0),\ u_{k}(0))=(c_{0},\ c_{0\tau},\ n_{0},\ u_{0}). \end{array}\right. \tag{2.5}\] for \(t_{m-1}\leq t\leq t_{m},\quad 1\leq m\leq N.\) We will prove the Rothe functions \((\tilde{c}_{k},\ \tilde{c}_{k\tau},\ \tilde{n}_{k},\ \tilde{u}_{k})\) and the step functions \((c_{k},\ c_{k\tau},\ n_{k},\ u_{k})\) will converge to the limit functions \((c,\ c_{\tau},\ n,\ u)\) as \(k\to 0\), and the limit functions will be the solutions to problem (1.6). Thus, the main result will be proved. ## 3. Proof of Theorem 2.3 The proof of Theorem 2.3 is based on semi-Galerkin method. Let \(w_{j}\) be an orthogonal basis of \(V\) that is orthonormal in \(H\). We denote by \(V_{j}\) the finite vector space spanned by \(\{w_{i}\}_{1\leq i\leq j}\). For a fixed \(m\), let \(u_{j}^{m}\in V_{j}\) be the Galerkin approximation of \(u^{m}\), and assume \((c^{m-1},\ n^{m-1},\ u^{m-1})\) are given, we consider the following problem: Find \((c_{j}^{m},\ n_{j}^{m},\ u_{j}^{m})\in\left(H^{1}(\Omega)\right)^{2}\times V_{j}\), satisfying the following elliptic system: \[\left\{\begin{array}{ll}&-k\alpha\Delta c_{j}^{m}+k(u_{j}^{m}\cdot\nabla)c_ {j}^{m}+c_{j}^{m}=-kn_{j}^{m}f(c_{j}^{m})+h&\text{a.e. in }\Omega\times(0,T),\\ &-k\nabla\cdot\left(\beta\nabla n_{j}^{m}-g(n_{j}^{m},\ c_{j}^{m})\nabla c_{j} ^{m}\right)+k(u_{j}^{m}\cdot\nabla)n_{j}^{m}+n_{j}^{m}=l&\text{a.e. in }\Omega\times(0,T),\\ &-k\nabla\cdot(\xi\nabla u_{j}^{m})+k(u_{j}^{m}\cdot\nabla)u_{j}^{m}+k\nabla P _{j}^{m}+u_{j}^{m}=kn_{j}^{m}\nabla\sigma+q&\text{a.e. in }\Omega\times(0,T),\\ &\nabla\cdot u_{j}^{m}=0&\text{a.e. in }\Omega\times(0,T),\\ &k\partial_{\eta}c_{j}^{m}=\frac{k}{b}\Delta_{\tau}c_{j_{\tau}}^{m}-\frac{1}{b }c_{j_{\tau}}^{m}+\frac{1}{b}h_{\tau}&\text{a.e. on }\partial\Omega,\\ &\beta\partial_{\eta}n_{j}^{m}=g(n_{j}^{m},\ c_{j}^{m})\partial_{\eta}c_{j}^{m }&\text{a.e. on }\partial\Omega,\\ &u_{j}^{m}=0&\text{a.e. on }\partial\Omega.\end{array}\right. \tag{3.1}\] where \((h,\ h_{\tau},\ l,\ q)=(c^{m-1},\ c_{\tau}^{m-1},\ n^{m-1},\ u^{m-1})\). The existence result of the above problem is stated in the following theorem: **Theorem 3.1**.: _Suppose \((H_{1})-(H_{3})\) holds, and \((h,\ l,\ q,\ h_{\tau})\in(L^{2}(\Omega))^{3}\times L^{2}(\partial\Omega),\) then there exists a solution \((c_{j}^{m},\ n_{j}^{m},\ u_{j}^{m})\in(H^{1}(\Omega))^{2}\times V\) of problem (3.1)._ We will then move on to derive estimates on the solution \((c_{j}^{m},\ c_{j}^{m},\ c_{j}^{m})\) of problem (3.1). Those estimates allow us to pass the limits by letting \(j\rightarrow\infty\), the sequences will converge to limit functions, which will be the solutions of the discrete problem (2.3), and Theorem 2.3 will be proved. The proof of Theorem 3.1 will use the following lemma: **Lemma 3.2**.: _Suppose \((H_{1})-(H_{3})\) holds, and \((h,\ l,\ h_{\tau})\in(L^{2}(\Omega))^{2}\times L^{2}(\partial\Omega),\) then for fixed \(\hat{u}\in V_{j}\), there exist a solution \((c,\ n)\in(H^{1}(\Omega))^{2}\) of the following problem:_ \[\left\{\begin{array}{c}-k\alpha\Delta c+k(\hat{u}\cdot\nabla c)+c=-knf(c)+h, \\ -k\nabla\cdot(\beta\nabla n-g(n,\ c)\nabla c)+k(\hat{u}\cdot\nabla n)+n=l,\\ k\partial_{\eta}c=\frac{k}{b}\Delta_{\tau}c_{\tau}-\frac{1}{b}c_{\tau}+\frac {1}{b}h_{\tau},\\ \beta\partial_{\eta}n=g(n,\ c)\partial_{\eta}c.\end{array}\right. 
\tag{3.2}\] 3.1 Proof of Lemma 3.2 To prove Lemma 3.2, we first consider a regularized problem, we smooth \(\hat{u}\) by replacing \(\hat{u}\) with \(J_{\epsilon}\hat{u}\), where \(J_{\epsilon}\hat{u}=\left((\psi_{\epsilon}u)*w_{\epsilon}\right)_{div}.\) Here \[\psi_{\epsilon}(x):=\left\{\begin{array}{ll}0&\mbox{if \ dis }(x,\ \partial\Omega)\leq 2 \epsilon,\\ 1&\mbox{elsewhere}\end{array}\right.\] \((\psi_{\epsilon}u)*w_{\epsilon}\) denote the standard regularization of \(\psi_{\epsilon}u\) with kernel \(w_{\epsilon}\) having support in a ball of radius \(\epsilon\). The symbol \((\cdot)_{div}\) comes from the Helmholtz decomposition, see [12]. \(J_{\epsilon}\hat{u}\) keeps Dirichlet boundary condition and divergence free property, therefore, we have the identity \(\int_{\Omega}(J_{\epsilon}\hat{u}\cdot\nabla c)cdx=0\), see [6]. We will use this identity frequently throughout this paper. We have the following existence result for the regularized problem. **Lemma 3.3**.: _Suppose \((H_{1})-(H_{3})\) holds, and \((h,\ l,\ h_{\tau})\in(L^{2}(\Omega))^{2}\times L^{2}(\partial\Omega),\) then for fixed \(\epsilon\in(0,\ 1)\) and \(\hat{u}\in V_{j}\), there exist a solution \((c_{\epsilon},\ n_{\epsilon})\in(H^{1}(\Omega))^{2}\) of the following problem:_ \[\begin{array}{c}-k\alpha\Delta c_{\epsilon}+kJ_{\epsilon}\hat{u}\cdot\nabla c _{\epsilon}+c_{\epsilon}=-kn_{\epsilon}f(c_{\epsilon})+h,\\ -k\nabla\cdot(\beta\nabla n_{\epsilon}-g(n_{\epsilon},\ c_{\epsilon})\nabla c _{\epsilon})+kJ_{\epsilon}\hat{u}\cdot\nabla n_{\epsilon}+n_{\epsilon}=l,\\ k\partial_{\eta}c_{\epsilon}=\frac{k}{b}\Delta_{\tau}c_{\epsilon\tau}-\frac {1}{b}c_{\epsilon\tau}+\frac{1}{b}h_{\tau},\\ \beta\partial_{\eta}n_{\epsilon}=g(n_{\epsilon},\ c_{\epsilon})\partial_{\eta }c_{\epsilon}.\end{array} \tag{3.3}\] **Proof of Lemma 3.3**: In the proof, we omit subscript \(\epsilon\), write \((c_{\epsilon},\ n_{\epsilon})\) as \((c,\ n)\). We use Schaefer's fixed-point theorem. We define an operator \(\Phi:\quad X\to X,\;\) where \[X=H^{1}(\Omega)\times H^{1}(\Omega)\] Fix \((\hat{c},\;\hat{n})\;\in X,\;\) we set \(\Phi(\hat{c},\;\hat{n})=(c,\;n),\) where \((c,\;n)\) is the solution to the following problem: \[-k\alpha\Delta c+kJ_{\epsilon}\hat{u}\cdot\nabla c+c=-k\hat{n}f( \hat{c})+h, \tag{3.4}\] \[-k\nabla\cdot(\beta\nabla n-g(\hat{n},\;c)\nabla c)+kJ_{\epsilon }\hat{u}\cdot\nabla n+n=l,\] (3.5) \[k\partial_{\eta}c=\frac{k}{b}\Delta_{\tau}c_{\tau}-\frac{1}{b}c _{\tau}+\frac{1}{b}h_{\tau},\] (3.6) \[\beta\partial_{\eta}n=g(\hat{n},\;c)\partial_{\eta}c. \tag{3.7}\] We want to show \(\Phi\) is a continuous and compact mapping of \(X\) into itself, such that the set \(\{x\in X:\;x=\lambda\Phi(x)\;\text{for some}\;0\leq\lambda\leq 1\}\) is bounded. Therefore, by Schaefer's fixed-point theorem, \(\Phi\) has a fixed point. 
Multiply both sides of the equation (3.4) by \(c,\) use \(\int_{\Omega}(J_{\epsilon}\hat{u}\cdot\nabla c)cdx=0,\) we arrive at \[-\alpha\int_{\partial\Omega}c(h_{2}-\tfrac{1}{b}c_{\tau}+\tfrac{k}{b}\Delta_{ \tau}c_{\tau})d\sigma+\alpha k\|\nabla c\|_{L^{2}}^{2}+\|c\|_{L^{2}}^{2}\] \[=\int_{\Omega}h_{1}c.\] Here \(h_{1}=-k\hat{n}f(\hat{c})+h,\;h_{2}=\tfrac{1}{b}h_{\tau}.\) Use Holder's inequality and Young's inequality, we get \[\begin{array}{l}\frac{\alpha k}{b}\|\nabla c_{\tau}\|_{L^{2}}^{2}+\alpha k \|\nabla c\|_{L^{2}}^{2}+(1-\delta)\|c\|_{L^{2}}^{2}+\alpha(\tfrac{1}{b}- \delta)\|c_{\tau}\|_{L^{2}}^{2}\\ \leq\tfrac{1}{4\delta}\|h_{1}\|_{L^{2}}^{2}+\tfrac{\alpha}{4\delta}\|h_{2}\|_ {L^{2}}^{2}\\ \leq\tfrac{1}{2\delta}\|h\|_{L^{2}}^{2}+\tfrac{\alpha}{4\delta b^{2}}\|h_{ \tau}\|_{L^{2}}^{2}+\tfrac{1}{2\delta}k^{2}f_{1}^{2}\|\hat{n}\|_{L^{2}}^{2}. \end{array} \tag{3.8}\] Choose \(\delta<\min(1,\frac{1}{b})\) in the above inequality, we have \[\|c\|_{L^{2}}^{2}+\|\nabla c\|_{L^{2}}^{2}\leq C(\|h\|_{L^{2}}^{2}+\|h_{\tau} \|_{L^{2}}^{2}+\|\hat{n}\|_{L^{2}}^{2}). \tag{3.9}\] Multiply (3.5) by \(n,\) integrate by parts, and use boundary conditions, we get, \[k\beta\|\nabla n\|_{L^{2}}^{2}+\|n\|_{L^{2}}^{2}=\int_{\Omega}ln+ k\int_{\Omega}g(\hat{n},\;c)\nabla c\nabla n\] \[\leq\delta\|n\|_{L^{2}}^{2}+\tfrac{1}{4\delta}\|l\|_{L^{2}}^{2}+ \tfrac{kg_{1}}{2}(\|\nabla n\|_{L^{2}}^{2}+\|\nabla c\|_{L^{2}}^{2}),\] which gives, \[(k\beta-\frac{kg_{1}}{2})\|\nabla n\|_{L^{2}}^{2}+(1-\delta)\|n\|_{L^{2}}^{2} \leq\frac{1}{4\delta}\|l\|_{L^{2}}^{2}+\frac{kg_{1}}{2}\|\nabla c\|_{L^{2}}^{2}.\] Choose \(\delta<1\) in the above inequality, and require \(\beta>\frac{g_{1}}{2}\), we have \[\|n\|_{L^{2}}^{2}+\|\nabla n\|_{L^{2}}^{2}\leq C(\|l\|_{L^{2}}^{2}+\|\nabla c\|_ {L^{2}}^{2}). \tag{3.10}\] This together with (3.9) shows \[\|n\|_{L^{2}}^{2}+\|\nabla n\|_{L^{2}}^{2}\leq C(\|l\|_{L^{2}}^{2}+\|h\|_{L^{2 }}^{2}+\|h_{\tau}\|_{L^{2}}^{2}+\|\hat{n}\|_{L^{2}}^{2}). \tag{3.11}\] (3.9) and (3.11) show that \(\Phi\) maps \(X\) to \(X\). Next, we want to show the set \[S=\{(c,\ n)\in H^{1}\times H^{1}:\ (c,\ n)=\lambda\Phi(c,\ n)\ \text{for some}\ 0\leq \lambda\leq 1\}\] is bounded. We consider the equation \((\frac{c}{\lambda},\ \frac{n}{\lambda})=\Phi(c,\ n)\): \[-k\alpha\Delta c+kJ_{\epsilon}\hat{u}\cdot\nabla c+c=\lambda(- knf(c)+h), \tag{3.12}\] \[-k\beta\Delta n+kJ_{\epsilon}\hat{u}\cdot\nabla n+n=\lambda\left( -k\nabla\cdot(g(n,\ \frac{c}{\lambda})\frac{\nabla c}{\lambda})+l\right),\] (3.13) \[k\partial_{\eta}c=\frac{k}{b}\Delta_{\tau}c_{\tau}-\frac{1}{b}c _{\tau}+\frac{\lambda}{b}h_{\tau},\] (3.14) \[\beta\partial_{\eta}n=g(n,\ \frac{c}{\lambda})\partial_{\eta}c. 
\tag{3.15}\] Multiply (3.12) by \(c\), integrate with respect to \(x\), we get \[\frac{\alpha k}{b}\|\nabla_{\tau}c_{\tau}\|_{L^{2}}^{2}+\frac{ \alpha}{b}\|c_{\tau}\|_{L^{2}}^{2}-\frac{\alpha\lambda}{b}\int_{\partial\Omega }ch_{\tau}d\sigma+k\alpha\|\nabla c\|_{L^{2}}^{2}+\|c\|_{L^{2}}^{2}\] \[=-\lambda\int_{\Omega}knf(c)c+\lambda\int_{\Omega}hc.\] Use Young's inequality, we arrive at \[\frac{\alpha k}{b}\|\nabla_{\tau}c_{\tau}\|_{L^{2}}^{2}+(\frac{ \alpha}{b}-\frac{\alpha\lambda}{b}\delta)\|c_{\tau}\|_{L^{2}}^{2}+k\alpha\| \nabla c\|_{L^{2}}^{2}+(1-\lambda kf_{1}\delta-\lambda\delta)\|c\|_{L^{2}}^{2}\] \[\leq\frac{\alpha\lambda}{4b\delta}\|h_{\tau}\|_{L^{2}}^{2}+\frac {\lambda kf_{1}}{4\delta}\|n\|_{L^{2}}^{2}+\frac{\lambda}{4\delta}\|h\|_{L^{2} }^{2}.\] Choose \(\delta<\min(\frac{1}{\lambda},\frac{1}{\lambda+\lambda kf_{1}})\) in the above inequality, we have \[\alpha k\|\nabla c\|_{L^{2}}^{2}\leq\frac{1}{4\delta}\left(\lambda kf_{1}\|n\|_ {L^{2}}^{2}+\lambda\|h\|_{L^{2}}^{2}\right)+\frac{\lambda\alpha}{4\delta b}\| h_{\tau}\|_{L^{2}}^{2}. \tag{3.16}\] Multiply (3.13) by \(n\), integrate with respect to \(x\), use \(\int_{\Omega}(J_{\epsilon}\hat{u})\cdot(\nabla n)n=0\), then integrate by parts, use boundary condition (3.15), H\(\ddot{o}\)lder and Young's inequalities, we arrive at: \[k\beta\|\nabla n\|_{L^{2}}^{2}+\|n\|_{L^{2}}^{2}\leq\lambda(\hat{\delta}\|n\|_ {L^{2}}^{2}+\frac{1}{4\hat{\delta}}\|l\|_{L^{2}}^{2})+kg_{1}(\frac{1}{4\hat{ \delta}}\|\nabla c\|_{L^{2}}^{2}+\hat{\delta}\|\nabla n\|_{L^{2}}^{2}),\] which gives, \[\begin{array}{l}(1-\lambda\hat{\delta})\|n\|_{L^{2}}^{2}+(k\beta-kg_{1}\hat{ \delta})\|\nabla n\|_{L^{2}}^{2}\\ \leq\frac{\lambda}{4\hat{\delta}}\|l\|_{L^{2}}^{2}+\frac{kg_{1}}{4\hat{\delta }}\|\nabla c\|_{L^{2}}^{2}.\end{array} \tag{3.17}\] Choose \(\hat{\delta}<\min(\frac{1}{\lambda},\ \frac{\beta}{g_{1}})\) in (3.17), with fixed \(\delta,\ \hat{\delta}\) in (3.16) and (3.17), choose \(k\) small, such that \[\frac{\lambda kf_{1}}{4\hat{\delta}}<1-\lambda\hat{\delta}.\] Require \(\alpha\) to be a constant s.t. \(\alpha>\frac{g_{1}}{4\hat{\delta}}.\) By adding (3.16) and (3.17), we can absorb the terms with \(\|\nabla c\|_{L^{2}}^{2}\) and \(\|n\|_{L^{2}}^{2}\) in the right hand side of the equation to the left hand side, and obtain: \[\begin{array}{l}(\alpha k-\frac{kg_{1}}{4\hat{\delta}})\|c\|_{H^{1}}^{2}+(k \beta-kg_{1}\hat{\delta})\|\nabla n\|_{L^{2}}^{2}+(1-\lambda\hat{\delta}- \frac{\lambda kf_{1}}{4\hat{\delta}})\|n\|_{L^{2}}^{2}\\ \leq\frac{\lambda}{4\hat{\delta}}\|h\|_{L^{2}}^{2}+\frac{\lambda\alpha}{4\hat {\delta}b}\|\ h_{\tau}\|_{L^{2}}^{2}+\frac{\lambda}{4\hat{\delta}}\|l\|_{L^{2} }^{2}.\end{array} \tag{3.18}\] Therefore, \(\|n\|_{H^{1}},\ \|c\|_{H^{1}}\) is bounded, hence the set \(S\) is bounded. Next, we want to check the mapping \(\Phi\) is compact. We proceed to show that \(\Phi\) maps bounded set \((\hat{c},\ \hat{n})\in H^{1}\times H^{1}\) to bounded set \((c,\ n)=\Phi(\hat{c},\ \hat{n})\) in \(H^{2}\times H^{2}\): For \((\hat{c},\ \hat{n})\) bounded in \(H^{1}\times H^{1}\), from equation (3.4), (3.6), and elliptic regularity theorem, we know that \(\|c\|_{H^{2}}\) is bounded. Boundedness of \(\|n\|_{H^{2}}\) comes from equation (3.5), (3.7), elliptic regularity theorem, and boundedness of \(\|c\|_{H^{2}}\). So we have \((c,\ n)=\Phi(\hat{c},\ \hat{n})\) is bounded in \(H^{2}\times H^{2}\). 
Since the embedding of \(H^{2}\times H^{2}\) into \(H^{1}\times H^{1}\) is compact, the bounded set \((c,\ n)\) in \(H^{2}\times H^{2}\) is relatively compact in \(H^{1}\times H^{1}\). Therefore, the mapping \(\Phi\) is compact. The last step is to show that \(\Phi\) is continuous. Let \((\hat{c}_{n},\ \hat{n}_{n})\) converge to \((\hat{c},\ \hat{n})\) in \(H^{1}\times H^{1}\) strongly. Let \((c_{n},\ n_{n})=\Phi(\hat{c}_{n},\ \hat{n}_{n})\), so we have \[\begin{array}{l}-k\alpha\Delta c_{n}+kJ_{\epsilon}\hat{u}\cdot\nabla c_{n}+ c_{n}=-k\hat{n}_{n}f(\hat{c}_{n})+h,\\ -k\beta\Delta n_{n}+kJ_{\epsilon}\hat{u}\cdot\nabla n_{n}+n_{n}=-k\nabla\cdot (g(\hat{n}_{n},\ c_{n})\nabla c_{n})+l,\\ k\partial_{\eta}c_{n}=\frac{k}{b}\Delta_{\tau}c_{n}{}_{\tau}-\frac{1}{b}c_{ n}{}_{\tau}+\frac{1}{b}h_{\tau},\\ \beta\partial_{\eta}n_{n}=g(\hat{n}_{n},\ c_{n})\partial_{\eta}c_{n}.\end{array} \tag{3.19}\] Since the sequence \((\hat{c}_{n},\ \hat{n}_{n})\) is bounded in \(H^{1}\times H^{1}\), the same argument to show the compactness of \(\Phi\) can be applied here to derive boundedness of \((c_{n},\ n_{n})\) in \(H^{2}\times H^{2}\). Since \(H^{2}\times H^{2}\) is compactly embedded into \(H^{1}\times H^{1}\), there exist \((c,\ n)\in H^{2}\times H^{2}\) and subsequence of \((c_{n},\ n_{n})\), still denoted as \((c_{n},\ n_{n})\), s.t. \[(c_{n},\ n_{n})\rightharpoonup(c,\ n)\ \text{weakly in}\ H^{2}\times H^{2}, \tag{3.20}\] \[(c_{n},\ n_{n})\to(c,\ n)\ \text{strongly in}\ H^{1}\times H^{1}. \tag{3.21}\] With (3.21) and the fact that \((\hat{c}_{n},\ \hat{n}_{n})\) converges to \((\hat{c},\ \hat{n})\) in \(H^{1}\times H^{1}\) strongly, we have \[(\hat{c}_{n},\ \hat{n}_{n},\ c_{n},\ n_{n})\to(\hat{c},\ \hat{n},\ c,\ n)\ \text{a.e. in}\ \Omega. \tag{3.22}\] As for the trace of \((c_{n},\ n_{n})\), we have \(n_{n\tau}\) is bounded in \(H^{\frac{3}{2}}(\partial\Omega)\) (since \(n_{n}\) is bounded in \(H^{2}\)), therefore, is also bounded in \(H^{1}(\partial\Omega)\). Since \(H^{1}(\partial\Omega)\) is compactly embedded into \(L^{2}(\partial\Omega)\), there exists subsequence of \(n_{n\tau}\), still denoted as \(n_{n\tau}\), s.t. \(n_{n\tau}\to n_{\tau}\) strongly in \(L^{2}(\partial\Omega)\), so we have \[n_{n\tau}\to n_{\tau}\ \text{almost everywhere}. \tag{3.23}\] Similarly, we have \[c_{n\tau}\to c_{\tau}\ \text{almost everywhere}. \tag{3.24}\] With assumption \((H_{1})-(H_{3})\), (3.22) - (3.24), and dominated convergence theorem, we conclude that \[f(\hat{c}_{n})\to f(\hat{c})\ \text{a.e. and strongly in}\ L^{2}, \tag{3.25}\] \[g(\hat{n}_{n},\ c_{n})\to g(\hat{n},\ c)\ \text{a.e. and strongly in}\ L^{2}. \tag{3.26}\] From boundedness of \((c_{n},\ n_{n})\) in \(H^{1}\times H^{1}\), we have \[(\nabla c_{n},\ \nabla n_{n})\rightharpoonup(\nabla c,\ \nabla n)\ \text{weakly in}\ L^{2}\times L^{2}. \tag{3.27}\] From (3.25) - (3.27), we get \[\hat{n}_{n}f(\hat{c}_{n})\to\hat{n}f(\hat{c})\ \text{in distribution},\] \[g(\hat{n}_{n},\ c_{n})\nabla c_{n}\to g(\hat{n},\ c)\nabla c \ \text{in distribution}.\] Similarly, we can pass limits in other terms in (3.19) by letting \(n\to\infty\), we then obtain (3.4) - (3.7), therefore, we have shown that \((c,\ n)=\Phi(\hat{c},\ \hat{n}).\) The continuity of \(\Phi\) is proved. This finishes the proof of Lemma 3.3. 
**Proof of Lemma 3.2:** We multiply \(\eqref{eq:3.3}_{1}\) by \(c_{\epsilon}\) and \(\eqref{eq:3.3}_{2}\) by \(n_{\epsilon}\), integrate over \(\Omega\), then proceed in an analogous way as in (3.16) - (3.18), we obtain the following estimates: \[\|c_{\epsilon\tau}\|_{H^{1}}^{2}+\|c_{\epsilon}\|_{H^{1}}^{2}+\|n_{\epsilon}\|_ {H^{1}}^{2}+\|n_{\epsilon}\|_{L^{2}}^{2}\leq C\left(\|h\|_{L^{2}}^{2}+\|h_{\tau} \|_{L^{2}}^{2}+\|l\|_{L^{2}}^{2}\right). \tag{3.28}\] where the constant \(C\) is independent of \(\epsilon\). We obtain from (3.28) that \((c_{\epsilon\tau},\ c_{\epsilon},\ n_{\epsilon})\) is bounded in \(H^{1}(\partial\Omega)\times H^{1}(\Omega)\times H^{1}(\Omega)\), uniformly w.r.t. \(\epsilon\). Since \(H^{1}(\partial\Omega)\) and \(H^{1}(\Omega)\) are compactly embedded in \(L^{2}(\partial\Omega)\) and \(L^{2}(\Omega)\) respectively, there exist \((c_{\tau},\ c,\ n)\in H^{1}(\partial\Omega)\times H^{1}(\Omega)\times H^{1}(\Omega)\) and subsequence of \((c_{\epsilon\tau},\ c_{\epsilon},\ n_{\epsilon})\), still denoted as \((c_{\epsilon\tau},\ c_{\epsilon},\ n_{\epsilon})\), s.t. as \(\epsilon\to 0^{+}\), we have \[\begin{split}&(c_{\epsilon\tau},\ c_{\epsilon},\ n_{\epsilon}) \rightharpoonup(c_{\tau},\ c,\ n)\text{ weakly in }H^{1}(\partial\Omega)\times H^{1}(\Omega)\times H^{1}(\Omega),\\ &(c_{\epsilon\tau},\ c_{\epsilon},\ n_{\epsilon})\to(c_{\tau}, \ c,\ n)\text{ strongly in }L^{2}(\partial\Omega)\times L^{2}(\Omega)\times L^{2}(\Omega).\end{split} \tag{3.29}\] From (3.29) together with \[J_{\epsilon}\hat{u}\to\hat{u}\text{ strongly in }L^{2}(\Omega). \tag{3.30}\] We get \[\begin{split}& J_{\epsilon}\hat{u}\cdot\nabla c_{\epsilon}\to \hat{u}\cdot\nabla c\text{ in distribution,}\\ & J_{\epsilon}\hat{u}\cdot\nabla n_{\epsilon}\to\hat{u}\cdot \nabla n\text{ in distribution.}\end{split}\] We then proceed in an analogous way as in the proof of continuity of \(\Phi\) in Lemma 3.3 to pass the limit in (3.3) by letting \(\epsilon\to 0^{+}\) to obtain (3.2). The proof of Lemma 3.2 is completed. 3.2 Proof of Theorem 3.1 For the simplicity of notation, in the proof of Theorem 3.1, we omit the superscript "\(m\)", and write equation (3.1) as follows: \[\begin{cases}&-k\alpha\Delta c_{j}+k(u_{j}\cdot\nabla)c_{j}+c_{j}=-kn_{j} f(c_{j})+h&\text{ in }\Omega,\\ &-k\nabla\cdot(\beta\nabla n_{j}-g(n_{j},\ c_{j})\nabla c_{j})+k(u_{j} \cdot\nabla)n_{j}+n_{j}=l&\text{ in }\Omega,\\ &-k\nabla\cdot(\xi\nabla u_{j})+k(u_{j}\cdot\nabla)u_{j}+k\nabla P_{j}+u_{ j}=kn_{j}\nabla\sigma+q&\text{ in }\Omega,\\ &\nabla\cdot u_{j}=0&\text{ in }\Omega,\\ & k\partial_{\eta}c_{j}=\frac{k}{b}\Delta_{\tau}c_{j_{\tau}}-\frac{1}{b}c_{j_ {\tau}}+\frac{1}{b}h_{\tau}&\text{ on }\partial\Omega,\\ &\beta\partial_{\eta}n_{j}=g(n_{j},\ c_{j})\partial_{\eta}c_{j}&\text{ on } \partial\Omega,\\ & u_{j}=0&\text{ on }\partial\Omega.\end{cases} \tag{3.31}\] Recall that \(u_{j}\in V_{j}\) is the Galerkin approximation of \(u^{m}\) for a fixed \(m\). We define an operator \(L:\ \ V_{j}\to V_{j}\) as follows: For a fixed \(\hat{u}_{j}\in V_{j},\ \ \text{we set}\ L(\hat{u}_{j})=u_{j},\ \ \text{where}\ u_{j}\) is the solution to the following problem: \[-k\alpha\Delta c_{j}+k(\hat{u}_{j}\cdot\nabla)c_{j}+c_{j}=-kn_{j }f(c_{j})+h, \tag{3.32}\] \[-k\nabla\cdot(\beta\nabla n_{j}-g(n_{j},\ c_{j})\nabla c_{j})+k( \hat{u}_{j}\cdot\nabla)n_{j}+n_{j}=l,\] (3.33) \[-k\nabla\cdot(\xi\nabla u_{j})+k(u_{j}\cdot\nabla)u_{j}+k\nabla P _{j}+u_{j}=kn_{j}\nabla\sigma+q,\] (3.34) \[\nabla\cdot u_{j}=0. 
\tag{3.35}\] with boundary conditions: \[k\partial_{\eta}c_{j}=\frac{k}{b}\Delta_{\tau}c_{j_{\tau}}-\frac {1}{b}c_{j_{\tau}}+\frac{1}{b}h_{\tau}, \tag{3.36}\] \[\beta\partial_{\eta}n_{j}=g(n_{j},\ c_{j})\partial_{\eta}c_{j},\] (3.37) \[u_{j}=0. \tag{3.38}\] Let \(\hat{u}_{j}\) belongs to a bounded set in \(V_{j}\), we fix \(\hat{u}_{j}\) in (3.32) - (3.33), solve (3.32), (3.33), (3.36), (3.37) for \((c_{j},\ n_{j})\). The existence of solution \((c_{j},\ n_{j})\) is proved in Lemma 3.2, and we have the estimate of \(n_{j}\): \[\|n_{j}\|_{L^{2}}^{2}\leq C(\|h\|_{L^{2}}^{2}+\|h_{\tau}\|_{L^{2}}^{2}+\|l\|_{ L^{2}}^{2}). \tag{3.39}\] We then use this \(n_{j}\) in equation (3.34), proceed to solve equation (3.34),(3.35) and (3.38) for \(u_{j}\), since \(u_{j}\in V_{j}\), which is finite-dimensional, the existence of this \(u_{j}\) can be proved in an analogous way as in [6, p. 164]. The operator \(L\) then maps \(\hat{u}_{j}\) to \(u_{j}\), i.e. \(u_{j}=L(\hat{u}_{j})\). Next, we show that \(L\) has a fixed point. To this end, we multiply equation (3.34) with \(u_{j}\), integrate over \(\Omega\), use \(\int_{\Omega}(u_{j}\cdot\nabla)u_{j}\cdot u_{j}=0,\ \ \text{we get},\) \[k\xi\|\nabla u_{j}\|_{L^{2}}^{2}+\|u_{j}\|_{L^{2}}^{2}=k\int_{\Omega}n_{j} \nabla\sigma u_{j}+\int_{\Omega}qu_{j}.\] Use Young's inequality for the terms on the right hand side of the above equation, and use (3.39), we obtain, \[\begin{array}{l}k\xi\|\nabla u_{j}\|_{L^{2}}^{2}+(1-kC\delta-\delta)\|u_{j}\|_{L ^{2}}^{2}\\ \qquad\leq\frac{kC}{4\delta}\|n_{j}\|_{L^{2}}^{2}+\frac{1}{4\delta}\|q\|_{L^{2 }}^{2}\\ \qquad\leq\frac{kC}{4\delta}(\|h\|_{L^{2}}^{2}+\|h_{\tau}\|_{L^{2}}^{2}+\|l\|_ {L^{2}}^{2})+\frac{1}{4\delta}\|q\|_{L^{2}}^{2}.\end{array} \tag{3.40}\] By choosing \(\delta\) small enough, from the above inequality, we see that \(u_{j}=L(\hat{u}_{j})\) is bounded in \(V_{j}\). Therefore, \(L\) maps a bounded set in \(V_{j}\) to a bounded set in \(V_{j}\). Since \(V_{j}\) is finite dimensional space, we can use Brouwer fixed-point theorem to conclude that \(L\) has a fixed point, which is the solution of (3.31). This completes the proof of Theorem 3.1. 3.3 Proof of Theorem 2.3 With Theorem 3.1, we have shown the existence results for (3.1) with a fixed "\(m\)". Now we multiply \(\eqref{eq:231}_{1}\) by \(c_{j}\), \(\eqref{eq:231}_{2}\) by \(n_{j}\), \(\eqref{eq:231}_{3}\) by \(u_{j}\), integrate over \(\Omega\), proceed in an analogous way as in (3.16) - (3.18), (3.40), we see that \((c_{j},\ n_{j},\ u_{j})\) of (3.31) is bounded in \(\left(H^{1}(\Omega)\right)^{2}\times V\), uniformly with respect to \(j\). As a result, as \(j\to+\infty\), we have \[\begin{array}{l}(c_{j},\ n_{j})\rightharpoonup(c,\ n)\ \text{weakly in $H^{1}\times H^{1}$},\\ (c_{j},\ n_{j})\to(c,\ n)\ \text{strongly in $L^{2}\times L^{2}$},\\ u_{j}\rightharpoonup u\ \text{weakly in $V$},\\ u_{j}\to u\ \text{strongly in $H$}.\end{array}\] We can pass the limit (letting \(j\to+\infty\)) in (3.31). This finish the proof of Theorem 2.3. 4. Proof of Theorem 1.1 We first derive aprori estimates for \((c^{m},\ n^{m},\ u^{m})\). 
To this end, we multiply \(\eqref{eq:231}_{1}\) by \(c^{m}\), \(\eqref{eq:231}_{2}\) by \(n^{m}\), \(\eqref{eq:231}_{3}\) by \(u^{m}\), and integrate over \(\Omega\), we arrive at, \[\begin{array}{l}\frac{1}{k}\int_{\Omega}c^{m}(c^{m}-c^{m-1})-\alpha\int_{ \Omega}c^{m}\Delta c^{m}+\int_{\Omega}c^{m}n^{m}f(c^{m})=0,\\ \frac{1}{k}\int_{\Omega}n^{m}(n^{m}-n^{m-1})-\int_{\Omega}n^{m}\nabla\cdot( \beta\nabla n^{m}-g(n^{m},\ c^{m})\nabla c^{m})=0,\\ \frac{1}{k}\int_{\Omega}u^{m}(u^{m}-u^{m-1})-\int_{\Omega}u^{m}\nabla\cdot( \xi\nabla u^{m})=\int_{\Omega}u^{m}n^{m}\nabla\sigma.\end{array}\] Do integration by parts, using boundary conditions \(\left(2.3\right)_{5,6,7}\), we have \[\frac{1}{k}\int_{\Omega}c^{m}(c^{m}-c^{m-1})+\alpha\|\nabla c^{m}\|_ {L^{2}}^{2}+\frac{\alpha}{b}\|\nabla_{\tau}c^{m}\|_{L^{2}}^{2}+\frac{\alpha}{ bk}\int_{\partial\Omega}c^{m}_{\tau}(c^{m}_{\tau}-c^{m-1}_{\tau})\] \[=-\int_{\Omega}c^{m}n^{m}f(c^{m}), \tag{4.1}\] \[\frac{1}{k}\int_{\Omega}n^{m}(n^{m}-n^{m-1})+\beta\|\nabla n^{m} \|_{L^{2}}^{2}-\int_{\Omega}g(n^{m},\ c^{m})\nabla c^{m}\nabla n^{m}=0,\] (4.2) \[\frac{1}{k}\int_{\Omega}u^{m}(u^{m}-u^{m-1})+\xi\|\nabla u^{m}\|_ {L^{2}}^{2}=\int_{\Omega}u^{m}n^{m}\nabla\sigma. \tag{4.3}\] Using relation \(\left(2.1\right)\) in the equations \(\left(4.1\right)\) and \(\left(4.2\right)\), we get, \[\begin{array}{l}\frac{1}{2k}(\|c^{m}\|_{L^{2}}^{2}-\|c^{m-1}\|_{L^{2}}^{2}+ \|c^{m}-c^{m-1}\|_{L^{2}}^{2})+\alpha\|\nabla c^{m}\|_{L^{2}}^{2}+\frac{\alpha }{b}\|\nabla_{\tau}c^{m}\|_{L^{2}}^{2}\\ \quad+\frac{\alpha}{2bk}(\|c^{m}_{\tau}\|_{L^{2}}^{2}-\|c^{m-1}_{\tau}\|_{L^{2 }}^{2}+\|c^{m}_{\tau}-c^{m-1}_{\tau}\|_{L^{2}}^{2})\\ \quad\quad\quad\quad\leq\frac{f_{1}}{2}(\|c^{m}\|_{L^{2}}^{2}+\|n^{m}\|_{L^{2 }}^{2}).\end{array} \tag{4.4}\] \[\begin{array}{l}\frac{1}{2k}(\|n^{m}\|_{L^{2}}^{2}-\|n^{m-1}\|_{L^{2}}^{2}+ \|n^{m}-n^{m-1}\|_{L^{2}}^{2})+\beta\|\nabla n^{m}\|_{L^{2}}^{2}\\ \quad\quad\quad\quad\quad\leq g_{1}(\frac{1}{4\delta}\|\nabla c^{m}\|_{L^{2}} ^{2}+\delta\|\nabla n^{m}\|_{L^{2}}^{2}).\end{array} \tag{4.5}\] Multiply \(\left(4.4\right)\) by \(\frac{g_{1}}{4\delta}\), \(\left(4.5\right)\) by \(\alpha\), add the resulting equations, we get \[\begin{array}{l}\frac{g_{1}}{8k\delta}(\|c^{m}\|_{L^{2}}^{2}-\|c^{m-1}\|_{L^{ 2}}^{2}+\|c^{m}-c^{m-1}\|_{L^{2}}^{2})+\frac{g_{1}\alpha}{4b\delta}\|\nabla_{ \tau}c^{m}\|_{L^{2}}^{2}\\ \quad+\frac{g_{1}\alpha}{8kb\delta}(\|c^{m}_{\tau}\|_{L^{2}}^{2}-\|c^{m-1}_{ \tau}\|_{L^{2}}^{2}+\|c^{m}_{\tau}-c^{m-1}_{\tau}\|_{L^{2}}^{2})\\ \quad+\frac{\alpha}{2k}(\|n^{m}\|_{L^{2}}^{2}-\|n^{m-1}\|_{L^{2}}^{2}+\|n^{m} -n^{m-1}\|_{L^{2}}^{2})+(\alpha\beta-g_{1}\alpha\delta)\|\nabla n^{m}\|_{L^{2 }}^{2}\\ \quad\quad\quad\quad\leq\frac{g_{1}}{4\delta}\frac{f_{1}}{2}(\|c^{m}\|_{L^{2 }}^{2}+\|n^{m}\|_{L^{2}}^{2}).\end{array} \tag{4.6}\] Choose \(\delta\) such that \(\delta<\frac{\beta}{g_{1}}\), then multiply \(\left(4.6\right)\) by \(\frac{8k\delta}{g_{1}}\), we get \[\begin{array}{l}(\|c^{m}\|_{L^{2}}^{2}-\|c^{m-1}\|_{L^{2}}^{2}+\|c^{m}-c^{m-1 }\|_{L^{2}}^{2})+\frac{2k\alpha}{b}\|\nabla_{\tau}c^{m}\|_{L^{2}}^{2}\\ \quad+\frac{\alpha}{b}(\|c^{m}_{\tau}\|_{L^{2}}^{2}-\|c^{m-1}_{\tau}\|_{L^{2}} ^{2}+\|c^{m}_{\tau}-c^{m-1}_{\tau}\|_{L^{2}}^{2})\\ \quad+\frac{4\delta\alpha}{g_{1}}(\|n^{m}\|_{L^{2}}^{2}-\|n^{m-1}\|_{L^{2}}^{2} +\|n^{m}-n^{m-1}\|_{L^{2}}^{2})+\frac{8k\delta}{g_{1}}(\alpha\beta-g_{1} \alpha\delta)\|\nabla n^{m}\|_{L^{2}}^{2}\\ \quad\quad\quad\quad\leq kf_{1}(\|c^{m}\|_{L^{2}}^{2}+\|n^{m}\|_{L^{2}}^{2}). 
\end{array} \tag{4.7}\] Let \(\tilde{\alpha}:=\min\{\frac{\alpha}{b},\ \frac{4\delta\alpha}{g_{1}},\ \frac{8 \delta}{g_{1}}(\alpha\beta-g_{1}\alpha\delta)\}\), then from \(\left(4.7\right)\), we have \[\begin{array}{l}(\|c^{m}\|_{L^{2}}^{2}-\|c^{m-1}\|_{L^{2}}^{2}+\|c^{m}-c^{m-1 }\|_{L^{2}}^{2})+k\tilde{\alpha}\|\nabla_{\tau}c^{m}\|_{L^{2}}^{2}\\ \quad+\tilde{\alpha}(\|c^{m}_{\tau}\|_{L^{2}}^{2}-\|c^{m-1}_{\tau}\|_{L^{2}}^{2} +\|c^{m}_{\tau}-c^{m-1}_{\tau}\|_{L^{2}}^{2})\\ \quad+\tilde{\alpha}(\|n^{m}\|_{L^{2}}^{2}-\|n^{m-1}\|_{L^{2}}^{2}+\|n^{m}-n^{m-1 }\|_{L^{2}}^{2})+\tilde{\alpha}k\|\nabla n^{m}\|_{L^{2}}^{2}\\ \quad\quad\quad\quad\quad\leq kf_{1}(\|c^{m}\|_{L^{2}}^{2}+\|n^{m}\|_{L^{2}}^{2}). \end{array}\] Summing the above inequality for \(m=1,\ 2,\ 3,\ \cdots,\ r,\quad 1\leq r\leq N,\) we find \[\begin{split}&\|c^{r}\|_{L^{2}}^{2}+\sum_{m=1}^{r}\|c^{m}-c^{m-1} \|_{L^{2}}^{2}+k\tilde{\alpha}\sum_{m=1}^{r}\|\nabla_{\tau}c^{m}\|_{L^{2}}^{2} \\ &\quad+\tilde{\alpha}(\|c^{r}_{\tau}\|_{L^{2}}^{2}+\sum_{m=1}^{r} \|c^{m}_{\tau}-c^{m-1}_{\tau}\|_{L^{2}}^{2})\\ &\quad+\tilde{\alpha}(\|n^{r}\|_{L^{2}}^{2}+\sum_{m=1}^{r}\|n^{m} -n^{m-1}\|_{L^{2}}^{2})+\tilde{\alpha}k\sum_{m=1}^{r}\|\nabla n^{m}\|_{L^{2}} ^{2}\\ &\qquad\qquad\leq kf_{1}\sum_{m=1}^{r}(\|c^{m}\|_{L^{2}}^{2}+\|n ^{m}\|_{L^{2}}^{2})+\|c^{0}\|_{L^{2}}^{2}+\tilde{\alpha}\|c^{0}_{\tau}\|_{L^ {2}}^{2}+\tilde{\alpha}\|n^{0}\|_{L^{2}}^{2}.\end{split}\] From here, we derive the following inequality: \[\begin{split}&\|c^{r}\|_{L^{2}}^{2}+\sum_{m=1}^{r}\|c^{m}-c^{m-1} \|_{L^{2}}^{2}+k\sum_{m=1}^{r}\|\nabla_{\tau}c^{m}\|_{L^{2}}^{2}\\ &\quad+\|c^{r}_{\tau}\|_{L^{2}}^{2}+\sum_{m=1}^{r}\|c^{m}_{\tau}- c^{m-1}_{\tau}\|_{L^{2}}^{2}\\ &\quad+\|n^{r}\|_{L^{2}}^{2}+\sum_{m=1}^{r}\|n^{m}-n^{m-1}\|_{L^{ 2}}^{2}+k\sum_{m=1}^{r}\|\nabla n^{m}\|_{L^{2}}^{2}\\ &\leq M(\|c^{0}\|_{L^{2}}^{2}+\|c^{0}_{\tau}\|_{L^{2}}^{2}+\|n^{ 0}\|_{L^{2}}^{2})+Mk\sum_{m=1}^{r}(\|c^{m}\|_{L^{2}}^{2}+\|n^{m}\|_{L^{2}}^{2} ).\end{split} \tag{4.8}\] Using Lemma 2.1 (discrete Gronwall's Lemma), we have \[\begin{split}&\max_{1\leq r\leq N}\|c^{r}\|_{L^{2}}^{2}+\max_{1 \leq r\leq N}\|n^{r}\|_{L^{2}}^{2}\\ &\quad\leq M(\|c^{0}\|_{L^{2}}^{2}+\|c^{0}_{\tau}\|_{L^{2}}^{2}+ \|n^{0}\|_{L^{2}}^{2}).\end{split} \tag{4.9}\] \[\begin{split}& k\sum_{m=1}^{r}(\|\nabla_{\tau}c^{m}\|_{L^{2}}^{2}+ \|\nabla n^{m}\|_{L^{2}}^{2})\\ &\quad\leq M(\|c^{0}\|_{L^{2}}^{2}+\|c^{0}_{\tau}\|_{L^{2}}^{2}+ \|n^{0}\|_{L^{2}}^{2})+Mkr\left(M(\|c^{0}\|_{L^{2}}^{2}+\|c^{0}_{\tau}\|_{L^{2} }^{2}+\|n^{0}\|_{L^{2}}^{2})\right),\\ &\quad\leq(M+M^{2}T)(\|c^{0}\|_{L^{2}}^{2}+\|c^{0}_{\tau}\|_{L^{2 }}^{2}+\|n^{0}\|_{L^{2}}^{2}).\end{split}\] Write \(M+M^{2}T\) again as \(M\), we have \[\begin{split}& k\sum_{m=1}^{r}(\|\nabla_{\tau}c^{m}\|_{L^{2}}^{2}+ \|\nabla n^{m}\|_{L^{2}}^{2})\\ &\quad\leq M(\|c^{0}\|_{L^{2}}^{2}+\|c^{0}_{\tau}\|_{L^{2}}^{2}+ \|n^{0}\|_{L^{2}}^{2}).\end{split} \tag{4.10}\] Similarly, we have \[\begin{split}&\sum_{m=1}^{r}(\|c^{m}-c^{m-1}\|_{L^{2}}^{2}+\|c^{m}_ {\tau}-c^{m-1}_{\tau}\|_{L^{2}}^{2}+\|n^{m}-n^{m-1}\|_{L^{2}}^{2})\\ &\quad\leq M(\|c^{0}\|_{L^{2}}^{2}+\|c^{0}_{\tau}\|_{L^{2}}^{2}+ \|n^{0}\|_{L^{2}}^{2}).\end{split} \tag{4.11}\] \[\begin{split}&\max_{1\leq r\leq N}\|c^{r}_{\tau}\|_{L^{2}}^{2}\leq M (\|c^{0}\|_{L^{2}}^{2}+\|c^{0}_{\tau}\|_{L^{2}}^{2}+\|n^{0}\|_{L^{2}}^{2}). 
\end{split} \tag{4.12}\] \[\begin{split}&\sum_{m=1}^{r}\|c^{m}_{\tau}-c^{m-1}_{\tau}\|_{L^{2}}^{2} \leq M(\|c^{0}\|_{L^{2}}^{2}+\|c^{0}_{\tau}\|_{L^{2}}^{2}+\|n^{0}\|_{L^{2}}^{2} ).\end{split} \tag{4.13}\] For the estimate of \(u^{m}\), we proceed in an analogous way, and obtain the following inequality from (4.3). \[\|u^{m}\|_{L^{2}}^{2}-\|u^{m-1}\|_{L^{2}}^{2}+\|u^{m}-u^{m-1}\|_{L^{2 }}^{2}+2k\xi\|\nabla u^{m}\|_{L^{2}}^{2}\] \[\quad\leq kC(\|u^{m}\|_{L^{2}}^{2}+\|u^{m}\|_{L^{2}}^{2}).\] Summing the above inequality for \(m=1,\ 2,\ 3,\ \cdots,\ r,\quad 1\leq r\leq N,\ \ \text{we find}\) \[\|u^{r}\|_{L^{2}}^{2}+\sum_{m=1}^{r}\|u^{m}-u^{m-1}\|_{L^{2}}^{2}+ 2k\xi\sum_{m=1}^{r}\|\nabla u^{m}\|_{L^{2}}^{2}\] \[\quad\leq kC\sum_{m=1}^{r}(\|u^{m}\|_{L^{2}}^{2}+\|u^{m}\|_{L^{2} }^{2})+\|u^{0}\|_{L^{2}}^{2}\] \[\quad\leq kC\sum_{m=1}^{r}\|u^{m}\|_{L^{2}}^{2}+krC\max_{1\leq m \leq r}\|u^{m}\|_{L^{2}}^{2}+\|u^{0}\|_{L^{2}}^{2}\] \[\quad\leq M(\|u^{0}\|_{L^{2}}^{2}+\|c^{0}\|_{L^{2}}^{2}+\|c^{0}_{ \tau}\|_{L^{2}}^{2}+\|n^{0}\|_{L^{2}}^{2})+Mk\sum_{m=1}^{r}\|u^{m}\|_{L^{2}}^{2}.\] By Lemma 2.1 (discrete Gronwall's Lemma), we have \[\max_{1\leq r\leq N}\|u^{r}\|_{L^{2}}^{2}\leq M(\|c^{0}\|_{L^{2}}^{2}+\|c^{0}_ {\tau}\|_{L^{2}}^{2}+\|n^{0}\|_{L^{2}}^{2}+\|u^{0}\|_{L^{2}}^{2}). \tag{4.14}\] \[\sum_{m=1}^{r}\|u^{m}-u^{m-1}\|_{L^{2}}^{2}\leq M(\|c^{0}\|_{L^{2}}^{2}+\|c^{0 }_{\tau}\|_{L^{2}}^{2}+\|n^{0}\|_{L^{2}}^{2}+\|u^{0}\|_{L^{2}}^{2}). \tag{4.15}\] \[k\sum_{m=1}^{r}\|\nabla u^{m}\|_{L^{2}}^{2}\leq M(\|c^{0}\|_{L^{2}}^{2}+\|c^{0 }_{\tau}\|_{L^{2}}^{2}+\|n^{0}\|_{L^{2}}^{2}+\|u^{0}\|_{L^{2}}^{2}). \tag{4.16}\] From (4.9) - (4.16), we obtain the following estimates for Rothe functions and step functions: \[\|(c_{k},\ n_{k})\|_{(L^{2}(0,\ T;\ L^{2}(\Omega)))^{2}}\leq M; \quad\ \|(c_{k},\ n_{k})\|_{(L^{2}(0,\ T;\ H^{1}(\Omega)))^{2}}\leq M,\] \[\|u_{k}\|_{L^{2}(0,\ T;\ H)}\leq M;\quad\ \|u_{k}\|_{L^{2}(0,\ T;\ V)}\leq M,\] \[\|(\tilde{c}_{k},\ \tilde{n}_{k})\|_{(L^{2}(0,\ T;\ L^{2}(\Omega)))^{2}} \leq M;\quad\ \ \|(\tilde{c}_{k},\ \tilde{n}_{k})\|_{(L^{2}(0,\ T;\ H^{1}(\Omega)))^{2}}\leq M,\] \[\|\tilde{u}_{k}\|_{L^{2}(0,\ T;\ H)}\leq M;\quad\ \ \|\tilde{u}_{k}\|_{L^{2}(0,\ T;\ V)}\leq M,\] \[\|c_{k\tau}\|_{L^{2}(0,\ T;\ L^{2}(\partial\Omega))}\leq M;\quad \ \|c_{k\tau}\|_{L^{2}(0,\ T;\ H^{1}(\partial\Omega))}\leq M,\] \[\|\tilde{c}_{k\tau}\|_{L^{2}(0,\ T;\ L^{2}(\partial\Omega))}\leq M ;\quad\ \ \|\tilde{c}_{k\tau}\|_{L^{2}(0,\ T;\ H^{1}(\partial\Omega))}\leq M,\] \[\|(\tilde{c}_{k}-c_{k},\ \tilde{n}_{k}-n_{k})\|_{(L^{2}(0,\ T;\ L^{2}( \Omega)))^{2}}\leq Mk;\quad\ \ \|\tilde{u}_{k}-u_{k}\|_{L^{2}(0,\ T;\ H)}\leq Mk. 
\tag{4.17}\] From the above estimates of Rothe functions and step functions, we conclude that there exist subsequences, still denoted by \((c_{k},\ c_{k\tau},\ n_{k},\ u_{k}),\ (\tilde{c}_{k},\ \tilde{c}_{k\tau},\ \tilde{n}_{k},\ \tilde{u}_{k})\), such that, as \(k\to 0\), we have \[\begin{array}{l}(c_{k},\ c_{k\tau},\ n_{k})\rightharpoonup(c,\ c_{\tau},\ n);\ (\tilde{c}_{k},\ \tilde{c}_{k\tau},\ \tilde{n}_{k})\rightharpoonup(\tilde{c},\ \tilde{c}_{\tau},\ \tilde{n})\text{ weakly in }L^{2}(0,\ T;\ L^{2}(\Omega)),\\ (c_{k},\ c_{k\tau},\ n_{k})\rightharpoonup(c,\ c_{\tau},\ n);\ (\tilde{c}_{k},\ \tilde{c}_{k\tau},\ \tilde{n}_{k})\rightharpoonup(\tilde{c},\ \tilde{c}_{\tau},\ \tilde{n})\text{ weakly in }L^{2}(0,\ T;\ H^{1}(\Omega)),\\ u_{k}\rightharpoonup u;\ \tilde{u}_{k}\rightharpoonup\tilde{u}\text{ weakly in }L^{2}(0,\ T;\ H),\\ u_{k}\rightharpoonup u;\ \tilde{u}_{k}\rightharpoonup\tilde{u}\text{ weakly in }L^{2}(0,\ T;\ V),\\ (c,\ c_{\tau},\ n,\ u)=(\tilde{c},\ \tilde{c}_{\tau},\ \tilde{n},\ \tilde{u})\text{ almost everywhere.}\end{array} \tag{4.18}\] Next, we want to show that there exist subsequences of \((c_{k},\ n_{k},\ u_{k})\), still denoted by \((c_{k},\ n_{k},\ u_{k})\), such that as \(k\to 0\), we have \[\begin{array}{l}(c_{k},\ n_{k})\to(c,\ n)\text{ strongly in }L^{2}(0,\ T;\ L^{2}(\Omega)),\\ u_{k}\to u\text{ strongly in }L^{2}(0,\ T;\ H).\end{array} \tag{4.19}\] To this end, we apply Lemma 2.2 with \(Y=\left(H^{1}(\Omega)\right)^{2}\times V\), \(X=\left(L^{2}(\Omega)\right)^{2}\times H\), and \(p=2\). The embedding of \(Y\) into \(X\) is compact. Let \(G\) be the family of functions \((\tilde{c}_{k},\ \tilde{n}_{k},\ \tilde{u}_{k})\). Then \(G\) is bounded in \(L^{2}(0,\ T;\ Y)\) and \(L^{2}(0,\ T;\ X)\) due to (4.17).
Assume \((c^{0},\ n^{0},\ u^{0})\in\left(H^{1}\right)^{2}\times V,\ \text{we want to show (\ref{eq:2}) in Lemma 2.2 holds.}\) For this, we rewrite (2.3) using Rothe functions \((\tilde{c}_{k},\ \tilde{c}_{k\tau},\ \tilde{n}_{k},\ \tilde{u}_{k})\) and step functions \((c_{k},\ c_{k\tau},\ n_{k},\ u_{k})\), then multiply \(\eqref{eq:2}_{1}\) by \(\phi_{1}\), \(\eqref{eq:2}_{2}\) by \(\phi_{2}\), \(\eqref{eq:2}_{3}\) by \(\phi_{3}\), where \(\phi_{1},\ \phi_{2}\) are any functions in \(H^{1}(\Omega)\), and \(\phi_{3}\) is any function in \(V,\) integrate by parts, using boundary conditions, we obtain \[\begin{array}{l}\int_{\Omega}\partial_{t}\tilde{c}_{k}(t)\phi_{1}dx+\alpha \int_{\Omega}\nabla c_{k}(t)\nabla\phi_{1}dx+\frac{\alpha}{b}\int_{\partial \Omega}\partial_{t}\tilde{c}_{k\tau}(t)\phi_{1\tau}d\sigma+\\ \quad\frac{\alpha}{b}\int_{\partial\Omega}\nabla_{\tau}c_{k}(t)\nabla_{\tau} \phi_{1}d\sigma+\int_{\Omega}u_{k}(t)\nabla c_{k}(t)\phi_{1}dx=\int_{\Omega}- n_{k}(t)f(c_{k}(t))\phi_{1}dx,\\ \int_{\Omega}\partial_{t}\tilde{n}_{k}(t)\phi_{2}dx+\int_{\Omega}\left(\beta \nabla n_{k}(t)-g(n_{k}(t),\ c_{k}(t))\nabla c_{k}(t)\right)\nabla\phi_{2}dx \\ \quad+\int_{\Omega}u_{k}(t)\nabla n_{k}(t)\phi_{2}dx=0,\\ \int_{\Omega}\partial_{t}\tilde{u}_{k}(t)\phi_{3}dx+\int_{\Omega}\xi\nabla u _{k}(t)\nabla\phi_{3}dx+\int_{\Omega}(u_{k}(t)\cdot\nabla)u_{k}(t)\phi_{3}dx= \\ \quad\int_{\Omega}n_{k}(t)\nabla\sigma\phi_{3}dx.\end{array} \tag{4.20}\] We then integrate (4.20) between \(t\) and \(t+a,\ t\in(0,\ T),\ a>0,\) we get \[\begin{array}{l}\int_{\Omega}(\tilde{c}_{k}(t+a)-\tilde{c}_{k}(t))\phi_{1}dx+ \frac{\alpha}{b}\int_{\partial\Omega}(\tilde{c}_{k\tau}(t+a)-\tilde{c}_{k\tau} (t))\phi_{1\tau}d\sigma=-\alpha\int_{t}^{t+a}\int_{\Omega}\nabla c_{k}(s) \nabla\phi_{1}dxds\\ \quad-\frac{\alpha}{b}\int_{t}^{t+a}\int_{\partial\Omega}\nabla_{\tau}c_{k}(s) \nabla_{\tau}\phi_{1}d\sigma ds-\int_{t}^{t+a}\int_{\Omega}u_{k}(s)\nabla c_{k} (s)\phi_{1}dxds-\int_{t}^{t+a}\int_{\Omega}n_{k}(s)f(c_{k}(s))\phi_{1}dxds,\\ \int_{\Omega}(\tilde{n}_{k}(t+a)-\tilde{n}_{k}(t))\phi_{2}dx=-\int_{t}^{t+a} \int_{\Omega}\left(\beta\nabla n_{k}(s)-g(n_{k}(s),\ c_{k}(s))\nabla c_{k}(s) \right)\nabla\phi_{2}dxds\\ \quad-\int_{t}^{t+a}\int_{\Omega}u_{k}(s)\nabla n_{k}(s)\phi_{2}dxds,\\ \int_{\Omega}(\tilde{u}_{k}(t+a)-\tilde{u}_{k}(t))\phi_{3}dx=-\int_{t}^{t+a} \int_{\Omega}\xi\nabla u_{k}(s)\nabla\phi_{3}dxds-\int_{t}^{t+a}\int_{\Omega} (u_{k}(s)\cdot\nabla)u_{k}(s)\phi_{3}dxds\\ \quad+\int_{t}^{t+a}\int_{\Omega}n_{k}(s)\nabla\sigma\phi_{3}dxds.\end{array} \tag{4.21}\] In the above equations, let \[\begin{array}{l}\phi_{1}=\tilde{c}_{k}(t+a)-\tilde{c}_{k}(t),\\ \phi_{1\tau}=\tilde{c}_{k\tau}(t+a)-\tilde{c}_{k\tau}(t),\\ \phi_{2}=\tilde{n}_{k}(t+a)-\tilde{n}_{k}(t),\\ \phi_{3}=\tilde{u}_{k}(t+a)-\tilde{u}_{k}(t).\end{array}\] Then integrate from \(0\) to \(T-a,\) we get from \(\eqref{eq:T-a}_{1},\) \[\int_{0}^{T-a}\left\|\tilde{c}_{k}(t+a)-\tilde{c}_{k}(t)\right\|_{L^{2}}^{2}+ \frac{\alpha}{b}\int_{0}^{T-a}\left\|\tilde{c}_{k\tau}(t+a)-\tilde{c}_{k\tau} (t)\right\|_{L^{2}}^{2}=I_{1}+I_{2}+I_{3}+I_{4},\] where \[\begin{array}{l}I_{1}=-\alpha\int_{0}^{T-a}\int_{t}^{t+a}\int_{\Omega} \nabla c_{k}(s)\nabla\phi_{1}dxdsdt,\\ I_{2}=-\frac{\alpha}{b}\int_{0}^{T-a}\int_{t}^{t+a}\int_{\partial\Omega}\nabla_ {\tau}c_{k}(s)\nabla_{\tau}\phi_{1}d\sigma dsdt,\\ I_{3}=-\int_{0}^{T-a}\int_{t}^{t+a}\int_{\Omega}u_{k}(s)\nabla c_{k}(s)\phi_{1 }dxdsdt,\\ I_{4}=-\int_{0}^{T-a}\int_{t}^{t+a}\int_{\Omega}n_{k}(s)f(c_{k}(s))\phi_{1}dxdsdt.\end{array}\] Using 
H\(\ddot{o}\)lder's inequality, Fubini's Theorem, and (4.17), we have \[\begin{array}{rl}|I_{1}|&\leq\alpha\int_{0}^{T-a}\int_{t}^{t+a}\|\nabla c_{ k}(s)\|_{L^{2}}\|\nabla(\tilde{c}_{k}(t+a)-\tilde{c}_{k}(t))\|_{L^{2}}dsdt\\ &=\alpha\int_{0}^{T-a}\|\nabla(\tilde{c}_{k}(t+a)-\tilde{c}_{k}(t))\|_{L^{2}} \left(\int_{t}^{t+a}\|\nabla c_{k}(s)\|_{L^{2}}ds\right)dt\\ &\leq a^{\frac{1}{2}}\left(\int_{0}^{T}\|\nabla c_{k}(s)\|_{L^{2}}^{2}ds\right) ^{\frac{1}{2}}\alpha\int_{0}^{T-a}\|\nabla(\tilde{c}_{k}(t+a)-\tilde{c}_{k}(t) )\|_{L^{2}}dt\\ &\leq a^{\frac{1}{2}}M\alpha(T-a)^{\frac{1}{2}}2\left(\int_{0}^{T}\|\nabla \tilde{c}_{k}(t)\|_{L^{2}}^{2}dt\right)^{\frac{1}{2}}\\ &\leq a^{\frac{1}{2}}M\alpha T^{\frac{1}{2}}.\end{array}\] Then we have \(I_{1}\to 0\) as \(a\to 0.\) Similarly, we have \(I_{2}\to 0\) as \(a\to 0.\) For the estimates of \(I_{3}\), by H\(\ddot{o}\)lder's inequality, Fubini's Theorem, Sobolev embedding theorem, and (4.17), we have \[|I_{3}| \leq\int_{0}^{T-a}\int_{t}^{t+a}\parallel u_{k}(s)\|_{L^{6}}\|\nabla c _{k}(s)\|_{L^{2}}\|\tilde{c}_{k}(t+a)-\tilde{c}_{k}(t)\|_{L^{3}}dsdt\] \[\leq\int_{0}^{T-a}2\|\nabla\tilde{c}_{k}(t)\|_{L^{2}}\left\{\int_ {t}^{t+a}\|u_{k}(s)\|_{V}\|\nabla c_{k}(s)\|_{L^{2}}ds\right\}dt\] \[\leq\int_{0}^{T}\|u_{k}(s)\|_{V}\|\nabla c_{k}(s)\|_{L^{2}}\left( \int_{[s-a,\ s]\bigcap[0,\ T-a]}2\|\nabla\tilde{c}_{k}(t)\|_{L^{2}}dt\right)ds\] \[\leq\int_{0}^{T}\parallel u_{k}(s)\|_{V}\|\nabla c_{k}(s)\|_{L^{2 }}\left\{2a^{\frac{1}{2}}\left(\int_{0}^{T}\|\nabla\tilde{c}_{k}(t)\|_{L^{2}}^ {2}dt\right)^{\frac{1}{2}}\right\}ds\] \[\leq a^{\frac{1}{2}}M\int_{0}^{T}\parallel u_{k}(s)\|_{V}\|\nabla c _{k}(s)\|_{L^{2}}ds\] \[\leq a^{\frac{1}{2}}M\left(\int_{0}^{T}\parallel u_{k}(s)\|_{V}^{ 2}ds\right)^{\frac{1}{2}}\left(\int_{0}^{T}\|\nabla c_{k}(s)\|_{L^{2}}^{2}ds \right)^{\frac{1}{2}}\] \[\leq a^{\frac{1}{2}}M.\] Then we have \(I_{3}\to 0\) as \(a\to 0\). We now continue to estimate \(|I_{4}|:\) \[|I_{4}| \leq\int_{0}^{T-a}\int_{t}^{t+a}f_{1}\|n_{k}(s)\|_{L^{2}}\|\tilde{c }_{k}(t+a)-\tilde{c}_{k}(t)\|_{L^{2}}dsdt\] \[=f_{1}\int_{0}^{T-a}\|\tilde{c}_{k}(t+a)-\tilde{c}_{k}(t)\|_{L^{2 }}\left(\int_{t}^{t+a}\|n_{k}(s)\|_{L^{2}}ds\right)dt\] \[\leq f_{1}a^{\frac{1}{2}}\left(\int_{0}^{T}\|n_{k}(s)\|_{L^{2}}^ {2}ds\right)^{\frac{1}{2}}\int_{0}^{T-a}\|\tilde{c}_{k}(t+a)-\tilde{c}_{k}(t) \|_{L^{2}}dt\] \[\leq f_{1}a^{\frac{1}{2}}(T-a)^{\frac{1}{2}}M.\] We have \(I_{4}\to 0\) as \(a\to 0\). Therefore, we have proved that \[\int_{0}^{T-a}\|\tilde{c}_{k}(t+a)-\tilde{c}_{k}(t)\|_{L^{2}}^{2}\ \to 0\quad\text{as }a\to 0. \tag{4.22}\] Proceeding in an analogous way, we have \[\int_{0}^{T-a}\|\tilde{n}_{k}(t+a)-\tilde{n}_{k}(t)\|_{L^{2}}^{2}\ \to 0\quad\text{as }a\to 0. \tag{4.23}\] \[\int_{0}^{T-a}\|\tilde{u}_{k}(t+a)-\tilde{u}_{k}(t)\|_{L^{2}}^{2}\ \to 0\quad\text{as }a\to 0. \tag{4.24}\] With (4.22) - (4.24), and lemma 2.2, we conclude that (4.19) holds. Proceed in the same way as we derive (3.25) - (3.26), we also have, \[f(c_{k})\to f(c)\quad\text{strongly in }L^{2}(0,\ T,\ L^{2}(\Omega)), \tag{4.25}\] \[g(n_{k},\ c_{k})\to g(n,\ c)\quad\text{strongly in }L^{2}(0,\ T,\ L^{2}(\Omega)). \tag{4.26}\] Next, we want to pass the limit in (4.20) by letting \(k\to 0\), to this end, we consider \(\phi_{1},\ \phi_{2}\in C^{\infty}\bigcap H^{1},\ \phi_{3}\in\Upsilon\), and \((\Psi_{1},\ \Psi_{2},\ \Psi_{3})\in\big{(}C^{1}([0,\ T],\ R)\big{)}^{3}\), with \(\Psi_{1}(T)=\Psi_{2}(T)=\Psi_{3}(T)=0\). 
Multiply \(\left(\ref{4.20}\right)_{1}\) by \(\Psi_{1},\)\(\left(\ref{4.20}\right)_{2}\) by \(\Psi_{2},\)\(\left(\ref{4.20}\right)_{3}\) by \(\Psi_{3},\) integrate over \([0,\ T],\) integration by parts, we get \[\begin{array}{l}-\int_{0}^{T}\int_{\Omega}\tilde{c}_{k}(t)\phi_{1}\Psi_{1}^{ \prime}(t)dxdt+\alpha\int_{0}^{T}\int_{\Omega}\nabla c_{k}(t)\nabla\phi_{1} \Psi_{1}(t)dxdt-\frac{\alpha}{b}\int_{0}^{T}\int_{\partial\Omega}\tilde{c}_{k \tau}(t)\phi_{1\tau}\Psi_{1}^{\prime}(t)d\sigma dt\\ \quad+\frac{\alpha}{b}\int_{0}^{T}\int_{\partial\Omega}\nabla_{\tau}c_{k}(t) \nabla_{\tau}\phi_{1}\Psi_{1}(t)d\sigma dt+\int_{0}^{T}\int_{\Omega}u_{k}(t) \nabla c_{k}(t)\phi_{1}\Psi_{1}(t)dxdt=\Psi_{1}(0)\int_{\Omega}\tilde{c}_{k}(x,\ 0)\phi_{1}dx\\ \quad+\frac{\alpha}{b}\Psi_{1}(0)\int_{\partial\Omega}\tilde{c}_{k\tau}(x,\ 0) \phi_{1\tau}d\sigma-\int_{0}^{T}\int_{\Omega}n_{k}(t)f(c_{k}(t))\phi_{1}\Psi_{1 }(t)dxdt,\\ -\int_{0}^{T}\int_{\Omega}\tilde{n}_{k}(t)\phi_{2}\Psi_{2}^{\prime}(t)dxdt+ \int_{0}^{T}\int_{\Omega}\left(\beta\nabla n_{k}(t)-g(n_{k}(t),\ c_{k}(t)) \nabla c_{k}(t)\right)\nabla\phi_{2}\Psi_{2}(t)dxdt\\ \quad\quad+\int_{0}^{T}\int_{\Omega}u_{k}(t)\nabla n_{k}(t)\phi_{2}\Psi_{2}(t )dxdt=\Psi_{2}(0)\int_{\Omega}\tilde{n}_{k}(x,\ 0)\phi_{2}dx,\\ -\int_{0}^{T}\int_{\Omega}\tilde{u}_{k}(t)\phi_{3}\Psi_{3}^{\prime}(t)dxdt+ \int_{0}^{T}\int_{\Omega}\xi\nabla u_{k}(t)\nabla\phi_{3}\Psi_{3}(t)dxdt+ \int_{0}^{T}\int_{\Omega}(u_{k}(t)\cdot\nabla)u_{k}(t)\phi_{3}\Psi_{3}(t)dxdt \\ \quad\quad=\Psi_{3}(0)\int_{\Omega}\tilde{u}_{k}(x,\ 0)\phi_{3}dx+ \int_{0}^{T}\int_{\Omega}n_{k}(t)\nabla\sigma\phi_{3}\Psi_{3}(t)dxdt.\end{array} \tag{4.27}\] Now we take the limit in \(\left(\ref{4.27}\right)\) by letting \(k\to 0,\) and using \(\left(\ref{4.18}\right),\)\(\left(\ref{4.19}\right),\)\(\left(\ref{4.25}\right),\)\(\left(\ref{4.26}\right)\), we get \[\begin{array}{l}-\int_{0}^{T}\int_{\Omega}c(t)\phi_{1}\Psi_{1}^{\prime}(t) dxdt+\alpha\int_{0}^{T}\int_{\Omega}\nabla c(t)\nabla\phi_{1}\Psi_{1}(t)dxdt- \frac{\alpha}{b}\int_{0}^{T}\int_{\partial\Omega}c_{\tau}(t)\phi_{1\tau}\Psi_{ 1}^{\prime}(t)d\sigma dt\\ +\frac{\alpha}{b}\int_{0}^{T}\int_{\partial\Omega}\nabla_{\tau}c(t)\nabla_{ \tau}\phi_{1}\Psi_{1}(t)d\sigma dt+\int_{0}^{T}\int_{\Omega}u(t)\nabla c(t) \phi_{1}\Psi_{1}(t)dxdt\\ =\Psi_{1}(0)\int_{\Omega}c(x,\ 0)\phi_{1}dx+\frac{\alpha}{b}\Psi_{1}(0)\int_{ \partial\Omega}c_{\tau}(x,\ 0)\phi_{1\tau}d\sigma-\int_{0}^{T}\int_{\Omega}n(t)f(c(t))\phi_{1}\Psi_{1}(t)dxdt,\\ -\int_{0}^{T}\int_{\Omega}n(t)\phi_{2}\Psi_{2}^{\prime}(t)dxdt+\int_{0}^{T} \int_{\Omega}\left(\beta\nabla n(t)-g(n(t),\ c(t))\nabla c(t)\right)\nabla\phi _{2}\Psi_{2}(t)dxdt\\ +\int_{0}^{T}\int_{\Omega}u(t)\nabla n(t)\phi_{2}\Psi_{2}(t)dxdt=\Psi_{2}(0) \int_{\Omega}n(x,\ 0)\phi_{2}dx,\\ -\int_{0}^{T}\int_{\Omega}u(t)\phi_{3}\Psi_{3}^{\prime}(t)dxdt+\int_{0}^{T} \int_{\Omega}\xi\nabla u(t)\nabla\phi_{3}\Psi_{3}(t)dxdt+\int_{0}^{T}\int_{ \Omega}(u(t)\cdot\nabla)u(t)\phi_{3}\Psi_{3}(t)dxdt\\ =\Psi_{3}(0)\int_{\Omega}u(x,\ 0)\phi_{3}dx+\int_{0}^{T}\int_{\Omega}n(t)\nabla \sigma\phi_{3}\Psi_{3}(t)dxdt.\end{array} \tag{4.28}\] \(\left(\ref{4.28}\right)\) holds for any \(\phi_{1},\ \phi_{2}\in C^{\infty}\bigcap H^{1},\ \phi_{3}\in\Upsilon,\) by continuity, it holds for any \(\phi_{1},\ \phi_{2}\in H^{1},\ \phi_{3}\in V\). 
By choosing \(\left(\Psi_{1},\ \Psi_{2},\ \Psi_{3}\right)\in\left(C_{0}^{\infty}[0,\ T]\right)^{3}\), we conclude that \(\left(\ref{4.6}\right)\) holds in the weak sense on \(\left(0,\ T\right)\). By a standard argument, we have \(\left(c(0),\ n(0),\ u(0)\right)=\left(c_{0},\ n_{0},\ u_{0}\right)\). The proof of Theorem 1.1 is complete.
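To make the discretization machinery behind the proof more tangible, the following is a minimal numerical sketch of Rothe's method for a toy problem: the one-dimensional heat equation \(c_{t}=\alpha c_{xx}\) with homogeneous Neumann boundary conditions, advanced by the implicit step \((c^{m}-c^{m-1})/k=\alpha\Delta c^{m}\) and interpolated by a piecewise-constant step function (the role of \(c_{k}\) above) and a piecewise-linear Rothe function (the role of \(\tilde{c}_{k}\) above). This is only an illustration of the structure of the scheme; the full coupled system for \(c\), \(n\) and \(u\), its dynamic boundary condition, and every numerical parameter below are assumptions made for the example and are not taken from the paper.

```python
import numpy as np

# Illustrative sketch only: Rothe's method (implicit time discretization) for
# c_t = alpha * c_xx on (0, 1) with homogeneous Neumann boundary conditions.
# All parameter values below are assumptions chosen for the demonstration.
alpha, T, N, J = 0.1, 1.0, 50, 64     # diffusivity, final time, time steps, grid cells
k, h = T / N, 1.0 / J                 # time step k and mesh width h

x = np.linspace(0.0, 1.0, J + 1)
c0 = np.cos(np.pi * x)                # initial datum c^0

# Second-difference matrix with reflection (Neumann) rows at both ends.
D2 = np.zeros((J + 1, J + 1))
for i in range(1, J):
    D2[i, i - 1:i + 2] = [1.0, -2.0, 1.0]
D2[0, 0:2] = [-2.0, 2.0]
D2[J, J - 1:J + 1] = [2.0, -2.0]
A = np.eye(J + 1) - k * alpha * D2 / h**2   # backward-Euler operator

# One stationary (elliptic) problem per time level: (I - k*alpha*Delta_h) c^m = c^{m-1}.
levels = [c0.copy()]
for m in range(1, N + 1):
    levels.append(np.linalg.solve(A, levels[-1]))

def step_function(t):
    """Piecewise-constant interpolant (the step function in the text)."""
    m = min(int(np.ceil(t / k)), N)
    return levels[m]

def rothe_function(t):
    """Piecewise-linear interpolant (the Rothe function, written with a tilde in the text)."""
    m = min(int(t / k), N - 1)
    w = (t - m * k) / k
    return (1.0 - w) * levels[m] + w * levels[m + 1]

# A discrete L^2-type energy of the kind controlled by the a priori estimates;
# with trapezoidal weights it is non-increasing along the implicit scheme.
wts = np.full(J + 1, h)
wts[0] = wts[-1] = h / 2.0
energy = [float(np.sum(wts * u**2)) for u in levels]
print("energy: %.4f -> %.4f (non-increasing)" % (energy[0], energy[-1]))
print("max |Rothe - step| at t = 0.5:",
      float(np.max(np.abs(rothe_function(0.5) - step_function(0.5)))))
```

The two interpolants differ by \(O(k)\) and converge to the same limit as \(k\to 0\), mirroring the estimate \(\|\tilde{c}_{k}-c_{k}\|_{L^{2}(0,\ T;\ L^{2}(\Omega))}\leq Mk\) recorded in (4.17).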
2304.05456
Dual Systolic Graphs
We define a family of graphs we call dual systolic graphs. This definition comes from graphs that are duals of systolic simplicial complexes. Our main result is a sharp (up to constants) isoperimetric inequality for dual systolic graphs. The first step in the proof is an extension of the classical isoperimetric inequality of the boolean cube. The isoperimetric inequality for dual systolic graphs, however, is exponentially stronger than the one for the boolean cube. Interestingly, we know that dual systolic graphs exist, but we do not yet know how to efficiently construct them. We, therefore, define a weaker notion of dual systolicity. We prove the same isoperimetric inequality for weakly dual systolic graphs, and at the same time provide an efficient construction of a family of graphs that are weakly dual systolic. We call this family of graphs clique products. We show that there is a non-trivial connection between the small set expansion capabilities and the threshold rank of clique products, and believe they can find further applications.
Daniel Carmon, Amir Yehudayoff
2023-04-11T19:04:56Z
http://arxiv.org/abs/2304.05456v2
# Dual systolic graphs ###### Abstract. We define a family of graphs we call _dual systolic_ graphs. This definition comes from graphs that are duals of systolic simplicial complexes. Our main result is a sharp (up to constants) isoperimetric inequality for dual systolic graphs. The first step in the proof is an extension of the classical isoperimetric inequality of the boolean cube. The isoperimetric inequality for dual systolic graphs, however, is exponentially stronger than the one for the boolean cube. Interestingly, we know that dual systolic graphs exist, but we do not yet know how to efficiently construct them. We, therefore, define a weaker notion of dual systolicity. We prove the same isoperimetric inequality for weakly dual systolic graphs, and at the same time provide an efficient construction of a family of graphs that are weakly dual systolic. We call this family of graphs _clique products_. We show that there is a non-trivial connection between the small set expansion capabilities and the threshold rank of clique products, and believe they can find further applications. ## 1. Introduction The systole of a space is the smallest length of a cycle that can not be contracted. Systoles were originally studied for manifolds, but our focus is combinatorial. For graphs, the systole is the girth. A graph is systolic if it locally looks like a tree. In a broader topological setting, a simplicial complex1 is systolic if every short cycle in its one-skeleton is contractible (roughly speaking, there are no short induced cycles). The systolic condition for simplicial complexes can be thought of as a combinatorial definition of non-positive curvature. Systolic inequalities play an important role in many areas of mathematics (see [13, 7] and references within). Footnote 1: A collection of finite sets that is closed downwards. A particular systolic condition for simplicial complexes, identified by Davis and Moussong, is the _no empty square_ condition (see [18, 4, 10] and references within). This condition implies that the one-skeleton does not contain an induced cycle of length at most four. It was open whether simplicial complexes satisfying this condition exist or not. Januszkiewicz the first to construct such complexes [10, 11]. Their construction is deep and sophisticated. Its importance is also evident from its application in machine learning [3]. This application highlights the fundamental importance of systoles. Roughly speaking, some systolic spaces lead to learning problems that are difficult to solve due to a "global" reason but not a "local" reason. The starting point of our work was to improve our understanding of the JS constructions. A specific question that came up is _what is special about duals of systolic spaces?_ Every pure2 simplicial complex defines a dual graph. The vertices of the graph are the facets3 in the complex. Two vertices are connected by an edge if the two facets share a face of co-dimension one. The fact that the graph is a dual of a complex automatically leads to non-trivial behavior. We formalize such behavior using the notion of pseudo-cubes. Footnote 2: All maximal simplexes have the same dimension. Footnote 3: Full-dimensional simplexes. **Definition**.: _We say that a graph \(G=(V,E)\) is a pseudo-cube if the following conditions are satisfied:_ 1. _It is_ \(d\)_-regular._ 2. 
_It has an edge_ \(d\)_-coloring; that is, there is_ \(\chi:E\to[d]\) _so that for all_ \(v\in V\)_, if_ \(E_{v}\) _is the set of_ \(d\) _edges touching the vertex_ \(v\)_, then_ \(\chi(E_{v})=[d]\)_._ 3. _For every color_ \(i\in[d]\)_, the graph obtained from_ \(G\) _by contracting all edges of color_ \(\neq i\) _does not have self-loop._ The number \(d\) is called the _dimension_ of the pseudo-cube. Property (iii) is called _color independence_. In other words, it means that for every color \(i\) and every cycle in \(G\), the number of edges of color \(i\) along the cycle is not one (either zero or at least two). While this definition is simple to state, let us describe some examples, and provide some intuition. A well-known example of a pseudo-cube is the \(d\)-dimensional boolean cube \(Q_{d}\). Its vertex set is \(\{0,1\}^{d}\) and two vertices are connected by an edge if they differ in a single entry. The edge coloring comes from the \(d\) dimensions or directions in the cube. The color independence follows from the linear independence of the standard basis. The definition of a pseudo-cube graph corresponds to the graph being dual to a _chromatic_ and _non-branching_ pure complex. A simplicial complex is _chromatic_ if there is a coloring of its vertices such that each facet has a vertex of each color. A pure simplicial is _non-branching_ if every simplex of co-dimension one is a face of exactly two facets. Such simplicial complexes allow us to encode data in a useful manner. The general theme is to model "legal or possible states" as topological spaces. Consider, for example, the following scenario. There are three players \(A,B,C\), and a deck of four cards \(1,2,3,4\). Each player gets a unique card. There are \(24\) possible assignments of cards to players. This scenario can be modeled as a chromatic and non-branching simplicial complex (see Figure 1 and Example 3 in [22]). A vertex in the complex corresponds to the local view of a single player (i.e., the vertices are pairs of the form \((A,1)\)). The color of a vertex is the player's name (the color of \((A,1)\) is \(A\)). The \(24\) facets correspond to possible states of the world such as \(\{(A,2),(B,1),(C,4)\}\). Each facet has three colors. When a single player switches between her card and the free card, we move to an adjacent facet. This simplicial complex turns out to be a triangulation of the two-dimensional torus. The translation to the topological language leads to important results in computer science. A prominent example is the proof of the asynchronous computability theorem in [8], where the different colors corresponded to different processors in a distributed setting. In the recent work [3], such simplicial complexes were used to model multiclass learning problems. The following example provides some additional intuition. Imagine the unit interval \(I=[0,1]\) placed on the line \(\mathbb{R}\). Let \(x\) be the reflection around \(1\in\mathbb{R}\) and let \(y\) be the reflection around \(0\in\mathbb{R}\). Both \(x\) and \(y\) are involutions. If we apply \(x\) on \(I\) we get \(xI=[1,2]\), and if we apply \(y\) we get \(yI=[-1,0]\). The group \(D\) generated by \(x\) and \(y\) is the infinite dihedral group. If we apply all of \(D\) on \([0,1]\), we get a tiling of the line by unit intervals. This tiling defines a simplicial complex. The dual graph is the Cayley graph of \(D\) with \(x,y\) as generators, and is a pseudo-cube. 
The intermediate value theorem says that if we move continuously in the line starting at some unit interval and coming back to the same interval, then we must cross the same boundary of the interval more than once. This topological property is abstracted by color independence. Figure 1. Four cards to three players as a simplicial complex. The \(24\) triangles are facets, and vertices of the same color and number are identified. The boolean cube is the dual graph of a non-branching and chromatic pure complex. The vertices of this complex are the \(2d\) elements in \([d]\times\{0,1\}\). Each \(x\in\{0,1\}^{d}\) defines a full dimensional simplex \(\sigma_{x}=\{(i,x_{i}):i\in[d]\}\). Two vertices \(x,y\in\{0,1\}^{d}\) are connected by an edge iff the common face \(\sigma_{x}\cap\sigma_{y}\) has co-dimension one. This simplicial complex is, however, not systolic. For example, for \(d=2\), this complex is an empty square. The JS complexes are systolic. Their dual graphs are therefore _dual systolic_. The following definition provides a combinatorial abstraction. **Definition**.: _A graph \(G\) is dual systolic of dimension \(d\) if it is a pseudo-cube of dimension \(d\) so that for every color \(i\in[d]\), the graph obtained from \(G\) by contracting all edges of color \(\neq i\) is simple (that is, there are no double edges between vertices)._ Stated differently, dual systolicity means that for every color \(i\), the following holds. Let \(V_{1},V_{2}\subseteq V\) be two distinct connected components with respect to edges of color \(\neq i\). There is at most a single edge of color \(i\) between \(V_{1}\) and \(V_{2}\) in \(G\). _How does this definition relate to systoles?_ The observation is that if a simplicial complex has no empty squares then its dual graph is dual systolic. Let \(\mathcal{C}\) be a chromatic, non-branching and pure simplicial complex, and let \(G\) be its dual graph. The _star_ of a vertex \(v\) in \(\mathcal{C}\) is the set of all simplexes \(\sigma\in\mathcal{C}\) so that \(\{v\}\cup\sigma\in\mathcal{C}\). The vertex \(v\) is called the center of the star. There is a one-to-one correspondence between stars in \(\mathcal{C}\) whose center has color \(i\) and connected components in the graph obtained by deleting edges with color \(i\) from \(G\). Let \(v_{1},v_{2}\) be two distinct vertices of color \(i\) in \(\mathcal{C}\). Assume that the two corresponding connected components have two or more \(i\)-edges between them. It follows that the corresponding stars share two different simplexes \(\sigma_{1},\sigma_{2}\) of co-dimension one. There are two distinct vertices \(u_{1}\in\sigma_{1}\) and \(u_{2}\in\sigma_{2}\) with the same color. The square \(v_{1},u_{1},v_{2},u_{2}\) has alternating colors so it is empty (because the complex is chromatic). The \(d\)-dimensional boolean cube \(Q_{d}\) is not dual systolic for \(d>1\); for every \(i\in[d]\), the the graph obtained from \(Q_{d}\) by contracting all edges of color \(\neq i\) consists of \(2^{d-1}\) parallel edges. In a sense, it is the "least dual systolic" pseudo-cube. The dual graphs of the JS complexes are dual systolic. The definition of dual systolic graphs is simple and natural. The constructions of Januszkiewicz and Swiatkowski imply that dual systolic graphs exist [10, 11]. It seems, however, quite complicated to efficiently build dual systolic graphs. For example, we do not yet know of strongly polynomial time algorithms for building them. 
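As a concrete companion to these definitions, here is a small sketch (an editorial illustration, not taken from the paper) that tests the contraction conditions for an edge-colored graph and runs the test on the boolean cube: \(Q_{d}\) passes the pseudo-cube test for every \(d\), but the contracted graphs contain parallel edges for \(d>1\), so \(Q_{d}\) is not dual systolic, exactly as noted above. Regularity and the proper edge coloring, conditions (i) and (ii), hold for \(Q_{d}\) by construction and are not re-checked here.

```python
from itertools import product

def boolean_cube(d):
    """Vertices of Q_d and its edges, colored by the coordinate that is flipped."""
    verts = list(product((0, 1), repeat=d))
    edges = [(x, x[:i] + (1,) + x[i + 1:], i)
             for x in verts for i in range(d) if x[i] == 0]
    return verts, edges

def component_map(verts, edges):
    """Map each vertex to a representative of its connected component (union-find)."""
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v, _ in edges:
        parent[find(u)] = find(v)
    return {v: find(v) for v in verts}

def contraction_test(verts, edges, d):
    """Check condition (iii) (no self-loops) and the dual systolic condition
    (no parallel edges) for every color after contracting the other colors."""
    pseudo_cube, dual_systolic = True, True
    for color in range(d):
        rep = component_map(verts, [e for e in edges if e[2] != color])
        quotient = [(rep[u], rep[v]) for u, v, c in edges if c == color]
        if any(a == b for a, b in quotient):            # a self-loop appears
            pseudo_cube = dual_systolic = False
        seen = set()
        for a, b in quotient:
            key = frozenset((a, b))
            if key in seen:                             # parallel edges appear
                dual_systolic = False
            seen.add(key)
    return pseudo_cube, dual_systolic

for d in (1, 2, 3, 4):
    V, E = boolean_cube(d)
    print(d, contraction_test(V, E, d))   # (True, True) only for d = 1
```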
### Isoperimetry The first property we study is the isoperimetric profile of dual systolic graphs. Let \(G=(V,E)\) be a graph. For \(U\subseteq V\), denote by \(\partial(U)=\partial_{G}(U)\) the number of edges connecting \(U\) and \(V\setminus U\) in \(G\). The _edge expansion_ of \(U\neq\emptyset\) is \[\phi(U)=\phi_{G}(U)=\frac{\partial(U)}{|U|}.\] The isoperimetric profile of \(G\) is the function defined by \[P(s)=P_{G}(s)=\min\{\phi(U):|U|=s\}.\] It provides a lot of information on the structure of \(G\). Figure 3. The dual systolic graph from Figure 1 with blue edges deleted. Figure 2. A systolic complex (on the left) and its dual graph (on the right). The first result we state bounds the isoperimetric profile of pseudo-cubes. Samorodnitsky proved the same result for the boolean cube [20]. We observe that his proof can be extended to all pseudo-cubes.4 Footnote 4: Here and below, logarithms are in base two. **Theorem 1**.: _For every \(d\)-dimensional pseudo-cube \(G\) and \(s>0\),_ \[P_{G}(s)\geq d-\log s.\] The theorem is sharp for the boolean cube for all integers \(s\) that are powers of two. It can be thought of as saying that pseudo-cubes are small set expanders. The study of small set expansion is motivated by its intimate connection with the unique games conjecture, its potential to provide insights into hardness of optimization problems, and its connection to metric embeddings (see [19, 15, 1] and references within). The boolean cube is the canonical example of a small set expander. The small set expansion of various other graphs has been studied in several works. Noisy versions of the boolean cube were studied in [12, 17]. Small set expansion for noisy cubes is related to the "majority is stablest" theorem. The authors of [2] considered a derandomized version of the noisy cube and achieved a better trade-off between expansion and threshold rank (more on this later on). The small set expansion of the multi-slice graph was studied in [6], and of the Johnson graph was studied in [14]. Another line of research includes characterizing the non-expanding subsets and showing that they are, in some sense, negligible [5]. Dual systolicity leads to a much stronger bound on the isoperimetric profile. **Theorem 2**.: _For every \(d\)-dimensional dual systolic graph \(G\) and \(s>1\),_ \[P_{G}(s)\geq d-8(1+\log\log s).\] This is an exponential improvement over the previous theorem. Pseudo-cubes have strong expansion, say of at least \(\frac{d}{2}\), for sets of size at most exponential in \(d\). The same expansion for dual systolic graphs holds for sets of size _doubly exponential_ in \(d\). This bound is sharp (up to the constant before the \(\log\log\)) because the JS construction yields dual systolic graphs of size doubly exponential in \(d\). Most proofs of small set expansion are analytic and go through hypercontractivity. Our proof follows a different path. The bound for pseudo-cubes uses information theory (following Samorodnitsky's footsteps). The bound for dual systolic graphs has three parts. The first part is the bound for pseudo-cubes. The second part is identifying a dynamics that the isoperimetric profile satisfies. On a high-level, there is a functional \(\mathcal{F}\) so that if \(P\) is the isoperimetric profile of dual systolic graphs, then \(\mathcal{F}(P)\) is also such a profile. The third part is finding the fixed point of this dynamics. 
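Before turning to the proofs, a small brute-force check may help fix ideas (again an editorial sketch, not from the paper): for the boolean cube in low dimension one can enumerate every vertex subset, compute \(P_{G}(s)\) exactly, and confirm the bound of Theorem 1, with equality when \(s\) is a power of two (attained by subcubes).

```python
import math
from itertools import combinations, product

def cube(d):
    """Vertices of Q_d and the neighbour map (flip one coordinate)."""
    verts = list(product((0, 1), repeat=d))
    nbrs = {v: [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(d)] for v in verts}
    return verts, nbrs

def boundary(U, nbrs):
    """Number of edges leaving the vertex set U (the edge boundary of U)."""
    return sum(1 for v in U for w in nbrs[v] if w not in U)

def isoperimetric_profile(d):
    verts, nbrs = cube(d)
    return {s: min(boundary(set(U), nbrs) / s for U in combinations(verts, s))
            for s in range(1, 2 ** d + 1)}

for d in (2, 3, 4):
    for s, p in isoperimetric_profile(d).items():
        # Theorem 1: P(s) >= d - log s; equality holds at powers of two (subcubes).
        assert p >= d - math.log2(s) - 1e-9
    print("Q_%d: P(s) >= d - log s verified for every s" % d)
```

No such exhaustive check is available for dual systolic graphs, whose size is doubly exponential in \(d\); this is precisely where the dynamics described above enters.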
This dynamics can be thought of as a bootstrapping mechanism for proving isoperimetric inequalities that relies on dual systolicity. For more details, see Section 3. ### Explicit constructions The JS complexes are defined via a collection of groups. This collection of groups is inductively constructed, but we do not know of an efficient algorithm that implements this construction. In this section, we weaken the dual systolic condition. This leads to explicit constructions and at the same time we get the same isoperimetric behavior as for dual systolic graphs. The first notion we define is that of a _weak_ pseudo-cube. We replace color independence with a weaker requirement. **Definition**.: _We say that a graph \(G=(V,E)\) is a weak pseudo-cube if (i) it is \(d\)-regular, (ii) it has an edge \(d\)-coloring, and (iii) the following two conditions hold. First, the graph obtained from \(G\) by contracting all edges of color \(\neq d\) does not have self-loop. Second, let \(G_{-d}\) denote the graph obtained from \(G\) by deleting all edges of color \(d\). Every connected component in \(G_{-d}\) is a \((d-1)\)-dimensional pseudo-cube. This definition is inductive; for \(d=1\), a weak pseudo-cube is a perfect matching._ Property (iii) is called _weak_ color independence. Weak pseudo-cubes satisfy the same isoperimetric inequality as the one stated for pseudo-cubes above. **Theorem 3**.: _For every \(d\)-dimensional weak pseudo-cube \(G\) and \(s>0\),_ \[P_{G}(s)\geq d-\log s.\] We now define the weak notion of dual systolicity. **Definition**.: _A graph \(G\) is weakly dual systolic if it is a weak pseudo-cube of dimension \(d\) so that (a) the graph obtained from \(G\) by contracting all edges of color \(\neq d\) is simple, and (b) every connected component in \(G_{-d}\) is weakly systolic of dimension \(d-1\). This definition is inductive; for \(d=1\), a weakly systolic graph is a perfect matching._ Weakly dual systolic graphs also satisfy the same strong isoperimetric inequality as dual systolic graphs. **Theorem 4**.: _For every \(d\)-dimensional weakly dual systolic graph \(G\) and \(s>1\),_ \[P_{G}(s)\geq d-8(1+\log\log s).\] Even though we weakened the requirements we obtained the same isoperimetric behavior. But we also obtained one more important thing. There are explicit and simple constructions of weakly dual systolic graphs. Before describing the construction, we answer a basic question. _What is the least size of a weakly systolic graph of dimension \(d\)?_ The size of \(d\)-dimensional pseudo-cubes is at least roughly \(\exp(d)\). The size of \(d\)-dimensional dual systolic graphs is much larger, at least roughly \(\exp(\exp(d))\). Indeed, if we denote by \(F(d)\) the minimal number of vertices in a \(d\)-dimensional weakly dual systolic graph, we have the recurrence relation: \(F(1)=2\) and for \(d>1\), \[F(d)\geq F(d-1)\big{(}F(d-1)+1\big{)}. \tag{1}\] The inequality holds because if we look at a single connected component of \(G_{-d}\) of size \(S\) then it has \(S\) neighboring components, and the size of each of these \(S+1\) components is at least \(F(d-1)\). **Claim 5**.: _For every integer \(d\geq 1\), there is an explicit weakly dual systolic graph \(CP^{(d)}\) of dimension \(d\) with \(n^{(d)}\) vertices. In addition \(n^{(1)}=2\) and \(n^{(d)}=n^{(d-1)}(n^{(d-1)}+1)\) for \(d>1\)._ The graph \(CP^{(d)}\) is called the \(d\)_-dimensional clique product._ The construction of clique products is by repeated applications of replacement products of cliques [21, 9]. 
See Figure 4 for an example. Figure 4. The graph \(CP^{(3)}\) is displayed in the top-left corner. The other three images show this graph when we delete edges of a specific color. Only the green color on the bottom-right β€œpartitions the graph” in the desired way. The clique products show that inequality (1) is in fact an equality. The \(d\)-dimensional clique product is the smallest possible \(d\)-dimensional weakly systolic graph. In some sense, clique products are the dual systolic versions of the boolean cubes (which are the smallest possible pseudo-cubes). Below we describe one application of clique products. It seems reasonable that they can be used in future applications as well. Proof of Claim 5.: Take \(CP^{(1)}\) to be a single edge. Construct \(CP^{(d)}\) from \(CP^{(d-1)}\) by taking its replacement product with the clique of size \(n^{(d-1)}+1\). Specifically, take \(n^{(d-1)}+1\) disjoint copies of \(CP^{(d-1)}\), and connect node \(i\in\{1,2,\ldots,n^{(d-1)}\}\) in copy \(j\in\{0,1,\ldots,n^{(d-1)}\}\) to node \(-i\) in copy \(i+j\), where the operations are modulo \(n^{(d-1)}+1\). Color all these new edges with color \(d\). ### An application The following question came up in the study of the unique games conjecture: _what is the relationship between threshold rank and small set expansion?_ Let \(G\) be a regular graph with \(n\) vertices, and let \(M\) be its normalized adjacency matrix. The matrix \(M\) is symmetric and has \(n\) eigenvalues (the maximum eigenvalue is one). The _threshold rank_ of \(G\) with parameter \(\epsilon>0\), denoted by \(TR_{1-\epsilon}(G)\), is defined to be the number of eigenvalues of \(M\) that are at least \(1-\epsilon\). Threshold rank is important in algorithms for the unique games conjecture [16, 1]. The authors of [1] bounded from above the threshold rank of small set expanders. The authors of [2] constructed small set expanders with relatively large threshold rank. A full understanding of the relationship between threshold rank and small set expansion remains an open problem. The clique products are (somewhat) small set expanders with high threshold rank. **Theorem 6**.: _Let \(d>2\), let \(CP=CP^{(d)}\) and let \(n=n^{(d)}\). Let \(0<k\leq\frac{d}{2}\) be an integer and \(\epsilon\geq\frac{2k}{d}\). Then,_ \[TR_{1-\epsilon}(CP)\geq\frac{n^{1-2^{-k}}}{2}.\] For \(k=1\), we get a lower bound of \(\Omega\left(\sqrt{n}\right)\) on the threshold rank with \(\epsilon=\frac{2}{d}\). In other words, we have a graph of size \(n\) so that the threshold rank \(TR_{1-\epsilon}\) is polynomially large, for \(\epsilon\leq\frac{2}{1+\log\log n}\) that tends to zero as \(n\) tends to infinity. When we think of \(\epsilon\) as a constant, the integer \(k\) becomes \(\frac{\epsilon d}{2}\), and the threshold rank is almost as high as it can be (\(TR_{1-\epsilon}\) is at least \(n^{1-o(1)}\)). A rough comparison can be made for example with Theorem 4.14 in [2]. Their construction depends on two parameters. If we plug-in \(\epsilon\) as one of their parameters, and \(\frac{1}{2}\) as the second parameter, we see that the threshold rank of \(CP^{(d)}\) is larger than in their construction, but the small set expansion properties in their construction is better. The two constructions are incomparable. The ideas developed in our paper can perhaps lead to better constructions in the small set expansion versus threshold rank question. 
A specific approach that seems reasonable is weakening the dual systolicity condition; instead of requiring at most one edge between two connected components, we can upper bound the number of edges between components by some other number. This weakening still leads to small set expansion behavior, and can potentially lead to a wider range of parameters. ### A discussion The dual systolicity condition translates the systolic notion from [10, 11] to the language of graphs. There are stronger systolic criteria for simplicial complexes than the one considered here (i.e., the \(k\)-large condition). We believe that further exploring these stronger criteria could lead to stronger guarantees and to powerful properties. Our work is only a first step in this direction. ## 2. Isoperimetry of weak pseudo-cubes In this section we prove bounds on the isoperimetric profile of weak pseudo-cubes. The proof we present here uses information theory, and is inspired by Samorodnitsky's proof for the hypercube [20]. The main difficulty we face is identifying the correct "coordinate system" to use. Proof of Theorem 3.: Let \(G=(V,E)\) be a \(d\)-dimensional weak pseudo-cube, and let \(U\) be a non-empty set of vertices. For \(i\in[d]\), denote all inner \(i\)-edges of \(U\) by \[e_{i}(U):=\{e\in E:e\subseteq U,\chi(e)=i\}.\] Because \[d|U|=\partial(U)+2\sum_{i\in[d]}e_{i}(U),\] we see that \[\phi(U)=d-\frac{2\sum_{i\in[d]}e_{i}(U)}{|U|}.\] It remains to prove that \[\frac{2\sum_{i=1}^{d}e_{i}(U)}{|U|}\leq\log|U|. \tag{2}\] Let \(X\) be a uniformly random vertex in \(U\) so that the Shannon entropy of \(X\) is \(H(X)=\log|U|\). For every \(I\subseteq[d]\), denote by \(X_{I}\) the set \[X_{I}=\{v\in V:\ v\text{ is reachable from }X\text{ using colors in }I\}.\] The crucial observation is that for all \(e,i\) so that \(\Pr[X_{\{i\}}=e]>0\), we have \[H(X|X_{\{i\}}=e)=1_{e\in e_{i}(U)}.\] In other words, when \(e\in e_{i}(U)\), the entropy is one, and when \(e\not\in e_{i}(U)\), the entropy is zero. Taking expectation, we see that \[H(X|X_{\{i\}})=\frac{2e_{i}(U)}{|U|}.\] Inequality (2) can thus be written as \[\sum_{i\in[d]}H(X|X_{\{i\}})\leq H(X).\] To show this, we observe that for all \(i>1\), weak color independence implies that \[H(X|X_{\{i\}})\leq H(X_{\{1,\ldots,i-1\}}|X_{\{i\}}).\] When \(H(X|X_{\{i\}}=e)\) is one, the distribution of \(X_{\{1,\ldots,i-1\}}\) conditioned on \(X_{\{i\}}=e\) is uniform between two options (so that the r.h.s. is one as well). The set \(X_{I}\) is determined by the set \(X_{I^{\prime}}\) whenever \(I^{\prime}\subseteq I\). The data processing inequality thus implies that for all \(i>1\), \[H(X_{\{1,\ldots,i-1\}}|X_{\{i\}})\leq H(X_{\{1,\ldots,i-1\}}|X_{\{1,2,\ldots,i \}}).\] By the chain rule, \[\sum_{i\in[d]}H(X|X_{\{i\}}) =H(X|X_{\{1\}})+\sum_{i>1}H(X|X_{\{i\}})\] \[\leq H(X|X_{\{1\}})+\sum_{i>1}H(X_{\{1,\ldots,i-1\}}|X_{\{1,2, \ldots,i\}})\] \[=H(X,X_{\{1\}},X_{\{1,2\}},\ldots,X_{\{1,\ldots,d-1\}}|X_{\{1, \ldots,d\}})\] \[\leq H(X).\qed\] ## 3. Isoperimetry for weak dual systolic graphs This section is devoted for analyzing the isoperimetric profile of weakly dual systolic graphs. ### A dynamic The key step in the proof is identifying a dynamic that the isoperimetric profile satisfies. This leads to the following definition. 
**Definition**.: _A function \(g:[1,\infty)\to[0,\infty)\) is a dimension-independent bounding function if for every weakly dual systolic graph \(G=(V,E)\) of dimension \(d\), and every \(s>0\),_ \[P_{G}(s)\geq d-g(s).\] The following lemma describes the underlying dynamics. For \(\epsilon>0\), define a functional \(\mathcal{F}_{\epsilon}\) by \[\big{[}\mathcal{F}_{\epsilon}(g)\big{]}(s)=g(2^{\frac{4}{\epsilon}})+\epsilon \log s.\] **Lemma 7**.: _Let \(g\) be a dimension-independent bounding function that is monotonically non-decreasing so that \(g(2)\geq 1\). Then, for every \(\epsilon>0\), the function \(\mathcal{F}_{\epsilon}(g)\) is also a dimension-independent bounding function._ Proof.: Let \(G\) be a \(d\)-dimensional weakly dual systolic graph, and let \(U\) be a set of vertices of size \(s>0\). We need to prove that \[\phi(U)=\phi_{G}(U)\geq d-g(2^{\frac{4}{\epsilon}})-\epsilon\log s. \tag{3}\] The proof is by induction on \(d\) and \(s\). There are two induction bases. If \(s=1\), then \(\phi(U)=d\) and the inequality holds. If \(d=1\) and \(1<s\), then the r.h.s. of (3) is non-positive. It remains to perform the inductive step. Let \(V_{1},\ldots,V_{L}\) be the connected components of the graph \(G_{-d}\) obtained from \(G\) by deleting all edges of color \(d\). Decompose \(U\) to \(U_{1},\ldots,U_{L}\) defined by \(U_{\ell}=U\cap V_{\ell}\). Assume that only the first \(U_{1},\ldots,U_{k}\) are non-empty (and the rest are empty). If \(k=1\), then induction on \(d\) completes the proof, because if \(G^{\prime}\) is the graph \(G\) induces on \(V_{1}\) then \(\phi(U)\geq\phi_{G^{\prime}}(U)+1\) and \(G^{\prime}\) has dimension \(d-1\). So, we can assume that \(k>1\). For each \(i\in[k]\), we can consider \(U_{i}\) and \(U_{\neg i}=\bigcup_{j\neq i}U_{j}\). For convenience, let \(p_{i}=\frac{|U_{i}|}{s}\) and let \(q_{i}=1-p_{i}\). Both \(U_{i}\) and \(U_{\neg i}\) are smaller than \(U\), so that induction implies that \[\frac{\partial(U_{i})}{p_{i}s}\geq d-g(2^{\frac{4}{\epsilon}})- \epsilon\log(p_{i}s),\] \[\frac{\partial(U_{\neg i})}{q_{i}s}\geq d-g(2^{\frac{4}{\epsilon }})-\epsilon\log(q_{i}s).\] How does \(\partial(U)\) relate to these quantities? The edges between \(U_{i}\) and \(U_{\neg i}\) are counted both in \(\partial(U_{i})\) and in \(\partial(U_{\neg i})\). All these edge have color \(d\). The number of edges between \(U_{i}\) and \(U_{\neg i}\) is denoted by \(e(U_{i},U_{\neg i})\). Therefore, \[\partial(U)=\partial(U_{i})+\partial(U_{\neg i})-2e(U_{i},U_{\neg i}).\] Combining the three former equations gives us: \[\frac{\partial(U)}{s} \geq d-g(2^{\frac{4}{\epsilon}})-\epsilon p_{i}\log(p_{i}s)-\epsilon q _{i}\log(q_{i}s)-\frac{2e(U_{i},U_{\neg i})}{s}\] \[=d-g(2^{\frac{4}{\epsilon}})-\epsilon\log(s)+\epsilon h(p_{i})- \frac{2e(U_{i},U_{\neg i})}{s},\] where \(h\) is the binary entropy function. The missing part for us can thus be summarized by the inequality: \[\frac{2e(U_{i},U_{\neg i})}{s}\leq\epsilon h(p_{i}). \tag{4}\] Notice that we did not assume anything about the index \(i\), and so it is enough to find any index for which (4) is satisfied. The analysis is partitioned between three cases. **Case I**: There exists \(i\in[k]\) so that \(p_{i}\leq 2^{-\frac{2}{\epsilon}}\). In this case, we do not even need to use dual systolicity. Instead, we can use the bound \(e(U_{i},U_{\neg i})\leq p_{i}s\) that holds due to proper coloring. Plugging this in (4), we get that it is enough to show \(2p_{i}\leq\epsilon h(p_{i})\). 
So, it is enough to show that \(2p_{i}\leq\epsilon p_{i}\log\frac{1}{p_{i}}\). This is immediately satisfied by the assumption of the case. **Case II**: \(2^{\frac{4}{\epsilon}}\leq s\) and for all \(i\in[k]\), we have \(2^{-\frac{2}{\epsilon}}<p_{i}\). Because \(\sum_{i\in[k]}p_{i}=1\), we know that \(k\leq 2^{\frac{2}{\epsilon}}\). Systolicity implies that \(e(U_{i},U_{\neg i})\leq k\) for all \(i\). This means that it suffices to prove for some \(i\) that \[\frac{2\cdot 2^{\frac{2}{\epsilon}}}{s}\leq\epsilon h(p_{i}).\] Because \(k>1\), we can choose \(i\) so that \(p_{i}\leq\frac{1}{2}\). This means we can treat the binary entropy as an increasing function, and lower bound it by \(h(2^{-\frac{2}{\epsilon}})\). It suffices to show that \[\frac{2\cdot 2^{\frac{2}{\epsilon}}}{\epsilon s}\leq\frac{2}{\epsilon}\cdot 2^ {-\frac{2}{\epsilon}},\] which indeed holds. **Case III**: \(s<2^{\frac{4}{\epsilon}}\). Because \(g\) is monotonic non-decreasing, we have \(g(s)\leq g(2^{\frac{4}{\epsilon}})\). So, \[\phi(U)\geq d-g(s)\geq d-g(2^{\frac{4}{\epsilon}})-\epsilon\log s.\qed\] ### The fixed point Lemma 7 provides a bootstrapping mechanism. We can start with an initial bounding function \(g_{0}\). The pseudo-cube bound allows use to choose \(g_{0}(x)=\log x\). From \(g_{0}\), we can get a family of new bounds \(g_{1,\epsilon}=\mathcal{F}_{\epsilon}(g_{0})\) parametrized by \(\epsilon\). For every set-size \(s\), we can choose an \(\epsilon\) that gives the best possible bound for that \(s\). This leads to a new bounding function \(g_{1}\). We can plug \(g_{1}\) into the same mechanism and get a stronger bound \(g_{2}\), and so forth. The fixed-point of this dynamics is the best possible bound this proof gives (which turns out to be optimal up to constant factors). The following lemma describes this mechanism. **Lemma 8**.: _Let \(\ell>0\) be an integer. Suppose \(g(s)=c\log^{\frac{1}{\ell}}s\) is a dimension-independent bounding function where \(c\geq 1\). Then, the following function is also a dimension-independent bounding function:_ \[g^{\prime}(m)=c^{\prime}\log^{\frac{1}{\ell+1}}s,\] _where_ \[c^{\prime}=\left(4c^{\ell}\ell\right)^{\frac{1}{\ell+1}}+\left(4\Big{(}\frac{ c}{\ell}\Big{)}^{\ell}\right)^{\frac{1}{\ell+1}}\geq 1. \tag{5}\] Proof.: Plugging \(g\) into \(\mathcal{F}_{\epsilon}\) gives for every \(\epsilon>0\), a new function \[f_{\epsilon}(s) =c\log^{\frac{1}{\ell}}(2^{\frac{4}{\epsilon}})+\epsilon\log s\] \[=c\Big{(}\frac{4}{\epsilon}\Big{)}^{\frac{1}{\ell}}+\epsilon\log s.\] For every \(s\), we wish to find the best possible \(\epsilon=\epsilon(s)\). An elementary calculation leads to the following choice \[\epsilon(s)=\Big{(}\frac{c4^{\frac{1}{\ell}}}{\ell\log s}\Big{)}^{\frac{\ell} {\ell+1}}.\] Plugging this \(\epsilon\) gives \[f^{\prime}(s) =c4^{\frac{1}{\ell}}\Big{(}\frac{\ell\log s}{c4^{\frac{1}{\ell}} }\Big{)}^{\frac{1}{\ell+1}}+\Big{(}\frac{c4^{\frac{1}{\ell}}}{\ell\log s} \Big{)}^{\frac{\ell}{\ell+1}}\log s\] \[=\Big{(}c4^{\frac{1}{\ell}}\Big{(}\frac{\ell}{c4^{\frac{1}{\ell }}}\Big{)}^{\frac{1}{\ell+1}}+\Big{(}\frac{c4^{\frac{1}{\ell}}}{\ell}\Big{)}^ {\frac{\ell}{\ell+1}}\Big{)}\log^{\frac{1}{\ell+1}}s.\] It remains to verify that \(c^{\prime}\geq 1\). This follows by a direct substitution of \(c=1\) that gives \((4\ell)^{\frac{1}{\ell+1}}+(\frac{4}{\ell^{\ell}})^{\frac{1}{\ell+1}}>1\). Proof of Theorem 4.: We build a sequence of dimension-independent bounding functions \(g_{0},g_{1},\ldots\) using Lemma 8. 
Theorem 3 allows us to choose \(g_{0}(s)=\log s\), which we can plug into Lemma 8. The lemma leads to a sequence of constants \(c_{0}=1\), and \(c_{\ell+1}\) is defined from \(c_{\ell}\) via (5). A simple induction on \(\ell\) shows that \(c_{\ell}\leq 4\ell\). The induction base is true, and the induction step is \[\big{(}4(4\ell)^{\ell}\ell\big{)}^{\frac{1}{\ell+1}}+\Big{(}4\Big{(}\frac{4\ell}{\ell}\Big{)}^{\ell}\Big{)}^{\frac{1}{\ell+1}}\leq 4\ell+4.\] We thus have the following family of dimension-independent bounding functions: \[g_{\ell}(s)=4\ell\log^{\frac{1}{\ell}}s. \tag{6}\] For each \(s>1\), we can use the \(\ell\) that suits us best. Plugging \(\ell=\lceil\log\log s\rceil\) gives \[g_{\ell}(s) =4\lceil\log\log s\rceil\log^{\frac{1}{\lceil\log\log s\rceil}}s\] \[\leq 4(1+\log\log s)\log^{\frac{1}{\log\log s}}s\] \[=8(1+\log\log s).\qed\] ## 4. Threshold rank of clique products Proof of Theorem 6.: Let \(\epsilon=\frac{2k}{d}\) with a positive integer \(k\leq\frac{d}{2}\). Let \(M\) denote the normalized adjacency matrix of \(CP^{(d)}\). Let \(\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{T}\) be the eigenvalues of \(M\) which are larger than \(1-\epsilon\). Let \(u_{1},\ldots,u_{T}\) be the corresponding normalized eigenvectors (which are orthogonal). Denote by \(U\) the span of \(u_{1},\ldots,u_{T}\). The graph \(CP^{(d)}\) is composed of multiple copies of \(CP^{(d-k)}\). The number of copies is \(\frac{n^{(d)}}{n^{(d-k)}}\). For each copy \(j\), let \(\tilde{v}_{j}\) be the indicator vector of the vertices in copy \(j\), and let \(v_{j}=\frac{\tilde{v}_{j}}{\|\tilde{v}_{j}\|}\). We claim two things: 1. For every \(t\), \[\sum_{j}(\langle u_{t},v_{j}\rangle)^{2}\leq 1.\] 2. For every \(j\), \[\sum_{t}(\langle u_{t},v_{j}\rangle)^{2}\geq\tfrac{1}{2}.\] The first item holds because the vectors \(v_{1},v_{2},\ldots\) are orthonormal. To prove the second item, fix \(j\), and denote by \(w_{1}\) the projection of \(v_{j}\) to \(U\) and by \(w_{2}\) the projection of \(v_{j}\) to \(U^{\perp}\). So, \(\|w_{1}\|^{2}=\sum_{t}(\langle u_{t},v_{j}\rangle)^{2}\) and \(\|w_{1}\|^{2}+\|w_{2}\|^{2}=1\). The main point is that \[\langle v_{j},Mv_{j}\rangle=1-\tfrac{k}{d}.\] The last calculation is \[\langle v_{j},Mv_{j}\rangle =\langle w_{1},Mw_{1}\rangle+2\langle w_{1},Mw_{2}\rangle+\langle w_{2},Mw_{2}\rangle\] \[=\langle w_{1},Mw_{1}\rangle+\langle w_{2},Mw_{2}\rangle\] \[\leq\|w_{1}\|^{2}+\big{(}1-\tfrac{2k}{d}\big{)}(1-\|w_{1}\|^{2})\] \[=1-\tfrac{2k}{d}+\tfrac{2k}{d}\|w_{1}\|^{2},\] so that \(\|w_{1}\|^{2}\geq\tfrac{k}{d}\cdot\tfrac{d}{2k}=\tfrac{1}{2}\), as needed. Now, taking the sum, \[\frac{n^{(d)}}{2n^{(d-k)}}\leq\sum_{j,t}(\langle u_{t},v_{j}\rangle)^{2}\leq T.\] Recalling the recurrence relation (1), and applying it \(k\) times, we have \[\big{(}n^{(d-k)}\big{)}^{2^{k}}<n^{(d)}\] so that \[T\geq\frac{\big{(}n^{(d)}\big{)}^{1-2^{-k}}}{2}.\qed\] ## Acknowledgement We wish to thank Irit Dinur, Nir Lazarovich, and Roy Meshulam for helpful and insightful discussions.
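As a concrete companion to the constructions above, the following sketch (an editorial illustration, not part of the paper) builds the clique product \(CP^{(d)}\) exactly as described in the proof of Claim 5 and counts the eigenvalues of its normalized adjacency matrix that are at least \(1-\epsilon\), in the spirit of Theorem 6. The dense eigendecomposition and the particular choice \(d=3\), \(k=1\) are assumptions made for the example; only small dimensions are feasible, since \(n^{(d)}\) grows doubly exponentially.

```python
import numpy as np

def clique_product(d):
    """Vertices and colored edges of CP^(d), following the proof of Claim 5.

    Vertices are tuples; CP^(1) is a single edge on nodes 1 and 2."""
    if d == 1:
        return [(1,), (2,)], [((1,), (2,), 1)]
    verts_prev, edges_prev = clique_product(d - 1)
    n_prev = len(verts_prev)
    m = n_prev + 1                              # number of copies (clique size)
    verts = [(j,) + v for j in range(m) for v in verts_prev]
    edges = [((j,) + u, (j,) + v, c) for j in range(m) for u, v, c in edges_prev]
    label = {v: i + 1 for i, v in enumerate(verts_prev)}   # nodes 1..n_prev in a copy
    node = {i + 1: v for i, v in enumerate(verts_prev)}
    seen = set()
    for j in range(m):
        for v in verts_prev:
            i = label[v]
            e = ((j,) + v, ((i + j) % m,) + node[m - i])   # node -i in copy i + j
            key = frozenset(e)
            if key not in seen:                            # each new edge added once
                seen.add(key)
                edges.append((e[0], e[1], d))              # new edges get color d
    return verts, edges

def threshold_rank(d, eps):
    verts, edges = clique_product(d)
    idx = {v: i for i, v in enumerate(verts)}
    A = np.zeros((len(verts), len(verts)))
    for u, v, _ in edges:
        A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
    M = A / d                                   # normalized adjacency matrix
    eigs = np.linalg.eigvalsh(M)
    return int(np.sum(eigs >= 1.0 - eps - 1e-9)), len(verts)

d, k = 3, 1
eps = 2.0 * k / d
tr, n = threshold_rank(d, eps)
print("n^(%d) = %d, TR_{1-%.2f}(CP) = %d, Theorem 6 lower bound = %.2f"
      % (d, n, eps, tr, n ** (1 - 2.0 ** -k) / 2))
```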
2304.02339
Combining experimental and observational data through a power likelihood
Randomized controlled trials are the gold standard for causal inference and play a pivotal role in modern evidence-based medicine. However, the sample sizes they use are often too limited to draw significant causal conclusions for subgroups that are less prevalent in the population. In contrast, observational data are becoming increasingly accessible in large volumes but can be subject to bias as a result of hidden confounding. Given these complementary features, we propose a power likelihood approach to augmenting RCTs with observational data to improve the efficiency of treatment effect estimation. We provide a data-adaptive procedure for maximizing the expected log predictive density (ELPD) to select the learning rate that best regulates the information from the observational data. We validate our method through a simulation study that shows increased power while maintaining an approximate nominal coverage rate. Finally, we apply our method in a real-world data fusion study augmenting the PIONEER 6 clinical trial with a US health claims dataset, demonstrating the effectiveness of our method and providing detailed guidance on how to address practical considerations in its application.
Xi Lin, Jens Magelund Tarp, Robin J. Evans
2023-04-05T09:53:58Z
http://arxiv.org/abs/2304.02339v2
# Many Data: Combine Experimental and Observational Data through a Power Likelihood ###### Abstract Randomized controlled trials are commonly regarded as the gold standard for causal inference and play a pivotal role in modern evidence-based medicine. However, the sample sizes they use are often too limited to draw significant causal conclusions for subgroups that are less prevalent in the population. In contrast, observational data are becoming increasingly accessible in large volumes but can be subject to bias as a result of hidden confounding. Given these complementary features, we propose a power likelihood approach to augmenting RCTs with observational data for robust estimation of heterogeneous treatment effects. We provide a data-adaptive procedure for maximizing the Expected Log Predictive Density (ELPD) to select the influence factor that best regulates the information from the observational data. We conduct a simulation study to illustrate the efficacy of our method and its favourable features compared to existing approaches. Lastly, we apply the proposed method to data from Tennessee's Student Teacher Achievement Ratio (STAR) Study to demonstrate its usefulness and practicality in real-world data analysis. ## 1 Introduction Experimental data and observational data represent two distinct regimes used for causal inference. Experimental data are collected through designed experiments with randomized interventions; a typical example is Randomized Controlled Trials (RCTs). Randomized intervention isolates the causal effect of interest from unwanted and potentially unobserved confounding factors as the experiment protocol ensures, in expectation, balanced control and treatment groups. As a result, empirical researchers can employ straight-forward estimation strategies such as regression, inverse propensity weighting (IPW) (Rosenbaum and Rubin, 1983; Robins et al., 1994; Bang and Robins, 2005; Cao et al., 2009) and matching (Rubin, 1973; Hirano et al., 2003; Abadie and Imbens, 2016) to consistently estimate causal effects. Therefore, RCTs are widely used in clinical settings and regulators, such as the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA), regard them as the gold standard to demonstrate the efficacy of a proposed treatment. Observational data, also sometimes referred to as real-world data, are passively collected without specially designed intervention. Examples include electronic health records, bank transactions, and user behaviour data on online platforms. In this Big Data era, they have become ubiquitous and readily available in large volumes. Due to the lack of randomization, observational data are subject to hidden confounding, introducing bias in causal estimates that does not vanish even with infinite samples (Pearl, 2009).1 For that reason, observational data are traditionally less favoured compared to RCTs in the context of medicine and healthcare. Footnote 1: Specifically, we are concerned about hidden common causes shared by the treatment and outcome. For example, in the case of flu vaccination, individuals with more active health-seeking behaviour may be more likely to receive flu shots. This can introduce bias because they are also less likely to be hospitalized due to the flu regardless of vaccination status. In this case, the observed effect of the flu shot on hospitalization rates would be confounded by the underlying health-seeking behaviour of the individuals. 
Hidden confounding is a common problem in observational studies, as there may be numerous unobserved factors that affect both treatment assignment and the outcome of interest. Nonetheless, RCTs suffer from limitations too. They can be time-consuming and expensive to conduct and collect data from, which means that they are limited by slow turnaround times and inadequate sample sizes. The COVID-19 pandemic highlighted the value of timely and reliable real-world data, which served as crucial evidence for determining the safety and effectiveness of drugs and vaccines (Hansen et al., 2021; Pottegard et al., 2020). For rare diseases, for example, male breast cancer, it may be infeasible to recruit enough patients needed to conduct clinical trials with adequate statistical power (Wedam et al., 2020). In such cases, rigorous analyses of reliable real-world data can provide valuable insights. Combining the information from the observational and experimental datasets, a method of _data fusion_(Bareinboim and Pearl, 2016), is an obvious avenue to harvesting the best of both realms. The combination is expected to leverage the internal validity2 of experimental data and complement it with the richness of the observational data. Over recent years, both the FDA and EMA have shared the vision that real-world data and RCTs should be seen as complementing each other, and their relative importance depends on the regulatory issue under consideration (U.S Food and Drug Administration, 2018; European Medicines Agency, 2021); these agencies have already granted regulatory approval to a number of studies and methodologies that rigorously combine evidence from RCTs and real-world data (Wedam et al., 2020; U.S Food and Drug Administration, 2018, 2016; European Medicines Agency, 2022) Footnote 2: First introduced in Campbell (1957), the author motivated the concept of internal validity with the question β€œdid in fact the experimental stimulus make some significant difference in this specific instance?”. Alternatively, we can intuitively understand it as the extent to which we are confident that a causal relationship established in a study cannot be explained by other factors. Essentially, the combination is inherently a bias-variance trade-off. By introducing information from the large, possibly-biased real-world data to limited-scaled, unbiased randomized data, we seek to improve the efficiency of the causal estimator at the expense of an incremental bias and strike a balance that results in an overall improvement according to a certain criterion, for example, mean squared error (MSE). In this paper, we present a novel power likelihood approach for effectively augmenting RCTs with observational data to improve the estimation of heterogeneous treatment effects. The remainder of this paper is organized as follows: in Section 2, we review data fusion methods proposed in recent literature. In Section 3 we introduce the setup of the problem and state the main assumptions, before moving on to our proposed power likelihood approach in Section 4. In Section 5 we conduct a simulation study to illustrate the efficacy of our approach and its favourable features compared to existing proposals. We also apply our method to data from the STAR study in Section 6 to demonstrate its usefulness and practicality in real-world data analysis. We conclude with a discussion in the final section. We defer proofs and additional details of simulations to the appendices. 
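To make the bias-variance trade-off described above concrete, here is a minimal R sketch (purely illustrative: the sample sizes, the size of the confounding bias, and the use of naive equal-weight pooling are assumptions, not quantities taken from this paper). It compares the mean squared error of an RCT-only estimator of a population mean with that of a naively pooled estimator when the observational sample carries a fixed bias.

```
# Illustrative only: normal outcomes, known fixed bias, naive pooling weight.
set.seed(1)
theta <- 1; bias <- 0.3; n_e <- 100; n_o <- 2000
sims <- replicate(5000, {
  x <- rnorm(n_e, theta)                      # small unbiased "RCT" sample
  y <- rnorm(n_o, theta + bias)               # large confounded "observational" sample
  c(rct  = mean(x),
    pool = (sum(x) + sum(y)) / (n_e + n_o))   # naive equal-weight pooling
})
rowMeans((sims - theta)^2)                    # MSE of each estimator
# The pooled estimator has far lower variance but inherits the bias, so its MSE
# can be worse; the influence factor introduced later moderates exactly this trade-off.
```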
The code to reproduce the simulations and figures is available on Github3. Footnote 3: [https://github.com/XiLinStats/ManyData](https://github.com/XiLinStats/ManyData) ## 2 Related Literature In recent years, combining randomized and observational data for causal inference has attracted a lot of research attention. Before we review existing works, we first introduce the definitions of average treatment effect (ATE) and conditional average treatment effect (CATE). **Definition 2.1**.: (Average treatment effect) We define the average treatment effect as: \[\tau=\mathbb{E}[Y\,|\,do(T=1)]-\mathbb{E}[Y\,|\,do(T=0)]\] where \(\mathbb{E}[Y|do(T=t)]\) represents the expected outcome \(Y\) if we set a dichotomous treatment \(T\) to \(t\in\{0,1\}\) through intervention. **Definition 2.2**.: (Conditional average treatment effect) We define the conditional average treatment effect as: \[\tau(c)=\mathbb{E}[Y|do(T=1),C=c]-\mathbb{E}[Y|do(T=0),C=c]\] where \(\mathbb{E}[Y\,|\,do(T=t),C=c]\) represents the expected outcome \(Y\) of an individual with covariate values of \(c\), if we set \(T\) to \(t\) through intervention. If the treatment effect varies by values of \(C\), then we say that the treatment effect is modified by \(C\), or there is _treatment effect heterogeneity_ across values of \(C\). Owing to the concern about introducing bias into the inference, a number of initial attempts assumed that the observational data is conditionally unconfounded in the hope that the combined estimator benefits from the efficiency gain while remaining unbiased (Hartman et al., 2015; Rosenman et al., 2022). However, one can never prove that all confounders are measured and the assumption of unconfoundedness almost never holds in practice (Pearl, 2009). Other works attempted to avoid making this stringent assumption and replaced it with alternative assumptions on the mechanism that drives how the unmeasured confounding biases the causal effect (Peysakhovich and Lada, 2016; Kallus et al., 2018; Athey et al., 2020). Specifically, Kallus et al. (2018) assumed that the influence of hidden confounding on the outcome had a parametric structure and can be effectively modelled via "experimental grounding", based on the difference between the observational data and the RCT. Yang and Ding (2020) formulated the combination problem in a setting where the experimental and observational samples are drawn from a common distribution, with all confounders measured in the experimental sample but not all measured in the observational sample. Their setting essentially assumed that the unmeasured confounders influence the observational data in the same fashion as the experimental data. They then proposed an estimator minimizing the asymptotic variance aiming to improve the efficiency of the estimator from the randomized sample. The abovementioned assumptions centring around unconfoundedness are likely to be reasonable in practice, but they remain unfalsifiable nonetheless. This motivated several "test-then-pool"-styled approaches (Viele et al., 2014; Yang et al., 2020). Originally proposed by Viele et al. (2014), such approaches start with the null hypothesis of equality between the causal estimates from both datasets and only pool the data together if the null hypothesis is not rejected. This approach is the most basic form of dynamically borrowing information from an external, or historical, dataset. 
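As a concrete illustration of the test-then-pool rule just described, the following hedged sketch in base R pools only when a two-sample \(z\)-test fails to reject equality of the two ATE estimates; the significance level, the difference-in-means estimator, the simulated data, and the inverse-variance pooling of the two estimates are all illustrative choices, not the procedure of any particular paper.

```
# Toy test-then-pool rule for a difference-in-means ATE (illustrative).
set.seed(2)
alpha <- 0.025
ate_and_var <- function(y, t) {               # difference in means and its variance
  est <- mean(y[t == 1]) - mean(y[t == 0])
  v   <- var(y[t == 1]) / sum(t == 1) + var(y[t == 0]) / sum(t == 0)
  c(est = est, var = v)
}
# Simulated RCT and (confounded) observational samples; true effect is 1
t_e <- rbinom(200, 1, 0.5);  y_e <- rnorm(200, 1.0 * t_e)
t_o <- rbinom(5000, 1, 0.5); y_o <- rnorm(5000, 1.4 * t_o)   # biased apparent effect
e <- ate_and_var(y_e, t_e); o <- ate_and_var(y_o, t_o)
z <- (e["est"] - o["est"]) / sqrt(e["var"] + o["var"])
if (abs(z) < qnorm(1 - alpha / 2)) {          # fail to reject equality: pool
  ate <- (e["est"] / e["var"] + o["est"] / o["var"]) / (1 / e["var"] + 1 / o["var"])
} else {                                      # reject: keep the RCT estimate only
  ate <- e["est"]
}
unname(ate)
```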
Typically, "test-and-pool" approaches require the researcher to specify the size of the test, say \(\alpha=0.025\), which can be somewhat subjective.Yang et al. (2020) proposed a data-adaptive approach to dynamically choose the threshold for the test static based on an estimation of the bias. A major issue with "test-then-pool"-styled approaches are that when the experimental data is small, the hypothesis test for any discrepancy is under-powered and rarely gets rejected, resulting in the observational data being pooled even if it is biased (Li et al., 2020). The EMA recently approved a historical borrowing method by Schuler et al. (2021). They proposed to augment the control arm of RCTs by incorporating a prognostic score fitted using a historical dataset, which is required to be drawn from the same population as the trial control arm so that the mean control outcomes are the same. While they suggested that this assumption can be weakened, the theoretical efficiency gain still relies on the control outcome estimated from the historical data being consistent, which may not hold if the external data comes from a previous trial, or for instance, from another country. If an unbiased estimate is not absolutely compulsory and there is some appetite or tolerance for bias, then there are more options of methods aimed at optimizing the tradeoff between bias and efficiency gain (Rosenman et al., 2020; Oberst et al., 2022; Dang et al., 2022). Similar to the setting in Schuler et al. (2021), Dang et al. (2022) proposed a Targeted Maximum Likelihood Estimation (TMLE) experiment selector to dynamically incorporate external data into the trial control arm. They novelly incorporated a bias estimated using a negative control outcome (NCO) into the Mean Squared Error to be minimized. As a result, their method relaxed the assumption that the external data is unconfounded and used the NCO to control the bias in the control data. _Stein's Paradox_ showed that while MLE estimators have the smallest risk among all unbiased estimators, they are not _admissible_, meaning that there exist other estimators with lower MSE regardless of the true value (Stein, 1956; James and Stein, 1961). One example of such estimator is the classical James-Stein estimator, which shrinks the MLE towards the zero vector. Green and Strawderman (1991) and Green et al. (2005) extended this idea and considered the problem of combining unbiased and possibly biased estimators. Rosenman et al. (2022) applied this idea specifically to causal inference and developed several shrinkage-style estimators by minimizing Stein's Unbiased Risk Estimate (SURE). Their method requires at least four strata, as well as independence among stratum estimates, to guarantee a reduction in risk. This means that its usefulness is limited in the case of ATE or two or three-category CATE estimation. Without the dimensionality and independence requirements posed by James-Stein-styled shrinkage estimators, a straight-forward linear combination strategy has been popular in recent literature (Tarima et al., 2020; Cheng and Cai, 2021; Oberst et al., 2022). Such estimators take the linear form \(\hat{\theta}_{\lambda}=(1-\lambda)\hat{\theta}_{e}+\lambda\hat{\theta}_{o}\), where \(\hat{\theta}_{e}\) is an unbiased estimator from the RCT and \(\hat{\theta}_{o}\) is a biased estimator from the observational data, with \(\lambda\) being a weight parameter. Specifically, Tarima et al. (2020) and Oberst et al. 
(2022) proposed to choose the optimal\(\lambda\) by minimizing the theoretical MSE: \[\lambda^{*}=\frac{\text{Var}(\hat{\theta}_{e})-\text{Cov}(\hat{\theta}_{e}, \hat{\theta}_{o})}{\delta^{2}+\text{Var}(\hat{\theta}_{e}-\hat{\theta}_{o})}.\] Obviously, the 'true' bias \(\delta\) and the variance quantities remain unknown so both papers considered estimating \(\lambda^{*}\) using plug-in estimators. In particular, they proposed using \((\hat{\theta}_{e}-\hat{\theta}_{o})^{2}\) as an estimate of \(\delta^{2}\). While this approach has the great advantage of simplicity, one drawback is that while \(\hat{\theta}_{e}-\hat{\theta}_{o}\) is an unbiased estimator of the true bias \(\delta\), the plug-in estimator \(\hat{\lambda}^{*}\) is not unbiased with regard to the MSE-minimizing weight \(\lambda^{*}\), due to Jensen's Inequality. In fact, the plug-in estimator \(\hat{\lambda}\) underestimates the optimal \(\lambda^{*}\) on average when \(\text{Cov}(\hat{\theta}_{e},\hat{\theta}_{o})=0\) due to the convexity of the expression. This means that when bias is low, their method fails to include as much influence from the observational data as theoretically optimal. Furthermore, their method does not have a natural extension to estimate heterogeneous causal effects. ## 3 Setup, Notation and Assumptions ### Setup and notations We consider two data sources, experimental data \(\mathcal{D}_{e}\) of size \(n_{e}\) and observational data \(\mathcal{D}_{o}\) of size \(n_{o}\). Typically, we would expect \(n_{e}<n_{o}\) to represent a common scenario in practice where researchers want to augment a small-scaled RCT trial with much larger observational data, yet this is not required. The causal models for the two data sources are represented by the graphs in Figure 1, where \(T\in\mathcal{R}\) represents a treatment, \(\mathbf{W}\in\mathcal{R}^{d}\) is the set of pre-treatment covariates, \(Y\in\mathcal{R}\) is the outcome of interest and \(U\) represents unmeasured confounding. In both data sources, we assume that the observations \(X_{i}=(Y_{i},T_{i},\mathbf{W}_{i})\) are independently and identically distributed, respectively. We are interested in the causal question "what would happen to \(Y\) if we set \(T=t\) by external intervention?". In particular, we are interested in the heterogeneous treatment effect, that is "For an individual of certain characteristics, what is the average effect of a treatment \(T\) on their outcome \(Y\)?". There are several overlapping frameworks to represent causal relationships, including potential outcomes (Rubin, 1974), Structural Causal Models (e.g. Pearl, 2009) that are based on causal directed graphs (e.g. Spirtes et al., 2000), Single World Intervention Graphs (Richardson and Robins, 2013), and the Decision-theoretic framework (Dawid, 2021). The discussion and results in this paper generally apply independent of the choice of frameworks. For notational purposes, we use Pearl's '\(do(\cdot)\)' operator to indicate interventions. Let the observations \(X_{i}\) be distributed according to \(P\) with some density \(p\). The observational and experimental data come from different distributions with some aspects in common and we use \(p_{o}\) and \(p_{e}\) to denote the two. **Remark 3.1**.: In the experimental data \(\mathcal{D}_{e}\) following graph (a) in Figure 1, the covariate set Figure 1: Causal models for (a) the experimental data \(\mathcal{D}_{e}\) and (b) the observational data \(\mathcal{D}_{o}\). Dashed edges denote possible causal relationships. 
For example, In (a), if edge \(W\to T\) exists, it means that the treatment is conditionally randomized, otherwise completely randomized. satisfies the _back-door criterion_ and the causal effect of \(T\) on \(Y\) is identifiable by \[P_{e}(y\,|\,do(t))=\sum_{\mathbf{w}}P(y\,|\,t,\mathbf{w})P(\mathbf{w}).\] In contrast, in the observational data \(\mathcal{D}_{o}\) following graph (b) in Figure 1, conditional on \(\mathbf{W}\), there is still a spurious back-door path \(T\gets U\to Y\) open because \(U\) remains unmeasured. As a result, \(P_{o}\left(y\,|\,do(t)\right)\) is not identifiable in \(\mathcal{D}_{o}\). Based on the setup depicted in Figure 1, we can further divide the set of observed covariates \(\mathbf{W}\) into two sets, \(\mathbf{Z}\) and \(\mathbf{C}\). The set \(\mathbf{C}\subseteq\mathbf{W}\) denotes the causal effect modifiers we are interested in and \(\mathbf{Z}=\mathbf{W}\setminus\mathbf{C}\) represents the remaining covariates that we want to marginalize over. Specifically, we are interested in the interventional distribution \[P\left(Y=y\,|\,do(T=t),\mathbf{C}=\mathbf{c}\right) \tag{1}\] which refers to the conditional distribution given \(\mathbf{C}=\mathbf{c}\), but marginal over \(\mathbf{Z}\), under an experiment where \(T\) is fixed by intervention to the value \(t\). The distribution (1) is sometimes also denoted as the distribution of the potential outcome \(Y^{(t)}\,|\,\mathbf{C}=\mathbf{c}\). This \(\mathbf{C}\)-specific interventional distribution allows us to examine the effect modification by set \(\mathbf{C}\). This conditional distribution describes the heterogeneous treatment effects, and in the case where treatment \(T\) is dichotomous, we can subsequently derive the CATE. **Example 3.2**.: We use an example to further illustrate our setup with the distinction of covariate sets \(\mathbf{C}\) and \(\mathbf{W}\). In a medical context, clinicians may be interested in the efficacy of a drug on patients with certain characteristics. Suppose we have access to an RCT and an observational dataset, which both record whether individuals took the drug \(T\), some health outcome \(Y\), body mass index (BMI) \(C\) and blood glucose level \(Z\). The causal relationships are represented in Figure 2, where we assume that both \(C\) and \(Z\) influence the outcome \(Y\). In addition, there is a link between BMI \(C\) and blood glucose level \(Z\). To give guidance on the prescription of this drug \(T\), we are interested in the causal effect moderated by BMI \(C\) and want to marginalize it over the distribution of \(Z\). Specifically, we are interested in \[\tau(c) =\mathbb{E}[Y\,|\,do(T=1),c]-\mathbb{E}[Y\,|\,do(T=0),c]\] \[=\mathbb{E}\left[\mathbb{E}\left[Y\,|\,do(T=1),c,Z\right]- \mathbb{E}[Y\,|\,do(T=0),c,Z]\right].\] We will revisit this example in Section 5, where we conduct a simulation study following this setup. ### Assumptions We then state our main assumptions below. **Assumption 1**.: (Positivity) \(P(T=t\,|\,\mathbf{C}=\mathbf{c})>0\) for all values \(\mathbf{c}\) with \(P(\mathbf{C}=\mathbf{c})>0\) in \(\mathcal{D}_{e}\). This positivity assumption ensures that for each individual in the experimental sample, there is a nonzero probability of being assigned to each of the treatment levels. 
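Assumption 1 can be probed empirically. The short sketch below (illustrative only; the variable names, the quartile binning of the effect modifier, and the simulated data are assumptions) tabulates treatment counts within coarse strata of \(C\) in a simulated experimental sample to check that both arms are represented everywhere.

```
# Empirical check of positivity within bins of the effect modifier C (illustrative).
set.seed(3)
n  <- 250
C  <- rnorm(n)
T_ <- rbinom(n, 1, 0.5)                        # randomized assignment
bins <- cut(C, breaks = quantile(C, probs = seq(0, 1, 0.25)), include.lowest = TRUE)
tab  <- table(bins, T_)
tab                                            # every row should have nonzero counts in both columns
prop_treated <- tab[, "1"] / rowSums(tab)      # estimated P(T = 1 | C-bin)
range(prop_treated)                            # should be bounded away from 0 and 1
```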
**Assumption 2**.: \(P_{e}\left(Y=y\,|\,do(T=t),\mathbf{C}=\mathbf{c}\right)=P_{o}\left(Y=y\,|\,do(T=t), \mathbf{C}=\mathbf{c}\right)\)__ Assumption 2 assumes that the conditional causal distribution with intervention on \(T\) is the same across \(\mathcal{D}_{e}\) and \(\mathcal{D}_{o}\), even though it is generally not identifiable in the latter. We want to highlight that we only require the conditional causal distribution of interest, that is, conditional on the subset of covariates \(C\) to be invariant across the two datasets, instead of the full set including \(Z\). As illustrated in Figure 3, we use \(\theta\) to denote the set of parameters of the common conditional distribution \(P\left(Y=y\,|\,do(T=t),\mathbf{C}=\mathbf{c}\right)\), while \(\mathcal{D}_{e}\) and \(\mathcal{D}_{o}\) may have different sets of parameters, say, \(\gamma\) and \(\psi\), describing the rest of the joint distribution. The inference of the common causal parameters \(\theta\) is of interest and we consider \(\gamma\) and \(\psi\) as nuisance parameters. **Remark 3.3**.: As discussed in Remark 3.1, the causal distribution \(P\left(Y=y\,|\,do(T=t),\mathbf{C}=\mathbf{c}\right)\) is identifiable by \(P\left(Y=y\,|\,T=t,\mathbf{C}=\mathbf{c}\right)\) in \(\mathcal{D}_{e}\) but not in \(\mathcal{D}_{o}\) due to unmeasured confounder \(U\). We Figure 3: A graphical interpretation of the combination problem. Figure 2: Causal models for (a) the experimental data \(\mathcal{D}_{e}\) and (b) the observational data \(\mathcal{D}_{o}\) in Example 3.2. approach this identifiability issue as a model misspecification problem, that is, we view that the inference for \(\theta\) is misspecified using the data model for \(P\left(Y=y\,|\,T=t,Z=z\right)\) because we fail to include the covariates \(U\). As a consequence of model misspecification, the \(\theta\) estimated from the observational data alone, \(\hat{\theta}_{o}\), is prone to bias. Through combining datasets, we want to optimally estimate \(\tau(\mathbf{c})\) with regard to Mean Squared Error (MSE) as the loss function. One way to look at this problem is that we are combining \(\hat{\theta}_{e}\), which has no bias but high variance due to the smaller sample size, and \(\hat{\theta}_{o}\), which is possibly biased due to unobserved confounding but low in variance. Essentially, we aim to optimize a **bias-variance trade-off**. ## 4 A Power Likelihood Approach Traditional Bayesian inference assumes that the data model, \(f(x;\theta)\) is correct up to the known parameter value \(\theta\). However, Bayesian inference loses its predictive optimality (Zellner, 1988) when the data models are misspecified. Bissiri et al. (2016) proposed a framework for _general Bayesian inference_ where parameters are connected to observations through a loss function rather than the traditional likelihood functions, which relaxes the requirement of a 'true data-generating mechanism': \[\pi(\theta\mid x)\propto\exp\left\{-l\left(\theta,x\right)\right\}\pi(\theta),\] where \(l\left(\theta,x\right)\) is a loss function. Under this overarching framework, a popular solution to robustly allow for Bayesian learning under model misspecification is to raise the likelihood to a fractional power. This approach is discussed as a "power prior" in Ibrahim and Chen (2000), a "data-modified prior" in Walker and Hjort (2001) and a "power likelihood" in Holmes and Walker (2017). 
\[\pi_{\eta}(\theta\mid X)\propto\pi(\theta)\prod_{i=1}^{n}f(X_{i};\theta)^{\eta}.\] Adapting to our inference problem, we take the joint likelihood but raise the likelihood of the observational data to a power \(\eta\): \[f_{\eta}(\mathbf{X_{e}},\mathbf{X_{o}};\eta,\psi,\gamma)=f_{e}(\mathbf{X_{e}};\theta, \gamma)\times\left(f_{o}\left(\mathbf{X_{o}};\theta,\psi\right)\right)^{\eta}.\] where \(\mathbf{X_{e}}\) and \(\mathbf{X_{o}}\) represent observations in \(\mathcal{D}_{e}\) and \(\mathcal{D}_{o}\), and \(p_{e}(\mathbf{X_{e}};\theta,\gamma)\) and \(p_{o}\left(\mathbf{X_{o}};\theta,\psi\right)\) are the joint densities respectively. This is equivalent to, under the _general Bayesian framework_, defining a loss function: \[l_{\eta}(\theta,\psi,\gamma;\mathbf{X_{e}},\mathbf{X_{o}})=-\left\{\log f_{e}\left( \mathbf{X_{e}};\theta,\gamma\right)+\eta\times\log f_{o}\left(\mathbf{X_{o}};\theta, \psi\right)\right\}.\] Then by Bayes' rule, the posterior distribution of \(\theta\) and nuisance parameters \(\psi\) and \(\gamma\) becomes \[\pi\left(\theta,\psi,\gamma\mid\mathbf{X_{e}},\mathbf{X_{o}}\right) \propto\exp\left\{-l_{\eta}\left(\theta,\psi,\gamma;\mathbf{X_{e}}, \mathbf{X_{o}}\right)\right\}\pi(\theta,\psi,\gamma)\] \[=f_{e}(\mathbf{X_{e}};\theta,\gamma)\cdot f_{o}\left(\mathbf{X_{o}}; \theta,\psi\right)^{\eta}\cdot\pi(\theta,\psi,\gamma), \tag{2}\] where \(\pi(\theta,\psi,\gamma)\) is the joint prior distribution of the parameters. We can see that the parameter \(\eta\) works like a dial moderating the influence from the observational data, relative to that from the experimental data. When \(\eta=0\), the loss function is just the negative log likelihood of the experimental data \(\mathcal{D}_{e}\) and the influence of the observational data is cut off as if \(\mathcal{D}_{o}\) is excluded from the analysis. When \(\eta=1\) then it is the conventional Bayesian posterior and means that we treat \(\mathcal{D}_{e}\) and \(\mathcal{D}_{o}\) equally for inference. Intuitively, we want to choose \(\eta\) to be between \(0\) and \(1\). Two important components of the power likelihood approach are the likelihood and the power \(\eta\). In the remainder of this section, we discuss the likelihood we propose to use and the robust selection of \(\eta\). ### Frugal parameterization To specify the joint densities \(p_{e}\) and \(p_{o}\) in (2), we propose to adopt the _frugal parameterization_(Evans and Didelez, 2021), which is designed specially for causal inference applications. Following its framework, we can break down a joint distribution \(p(\mathbf{z},t,y\mid\mathbf{c})\) into three separate pieces: * the distribution of 'the past': \(p_{\mathbf{Z}T\mid\mathbf{C}}(\mathbf{z},t\mid\mathbf{c}):=P(\mathbf{Z}=\mathbf{z},T=t\mid\mathbf{C}=\mathbf{c})\) * the causal quantity of interest: \(p_{Y\mid T\mathbf{C}}^{*}(y\mid t,\mathbf{c}):=P\left(Y=y\mid do(T=t),\mathbf{C}=\mathbf{c}\right)\), and * a dependence measure \(\phi_{Y\mathbf{Z}\mid T\mathbf{C}}^{*}\) between \(Y\) and \(\mathbf{Z}\) conditional on \(T\) and \(\mathbf{C}\). Examples of such dependence structures include copulas and conditional odds ratios. The advantages of using frugal parameterization are three-fold. Firstly, it gives a likelihood with causal parameterization to the observational data. 
Secondly, it isolates the causal quantity of interest, \(p_{Y\mid T}^{*}(y\mid t)\), from the rest of the joint distribution, which means that we can directly target inference on \(\theta\), which parameterizes this distribution, and treat the data-specific parameters \(\gamma\) and \(\psi\) as nuisance parameters. Finally, the frugal parameterization allows the non-causal distributions to differ between the RCT and observational data, for example, the joint distribution of covariates and treatment assignment, \(p\left(\mathbf{z},t\right)\). Under the _frugal parameterization_, we can factorize the joint densities \(p_{e}\) and \(p_{o}\) as below: \[p_{e}\left(y,\mathbf{z},t\mid\mathbf{c};\theta,\gamma\right)=p_{e,\mathbf{Z}T\mid\mathbf{C}}\left(\mathbf{z},t\mid\mathbf{c};\gamma\right)\cdot p_{Y\mid T\mathbf{C}}^{*}(y\mid t,\mathbf{c};\theta)\cdot\phi_{e,Y\mathbf{Z}\mid T\mathbf{C}}^{*}\left(y,\mathbf{z}\mid t,\mathbf{c};\gamma\right) \tag{3}\] \[p_{o}\left(y,\mathbf{z},t\mid\mathbf{c};\theta,\psi\right)=p_{o,\mathbf{Z}T\mid\mathbf{C}}\left(\mathbf{z},t\mid\mathbf{c};\psi\right)\cdot p_{Y\mid T\mathbf{C}}^{*}(y\mid t,\mathbf{c};\theta)\cdot\phi_{o,Y\mathbf{Z}\mid T\mathbf{C}}^{*}\left(y,\mathbf{z}\mid t,\mathbf{c};\psi\right), \tag{4}\] where, by Assumption 2, we have \[p_{Y|T\boldsymbol{C}}^{*}(y\mid t,\boldsymbol{c};\theta):=p_{e,Y|T\boldsymbol{C}}^{*}(y\mid t,\boldsymbol{c};\theta)=p_{o,Y|T\boldsymbol{C}}^{*}(y\mid t,\boldsymbol{c};\theta).\] Hence, this parameterization explicitly isolates the causal distribution of interest, \(p_{Y|T\boldsymbol{C}}^{*}(y\mid t,\boldsymbol{c};\theta)\), from the remainder of the joint distribution. Additionally, to ensure consistent estimation of the CATE conditional on the covariate set \(\boldsymbol{C}\) from the randomized data, we further assume a correctly specified model for \(p_{e,Y|T\boldsymbol{C}}^{*}(y\mid t,\boldsymbol{c};\theta)\). **Assumption 3**.: The causal marginal distribution \(P\left(Y=y\mid do(T=t),\boldsymbol{C}=\boldsymbol{c}\right)\) in the randomized data \(\mathcal{D}_{e}\) is correctly specified by a parametric model \(p_{e,Y|T\boldsymbol{C}}^{*}(y\mid t,\boldsymbol{c};\theta)\). A convenient choice of such a specification is a linear model, although linearity is not required by our proposed method. Assumption 3 is strictly less stringent than requiring a correctly specified data model for the full joint distribution. Furthermore, Assumption 3 imposes a requirement only on the randomized data, and makes no restriction on the observational data model. **Remark 4.1**.: With Assumption 3, our inference takes place in the so-called \(\mathcal{M}\)-closed world. However, as famously stated by Box (1976), "all models are wrong". The assumption of a correct specification is unlikely to hold in practice. Instead, we conduct causal inference in the \(\mathcal{M}\)-open world (Bernardo and Smith, 1994), where, asymptotically, MLE estimation minimizes the Kullback-Leibler (KL) divergence from the true data-generating model to the assumed family of models. Additionally, it is worth noting that, in the case of misspecification, if treatment is binary, the ATE estimate constructed through marginalizing over all conditioning variables \(\boldsymbol{C}\) will be consistent nonetheless. We then apply this parameterization to specify the power posterior in (2); a short simulation sketch of the factorization is given first.
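To illustrate the factorization in (3) and (4), the following sketch simulates from a frugal parameterization in a toy setting that loosely echoes the simulation design of Section 5 (the coefficients, the Gaussian-copula dependence measure, and its correlation value are assumptions for illustration, not the authors' code): "the past" \(p(\mathbf{z},t\mid\mathbf{c})\) is drawn first, the outcome is drawn so that it has exactly the assumed causal margin, and its dependence on \(Z\) is induced through the copula.

```
# Simulating from a frugal parameterization (illustrative coefficients).
set.seed(4)
n   <- 5000
rho <- 0.5                                   # assumed Y-Z Gaussian-copula correlation
C   <- rnorm(n)                              # observed effect modifier
Z   <- rnorm(n, 0.2 + 0.6 * C)               # "the past": Z | C
T_  <- rbinom(n, 1, plogis(0.5 + 0.1 * C))   # "the past": treatment assignment
mu_y <- 0.6 + 0.2 * C + 1.1 * C * T_         # causal margin p*(Y | do(T), C) = N(mu_y, 1)
u_z <- pnorm(Z, 0.2 + 0.6 * C)               # uniformised Z given C
u_y <- pnorm(rho * qnorm(u_z) + sqrt(1 - rho^2) * rnorm(n))  # Gaussian copula with Z
Y   <- qnorm(u_y, mu_y)                      # Y keeps exactly the causal margin above
# Sanity check: coefficients approach (0.6, 0.2, 0, 1.1), i.e. the assumed causal
# margin, even though Y and Z are dependent given T and C.
coef(lm(Y ~ C + T_ + C:T_))
```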
Substituting the joint densities \(p_{e}(\boldsymbol{X_{e}};\theta,\gamma)\) and \(p_{o}\left(\boldsymbol{X_{o}};\theta,\psi\right)\) by (3) and (4) gives \[\pi_{\eta}\left(\theta,\psi,\gamma\mid\boldsymbol{X_{e}}, \boldsymbol{X_{o}}\right)\] \[\propto p_{e}(\boldsymbol{X_{e}};\theta,\gamma)\cdot p_{o}\left( \boldsymbol{X_{o}};\theta,\psi\right)^{\eta}\cdot\pi(\theta,\psi,\gamma)\] \[\propto p_{e,\boldsymbol{Z}T\,|\,\boldsymbol{C}}\left(\boldsymbol{z },t\,|\,\boldsymbol{c};\gamma\right)\cdot p_{Y|T\boldsymbol{C}}^{*}(y\mid t, \boldsymbol{c};\theta)\cdot\phi_{e,Y\boldsymbol{Z}|T\boldsymbol{C}}^{*}\left( y,\boldsymbol{z}\,|\,t,\boldsymbol{c};\gamma\right)\] \[\quad\times\left(p_{o,\boldsymbol{Z}T\,|\,\boldsymbol{C}}\left( \boldsymbol{z},t\,|\,\boldsymbol{c};\psi\right)\cdot p_{Y|T\boldsymbol{C}}^{* }(y\mid t,\boldsymbol{c};\theta)\cdot\phi_{o,Y\boldsymbol{Z}|T\boldsymbol{C}} ^{*}\left(y,\boldsymbol{z}\,|\,t,\boldsymbol{c};\psi\right)\right)^{\eta}\] \[\quad\times\pi(\theta,\psi,\gamma). \tag{5}\] ### Choosing the optimal influence factor \(\eta\) We can see that \(\eta\) plays a crucial role in controlling how much information we want to borrow from the observational data. The selection of \(\eta\) needs to be performed carefully: the observational data can be useful in reducing the variance, however, we do not want to contaminate our causal estimate with excessive bias by setting \(\eta\) too high. Ideally, we want \(\eta\) to be chosen robustly adaptive to the data, where we incorporate more information when the intrinsic bias in the estimate from the observational data, \(\theta_{o}\), is small or its variability is low, and vice versa. Specifically, we use MSE as the loss function, and we aim at lowering the risk of our resultant estimator \(\hat{\theta}_{\eta}\), \[R(\hat{\theta}_{\eta},\theta)=\mathbb{E}\left(||\hat{\theta}_{\eta}-\theta||^{ 2}\right).\] In recent literature, several methods for selecting the \(\eta\) in the power likelihood have been proposed, mainly in the context of addressing model misspecification, for example, the _SafeBayes_ algorithm (Grunwald and Van Ommen, 2017), expected information matching (Holmes and Walker, 2017; Lyddon et al., 2019), and frequentist coverage probability calibration (Syring and Martin, 2019). A review and empirical comparison of these methods can be found in Wu and Martin (2022). Considering our problem setting and objectives, similar to the choice of Carmona and Nicholls (2020), we propose to select \(\eta\) by maximizing the expected log point-wise predictive density (ELPD): \[\text{ELPD}(\eta) =\mathbb{E}_{\tilde{\mathbf{x}}}\log p_{\eta}(\tilde{X}\,|\, \boldsymbol{x}_{e},\boldsymbol{x}_{o})\] \[=\int_{\tilde{\mathbf{x}}}p_{t}(\tilde{x})\log p_{\eta}(\tilde{x} \,|\,\boldsymbol{x}_{e},\boldsymbol{x}_{o})\,d\tilde{x}, \tag{6}\] where \[p_{\eta}(\tilde{x}\,|\,\boldsymbol{x}_{e},\boldsymbol{x}_{o})=\int_{\Gamma} \int_{\Theta}p(\tilde{x}\,|\,\theta,\gamma)p(\theta,\gamma\,|\,\boldsymbol{x}_ {e},\boldsymbol{x}_{o})\,d\theta\,d\gamma\] is the posterior predictive distribution indexed by \(\eta\), and the expectation is with respect to the 'true' data generating process \(p_{t}\). As the 'true' distribution is unknown, there are different ways to estimate ELPD. A commonly used method to approximate the 'true' distribution is leave-one-out cross-validation (LOO), which involves leaving one observation out at a time, evaluating the posterior predictive density on this observation and then averaging over all observations. 
This method is computationally expensive as one can imagine. Vehtari et al. (2017) introduced an efficient computation of LOO using Pareto-smoothed importance sampling (LOO-PSIS) which avoids repeated partition of the dataset. Another estimation method is to use the widely applicable information criterion (WAIC) which is proven to be asymptotically equal to LOO (Watanabe, 2010). The WAIC method simply uses the ordinary posterior to estimate the density of each observation, and subtracts the effective number of parameters, defined as \[\widehat{\text{ELPD}}(\eta)=\frac{1}{n_{e}}\sum_{i=1}^{n_{e}}\log\hat{p}_{\eta }(x_{i}\,|\,\boldsymbol{x})-\hat{d}_{\text{WAIC}},\] where \(\hat{p}_{\eta}(x_{i}\,|\,\mathbf{x})\) is estimated using the posterior samples of the parameters. \(\hat{d}_{\text{WAIC}}\) is the estimated effective number of parameters. Gelman et al. (1995) provided a mean-based and a variance-based definitions for \(\hat{d}_{\text{WAIC}}\), and consistent with Vehtari et al. (2017), we use the variance-based definition \[d_{\text{WAIC}}\,=\sum_{i=1}^{n_{e}}\text{Var}_{\text{post}}\ \left(\log p\left(x_{i}\mid \theta\right)\right),\] where \(\text{Var}_{\text{post}}\) is the posterior variance of the log predictive density for each data point \(x_{i}\) in the randomized data. We propose to evaluate the ELPD on \(\mathcal{D}_{e}\) as we assume the causal relationship in the experimental data is correct. We then apply a grid search in \([0,1]\) to find the \(\eta^{*}\) such that \[\eta^{*}=\operatorname*{arg\,max}_{\eta}\widehat{\text{ELPD}}(\eta).\] **Remark 4.2**.: It is worth noting the connection between ELPD and KL divergence measure. We can rewrite (6) as \[\text{ELPD}(\eta) =\int_{\mathfrak{X}}p_{t}(\tilde{x})\log p_{\eta}(\tilde{x}\,|\, \mathbf{x}_{e},\mathbf{x}_{o})\,d\tilde{x}\] \[=\int_{\mathfrak{X}}p_{t}(\tilde{x})\left(\log\frac{p_{\eta}( \tilde{x}\,|\,\mathbf{x}_{e},\mathbf{x}_{o})}{p_{t}(\tilde{x})}+\log p_{t}(\tilde{x}) \right)\,d\tilde{x}\] \[=\int_{\mathfrak{X}}p_{t}(\tilde{x})\log\frac{p_{\eta}(\tilde{x} \,|\,\mathbf{x}_{e},\mathbf{x}_{o})}{p_{t}(\tilde{x})}\,d\tilde{x}+\int_{\mathfrak{X}}p _{t}(\tilde{x})\log p_{t}(\tilde{x})\,d\tilde{x}\] \[=-d_{\text{KL}}\left(p_{t}\,|\,p_{\eta}\right)+c, \tag{7}\] where \(c=\int_{\mathfrak{X}}p_{t}(\tilde{x})\log p_{t}(\tilde{x})\,d\tilde{x}\), which is the entropy of \(p_{t}\), is a constant independent of \(\eta\). Essentially, choosing the \(\eta\) via maximizing ELPD is equivalent to selecting the posterior predictive distribution, indexed by \(\eta\), that is closest to the true data distribution in KL-divergence. Our proposed method to select \(\eta\) is summarized in Algorithm 1. ``` Initialize \(\eta^{*}\gets 0\), \(\text{ELPD}^{*}\leftarrow-\infty\) for\(i=0,1,2,\ldots,N\)do Let \(\eta=i/N\) Sample \((\theta^{(i)},\psi^{(i)},\gamma^{(i)})\sim\pi_{\eta}\left(\theta,\psi,\gamma \mid\mathbf{X}_{e},\mathbf{X}_{o}\right)\) using any appropriate sampler Compute \(\widehat{\text{ELPD}}(\eta)\) using posterior samples \((\theta^{(i)},\psi^{(i)},\gamma^{(i)})\) if\(\widehat{\text{ELPD}}(\eta)>\text{ELPD}^{*}\)then \(\text{ELPD}^{*}\leftarrow\widehat{\text{ELPD}}(\eta)\) \(\eta^{*}\leftarrow\eta\) endif endfor return\(\eta^{*}\) ``` **Algorithm 1** Power likelihood \(\eta\) selection Once an optimal \(\eta\) is selected, we can then perform likelihood-based inference. ### Consistency Intuitively, the \(\hat{\eta}\) that maximizes ELPD in (7) depends on the magnitude of bias introduced by the observational dataset. 
The more biased \(\mathcal{D}_{o}\) is, the less of its influence we should incorporate. We are particularly interested in studying the asymptotics of \(\hat{\eta}\) and the resulting estimator \(\hat{\theta}_{\hat{\eta}}\). Similar to the discussion in Yang and Ding (2020) and Dang et al. (2022), we assume that the bias in the observational data \(\mathcal{D}_{o}\) due to hidden confounding is not fixed but depends on its sample size, even though this is usually not literally true in practice. Let \(\delta=\delta(n_{o})=\delta^{*}/n_{o}^{k}\) denote the bias in \(\mathcal{D}_{o}\). Specifically, we consider the following two scenarios: **Scenario 1**: \(k\geq 1/2\) or \(\delta^{*}=0\). This is the case where the observational data are unbiased, or the bias shrinks at rate \(n_{o}^{-1/2}\) or faster, so that it becomes negligible when we apply the Central Limit Theorem (CLT). **Scenario 2**: \(0\leq k<1/2\) and \(\delta^{*}\neq 0\). This is the case where the non-zero bias is constant or shrinks more slowly than \(n_{o}^{-1/2}\), so that it does not vanish when we apply the CLT. **Theorem 4.3** (**Consistency**).: _Consider experimental data \(X=(X_{1},\ldots,X_{n_{e}})\) where the \(X_{i}\)'s are i.i.d. following a normal distribution \(X_{i}\sim\mathcal{N}(\theta^{*},\sigma^{2})\), and observational data \(Y=(Y_{1},\ldots,Y_{n_{o}})\) i.i.d. following \(Y_{i}\sim\mathcal{N}(\theta^{*}+\delta,\sigma^{2})\). Here \(\delta\) denotes the unknown bias and is a function of \(n_{o}\). Suppose the variance \(\sigma^{2}\) is known. The estimator following the proposed power likelihood approach is \(\hat{\theta}_{\hat{\eta}}\). Let \(n=n_{e}+n_{o}\) and \(n_{e}/n_{o}=c>0\)._ * _If_ \(0\leq k<1/2\) _and_ \(\delta^{*}\neq 0\)_, then_ \(\hat{\eta}=O(n^{-1/2})\)_,_ \(\hat{\theta}_{\hat{\eta}}=\theta^{*}+O(n^{-1/2})\) _and_ \(\mathbb{E}[(\hat{\theta}_{\hat{\eta}}-\theta^{*})^{2}]=\sigma^{2}/n_{e}+O(n^{-3/2})\)_._ * _If_ \(k\geq 1/2\) _or_ \(\delta^{*}=0\)_, then_ \(\hat{\eta}\) _does not converge._ Theorem 4.3 says that when the bias is non-zero, the selected influence factor \(\hat{\eta}\) converges to \(0\) and the estimator \(\hat{\theta}_{\hat{\eta}}\) is \(\sqrt{n}\)-consistent. As a result, the MSE of \(\hat{\theta}_{\hat{\eta}}\) converges to that obtained from using the experimental data only. The intuition is that as \(n_{o}\) grows, the fraction of its influence we want to incorporate shrinks, while as \(n_{e}\) grows, the need to augment it with external data decreases. However, when the observational data are inherently unbiased, the selected \(\hat{\eta}\) will not converge to zero. Rather, as the bias diminishes towards zero, we will very likely infer that \(\hat{\eta}=1\) is the best value, since we restrict \(\eta\) to lie between \(0\) and \(1\). The proof is presented in Appendix A. ### Normal approximations to the posterior distribution Our proposed method, as described in Algorithm 1, involves sampling from posterior distributions multiple times in the search for the optimal \(\eta\). While a standard approach is to obtain the posterior samples by Markov-Chain Monte Carlo (MCMC) with a prior, this can be computationally challenging when the data are massive, \(\mathcal{D}_{o}\) in particular. The computational challenge is exacerbated when the parameter vector \((\theta,\psi,\gamma)\) has high dimensionality, which may affect the efficiency of the sampling procedure.
In such scenarios, we propose to apply a normal approximation of the posterior and then sample from this distribution (Gelman et al., 1995), derived as \[\pi_{\eta}\left(\theta,\psi,\gamma\mid X_{e},X_{o}\right)\approx\mathcal{N} \left(\hat{\theta}_{\eta},[\mathcal{I}(\hat{\theta}_{\eta})]^{-1}\right),\] where \(\hat{\theta}_{\eta}\) is the MLE estimate with regard to the \(\eta\)-powered likelihood function and \(\mathcal{I}(\hat{\theta}_{\eta})\) is the corresponding Fisher information matrix. A detailed derivation of these quantities can be found in Appendix B. A nice property of this approximation is that the bigger the data sets, the better the approximation, which will clear the computational hurdle and make our method scalable for the application to big data. ## 5 Simulation We now run a simulation that illustrates how our method robustly moderates the influence of the observational data through the \(\eta\)-powered likelihood and compare the performance of our method with several existing approaches. ### Simulation setup Following the same setup as described in Example 3.2, and represented in Figure 2. Suppose we observe a continuous covariate \(C\), for example, body mass index (BMI), a continuous covariate \(Z\), for instance, blood glucose level, which is associated with BMI, a binary treatment variable \(T\) indicating whether a drug is taken, and a continuous health outcome \(Y\) such as a change in average blood glucose level, for example, measured by HbA1c. There also exists an unobserved binary confounder \(U\) that influences both the treatment and the outcome. The influence of \(U\) means that the causal estimate from the observational data is biased. Specifically, we model the distributions as follows: \[U \sim\text{Bernoulli}(0.5) C \sim N(0,1)\] \[Z \mid C \sim N(\mu_{z},\,1) T \mid C,Z,U \sim\text{Bernoulli}(\mu_{t})\] \[Y(t) \mid C \sim N(\mu_{y},\,1).\] where, \[\mu_{z} =0.2+0.6\,C\] \[\operatorname{logit}\mu_{t} =0.5+0.1\,C+0.6\,Z+0.4\,C\,Z+\psi U\] \[\mu_{y} =0.6+0.2\,C\qquad\qquad+1.1\,C\,T+\psi U\] Additionally, let the dependence structure between \(Y\) and \(Z\) given \(T=t\) and \(C=c\), \(\phi^{*}_{YZ|TC}\), be a conditionally bivariate Gaussian copula, with correlation parameter \(\rho_{t}=2\operatorname{expit}(1+2.5t)-1\). We use the same parameterization for the experimental data \(\mathcal{D}_{e}\) except that we assume that treatment is randomly assigned and hence replace the mean of \(T\) with \(\mu_{t}=0.5\). We set the sample sizes to 2,500 and 250 for the observational and experimental data, respectively. The results we show in this section are averaged across 500 sets of synthetic datasets. **Remark 5.1**.: In this as well as other simulations that we have tested, we compared the ELPD estimated by LOO-PSIS and WAIC method, implemented through the loo package in R (Vehtari et al., 2017), and they almost always give very close results. We use the WAIC method in this simulation. We have also compared obtaining posterior parameter samples using the Metropolis-Hasting within Gibbs MCMC sampler, with sampling directly from the approximated normal distribution as described in Section 4.4. With sample sizes of 250 and 2,500, in a 12-dimensional parameter space, the two approaches lead to very similar \(\eta\)'s being selected. For run time reasons, we sample directly from the approximated normal distribution in this simulation study. 
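Before turning to the results, the following self-contained sketch illustrates the logic of Algorithm 1 in the conjugate-normal toy setting used later in Appendix A (a hedged illustration: flat prior, known variance, and an exact leave-one-out ELPD in closed form on the experimental sample rather than WAIC or PSIS-LOO on the full frugal model; all numerical values are assumptions).

```
# Grid search for the influence factor eta in a conjugate-normal toy case.
# With a flat prior the power posterior is N(theta_hat_eta, sigma^2 / (n_e + eta * n_o)),
# so the leave-one-out predictive density is available in closed form.
set.seed(42)
sigma <- 1
n_e <- 100; n_o <- 2000
theta_star <- 1; delta <- 0.3                 # assumed true effect and confounding bias
x <- rnorm(n_e, theta_star, sigma)            # experimental data
y <- rnorm(n_o, theta_star + delta, sigma)    # biased observational data

loo_elpd <- function(eta) {
  lp <- vapply(seq_len(n_e), function(i) {
    den <- (n_e - 1) + eta * n_o
    m   <- (sum(x[-i]) + eta * sum(y)) / den  # power-posterior mean without x_i
    dnorm(x[i], m, sqrt(sigma^2 + sigma^2 / den), log = TRUE)
  }, numeric(1))
  mean(lp)                                    # estimated ELPD on the experimental sample
}

etas      <- seq(0, 1, by = 0.01)
eta_hat   <- etas[which.max(vapply(etas, loo_elpd, numeric(1)))]
theta_hat <- (sum(x) + eta_hat * sum(y)) / (n_e + eta_hat * n_o)
c(eta_hat = eta_hat, theta_hat = theta_hat)
```

In this special case the grid search, the power posterior, and the predictive density are all available in closed form, which mirrors the computational shortcut that motivates the normal approximation of Section 4.4.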
### Moderating the bias-variance trade-off Figure 4 presents three scenarios where unmeasured confounding has an influence of different magnitude: no influence (\(\psi=0\)), small influence (\(\psi=0.75\)) and large influence (\(\psi=1\)). The first column shows the average negative ELPD at each value of \(\eta\). Our proposed method searches for the \(\eta\) that maximizes ELPD, which corresponds to the lowest point on the curve. The second and third columns show the MSE of the ATE and CATE4 estimates evaluated on \(\mathcal{D}_{e}\) at different values of \(\eta\). We decompose MSE into variance (in blue) and squared bias (in red) to give an intuitive illustration of the moderating effect of \(\eta\) on the bias-variance trade-off: as we increase \(\eta\), we include more information from \(\mathcal{D}_{o}\) and hence the variance reduces, yet at the expense of an increase in bias. Footnote 4: While we only show the CATE in the subgroup where \(C>0\), we have analyzed the MSE for CATE estimates in the subgroup \(C\leq 0\) and it exhibited a very similar pattern of MSE loss as \(\eta\) varied. When \(U\) has no influence on \(T\) and \(Y\), that is, \(\mathcal{D}_{o}\) is not biased and is just 'as good as' \(\mathcal{D}_{e}\), the average \(\eta\) selected is around 0.8 which is reasonable as we would want to include as much information in \(\mathcal{D}_{o}\) as possible. Unsurprisingly, the corresponding MSEs of ATE and CATE estimates are both lower than if we do not include \(\mathcal{D}_{o}\) at all. When \(U\) has a small influence on \(T\) and \(Y\), our method chooses a smaller \(\eta\), averaging at around 0.7. Again, the corresponding MSEs of ATE and CATE estimates are both lower than at \(\eta=0\). When \(U\) has a larger influence, that is, \(\mathcal{D}_{o}\) is more biased, our method responds by selecting an even smaller \(\eta\). Although the corresponding MSE of the ATE estimate has increased, we achieve a slightly lower MSE of CATE estimates than \(\eta=0\). ### Comparison with existing methods Within the same simulation, we have also compared the performance of our method to four existing approaches namely, the MSE-minimizing estimator by Oberst et al. (2022), shrinkage estimators by Green et al. (2005) and by Rosenman et al. (2020), and experimental grounding by Kallus et al. (2018). We first compare the methods' performance in estimating heterogeneous treatment effects. We first stratify \(\mathcal{D}_{e}\) and \(\mathcal{D}_{o}\) by values of \(C\) and normal quantiles of \(Z\) into 10 strata. Then we measure the overall MSE of the CATE estimates of the 10 strata. Figure 5 plots the overall MSE relative to using inverse propensity weighted (IPW) estimators from \(\mathcal{D}_{e}\) only. As the MSE-minimizing estimator of Oberst et al. (2022) does not have a natural extension to systematically estimating heterogeneous treatment effects, it is not included in this comparison. Additionally, the purple Figure 4: Simulation results for scenarios where \(U\) has no influence (\(\psi=0\)), small influence (\(\psi=0.75\)) and large influence (\(\psi=1\)) on \(T\) and \(Y\). Left column: average negative ELPD with \(\eta\in[0,1]\). The yellow dotted lines mark the average \(\eta\) selected. Middle column: MSE of the ATE estimate. Right Column: MSE of the CATE estimate in the subgroup with \(C>0\). The MSE is decomposed into bias (red-shaded area) and variance (blue-shaded area). 
dashed line shows the performance of using the estimates from \(\mathcal{D}_{e}\) only but with a correctly-specified parametric model. The difference between this line and the reference line at 1 is whether a parametric model is used or not. We can see that all methods lead to lower overall MSE than using the strata mean differences in \(\mathcal{D}_{e}\). Parametric methods that are correctly specified, that is, ours and experimental grounding, outperform the shrinkage estimators. Our method leads to the lowest overall MSE and as bias increases, our method converges back to using the experimental data only. Figure 6 compares the relative MSE of ATE estimates using different methods. Generally, there is a trade-off between the base-case MSE reduction and the worst-case increase in MSE. Our method dominates both shrinkage estimators with a lower relative MSE regardless of the bias. The MSE of the experimental grounding estimator of Kallus et al. (2018) is on par with using the experimental data only. The MSE-minimizing estimator in Oberst et al. (2022) has the most comparable performance to ours. Oberst et al. (2022) observed that, in their simulations, the worst-case relative MSE of their estimator is bounded within a 27% increase, and this claim seems to hold in our results too. Compared to their estimator, ours achieves a larger reduction in MSE when the bias is small. This is possibly due to the weight, \(\hat{\lambda}_{ober}\), underestimating the theoretically optimal weight \(\lambda\) on average, which means that their pooled estimator fails to include as much influence from a'reasonably good' observational data as theoretically optimal. In addition, our estimator exhibits a faster reversion to using the experimental data only when the bias is large. This means that our method is more conservative against drawing awfully 'wrong' causal conclusions by allowing Figure 5: A comparison of relative overall MSE of CATE estimates of the 10 strata. The reference line at 1 represents the overall MSE of using IPW estimates in \(\mathcal{D}_{e}\). excessive influence from a very biased observational dataset contaminating the experimental data. This is an attractive feature if a'safe policy' is a priority. Policy-makers are usually risk-averse when making crucial decisions impacting people's lives, and prefer a safe policy that lowers the probability of a worse outcome than the status quo (Ben-Michael et al., 2021). ### Incorporating more variables To examine the robustness of our method with higher dimensional covariates, we introduce more variables into the simulation. This reflects that, in practice, clinicians may be interested in more fine-grained CATE that is conditional on several patient characteristics allowing for more personalized treatment decisions. Examples of such variables include age, gender, and medical history. Specifically, we increased the dimension of \(\mathbf{C}\), the set of variables that CATE is conditioned on, to five, consisting of three continuous variables and two binary variables. We have also introduced one more continuous variable into \(\mathbf{Z}\) reflecting that we have more information about a patient we want to marginalize. A detailed model specification can be found in Appendix C.2. Similar to Figure 5, Figure 7 compares the performance of different methods regarding CATE estimates. We can see that our method's performance is comparable to that when the dimensions of the covariate sets \(\mathbf{Z}\) and \(\mathbf{C}\) are low. 
Our method performs better than the non-parametric methods when the model is correctly specified, and compared to only using the randomized data \(\mathcal{D}_{\varepsilon}\), our method leads to a reduction in MSE when the confounding bias is small. There is a range of elevated MSE when the bias is moderate to high, up to an approximately 60% increase, Figure 6: A comparison of relative overall MSE of ATE estimates. The reference line at 1 represents the overall MSE of using a correctly specified parametric model on \(\mathcal{D}_{e}\) only. before reverting to using \(\mathcal{D}_{e}\) only. In line with the findings in Oberst et al. (2022), some trade-off between reduction and inflation in MSE by the magnitude of bias seems inevitable, unless we have additional external information. For example, Dang et al. (2022) assume there is a negative control outcome that can be used to determine whether or not observational data should be integrated. ## 6 Analysis of STAR data To illustrate the validity and efficacy of our proposed approach, we apply it to a real-world dataset from Tennessee's Student Teacher Achievement Ratio Study (STAR). The STAR study is a longitudinal experiment where students were randomly assigned into classes of different sizes to study the impact of class size on students' educational outcomes (Word et al., 1990). The STAR data itself is from an RCT, and similar to Kallus et al. (2018), we partition the STAR data into two and construct an observational data by artificially introducing confounding through selectively removing a biased subset of samples. The STAR data covers over 7,000 students in 79 schools who were randomly assigned into one of three class types: small class (13 to 17 students per teacher), regular class (22 to 25 students per teacher) and regular-with-aide class (22 to 25 students with a full-time aide) and we focus on the first two types of interventions. Weuse standardized test scores at the end of first grade as the outcome and use the following covariates: gender, race, date of birth, and an indicator of whether or not a free lunch was provided. We applied the same data processing as in Kallus et al. (2018), Figure 7: A comparison of relative overall MSE of CATE estimates in 10 strata when we introduce more covariates into the parametric model. The reference line at 1 represents the overall MSE of using IPW estimates in \(\mathcal{D}_{e}\). such as filtering and removing records with missing values, and arrived at the same final sample of 4,218 students, consisting of 2,811 students enrolled in rural or inner-city schools, and 1,407 students in urban or suburban schools. To construct the experimental datasets, we randomly sample 10% of the 2,811 students enrolled in rural or inner-city schools. To construct the confounded observational dataset, we take all the controls (\(T=0\)) that were not sampled into the experimental data, and for those receiving treatment (\(T=1\)) we take a sample of 1,000 students where we down-weight those with an outcome below the 30\({}^{\text{th}}\) percentile. By constructing the observational data this way, we successfully introduce confounding from two sources: 1) no students from rural or inner-city schools enter the experimental data, this is as if there is an unobserved exclusion criterion 2) the treatment effect is artificially biased upwards by down-weighting units in treatment with poor outcomes. The confounding translates into a naive ATE estimate of 57.0 rather than the original dataset's 38.4. 
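The biased-subsampling construction described above can be sketched as follows (illustrative pseudo-data with placeholder column names, not the actual STAR variables or the authors' processing code); because treated units with poor outcomes are down-weighted, the naive treatment-effect estimate in the resulting sample is pushed upwards.

```
# Illustrative re-construction of a confounded sample by biased subsampling.
set.seed(5)
pseudo <- data.frame(score = rnorm(2811, 50, 20),        # placeholder outcome
                     treat = rbinom(2811, 1, 0.4))       # placeholder assignment
idx_e <- sample(nrow(pseudo), round(0.10 * nrow(pseudo)))  # 10% kept as the "RCT"
rct   <- pseudo[idx_e, ]
rest  <- pseudo[-idx_e, ]
ctrl  <- rest[rest$treat == 0, ]                         # keep all remaining controls
trt   <- rest[rest$treat == 1, ]
w     <- ifelse(trt$score < quantile(trt$score, 0.30), 0.2, 1)  # down-weight bottom 30%
trt_s <- trt[sample(nrow(trt), min(1000, nrow(trt)), prob = w), ]
obs   <- rbind(ctrl, trt_s)
# Naive ATE in the confounded sample exceeds the unconfounded truth (about 0 here)
mean(obs$score[obs$treat == 1]) - mean(obs$score[obs$treat == 0])
```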
We then apply our method to combine the constructed experimental and observational data to estimate the ATE in the experimental data and repeat this process for 500 different sets of datasets. Figure 8 shows an assessment of the performance of our method. We can see a similar pattern as in the simulation study: the shape of the negative ELPD by \(\eta\) mirrors the bias-variance trade-off. The average \(\eta\) selected is 0.17 and the average MSE is 175.23, whereas if we only use the experimental data, the average MSE is 193.84 which means approximately 10% reduction. Figure 9 gives some intuition of our method. The figure compares the distribution of ATE estimates using different methods. We can see that the distribution of ATE estimates from only using the small Figure 8: Results for application on the re-constructed STAR datasets, where for the observational dataset, we down-weight those with outcomes in the bottom 30%. Top graph: average -elpd with \(\eta\in[0,1]\). Bottom graph: MSE of the ATE estimate on the randomized data, which is decomposed into bias (red shaded area) and variance (blue shaded area). unconfounded data, represented by the red area, is dispersed and 0 is included in the 99% confidence interval; this is unfortunate if high statistical power is required. In contrast, the distribution of ATE estimates from using the large confounded data, represented by the green area, is very concentrated around its mean, which is fairly far away from that of the unconfounded distribution. The blue distribution represents the distribution of the ATE estimate from our proposed method. We can see that it sits in between the other two distributions and is moderately spread out. In this case, 0 is no longer with the 99% confidence interval. Recalling Remark 4.2, the learning factor \(\eta\) is chosen so that the posterior predictive distribution is closest, in KL divergence, to the "true" generative distribution of the randomized data. This gives us a new perspective: the blue density presents the distribution of ATE estimates that are inferred from combining information from the two datasets so that the posterior predictive distribution is closest to the "true" randomized data. ## 7 Discussion In this paper, we present a novel power likelihood approach to combining experimental and observational data that aims at optimizing the bias-variance tradeoff for the estimation of heterogeneous treatment effects. The power likelihood approach fits under the general Bayesian framework and introduces an influence factor \(\eta\) that moderates the impact of observational data. We propose a principled, data-adaptive procedure to select \(\eta\) by maximizing predictive accuracy, measured by ELPD, on the randomized data. This procedure robustly selects \(\eta\) depending on how 'contami Figure 9: Distribution of ATE estimates from using the unconfounded data only (red shaded area), confounded data only (green shaded area) and the combined data using our proposed methodology (blue shaded area).The distribution is obtained from 500 different sets of confounded and unconfounded datasets. nated' and 'useful' the observational data is for our inferential goals and has the practical advantage that it does not require the specification of hyper-parameters. 
Through empirical comparisons in Section 5, we show that our method outperforms existing methods (Green and Strawderman, 1991; Kallus et al., 2018; Rosenman et al., 2020; Oberst et al., 2022) for substantially lower overall MSE for CATE estimates, a larger reduction in MSE for the ATE estimate when bias is modest, and faster reversion to using the experimental data only when the observational data is drastically biased. Additionally, we demonstrate the feasibility and robustness of our proposed method as the dimension of covariates grows. In Section 6, we apply our method to the STAR data to illustrate its efficacy in a semi-synthetic real-world data analysis. In the future, we intend to apply this method to actual clinical data to test its practicality. Furthermore, although for simplicity, we assume that the treatment is binary throughout the discussion, our proposed method, in fact, allows for cases where treatment is multivariate or continuous. This is an advantage over most of the existing methods. A common limitation of Bayesian inference methods is that it is difficult to focus on optimizing a particular parameter of interest, for example, Bayesian dynamic borrowing methods in Ibrahim and Chen (2000) and Hobbs et al. (2012). In contrast, in a frequentist setting, methods such as targeted learning using TMLE (van der Laan and Rubin, 2006) and model selection using Focused Information Criteria (FIC) (Claeskens and Hjort, 2003; Zhang and Liang, 2011), are exactly for this purpose. In our proposal, we take a typical Bayesian route and select \(\eta\) based on expected prediction quality instead of directly targeting the CATE estimates. As the result, our method runs the risk of inefficiency as we include more covariates, which is manifested in the inflation of MSE described in Section 5.4. The loss of efficiency is more pronounced as covariates that are weakly correlated, or not correlated with the outcome get included in the model. Again, this issue is not unique to our method. As mitigation, we advise selecting only covariates that are predictive of the outcome in the model and applying dimension reduction techniques such as LASSO regression or principal component analysis (PCA) where appropriate. ## Appendix A Proof of Theorem 4.3 **Theorem 4.2** (**Consistency**).: Consider experimental data \(X=(X_{1},\ldots,X_{n_{e}})\) where \(X_{i}\)'s are i.i.d. following a normal distribution \(X_{i}\sim\mathcal{N}(\theta^{*},\sigma^{2})\), and observational data \(Y=(Y_{1},\ldots,Y_{n_{o}})\) i.i.d. following \(Y_{i}\sim\mathcal{N}(\theta^{*}+\delta,\sigma^{2})\). Here \(\delta\) denotes the unknown bias and is a function of \(n_{o}\). Suppose the variance \(\sigma^{2}\) is known. The estimator following the proposed power likelihood approach is \(\hat{\theta}_{\hat{\eta}}\). Let \(n=n_{e}+n_{o}\) and \(n_{e}/n_{o}=c>0\). * If \(0\leq k<1/2\) and \(\delta^{*}\neq 0\), the \(\hat{\eta}=O(n^{-1/2})\), \(\hat{\theta}_{\hat{\eta}}=\theta^{*}+O(n^{-1/2})\) and \(\mathbb{E}[(\hat{\theta}_{\eta}-\theta^{*})^{2}]=\sigma^{2}/n_{e}+O(n^{-3/2})\). * If \(k\geq 1/2\) or \(\delta^{*}=0\) then \(\hat{\eta}\) does not converge. Proof.: A roadmap for the proof of Theorem 4.3: 1. We start by deriving the posterior predictive distribution and ELPD 2. We set the derivative of the ELPD to zero and find the roots 3. For the scenario where \(k\geq 1/2\) or \(\delta^{*}=0\), we study the asymptotic behaviour of \(\hat{\eta}\) 4. 
For the scenario where \(0<k<1/2\) and \(\delta^{*}\neq 0\), we study the asymptotic behaviour of \(\hat{\eta}\) and the resulting estimator \(\hat{\theta}_{\hat{\eta}}\).

### Posterior predictive distribution and the ELPD

We are interested in the parameter \(\theta\in\mathbb{R}\), for which we have a prior distribution \(\pi(\theta)\sim\mathcal{N}(\theta_{0},\sigma_{0}^{2})\). The 'presumed' data models of \(x_{i}\) and \(y_{i}\) conditional on \(\theta\) are:

\[p_{e}(x_{i}\,|\,\theta) =\frac{1}{\sqrt{(2\pi\sigma^{2})}}\exp\bigg{\{}-\frac{(x_{i}-\theta)^{2}}{2\sigma^{2}}\bigg{\}} \tag{8}\]
\[p_{o}(y_{i}\,|\,\theta) =\frac{1}{\sqrt{(2\pi\sigma^{2})}}\exp\bigg{\{}-\frac{(y_{i}-\theta)^{2}}{2\sigma^{2}}\bigg{\}}, \tag{9}\]

and the power posterior is

\[\pi_{\eta}(\theta\,|\,x,y) \propto p_{\eta}(x,y\,|\,\theta)\times\pi(\theta) \tag{10}\]
\[\propto\prod_{i=1}^{n_{e}}p_{e}(x_{i}\,|\,\theta)\times\left(\prod_{i=1}^{n_{o}}p_{o}(y_{i}\,|\,\theta)\right)^{\eta}\times\pi(\theta) \tag{11}\]
\[\propto\exp\left(-\frac{n_{e}(\bar{x}-\theta)^{2}}{2\sigma^{2}}\right)\times\exp\left(-\frac{\eta n_{o}(\bar{y}-\theta)^{2}}{2\sigma^{2}}\right)\times\exp\left(-\frac{(\theta-\theta_{0})^{2}}{2\sigma_{0}^{2}}\right) \tag{12}\]
\[\propto\exp\left\{-\frac{(\sigma^{2}+n_{e}\sigma_{0}^{2}+\eta n_{o}\sigma_{0}^{2})\theta^{2}-2(\sigma^{2}\theta_{0}+n_{e}\sigma_{0}^{2}\bar{x}+\eta n_{o}\sigma_{0}^{2}\bar{y})\theta}{2\sigma^{2}\sigma_{0}^{2}}\right\}, \tag{13}\]

where \(\bar{x}=\frac{1}{n_{e}}\sum_{i=1}^{n_{e}}x_{i}\) and \(\bar{y}=\frac{1}{n_{o}}\sum_{i=1}^{n_{o}}y_{i}\) are the sample means of the two datasets. It is easy to see that \(\pi_{\eta}(\theta\,|\,x,y)\sim\mathcal{N}(\hat{\theta},\hat{\sigma}^{2})\) where

\[\hat{\theta} =\frac{\sigma^{2}\theta_{0}+n_{e}\sigma_{0}^{2}\bar{x}+\eta n_{o}\sigma_{0}^{2}\bar{y}}{\sigma^{2}+n_{e}\sigma_{0}^{2}+\eta n_{o}\sigma_{0}^{2}} \tag{14}\]
\[\hat{\sigma}^{2} =\frac{\sigma^{2}\sigma_{0}^{2}}{\sigma^{2}+n_{e}\sigma_{0}^{2}+\eta n_{o}\sigma_{0}^{2}}. \tag{15}\]

Asymptotically the influence from the prior distribution assumptions vanishes and the posterior mean estimator \(\hat{\theta}\) becomes

\[\hat{\theta}=\hat{\theta}_{\eta}=\frac{n_{e}\bar{x}+\eta n_{o}\bar{y}}{n_{e}+\eta n_{o}} \tag{16}\]

which is our proposed estimator. We can see that \(\hat{\theta}\) depends on the value of \(\eta\) and we write it as \(\hat{\theta}_{\eta}\) to emphasise the dependence. Similarly, the asymptotic posterior variance is

\[\hat{\sigma}^{2}=\hat{\sigma}^{2}_{\eta}=\frac{\sigma^{2}}{n_{e}+\eta n_{o}}. \tag{17}\]

For (17) to make sense, we require \(n_{e}+\eta n_{o}>0\). Following the definition in Vehtari et al. (2017), we can write the ELPD for a new datum as a function of \(\eta\) as

\[\text{ELPD}(\eta)=\int_{\mathcal{Z}}p_{e}^{*}(z_{i})\log p_{\eta}(z_{i}|x,y)dz_{i},\]

where \(p_{e}^{*}(z_{i})\) is the distribution representing the _true_ data-generating process of the experimental data, \(z_{i}\) is a new data point from this true distribution, and

\[p_{\eta}(z_{i}|x,y)=\int p_{e}(z_{i}|\theta,\sigma^{2})\pi_{\eta}(\theta|x,y)d\theta\]

is a candidate posterior predictive distribution indexed by \(\eta\).
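Before carrying out the substitution, the following minimal numpy sketch evaluates the asymptotic power-posterior mean and variance in (16)–(17) and the resulting log posterior predictive density; the numbers and variable names are purely illustrative and not part of the proof.

```python
import numpy as np

def power_posterior(xbar, ybar, n_e, n_o, sigma2, eta):
    """Asymptotic eta-powered posterior of theta for the normal model,
    i.e. equations (16)-(17) (prior influence neglected)."""
    mean = (n_e * xbar + eta * n_o * ybar) / (n_e + eta * n_o)
    var = sigma2 / (n_e + eta * n_o)
    return mean, var

def log_predictive(z, post_mean, post_var, sigma2):
    """Log posterior predictive density log p_eta(z | x, y):
    a normal density with variance sigma^2 + sigma_hat^2."""
    v = sigma2 + post_var
    return -0.5 * np.log(2 * np.pi * v) - (z - post_mean) ** 2 / (2 * v)

# Toy example (assumed values, for illustration only).
rng = np.random.default_rng(0)
theta_star, delta, sigma2 = 1.0, 0.5, 1.0
x = rng.normal(theta_star, np.sqrt(sigma2), size=100)            # experimental data
y = rng.normal(theta_star + delta, np.sqrt(sigma2), size=1000)   # biased observational data

for eta in (0.0, 0.1, 1.0):
    m, v = power_posterior(x.mean(), y.mean(), len(x), len(y), sigma2, eta)
    elpd_hat = log_predictive(x, m, v, sigma2).mean()  # x reused as a crude calibration set
    print(f"eta={eta:.1f}  posterior mean={m:.3f}  var={v:.4f}  empirical ELPD={elpd_hat:.3f}")
```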
Substituting the expressions for \(p_{e}(z_{i}|\theta,\sigma^{2})\) and \(\pi_{\eta}(\theta|x,y)\) gives \[p_{\eta}(z_{i}|x,y) =\int\frac{1}{\sqrt{(2\pi\sigma^{2})}}\exp\left(-\frac{(z_{i}- \theta)^{2}}{2\sigma^{2}}\right)\times\frac{1}{\sqrt{(2\pi\hat{\sigma}^{2})}} \exp\left(-\frac{(\theta-\hat{\theta})^{2}}{2\hat{\sigma}^{2}}\right)\] \[=\frac{1}{\sqrt{2\pi(\sigma^{2}+\hat{\sigma}^{2})}}\exp\left(- \frac{(z_{i}-\hat{\theta})^{2}}{2(\sigma^{2}+\hat{\sigma}^{2})}\right),\] where \(\hat{\theta}\) and \(\hat{\sigma}\) are the asymptotic posterior mean and variance as in (16) and (17). We can then write out the log prediction density and \(\text{ELPD}(\eta)\), given \(x\) and \(y\), as \[\log p_{\eta}(z_{i}|x,y) =-\frac{1}{2}\log(2\pi)-\frac{1}{2}\log(\sigma^{2}+\hat{\sigma}^ {2})-\frac{(z_{i}-\hat{\theta})^{2}}{2(\sigma^{2}+\hat{\sigma}^{2})}\] \[\text{ELPD}(\eta) =\mathbb{E}_{P_{e}^{*}}\{\log p_{\eta}(Z_{i}|x,y)\}\] \[=-\frac{1}{2}\log(2\pi)-\frac{1}{2}\log(\sigma^{2}+\hat{\sigma}^ {2})-\mathbb{E}_{P_{e}^{*}}\left\{\frac{(Z-\hat{\theta})^{2}}{2(\sigma^{2}+ \hat{\sigma}^{2})}\right\}.\] ### Derivative of ELPD We want to find \(\eta=\eta^{*}\) such that \(\eta^{*}=\operatorname*{arg\,max}_{\eta}\text{ELPD}(\eta)\). However, the true data generative process represented by \(p_{e}^{*}\) remains unknown, we have to empirically estimate ELPD by evaluating it over a calibration dataset of size \(n_{c}\) with i.i.d. samples from the true distribution, \[\widehat{\text{ELPD}}(\eta)=\frac{1}{n_{c}}\sum_{i=1}^{n_{c}}\left\{-\frac{1} {2}\log(2\pi)-\frac{1}{2}\log(\sigma^{2}+\hat{\sigma}^{2})-\frac{(z_{i}-\hat{ \theta})^{2}}{2(\sigma^{2}+\hat{\sigma}^{2})}\right\}.\] It is important to recognize that \(\eta\) is now a parameter of the likelihood model for the calibrated data. Essentially we want to find the MLE estimator of \(\eta\), \(\hat{\eta}\), such that \[\hat{\eta}=\operatorname*{arg\,max}_{\eta}\widehat{\operatorname{ELPD}}(\eta).\] We consider the score function, \[S_{n_{e}}(\eta) =\frac{1}{n_{c}}\sum_{i=1}^{n_{c}}S(z_{i},\eta)\] \[=\frac{1}{n_{c}}\sum_{i=1}^{n_{c}}\frac{\partial}{\partial\eta} \left\{-\frac{1}{2}\log(2\pi)-\frac{1}{2}\log(\sigma^{2}+\hat{\sigma}^{2})- \frac{(z_{i}-\hat{\theta})^{2}}{2(\sigma^{2}+\hat{\sigma}^{2})}\right\}\] for which \(\mathbb{E}[S_{n_{e}}(\eta_{o})]=0\). Suppose that \(\hat{\eta}\) is a solution to the estimation equation \(S_{n_{e}}(\eta)=0\), i.e. \(S_{n_{e}}(\hat{\eta})=0\). By semi-parametric inference theory, \[\sqrt{n_{e}}\left(\hat{\eta}-\eta_{0}\right)\xrightarrow{D}\mathcal{N}\left( 0,\left[\mathbb{E}\left\{\frac{\partial S\left(Z,\eta_{0}\right)}{\partial \eta^{T}}\right\}\right]^{-1}\operatorname{Var}\left\{S\left(Z,\eta_{0}\right) \right\}\left[\mathbb{E}\left\{\frac{\partial S\left(Z,\eta_{0}\right)}{ \partial\eta^{T}}\right\}\right]^{-1^{T}}\right).\] Under suitable regularity conditions, we have the relationship between the Fisher Information \(\mathcal{I}\left(\eta_{0}\right)\) and the terms in the variance as below: \[\mathcal{I}^{-1}\left(\eta_{0}\right)=\mathbb{E}\left\{-\frac{\partial S \left(Z,\eta_{0}\right)}{\partial\eta^{T}}\right\}^{-1}=\mathbb{E}\left\{S_{ \eta}\left(Z,\eta_{0}\right)S_{\eta}^{T}\left(Z,\eta_{0}\right)\right\}= \operatorname{Var}\left\{S\left(Z,\eta_{0}\right)\right\}.\] Therefore, \[\sqrt{n_{c}}\left(\widehat{\eta}_{n_{e}}-\eta_{0}\right)\xrightarrow{d} \mathcal{N}\left(0,\mathcal{I}\left(\eta_{0}\right)^{-1}\right).\] To find \(\eta_{0}\), we want to solve the equation \(\mathbb{E}[S_{n_{e}}(\eta_{o})]=0\). 
Expanding it, we have \[\mathbb{E}[S_{n_{e}}(\eta_{o})] =\mathbb{E}_{Z}\left\{\frac{1}{n_{c}}\sum_{i=1}^{n_{c}}\frac{ \partial}{\partial\eta}\left[-\frac{1}{2}\log(2\pi)-\frac{1}{2}\log(\sigma^{2} +\hat{\sigma}^{2})-\frac{(Z_{i}-\hat{\theta})^{2}}{2(\sigma^{2}+\hat{\sigma}^ {2})}\right]\right\}\] \[=-\frac{1}{2}\left\{\frac{\partial}{\partial\eta}\left[\log( \sigma^{2}+\hat{\sigma}^{2})\right]+\frac{\partial}{\partial\eta}\mathbb{E}_{ Z}\left[\frac{(Z-\hat{\theta})^{2}}{\sigma^{2}+\hat{\sigma}^{2}}\right]\right\}\] \[=-\frac{1}{2}\left\{\frac{\partial}{\partial\eta}\log(\sigma^{2} +\hat{\sigma}^{2})+\frac{\partial}{\partial\eta}\frac{(\theta^{*}-\hat{\theta })^{2}+\sigma^{2}}{\sigma^{2}+\hat{\sigma}^{2}}\right\} \tag{18}\] The third equality is by Leibniz integral rule and dominated convergence theorem. We also have the derivatives of \(\hat{\theta}\) and \(\hat{\sigma}^{2}\) with respect to \(\eta\) as follows: \[\frac{\partial\hat{\theta}}{\partial\eta} =\frac{n_{e}n_{o}\left(\bar{Y}-\bar{X}\right)}{\left(n_{e}+\eta n_{ o}\right)^{2}} \tag{19}\] \[\frac{\partial\hat{\sigma}^{2}}{\partial\eta} =-\frac{n_{o}\sigma^{2}}{\left(n_{e}+\eta n_{o}\right)^{2}}. \tag{20}\] Substituting Equation (19) - (20) into (18), gives \[\mathbb{E}[S_{n_{e}}(\eta_{0})] =-\frac{n_{o}\sigma^{2}}{2(\sigma^{2}+\hat{\sigma}^{2})^{2}\left( n_{e}+\eta n_{o}\right)^{4}}\left\{2\left(n_{e}\left(\bar{X}-\theta^{*}\right)+ \eta n_{o}\left(\bar{Y}-\theta^{*}\right)\right)n_{e}\left(\bar{Y}-\bar{X} \right)\left(n_{e}+\eta n_{o}+1\right)\right\}\] \[\qquad+\left(n_{e}\left(\bar{X}-\theta^{*}\right)+\eta n_{o} \left(\bar{Y}-\theta^{*}\right)\right)^{2}-\left(n_{e}+\eta n_{o}\right) \sigma^{2}\} \tag{21}\] Setting \(\mathbb{E}[S_{n_{e}}(\eta_{0})]=0\), by the quadratic formula, the two roots are: \[\eta_{0,1}= \frac{1}{2n_{o}\left(\bar{Y}-\theta^{*}\right)\left(\bar{Y}- \theta^{*}-2n_{e}(\bar{Y}-\bar{X})\right)}\left\{\sigma^{2}-2n_{e}\left(\bar{Y }^{2}+n_{e}(\bar{Y}^{2}-\bar{X}^{2})-2\left(\bar{Y}+(\bar{Y}-\bar{X})n_{e} \right)\theta^{*}+\theta^{*2}\right)\right\}\] \[+\sqrt{\sigma^{4}+4(\bar{X}-\bar{Y})^{2}n_{e}^{2}\left(\sigma^{2} +\left(\bar{Y}+n_{e}(\bar{Y}-\bar{X})+\theta^{*}\right)^{2}\right)}\} \tag{22}\] \[\eta_{0,2}= \frac{1}{2n_{o}\left(\bar{Y}-\theta^{*}\right)\left(\bar{Y}- \theta^{*}-2n_{e}(\bar{Y}-\bar{X})\right)}\left\{\sigma^{2}-2n_{e}\left(\bar{ Y}^{2}+n_{e}(\bar{Y}^{2}-\bar{X}^{2})-2\left(\bar{Y}+(\bar{Y}-\bar{X})n_{e} \right)\theta^{*}+\theta^{*2}\right)\right)\] \[-\sqrt{\sigma^{4}+4(\bar{X}-\bar{Y})^{2}n_{e}^{2}\left(\sigma^{2 }+\left(\bar{Y}+n_{e}(\bar{Y}-\bar{X})+\theta^{*}\right)^{2}\right)}\}\] ### Scenario 1: \(k\geq 1/2\) or \(\delta^{*}=0\) We start with the case of strict inequality, that is, \(k>1/2\). For \(X_{i}\sim\mathcal{N}(\theta^{*},\sigma^{2})\) and \(Y_{i}\sim\mathcal{N}(\theta^{*}+\delta^{*}/n^{k},\sigma^{2})\) where \(k>1/2\), that is, \(\delta^{*}/n^{k}=o_{p}(\sqrt{n})\), by CLT, we have \[\bar{X} \xrightarrow{d}\mathcal{N}(\theta^{*},\frac{\sigma^{2}}{n_{e}}) \bar{Y} \xrightarrow{d}\mathcal{N}(\theta^{*},\frac{\sigma^{2}}{n_{o}})\] so we write \[\bar{X} =\theta^{*}+\frac{\sigma}{\sqrt{n_{e}}}Z_{x} \bar{Y} =\theta^{*}+\frac{\sigma}{\sqrt{n_{o}}}Z_{y}, \tag{24}\] where \(Z_{x}\) and \(Z_{y}\) are standard normal random variables. This representation clearly holds if \(\delta^{*}=0\). 
Assuming \(Z_{x}\neq\sqrt{c}Z_{y}\), which is implied by \(\bar{X}\neq\bar{Y}\), and \(Z_{y}\neq 0\), substituting (24) in the root expressions (22) and (23) gives \[\begin{split}\eta_{0,1}&=\frac{1}{2Z_{y}\!\left(Z_{y}+2n_{o}\left(-\sqrt{c}Z_{x}+cZ_{y}\right)\right)}\Bigg{\{}1+2cn_{o}Z_{x}^{2}-2c\left(1+cn_{o}\right)Z_{y}^{2}\\ &+\sqrt{1+4c\left(Z_{x}-\sqrt{c}Z_{y}\right)^{2}\left(n_{o}\left(1+cn_{o}Z_{x}^{2}\right)-2\sqrt{c}n_{o}\left(1+cn_{o}\right)Z_{x}Z_{y}+\left(1+cn_{o}\right)^{2}Z_{y}^{2}\right)}\Bigg{\}}\\ &=\frac{1}{4n_{o}Z_{y}(cZ_{y}-\sqrt{c}Z_{x})}\left\{2cn_{o}Z_{x}^{2}-2c^{2}n_{o}Z_{y}^{2}+\sqrt{4c(Z_{x}-\sqrt{c}Z_{y})^{2}\left(cn_{o}^{2}Z_{x}^{2}-2c^{3/2}n_{o}^{2}Z_{x}Z_{y}+c^{2}n_{o}^{2}Z_{y}^{2}+O(n)\right)}\right\}\\ &\times\left(1+O\left(\frac{1}{n}\right)\right)\\ &=\frac{2cn_{o}}{4n_{o}Z_{y}(cZ_{y}-\sqrt{c}Z_{x})}\left\{Z_{x}^{2}-cZ_{y}^{2}+\sqrt{(Z_{x}-\sqrt{c}Z_{y})^{2}\left(Z_{x}^{2}-2\sqrt{c}Z_{x}Z_{y}+cZ_{y}^{2}+O\left(\frac{1}{n}\right)\right)}\right\}\left(1+O\left(\frac{1}{n}\right)\right)\\ &=\frac{\sqrt{c}}{2Z_{y}(\sqrt{c}Z_{y}-Z_{x})}\left\{Z_{x}^{2}-cZ_{y}^{2}+\sqrt{(Z_{x}-\sqrt{c}Z_{y})^{4}}\right\}+O\left(\frac{1}{n}\right)\\ &=\frac{\sqrt{c}}{2Z_{y}(\sqrt{c}Z_{y}-Z_{x})}\left\{Z_{x}^{2}-cZ_{y}^{2}+(Z_{x}-\sqrt{c}Z_{y})^{2}\right\}+O\left(\frac{1}{n}\right)\\ &=\frac{\sqrt{c}\left(2Z_{x}^{2}-2\sqrt{c}Z_{x}Z_{y}\right)}{2Z_{y}(\sqrt{c}Z_{y}-Z_{x})}+O\left(\frac{1}{n}\right)\\ &=-\sqrt{c}\frac{Z_{x}}{Z_{y}}+O\left(\frac{1}{n}\right)\end{split} \tag{25}\] We observe that \(\eta_{0,1}\) asymptotically follows a Cauchy distribution which suggests that the mean of \(\hat{\eta}\) does not converge and hence is not identifiable. It is easy to see that, following the same derivation, the other root can be simplified into \[\eta_{0,2}=-c+O\left(\frac{1}{n}\right), \tag{26}\] which does not satisfy our requirement that \(n_{e}+\eta n_{o}>0\) for (17) to make sense. It easily follows that the same result holds when \(\delta^{*}=0\). When \(k=1/2\) and \(\delta^{*}\neq 0\), (24) becomes \[\bar{X} =\theta^{*}+\frac{\sigma}{\sqrt{n_{e}}}Z_{x} \qquad \bar{Y} =\theta^{*}+\frac{\sigma}{\sqrt{n_{o}}}(Z_{y}+\delta^{*}). \tag{27}\] Following the same derivation as (25) gives \[\eta_{0,1}=-\sqrt{c}\frac{Z_{x}}{Z_{y}+\delta^{*}}+O\left(\frac{1}{n}\right), \tag{28}\] which suggests that \(\eta_{0}\) asymptotically follows the ratio distribution of two independent normal distributions, with one of them being non-central. Such a distribution is heavy-tailed and has no finite moments (Hinkley, 1969; Marsaglia, 2006). Therefore, in this case, the mean of \(\hat{\eta}\) also does not converge.

### Scenario 2: \(0<k<1/2\) and \(\delta^{*}\neq 0\)

When the bias decreases with sample size at a rate slower than \(\sqrt{n}\), the bias does not vanish at the rate at which the CLT applies. Then (24) becomes \[\bar{X} =\theta^{*}+\frac{\sigma}{\sqrt{n_{e}}}Z_{x} \qquad \bar{Y} =\theta^{*}+\delta(n)+\frac{\sigma}{\sqrt{n_{o}}}Z_{y}. \tag{29}\] Because of the bias term \(\delta(n)\), the root expressions (22) and (23) would become hard to simplify if we substitute (29) directly into them.
Instead, we observe the \(\mathbb{E}[S_{n_{e}}(\eta_{0})]\) expression as in (21) \[\mathbb{E}[S_{n_{e}}(\eta_{0})] =\frac{n_{o}\sigma^{2}}{(\sigma^{2}+\hat{\sigma}^{2})^{2}\left(n_ {e}+\eta n_{o}\right)^{4}}\big{\{}2\left(n_{e}\left(\bar{X}-\theta^{*}\right) +\eta n_{o}\left(\bar{Y}-\theta^{*}\right)\right)n_{e}\left(\bar{Y}-\bar{X} \right)\left(n_{e}+\eta n_{o}+1\right)\] \[\quad+\left(n_{e}\left(\bar{X}-\theta^{*}\right)+\eta n_{o} \left(\bar{Y}-\theta^{*}\right)\right)^{2}-\left(n_{e}+\eta n_{o}\right) \sigma^{2}\big{\}} \tag{30}\] \[=\frac{2n_{e}n_{o}\sigma^{2}}{(\sigma^{2}+\hat{\sigma}^{2})^{2} \left(n_{e}+\eta n_{o}\right)^{3}}\left(n_{e}\left(\bar{X}-\theta^{*}\right)+ \eta n_{o}\left(\bar{Y}-\theta^{*}\right)\right)\left(\bar{Y}-\bar{X}\right) +O\left(n^{-1}\right). \tag{31}\] We can see that asymptotically, \(\eta_{0}\) and \(\eta_{0,2}\) should be the solutions to \[n_{e}\left(\bar{X}-\theta^{*}\right)+\eta n_{o}\left(\bar{Y}-\theta^{*}\right) =0\] Substituting (29) in, we can write out \(\eta_{0}\) as \[\eta_{0} =-\frac{c}{\delta}\left(\frac{\sigma}{\sqrt{cn_{o}}}Z_{x}\right)+O( \frac{1}{n}).\] Next, we want to find the Fisher information evaluated at \(\eta_{0}\): \[\mathcal{I}\left(\eta_{0}\right) =-\mathbb{E}\left[\frac{\partial}{\partial\eta}S_{n_{e}}(\eta_{0})\right]\] \[=-\frac{\partial}{\partial\eta}\mathbb{E}\left[S_{n_{e}}(\eta) \right]\big{|}_{\eta=\eta_{0}}\] \[=-\frac{\partial}{\partial\eta}\left[-\frac{1}{2}\frac{2n_{e}n_ {o}\sigma^{2}}{\left(\sigma^{2}+\hat{\sigma}^{2}\right)^{2}\left(n_{e}+\eta n_ {o}\right)^{3}}\left(n_{e}\left(\bar{X}-\theta^{*}\right)+\eta n_{o}\left( \bar{Y}-\theta^{*}\right)\right)\left(\bar{Y}-\bar{X}\right)+O\left(n^{-1} \right)\right]\right|_{\eta=\eta_{0}}\] \[=\frac{\partial}{\partial\eta}\left[\frac{n_{e}n_{o}\sigma^{2} \left(n_{e}\frac{\sigma}{\sqrt{n_{e}}}Z_{x}+\eta n_{o}\left(\frac{\sigma}{ \sqrt{n_{e}}}Z_{y}\right)\right)\left(\delta+\frac{\sigma}{\sqrt{n_{o}}}Z_{y }-\frac{\sigma}{\sqrt{n_{e}}}Z_{x}\right)}{\left(\sigma^{2}+\hat{\sigma}^{2} \right)^{2}\left(n_{e}+\eta n_{o}\right)^{3}}+O\left(\frac{1}{n}\right)\right] \Bigg{|}_{\eta=\eta_{0}}\] \[=\frac{\partial}{\partial\eta}\left[\frac{n_{e}n_{o}\sigma^{2} }{\left(\sigma^{2}+\hat{\sigma}^{2}\right)^{2}\left(n_{e}+\eta n_{o}\right)^{ 3}}pn_{o}\delta^{2}+O\left(\frac{1}{\sqrt{n}}\right)\right]\Bigg{|}_{\eta=\eta_ {0}}\] \[=\frac{\partial}{\partial\eta}\left[\frac{n_{e}n_{o}^{2}\sigma^{ 2}\delta^{2}\eta}{\left(\sigma^{2}+\frac{\sigma^{2}}{n_{e}+\eta n_{o}}\right)^ {2}\left(n_{e}+\eta n_{o}\right)^{3}}+O\left(\frac{1}{\sqrt{n}}\right)\right] \Bigg{|}_{\eta=\eta_{0}}\] \[=\frac{n_{e}n_{o}^{2}\delta^{2}}{\sigma^{2}}\frac{\partial}{ \partial\eta}\left[\frac{\eta}{\left(n_{e}+\eta n_{o}\right)^{3}}\right] \Bigg{|}_{\eta=\eta_{0}}+O\left(\frac{1}{\sqrt{n}}\right)\] \[=\frac{n_{e}n_{o}^{2}\delta^{2}}{\sigma^{2}}\left[\frac{\left(n_{ e}+\eta n_{o}\right)^{3}-3n_{o}\left(n_{e}+\eta n_{o}\right)^{2}\eta}{\left(n_{e}+ \eta n_{o}\right)^{6}}\right]\Bigg{|}_{\eta=\eta_{0}}+O\left(\frac{1}{\sqrt{n}}\right)\] \[=\frac{n_{e}n_{o}^{2}\delta^{2}}{\sigma^{2}}\left[\frac{n_{e}-2n_ {o}\eta}{\left(n_{e}+\eta n_{o}\right)^{4}}\right]\Bigg{|}_{\eta=\eta_{0}}+O \left(\frac{1}{\sqrt{n}}\right)\] and using that \(\eta_{0}=-\frac{c}{\delta n_{o}}\left(\frac{\sigma}{\sqrt{n_{e}}}Z_{x}\right) +O(\frac{1}{n})\) \[=\frac{n_{e}n_{o}^{2}\delta^{2}}{\sigma^{2}}\frac{n_{e}-2n_{o}\eta _{0}}{\left(n_{e}-n_{e}\left(\frac{\sigma}{\delta\sqrt{n_{e}}}Z_{x}+O(\frac{1} 
{n})\right)\right)^{4}}+O\left(\frac{1}{\sqrt{n}}\right)\] \[=\frac{n_{e}n_{o}^{2}\delta^{2}}{\sigma^{2}}\frac{n_{e}+2n_{o} \left(\frac{n_{e}}{\delta n_{o}}\left(\frac{\sigma}{\sqrt{n_{e}}}Z_{x}+O(\frac {1}{n})\right)\right)\left(1+\frac{4\sigma}{\delta\sqrt{n_{e}}}Z_{x}+O(\frac{1} {n})\right)+O\left(\frac{1}{\sqrt{n}}\right)\] \[=\frac{n_{o}^{2}\delta^{2}}{n_{e}^{2}\sigma^{2}}\left(1+\frac{2 \sigma}{\delta\sqrt{n_{e}}}Z_{x}+O(\frac{1}{n})\right)\left(1+\frac{4\sigma}{ \delta\sqrt{n_{e}}}Z_{x}+O(\frac{1}{n})\right)+O\left(\frac{1}{\sqrt{n}}\right)\] \[=\frac{n_{o}^{2}\delta^{2}}{n_{e}^{2}\sigma^{2}}+O\left(1/\sqrt{n} \right).\] Therefore, together with the fact that \(n_{e}=cn_{o}\), \[\sqrt{n_{e}}\left(\widehat{\eta}-\eta_{0}\right)\xrightarrow{D}\mathcal{N} \left(0,\frac{\sigma^{2}c^{2}}{\delta^{2}}+O\left(\frac{1}{\sqrt{n}}\right) \right),\] where \(\eta_{0}=-\frac{c}{\delta}\left(\frac{\sigma}{\sqrt{cn_{o}}}Z_{x}\right)+O(\frac{1 }{n})\). Or equivalently, we can write \[w=\widehat{\eta}_{n_{e}}-\eta_{0}=\frac{c}{\delta}\frac{\sigma}{\sqrt{n_{c}}}Z_{ w}+O(\frac{1}{n}),\] where \(Z_{w}\) is a standard normal variable independent of \(Z_{x}\) and \(Z_{y}\). Then it follows that \[\widehat{\eta}=\eta_{0}+w=\frac{c}{\delta}\left(-\frac{\sigma}{\sqrt{cn_{o}}}Z _{x}+\frac{\sigma}{\sqrt{n_{c}}}Z_{w}\right)+O\left(\frac{1}{n}\right).\] We note that \(\mathcal{I}\left(\eta_{0}\right)\) is asymptotically positive, which means the second derivative of \(\mathrm{ELPD}(\eta)\) with respective to \(\eta\) is negative in expectation. It confirms that \(\eta_{0}\) is indeed a local maxima. We also observe that as \(\delta\) decreases, the probability of \(\hat{\eta}\) exceeding \(1\) increases. This means that if we restrict \(\eta\) within \([0,1]\), it is more likely that we will choose \(\eta=1\) as the best value. Setting \(\eta=\widehat{\eta}\) our proposed estimator in (16) becomes \[\hat{\theta}_{\hat{q}} =\frac{\frac{\sigma Z_{x}}{\sqrt{cn_{o}}}+\theta^{*}+\frac{\left( \frac{\sigma Z_{w}}{\sqrt{cn_{o}}}-\frac{\sigma Z_{y}}{\sqrt{cn_{o}}}+O\left( \frac{1}{n}\right)\right)\left(\delta+\frac{\sigma Z_{y}}{\sqrt{n_{o}}}+\theta ^{*}\right)}{\delta}}{1+\frac{\sigma\left(\frac{\sigma Z_{w}}{\sqrt{cn_{o}}}- \frac{\sigma Z_{y}}{\sqrt{cn_{o}}}+O\left(\frac{1}{n}\right)\right)}{\delta}}\] \[=\left(\frac{\sigma Z_{x}}{\sqrt{cn_{o}}}+\theta^{*}+\frac{\left( \frac{\sigma Z_{w}}{\sqrt{n_{c}}}-\frac{\sigma Z_{w}}{\sqrt{cn_{o}}}+O\left( \frac{1}{n}\right)\right)\left(\delta+\frac{\sigma Z_{y}}{\sqrt{n_{o}}}+\theta ^{*}\right)}{\delta}\right)\left(1+O\left(\frac{1}{\sqrt{n}}\right)\right)\] \[=\theta^{*}+\frac{\sigma}{\sqrt{cn_{o}}}Z_{x}+\frac{\sigma}{\sqrt {n_{c}}}Z_{w}-\frac{\sigma}{\sqrt{cn_{o}}}Z_{x}+O\left(\frac{1}{n}\right)\] \[=\theta^{*}+\frac{\sigma}{\sqrt{n_{c}}}Z_{w}+O\left(\frac{1}{n} \right). \tag{32}\] From (32) we can see that the proposed estimator \(\widehat{\theta}_{\hat{q}}\) asymptotically converges to \(\theta^{*}\). As a result, the MSE is asymptotically, \[\mathcal{R}(\hat{\theta}_{\eta},\theta^{*}) =\mathbb{E}_{X,Y}\left(\|\hat{\theta}_{\hat{\eta}}-\theta^{*}\|^{ 2}\right)=\mathbb{E}\left[\frac{\sigma}{\sqrt{n_{c}}}Z_{w}+O\left(\frac{1}{n} \right)\right]^{2}\] \[=\frac{\sigma^{2}}{n_{c}}+O\left(\frac{1}{n^{3/2}}\right).\] Optimistically, if we use a Leave-One-Out Cross Validation approach, it is close to using a calibration dataset of size \(n_{c}=n_{e}\). Therefore, the MSE of our proposed estimator converges to that using the experimental data alone. 
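As a rough numerical check of the conclusion above (not part of the proof), the following sketch repeatedly selects \(\hat{\eta}\) by maximising the empirical ELPD over a grid, using an independent calibration sample of size \(n_{c}=n_{e}\) as a stand-in for the leave-one-out scheme, and compares the Monte Carlo MSE of \(\hat{\theta}_{\hat{\eta}}\) with \(\sigma^{2}/n_{e}\); all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_star, delta, sigma2 = 0.0, 0.5, 1.0   # fixed bias, i.e. k = 0
n_e, n_o = 100, 1000
etas = np.linspace(0.0, 1.0, 101)

def theta_hat(xbar, ybar, eta):
    # Power-likelihood estimator (16).
    return (n_e * xbar + eta * n_o * ybar) / (n_e + eta * n_o)

sq_err = []
for _ in range(2000):
    x = rng.normal(theta_star, 1.0, n_e)              # experimental data
    y = rng.normal(theta_star + delta, 1.0, n_o)      # biased observational data
    z = rng.normal(theta_star, 1.0, n_e)              # calibration sample (stand-in for LOO)
    means = theta_hat(x.mean(), y.mean(), etas)                 # estimator over the eta grid
    pred_var = sigma2 + sigma2 / (n_e + etas * n_o)             # sigma^2 + sigma_hat^2_eta
    elpd = (-0.5 * np.log(2 * np.pi * pred_var)
            - ((z[:, None] - means[None, :]) ** 2 / (2 * pred_var)).mean(axis=0))
    eta_hat = etas[np.argmax(elpd)]
    sq_err.append((theta_hat(x.mean(), y.mean(), eta_hat) - theta_star) ** 2)

print("MC estimate of MSE(theta_hat_etahat):", np.mean(sq_err))
print("sigma^2 / n_e                       :", sigma2 / n_e)
```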
## Appendix B Normal approximation of posterior

Following the setup outlined in Section 3, we have an experimental dataset of size \(n_{e}\), \(X=(X_{1},\ldots,X_{n_{e}})\), and an observational dataset of size \(n_{o}\), \(Y=(Y_{1},\ldots,Y_{n_{o}})\). We assume that they share the same causal parameters \(\theta^{*}\). We consider the following estimation equations:

\[\sum_{i=1}^{n_{e}}f(\theta;X_{i}) =0, \tag{33}\]
\[\sum_{i=1}^{n_{o}}g(\theta;Y_{i}) =0, \tag{34}\]
\[\sum_{i=1}^{n_{e}}f(\theta;X_{i})+\eta\sum_{i=1}^{n_{o}}g(\theta;Y_{i}) =0, \tag{35}\]

where \(\mathbb{E}f(\theta^{*};X_{i})=0\) and \(\mathbb{E}g(\theta^{*}+\delta;Y_{i})=0\) for some unknown bias \(\delta\). From the standard theory of estimation equations, we know that the solution to (33), \(\hat{\theta}_{e}\), is asymptotically normal following \(\sqrt{n_{e}}(\hat{\theta}_{e}-\theta^{*})\to N(0,V)\), and likewise, \(\sqrt{n_{o}}(\hat{\theta}_{o}-\theta^{*}-\delta)\to N(0,W)\). Let the solution to (35) be \(\hat{\theta}_{\eta}\). We can approximate each sum in (35) by a Taylor expansion and use the fact that \(\sum_{i=1}^{n_{e}}f(\hat{\theta}_{e};X_{i})=0\) and \(\sum_{i=1}^{n_{o}}g(\hat{\theta}_{o};Y_{i})=0\), then

\[n_{e}V^{-1}(\hat{\theta}_{\eta}-\hat{\theta}_{e})+\eta n_{o}W^{-1}(\hat{\theta}_{\eta}-\hat{\theta}_{o}) =0\]
\[\left(n_{e}V^{-1}+\eta n_{o}W^{-1}\right)\hat{\theta}_{\eta} =n_{e}V^{-1}\hat{\theta}_{e}+\eta n_{o}W^{-1}\hat{\theta}_{o}.\]

Then we have an expression for \(\hat{\theta}_{\eta}\) as

\[\hat{\theta}_{\eta}=\left(n_{e}V^{-1}+\eta n_{o}W^{-1}\right)^{-1}\left(n_{e}V^{-1}\hat{\theta}_{e}+\eta n_{o}W^{-1}\hat{\theta}_{o}\right).\]

Let \(\mathcal{I}(\hat{\theta}_{e})\) and \(\mathcal{I}(\hat{\theta}_{o})\) be the Fisher information matrices for the experimental and observational data respectively, then the combined Fisher information matrix is

\[\mathcal{I}(\hat{\theta}_{\eta}) =-\mathbb{E}\left[\frac{\partial}{\partial\theta}\left(\sum_{i=1}^{n_{e}}f(\theta;X_{i})+\eta\sum_{i=1}^{n_{o}}g(\theta;Y_{i})\right)\right]\]
\[=-\mathbb{E}\left[\frac{\partial}{\partial\theta}\sum_{i=1}^{n_{e}}f(\theta;X_{i})\right]-\eta\,\mathbb{E}\left[\frac{\partial}{\partial\theta}\sum_{i=1}^{n_{o}}g(\theta;Y_{i})\right]\]
\[=\mathcal{I}(\hat{\theta}_{e})+\eta\,\mathcal{I}(\hat{\theta}_{o}).\]

That is, the combined Fisher information is a weighted sum of the Fisher information of the two datasets. In the simulations presented in Section 5, we use the sandwich estimators of \(V\) and \(W\) and the Fisher information calculated using the R package causl (Evans, 2021). Then, as outlined in Section 4.4, we can approximate the \(\eta\)-powered posterior with

\[\pi_{\eta}\left(\theta\,|\,X,Y\right)\approx\mathcal{N}\left(\hat{\theta}_{\eta},[\mathcal{I}(\hat{\theta}_{\eta})]^{-1}\right),\]

where \(\hat{\theta}_{\eta}\) and \(\mathcal{I}(\hat{\theta}_{\eta})\) can be approximated as derived above.

## Appendix C Details of simulation

### Implementation of other methods

In this section, we provide details on how we implement the comparator methods in the simulation presented in Section 5. The shrinkage estimators in Green et al. (2005) require more than two strata, and those in Rosenman et al. (2020) require at least four strata to guarantee a risk reduction; therefore we split both datasets into 10 subgroups.
In the main simulation in Section 5.2, the stratification is based on deciles of \(C\) in the observational data; for the simulation presented in Section 5.4, there are more covariates and so we use stratification based on the two levels of \(C_{5}\) and quintiles of \(C_{1}\). We use \(\hat{\theta}_{\boldsymbol{o}}\), \(\hat{\theta}_{\boldsymbol{e}}\in\mathbb{R}^{K}\) where \(K=10\) to denote the vector of CATE estimates in the 10 strata in \(\mathcal{D}_{e}\) and \(\mathcal{D}_{o}\) respectively. #### c.1.1 Green et al. (2005) We implemented the estimators proposed in Green et al. (2005), which generalizes the estimators in Green and Strawderman (1991) to allow for heteroscedastic variances. Specifically, we used the estimators \[\boldsymbol{\delta}_{1}^{+}(\hat{\boldsymbol{\theta}}_{\boldsymbol{e}},\hat{ \boldsymbol{\theta}}_{\boldsymbol{o}})=\hat{\boldsymbol{\theta}}_{\boldsymbol {o}}+\left(1-\frac{a}{\left(\hat{\boldsymbol{\theta}}_{\boldsymbol{o}}-\hat{ \boldsymbol{\theta}}_{\boldsymbol{e}}\right)^{T}\boldsymbol{\Sigma}_{e}^{-1} \left(\hat{\boldsymbol{\theta}}_{\boldsymbol{o}}-\hat{\boldsymbol{\theta}}_{ \boldsymbol{e}}\right)}\right)_{+}\left(\hat{\boldsymbol{\theta}}_{\boldsymbol {o}}-\hat{\boldsymbol{\theta}}_{\boldsymbol{e}}\right)\] and \[\boldsymbol{\delta}_{2}^{+}(\hat{\boldsymbol{\theta}}_{\boldsymbol{e}},\hat{ \boldsymbol{\theta}}_{\boldsymbol{o}})=\hat{\boldsymbol{\theta}}_{\boldsymbol {o}}+\left(1_{K}-\frac{a\boldsymbol{\Sigma}_{e}^{-1}}{\left(\hat{\boldsymbol {\theta}}_{\boldsymbol{o}}-\hat{\boldsymbol{\theta}}_{\boldsymbol{e}}\right)^{ T}\boldsymbol{\Sigma}_{e}^{-2}\left(\hat{\boldsymbol{\theta}}_{\boldsymbol{o}}-\hat{ \boldsymbol{\theta}}_{\boldsymbol{e}}\right)}\right)_{+}\left(\hat{\boldsymbol {\theta}}_{\boldsymbol{o}}-\hat{\boldsymbol{\theta}}_{\boldsymbol{e}}\right)\] where \(\Sigma_{e}\) is the variance-covariance matrix of \(\hat{\theta}_{e}\); note, of course, that we assume the components of \(\hat{\theta}_{e}\) (and \(\hat{\theta}_{o}\)) are mutually independent. We have also adopted the default choice of \(a=K-2\), where \(K=10\) in our simulation setup. \(\Sigma_{e}\) is a diagonal matrix with the variance of each component on its diagonal. In the simulation, we implemented both estimators and they gave very similar results. For better readability, we only plot \(\boldsymbol{\delta}_{1}^{+}\) in Figure 5, 6 and 7. #### c.1.2 Rosenman et al. (2020) We implemented two estimators proposed in Rosenman et al. (2020). The first one is the shrinkage estimator which shares a common shrinkage factor across all components of \(\hat{\theta}_{e}\) and \(\hat{\theta}_{o}\). \[\mathbf{\kappa}_{1+}=\hat{\mathbf{\theta}}_{\mathbf{o}}+\left(1-\frac{\operatorname{Tr} \left(\mathbf{\Sigma}_{e}\mathbf{D}\right)}{\left(\hat{\mathbf{\theta}}_{\mathbf{o}}-\hat{\mathbf{ \theta}}_{\mathbf{e}}\right)^{T}\mathbf{D}\left(\hat{\theta}_{\mathbf{o}}-\hat{\mathbf{\theta }}_{\mathbf{e}}\right)}\right)_{+}\left(\hat{\mathbf{\theta}}_{\mathbf{o}}-\hat{\mathbf{\theta }}_{\mathbf{e}}\right)\] where \(\mathbf{D}\) is a diagonal weight matrix representing the relative importance of the strata estimates. In our implementation, we set \(\mathbf{D}=\mathrm{I}_{K}\). 
Another estimator they proposed uses variance-weighted shrinkage factors:

\[\mathbf{\kappa}_{2+}=\hat{\mathbf{\theta}}_{\mathbf{o}}+\left(\mathbf{I}_{K}-\frac{\operatorname{Tr}\left(\mathbf{\Sigma}_{e}^{2}\mathbf{D}\right)\mathbf{\Sigma}_{e}}{\left(\hat{\mathbf{\theta}}_{\mathbf{o}}-\hat{\mathbf{\theta}}_{\mathbf{e}}\right)^{T}\mathbf{\Sigma}_{e}^{2}\mathbf{D}\left(\hat{\mathbf{\theta}}_{\mathbf{o}}-\hat{\mathbf{\theta}}_{\mathbf{e}}\right)}\right)_{+}\left(\hat{\mathbf{\theta}}_{\mathbf{o}}-\hat{\mathbf{\theta}}_{\mathbf{e}}\right).\]

In the simulation, we implemented both estimators and they gave very similar results. We only plot \(\mathbf{\kappa}_{1+}\) in Figures 5, 6 and 7 for simplicity. For all the abovementioned shrinkage estimators, we use the IPW estimates for \(\hat{\mathbf{\theta}}_{\mathbf{o}}\) and \(\hat{\mathbf{\theta}}_{\mathbf{e}}\):

\[\hat{\theta}_{e,k} =\sum_{i\in\mathcal{E}_{k}}\frac{T_{i}Y_{i}}{e_{e}(Z_{i},C_{i})}-\frac{(1-T_{i})Y_{i}}{1-e_{e}(Z_{i},C_{i})} \qquad\text{for }k=1,2,\ldots,10\]
\[\hat{\theta}_{o,k} =\sum_{i\in\mathcal{O}_{k}}\frac{T_{i}Y_{i}}{e_{o}(Z_{i},C_{i})}-\frac{(1-T_{i})Y_{i}}{1-e_{o}(Z_{i},C_{i})} \qquad\text{for }k=1,2,\ldots,10\]

where \(e_{e}(Z_{i},C_{i})\) and \(e_{o}(Z_{i},C_{i})\) are the propensity scores, and \(\mathcal{E}_{k}\) and \(\mathcal{O}_{k}\) denote the observations in the \(k\)-th stratum of the randomized and observational data, respectively.

#### c.1.3 Kallus et al. (2018)

We follow the experimental grounding approach as described in Algorithm 1 in Kallus et al. (2018). Specifically, we implement the following steps:

1. We fit two linear outcome regression models on the observational data: \(m^{1}(\mathbf{z},\mathbf{c})=\mathbb{E}[Y\,|\,T=1,\mathbf{Z}=\mathbf{z},\mathbf{C}=\mathbf{c},D=\mathcal{D}_{o}]\) and \(m^{0}(\mathbf{z},\mathbf{c})=\mathbb{E}[Y\,|\,T=0,\mathbf{Z}=\mathbf{z},\mathbf{C}=\mathbf{c},D=\mathcal{D}_{o}]\), and calculate their difference as an estimate of the CATE, \(\hat{\omega}(\mathbf{z},\mathbf{c})=\hat{m}^{1}(\mathbf{z},\mathbf{c})-\hat{m}^{0}(\mathbf{z},\mathbf{c})\).
2. We learn the bias correction function, assuming linearity, \(\xi(\mathbf{z},\mathbf{c})=\beta^{T}\mathbf{v}\), where \(\mathbf{v}\) is the combined vector of \((\mathbf{z},\mathbf{c})^{T}\) and \[\hat{\beta}=\operatorname*{arg\,min}_{\beta}\sum_{i=1}^{n_{e}}\left(q_{i}Y_{i}-\hat{\omega}\left(\mathbf{V}_{i}\right)-\beta^{T}\mathbf{V}_{i}\right)^{2},\] where \(q_{i}=2\) if \(T=1\) and \(q_{i}=-2\) if \(T=0\). That is, we fit \(\xi(\mathbf{z},\mathbf{c})\) through least squares regression on \(q_{i}Y_{i}-\hat{\omega}(\mathbf{V}_{i})\) using the randomized data.
3. We then calculate the strata CATE estimates as \[\hat{\tau}_{k}=\frac{1}{n_{e,k}}\sum_{i\in\mathcal{E}_{k}}\hat{\omega}(\mathbf{V}_{i})+\hat{\beta}^{T}\mathbf{V}_{i}\quad\text{ for }k=1,2,\ldots,10,\] where \(\mathcal{E}_{k}\) denotes the collection of randomized units in stratum \(k\).

Specifically, with respect to \(m^{1}\), \(m^{0}\) and \(\xi\), we fit the models using the formula \(Y\sim C+Z\) for the simulation in Section 5.2 and \(Y\sim C_{1}+C_{2}+C_{3}+C_{4}+C_{5}+Z_{1}+Z_{2}\) for the simulation in Section 5.4. It is worth noting that these formulae correctly specify the relationships according to the data-generating processes.

#### c.1.4 Oberst et al. (2022)

We construct the estimator proposed in Oberst et al.
(2022) as

\[\hat{\tau}_{\hat{\lambda}}=\hat{\lambda}\hat{\tau}_{o}+(1-\hat{\lambda})\hat{\tau}_{e},\qquad\text{where }\hat{\lambda}=\frac{\hat{\sigma}_{e}^{2}}{\left(\hat{\tau}_{e}-\hat{\tau}_{o}\right)^{2}+\hat{\sigma}_{e}^{2}+\hat{\sigma}_{o}^{2}},\]

where \(\hat{\tau}_{e}\) and \(\hat{\tau}_{o}\) are ATE estimates from \(\mathcal{D}_{e}\) and \(\mathcal{D}_{o}\) respectively, and \(\hat{\sigma}_{e}^{2}\) and \(\hat{\sigma}_{o}^{2}\) are their estimated variances. As we assume that \(\hat{\tau}_{e}\) and \(\hat{\tau}_{o}\) are independent, the covariance term in the original formula of \(\hat{\lambda}\) drops out. We use the ATE estimates and their variance inferred from the fully parametric model as \(\hat{\tau}_{e}\) and \(\hat{\tau}_{o}\) so that \(\hat{\tau}_{\hat{\lambda}}\) is as comparable to our approach as possible.

### Setup of the simulation in 5.4

Very similar to the setup of the simulation in 5.2, we model the distributions as follows (for simplicity, we use the shorthand notation \(\mathbf{C}_{1:p}\) to denote the set \(\{C_{1},\cdots,C_{p}\}\)):

\[\mathbf{C}_{1:3} \sim N(0,1) \qquad\mathbf{C}_{4:5} \sim\text{Bernoulli}(0.5)\]
\[Z_{1}\mid C_{1} \sim N(\mu_{z_{1}},\,1) \qquad Z_{2}\mid C_{4} \sim N(\mu_{z_{2}},1)\]
\[T\mid C_{1},C_{5},Z_{1} \sim\text{Bernoulli}(\mu_{t}) \qquad Y(t)\mid\mathbf{C}_{1:5} \sim N(\mu_{y},\,1),\]

where

\[\mu_{z_{1}} =C_{1}\]
\[\mu_{z_{2}} =C_{4}\]
\[\text{logit}\,\mu_{t} =0.5+0.1\,C_{1}+0.6\,Z_{1}+0.4\,C_{5}\,Z_{1}+0.1\,C_{1}Z_{1}+\psi U\]
\[\mu_{y} =[1,C_{1},C_{2},C_{3},C_{4},C_{5}]\left[\begin{array}{cc}0.1&0.7\\ 0.2&0.8\\ 0.3&0.9\\ 0.4&1.0\\ 0.5&1.1\\ 0.6&1.2\end{array}\right]\left[\begin{array}{c}1\\ T\end{array}\right]+\psi U\]

Additionally, let the dependence structure between \(Y\) and \(\mathbf{Z}_{1:2}\) given \(\{T=t,\mathbf{C}_{1:5}=\mathbf{c}_{1:5}\}\), \(\phi^{*}_{Y\mathbf{Z}|TC}\), be a conditionally trivariate Gaussian copula, with correlation matrix

\[\mathbf{\Sigma}=\begin{pmatrix}1&\rho_{Z_{1}Z_{2}}&\rho_{Z_{1}Y}\\ \rho_{Z_{1}Z_{2}}&1&\rho_{Z_{2}Y}\\ \rho_{Z_{1}Y}&\rho_{Z_{2}Y}&1\end{pmatrix},\]

where \(\rho_{Z_{1}Z_{2}}=\rho_{Z_{1}Y}=\rho_{Z_{2}Y}=2\,\text{expit}(1+t)-1\). Specifically, the correlations are \(0.76\) when \(T=1\), and \(0.46\) when \(T=0\). We use the same parameterization for the experimental data \(\mathcal{D}_{e}\) except that we assume that treatment is randomly assigned and hence replace the mean of \(T\) with \(\mu_{t}=0.5\). We set the sample sizes to 2,500 and 250 for the observational and experimental data, respectively. The results in Section 5.4 are averaged across 500 sets of synthetic datasets.
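For completeness, here is a minimal numpy sketch of the precision-weighted combination described in Appendix C.1.4 above; the function and variable names are ours and the numbers are illustrative assumptions, not results.

```python
import numpy as np

def combine_ate(tau_e, tau_o, var_e, var_o):
    """Convex combination of an unbiased (experimental) and a possibly biased
    (observational) ATE estimate with the data-driven weight lambda_hat
    from Appendix C.1.4; (tau_e - tau_o)^2 acts as a bias proxy."""
    lam = var_e / ((tau_e - tau_o) ** 2 + var_e + var_o)
    return lam * tau_o + (1.0 - lam) * tau_e, lam

# Illustrative numbers only.
tau_hat, lam = combine_ate(tau_e=0.40, tau_o=0.55, var_e=0.04, var_o=0.005)
print(f"lambda_hat = {lam:.3f}, combined ATE = {tau_hat:.3f}")
```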
2307.03577
CuTS: Customizable Tabular Synthetic Data Generation
Privacy, data quality, and data sharing concerns pose a key limitation for tabular data applications. While generating synthetic data resembling the original distribution addresses some of these issues, most applications would benefit from additional customization on the generated data. However, existing synthetic data approaches are limited to particular constraints, e.g., differential privacy (DP) or fairness. In this work, we introduce CuTS, the first customizable synthetic tabular data generation framework. Customization in CuTS is achieved via declarative statistical and logical expressions, supporting a wide range of requirements (e.g., DP or fairness, among others). To ensure high synthetic data quality in the presence of custom specifications, CuTS is pre-trained on the original dataset and fine-tuned on a differentiable loss automatically derived from the provided specifications using novel relaxations. We evaluate CuTS over four datasets and on numerous custom specifications, outperforming state-of-the-art specialized approaches on several tasks while being more general. In particular, at the same fairness level, we achieve 2.3% higher downstream accuracy than the state-of-the-art in fair synthetic data generation on the Adult dataset.
Mark Vero, Mislav Balunović, Martin Vechev
2023-07-07T13:10:23Z
http://arxiv.org/abs/2307.03577v4
# Programmable Synthetic Tabular Data Generation ###### Abstract Large amounts of tabular data remain underutilized due to privacy, data quality, and data sharing limitations. While training a generative model producing synthetic data resembling the original distribution addresses some of these issues, most applications require additional constraints from the generated data. Existing synthetic data approaches are limited as they typically only handle specific constraints, _e.g.,_ differential privacy (DP) or increased fairness, and lack an accessible interface for declaring general specifications. In this work, we introduce ProgSyn, the first programmable synthetic tabular data generation algorithm that allows for comprehensive customization over the generated data. To ensure high data quality while adhering to custom specifications, ProgSyn pre-trains a generative model on the original dataset and fine-tunes it on a differentiable loss automatically derived from the provided specifications. These can be programmatically declared using statistical and logical expressions, supporting a wide range of requirements (_e.g.,_ DP or fairness, among others). We conduct an extensive experimental evaluation of ProgSyn on a number of constraints, achieving a new state-of-the-art on some, while remaining general. For instance, at the same fairness level we achieve 2.3% higher downstream accuracy than the state-of-the-art in fair synthetic data generation on the Adult dataset. Overall, ProgSyn provides a versatile and accessible framework for generating constrained synthetic tabular data, allowing for specifications that generalize beyond the capabilities of prior work. ## 1 Introduction The availability of large datasets has been key to the recent rapid progress of machine learning. To enable this progress, datasets often have to be shared between different organizations, and potentially passed on to third parties to train machine learning models. This often presents a roadblock as data owners are responsible for ensuring they do not perpetuate biases present in the data and do not violate user privacy by sharing their personal records. Tabular data is especially delicate from this perspective, as it is abundant in high-stakes applications, such as finance and healthcare [1]. To facilitate progress and enable data sharing by addressing these issues, synthetic data has emerged as a promising approach of increasing interest. Synthetic dataThe goal of synthetic data generation is to produce a new dataset that statistically resembles the original one, yet overcomes the above issues. Driven by recent regulations requiring bias mitigation (_e.g.,_ GDPR [2] Art. 5a), data accuracy (GDPR Art. 5d), and privacy (GDPR Art. 5c and 5e), there has been increased interest in this field. Prior work has already addressed some of the data sharing concerns: differentially private synthetic data [3, 4, 5, 6, 7, 8, 6], generating data with reduced bias [9, 10, 11, 12, 13, 14], and combining these two objectives [15]. While these works achieve significant progress, they might still generate data that violates truthfulness (_e.g.,_ person who is 10 years old and has a doctorate) or contains undesired statistical patterns (_e.g.,_ a pharmaceutical company not sharing even synthetic copies of their clinical trial data, as the distribution of patient conditions reveal their development focus). 
Therefore, a key challenge is to enable data owners to generate high utility data fulfilling diverse constraints as required by their applications.

**ProgSyn: Programmable Synthetic Tabular Data Generation.** In this work, we introduce the first synthetic tabular data generation method allowing for general programmable specifications, expressed in an intuitive language. Figure 1 shows an overview of ProgSyn, featuring example constraints defined by the data owner, where no person younger than 25 with a doctorate should be generated and where the data should be unbiased w.r.t. sex. ProgSyn supports a wide range of constraints. First, it allows for differential privacy constraints protecting individuals included in the original dataset. Through logical and implication constraints it can specify relationships that each data point has to satisfy (as in Figure 1). It supports statistical constraints allowing users to manipulate statistical properties of the synthetic data. Finally, it provides constraints for enforcing properties on the classifiers trained on the synthetic data (_e.g.,_ low bias). ProgSyn thus generalizes prior works supporting only specific constraints. Our key insight is that one can preserve high utility by pre-training a generative model (\(g_{\theta}\) in Figure 1) on the original dataset and then fine-tuning it to satisfy the constraints. For fine-tuning, we automatically convert the non-differentiable constraints into a relaxed differentiable loss which is then minimized together with the pre-training objective, encouraging the model to generate constrained data.

**Example: Statistical Constraints.** We demonstrate on a practical example how statistical ProgSyn constraints can be used to allow an organization to share their synthesized data, without compromising their proprietary information. Recall that a drug company may need to obfuscate the distribution of patient conditions before sharing their data, as they want to avoid revealing the focus of their research. We instantiate this example on the Health Heritage dataset, containing patient data. Using a constraint to increase the feature's entropy, ProgSyn produces high quality synthetic data only losing \(1\%\) downstream accuracy w.r.t. the original data. At the same time, as shown in Figure 2, it obfuscates the details of patient conditions, making it difficult to accurately determine the exact prevalence of the most common conditions.

Figure 1: An overview of ProgSyn. The data owner writes a program that lists constraints for the synthetic data. For example, they might want to make sure that the model does not generate people younger than 25 with a Doctorate degree. Additionally, they might require that the synthetic data is differentially private and unbiased. To achieve this, ProgSyn pre-trains a differentially private generative model, and then fine-tunes it to satisfy the given constraints. Finally, the generative model can be used to sample a synthetic dataset satisfying the constraints.

Figure 2: Increasing feature entropy using statistical constraints in ProgSyn.

In our experimental evaluation we demonstrate that ProgSyn produces synthetic data respecting a number of constraints unattainable by prior work, achieving high utility and constraint satisfaction rate. Furthermore, on constraints supported by prior work we either outperform them, or at least match their performance.
For instance, we improve the state-of-the-art in fair synthetic data generation on the Adult [16] dataset by achieving a \(2.3\%\) higher downstream accuracy and a \(2\times\) lower demographic parity distance of \(0.01\). Additionally, we demonstrate that ProgSyn is able to stack several diverse constraints at the same time, while maintaining high data quality. Main contributionsOur main contributions are: 1. The first language for programmable synthetic tabular data generation, capturing a wide range of constraints on the data. 2. A method for generating synthetic data with programmable constraints based on fine-tuning a generative model via a differentiable loss derived from the constraints. 3. An implementation of the language and training method into a system called ProgSyn, together with an extensive evaluation demonstrating its strong competitiveness and versatility. ## 2 Background Here, we briefly present the notation and some fundamental concepts that we will rely on later. Tabular DataTabular data is one of the most common data formats, extensively used in high-stakes contexts, _e.g.,_ in healthcare, finance, and social sciences [1]. The data is organized in columns, storing either _continuous_ or _discrete_ features. For the rest of this work we will assume that the data at hand _only_ contains discrete columns, _i.e.,_ we discretize any continuous columns before proceeding. Let the number of columns be \(K\), then we denote the domain of each resulting discrete feature as \(\mathcal{D}_{i}\) for \(i\in[K]\). We one-hot encode the columns, turning each \(d_{i}\in\mathcal{D}_{i}\) into a binary vector of length \(|\mathcal{D}_{i}|\), with a single non-zero entry marking the position of the encoded discrete category. The resulting set of one-hot encoded rows is denoted as \(\mathcal{X}\), where each encoded data point \(x\in\mathcal{X}\) is of length \(q\coloneqq\sum_{i=1}^{K}|\mathcal{D}_{i}|\) and contains exactly \(K\) non-zero entries. Further, a full table of \(N\) rows is denoted as \(X\in\mathcal{X}^{N}\), with \(X_{i}\) denoting the \(i\)-th data point. In the rest of this text, we will also refer to \(X\) as a sample of size \(N\), as well as simply a dataset, and will use row and data point interchangeably to refer to a single \(x\in X\). Also, unless stated otherwise, we will denote a synthetic sample as \(\hat{X}\). Finally, let \(\mathcal{S}\coloneqq\{s_{1},\dots,\,s_{m}\}\subseteq[K]\coloneqq\{1,\dots,\,K\}\), then we write \(X[\mathcal{D}_{s_{1}},\dots,\mathcal{D}_{s_{m}}]\) meaning only the (column-space) subset of \(X\) that corresponds to the original columns \(\mathcal{D}_{s_{1}},\dots,\mathcal{D}_{s_{m}}\). MarginalsLet \(\mathcal{S}\coloneqq\{\mathcal{D}_{s_{i}}\}_{i=1}^{m}\) be a subset of columns, then the marginal over \(\mathcal{S}\) on a sample \(X\) counts the occurrences of each feature combination from the product space \(\bigtimes_{i=1}^{m}\mathcal{D}_{s_{i}}\) over all rows in \(X\). We denote the unnormalized marginal as \(\mu(\mathcal{S},\,X)\), and denote the normalized marginal as \(\bar{\mu}(\mathcal{S},X)\coloneqq\frac{1}{N}\,\mu(\mathcal{S},\,X)\). A \(k\)-way marginal refers to a marginal over \(k\) columns. Marginals are an important statistic in tabular data, as they effectively capture the approximate distributional characteristics of the features in the sample, facilitating the calculation of a wide range of statistics, _e.g.,_ correlations and conditional relationships between the involved features. 
Additionally, due to the one-hot encoding in \(X\), we can differentiably calculate marginals using the Kronecker product, _e.g.,_\(:\mu(\mathcal{S},\,X)\coloneqq\sum_{k=1}^{N}X_{k}[\mathcal{D}_{s_{1}}]\otimes \dots\otimes X_{k}[\mathcal{D}_{s_{m}}]\). Differential PrivacyThe gold standard for providing privacy guarantees for data dependent algorithms is differential privacy (DP) [17], where the privacy of individual contained in a dataset is ensured by limiting the impact a single data point can have on the outcome of the algorithm. This is usually achieved by injecting carefully engineered noise in the process, which in turn negatively affects the accuracy of the procedure. The privacy level is quantified by \(\epsilon\), with lower levels of \(\epsilon\) corresponding to higher privacy, and as such, higher noise and lower accuracy. Fair ClassificationAs machine learning systems may propagate biases from their training data [18; 19], there is an increased interest to mitigate this effect [20; 21; 22]. Demographic parity is a fairness criterion that demands that the protected group membership of any individual should have _no_ effect on the expected outcome of the classification. We measure violation of this criterion using the demographic parity distance: \(\pi_{\mathcal{D}_{s}}(f,\mathcal{X})\coloneqq\max_{d_{i},\,d_{j}\in\mathcal{D}_{s} \times\mathcal{D}_{s}}|E_{x\sim\mathcal{X}}[f(x)|\mathcal{D}_{s}=d_{i}]-E_{x\sim \mathcal{X}}[f(x)|\mathcal{D}_{s}=d_{j}]|\), where \(f\) is a classifier and \(\mathcal{D}_{s}\) is the protected feature. Synthetic DataThe goal of synthetic data generation is to train a generative model \(g_{\theta}\) on the real data \(X\) to produce synthetic samples \(\hat{X}\) that are statistically as close as possible to \(X\). Ultimately, \(\hat{X}\) should have high enough quality to replace \(X\) in data analysis and machine learning tasks. ## 3 Related Work Unconstrained Synthetic Tabular DataUnconstrained synthetic tabular data generation exhibits a long line of work, where the most prominent approaches are collected in the Synthetic Data Vault (SDV) [23], including the deep learning based methods of TVAE and CTGAN [24]. Recent works [25; 26; 27] have demonstrated vast improvements over the models in SDV. However, none of these works support settings, where privacy, fairness, or other custom constraints have to be taken into consideration. Our work is the first general approach in this direction. Synthetic Data with Differential Privacy GuaranteesAlthough synthetic data was considered to be private, it is unfortunately not satisfactory in this regard [28]. Therefore, differentially private (DP) synthetic data generation algorithms are of increasing interest. A recent survey [29] established that approaches relying on generative adversarial networks (GAN) (_e.g._, PATE-GAN [4] and DP-CGAN [5]) are outperformed by marginal-based graphical models operating on a fixed set of measurements (_e.g._, PrivBayes[3], and MST [6]). Recently, algorithms that iteratively apply new measurements from a set of target marginals have shown strong improvements (_e.g.,_ RAP [7], GEM [8], and AIM [30]). The generative model underlying ProgSyn extends on the learner of GEM combined with the iterative framework of AIM, as detailed in Section 4.1. Note that these models do not support the variety of constraints facilitated by ProgSyn. Fair Synthetic Tabular DataReducing bias of synthetic data is an important concern. 
Most works in this area make use of GANs with bias-penalized loss functions to encourage fairness [9; 10; 12; 13], or pre-processing the dataset to remove bias before training a generative model [14]. In a different approach, DECAF [11] trains a causally-aware GAN, and removes undesired causal relationships at generation time to reduce bias. Finally, PreFair [15] extends the graphical model based DP algorithm of MST [30] by modifying the underlying graph such that the modeled distribution is fair. Synthetic Tabular Data with Logical ConstraintsAlthough it is important for the synthetic data to allow enforcing logical relationships in the data, not many works have addressed this issue. AIM [6] allows imposing a restricted set of constraints by manually zeroing out certain entries in the represented marginals. As we find in Section 5, this approach can severely impact the quality of the generated data. Kamino [31] is a DP synthetic tabular data generation method focused on facilitating logical relationships between pairs of generated data points. As ProgSyn operates under the usual assumption of i.i.d. data, the constraints supported by Kamino do not extend to our setting. Constraints in Continuous ModelsThere has been a long line of work focusing on encoding domain-knowledge or other information in the form of _logical constraints_ that would aid the machine learning model in its performance. Some prominent works achieve this by modifying the loss function or its computation at training time [32; 33; 34; 35; 36; 37], or by modifying the model and/or its inference procedure [38; 39; 40; 41]. The main distinguishing factors with our work are: (i) these approaches improve the trained models by injecting additional knowledge, while ProgSyn's aim is freely customizable synthetic data generation; and (ii) most such approaches only support constraints that are limited to first order logic on individual data points, while ProgSyn also supports statistical and downstream constraints on the generated dataset. Further, note that ProgSyn's contribution is _not_ another general language for differentiable logic, but one specific to synthetic data generation. ## 4 ProgSyn: Programmable Synthetic Data Generator To fully utilize tabular data, it is often necessary to create a synthetic version, satisfying additional specifications, such as protecting individual privacy using differential privacy. The synthetic data should also support logical constraints to preserve or inject structure, allow direct influence on statistical relationships, and ensure that resulting classifiers possess desirable properties, _e.g.,_ low bias in addition to high accuracy. While prior work considered small disjunct subsets of these constraints, ProgSyn is the first method which allows for joint programmable specification of _all_ of the above constraints for synthetic data generation. ProgSyn converts the user provided specifications, written in the ProgSyn language, to a differentiable loss and uses it to train the generative model. We now describe the underlying generative model, training procedure, and the language for specifying constraints. ### The ProgSyn Framework The generative model, building the backbone of ProgSyn, is based on GEM [8]. That is, we make use of the generator of a GAN to generate datasets from random noise, which is then trained by comparing marginals calculated on this generated dataset to the marginals of the original dataset. 
Formally, let \(g_{\theta}\) denote our generative model with learnable parameters \(\theta\), then \(g_{\theta}:\mathbb{R}^{p}\rightarrow\mathcal{X}\) is a mapping to the one-hot representation-space of the original dataset. We sample the random noise \(z\) from a multivariate standard normal distribution and use it as an input to \(g_{\theta}\), _i.e.,_ \(z\sim\mathcal{N}(0,\,\mathbb{I}_{p\times p})\) (shorthand: \(\mathcal{N}_{p}\)). As such, we can sample from \(g_{\theta}\) by first sampling an input noise and feeding it through the network to obtain the corresponding dataset sample. The goal is for the distribution induced by \(g_{\theta}\) to match that of the original data, _i.e.,_ to find a \(\theta\) such that \(P_{g_{\theta}}\approx P_{x}\). Recall that in Section 2 we assumed that all continuous features are discretized, and the resulting dataset contains only discrete features. We denote \(\mathcal{X}\) as the one-hot encoded space of the dataset. To ensure that the output of \(g_{\theta}\) is in the correct representation we use a per-feature straight-through Gumbel-softmax estimator [42] as the final layer, which differentiably produces one-hot representations for each output feature. This is achieved by rounding the soft outputs of the Gumbel-softmax for each feature to a one-hot vector, while using the soft probabilities in the backward pass.

**Non-Private Training.** For training \(g_{\theta}\) in the non-private setting, we first measure a set of marginals on the original dataset \(X\), denoted as \(M(X)\). To obtain the training loss \(\mathcal{L}_{M}\), we calculate the total variation (TV) distance between the true marginals \(M(X)\) and the marginals measured on a generated sample \(M(g_{\theta}(z))\) of size \(B\), _i.e.,_ \(\mathcal{L}_{M}(g_{\theta}(z),\,X)\coloneqq\frac{1}{2}\left\lVert M(X)-M(g_{\theta}(z))\right\rVert_{1}\), where \(z\sim\mathcal{N}_{p}^{B}\). We then use iterative gradient-based optimization to minimize \(\mathcal{L}_{M}\), resampling \(z\) at each iteration.

**Differentially Private Training.** For DP training, we adapt the DP training algorithm presented in AIM [6]. Concretely, we exchange their graphical model with our \(g_{\theta}\) as the base generative model of the procedure. Additionally, we modify their budget adaptation step; in a similar vein to adaptive ODE solvers, we allow both for increasing and decreasing the per-iteration DP budget, depending on the improvements observed at the previous step. For more details, we refer the reader to Appendix C.

**Training ProgSyn with Constraints.** Depending on whether a DP constraint is present, we first pre-train ProgSyn either by the non-private or the DP training method described above _without_ any other constraints. Once pre-training is done, we fine-tune ProgSyn minimizing the pre-training objective \(\mathcal{L}_{M}\) augmented by additional loss terms \(\mathcal{L}_{\texttt{constr.}}^{(i)}\) derived from the constraints. For \(n\) constraints we can write the fine-tuning objective as: \[\mathcal{L}_{\texttt{fine}}(g_{\theta}(z),\,X,\,X_{r})\coloneqq\mathcal{L}_{M}(g_{\theta}(z),\,X)+\sum_{i=1}^{n}\lambda_{i}\,\mathcal{L}_{\texttt{constr.}}^{(i)}(g_{\theta}(z),\,X_{r}), \tag{4.1}\] where \(\{\lambda_{i}\}_{i=1}^{n}\) are real valued parameters weighing the constraints' impact on the objective, \(X\) is the original dataset, and \(X_{r}\) is a reference dataset, which is either the original dataset itself, or a sample generated at the end of pre-training, in case the training is DP.
The goal is to find a \(\theta^{*}\) that minimizes the fine-tuning loss \(\mathcal{L}_{\texttt{fine}}(g_{\theta}(z),\,X,\,X_{r})\). We discuss the choice of \(\{\lambda_{i}\}_{i=1}^{n}\) in Appendix B.1.

### ProgSyn Constraint Types: Privacy, Logical, Statistical, and Downstream

With the help of the ProgSyn program on the Adult dataset [16] shown in Figure 3, and using it as a running example, we introduce the technical details of each supported constraint type below.

**The ProgSyn Language.** Each program begins with a command fixing the source dataset we wish to make a synthetic copy of and ends in an END; command. In between, we may list all constraints we wish to impose over ProgSyn. If no constraint is given, \(g_{\theta}\) is simply trained to maximally match the original dataset in a non-private manner. Each command consists of (i) an action description, defining the way the constraint has to be handled by the optimizer (maximize, minimize, enforce, or ensure); (ii) a constraint type description; (iii) an optional PARAM specification, where one may set the corresponding constraint weight \(\lambda\); and (iv) an expression directly describing the constraint in terms of the data attributes.

**Differential Privacy Constraint.** ProgSyn can protect the privacy of individuals in \(X\) with DP by using the constraint shown in line 2 of Figure 3. This ensures that the pre-training of \(g_{\theta}\) is done by the iterative DP method described in Section 4.1, and that fine-tuning does not access the original dataset \(X\).

**Logical Row Constraints.** To avoid generating unrealistic data points or when aiming to incorporate domain knowledge, it is necessary to support logical constraints over individual rows. For instance, consider the constraint (denoted as \(\phi\)) in line 3 of Figure 3, requiring that each generated individual's age is between \(35\) and \(55\). We refer to such first-order logical expressions consisting of feature-constant comparisons chained by logical AND and OR operations that have to hold for _each row_ of the synthetic samples as _row constraints_. Returning to our example, \(\phi\) consists of two comparisons \(t_{1}\coloneqq\texttt{age}\) > 35 and \(t_{2}\coloneqq\texttt{age}\) < 55. To enforce \(\phi\) over \(g_{\theta}\), we first negate the expression \(\phi\) to obtain \(\neg\phi=\texttt{age}\) <= 35 OR age >= 55, and at each iteration count the rows where the negated expression holds, penalizing the fine-tuning loss with this count. Concretely, we differentiably compute a binary mask \(b_{\neg\phi}\) marking the rows in a generated synthetic sample \(\hat{X}\) of length \(N\) that satisfy \(\neg\phi\), the sum of which is exactly the number of rows violating \(\phi\). We do this by making use of the differentiable one-hot encoding in \(\hat{X}\). First, we translate each of the negated comparison terms \(\neg t_{1}\) and \(\neg t_{2}\) into binary masks \(m_{\neg t_{1}},\,m_{\neg t_{2}}\in\{0,\,1\}^{q}\) over the columns by setting each coordinate that corresponds to a valid assignment in \(t_{i}\) to \(1\) and keeping the rest of the entries \(0\). For instance, if the age feature is discretized as [18-35, 36-45, 46-54, 55-80], then \(\neg t_{1}[\texttt{age}]=[1,0,0,0]\) and \(\neg t_{2}[\texttt{age}]=[0,0,0,1]\), with the rest of the \(q-4\) dimensions padded with zeros.
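To make the mask construction concrete, here is a small numpy sketch for the age example above; the bin layout, row values, and variable names are ours and purely illustrative. The differentiable composition of these per-comparison indicators into a single row mask is described next.

```python
import numpy as np

# Discretized age bins, as in the example: [18-35, 36-45, 46-54, 55-80].
age_bins = ["18-35", "36-45", "46-54", "55-80"]

# Column masks over the one-hot age block for the negated comparisons.
m_not_t1 = np.array([1, 0, 0, 0])   # not(age > 35)  <=>  age <= 35
m_not_t2 = np.array([0, 0, 0, 1])   # not(age < 55)  <=>  age >= 55

# A tiny one-hot sample X_hat with three rows (ages in bins 18-35, 46-54, 55-80).
X_hat = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])

# Per-row 0/1 indicators of each negated comparison: X_hat @ m.
ind_not_t1 = X_hat @ m_not_t1       # -> [1, 0, 0]
ind_not_t2 = X_hat @ m_not_t2       # -> [0, 0, 1]
print(ind_not_t1, ind_not_t2)

# In the full model, additional feature blocks are handled by zero-padding the
# masks to length q, so the same matrix-vector product still applies.
```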
Finally, we can compute the resulting binary mask \(b_{\neg\phi}\) over the rows of \(\hat{X}\) using the following logical operators based on basic vector-matrix operations: AND: \(\hat{X}m_{t_{1}}^{T}\odot\hat{X}m_{t_{2}}^{T}\), and OR: \(\hat{X}m_{t_{1}}^{T}+\hat{X}m_{t_{2}}^{T}-\hat{X}m_{t_{1}}^{T}\odot\hat{X}m_{t _{2}}^{T}\). In case of composite logical expressions we apply these primitives to each pair of comparisons recursively. Notice that as we only make use of matrix-vector operations between \(\hat{X}\) and constants independent of the data, the calculation is fully differentiable with respect to the generator. Altogether, we can add the following loss term to the fine-tuning loss of \(g_{\theta}\) to enforce \(\phi\): \(\mathcal{L}_{\phi}(g_{\theta}(z))\coloneqq\sum_{i=1}^{N}b_{\neg\phi}(g_{\theta}(z))_{i}\), using the notation \(b_{\neg\phi}(g_{\theta}(z))\) for the binary mask calculated over the sample obtained from \(g_{\theta}\). **Logical Implications.** As shown in line 4 of Figure 3, on the Adult dataset it is sensible to require that individuals that are divorced or have never married are not designated as husband or wife. In general, it is often necessary that certain implications hold in every row, either to preserve (as in the example) or to inject logical relationships in the data. We enforce implications \(\phi\implies\psi\) over \(g_{\theta}\) by penalizing every generated row that violates the implication, _i.e.,_ every row that satisfies \(\zeta\coloneqq\phi\wedge\neg\psi\). Notice that \(\zeta\) can be understood as a row constraint expression, allowing for the techniques described in the paragraph above to calculate \(b_{\zeta}(g_{\theta}(z))\) (note that we do not negate \(\zeta\)). Therefore, the resulting loss term to be added to the fine-tuning objective is: \(\mathcal{L}_{\phi\Rightarrow\psi}(g_{\theta}(z))\coloneqq\sum_{i=1}^{N}b_{\zeta}(g_{\theta}(z))_{i}=\sum_{i=1}^{N}b_{\phi}(g_{\theta}(z))_{i}\odot b_{\neg\psi}(g_{\theta}(z))_{i}\). Figure 3: ProgSyn program on the Adult dataset containing example commands for each supported constraint type. **Statistical Constraints.** One may want to smoothen out undesired statistical differences between certain groups to limit bias, _e.g.,_ encourage that the mean ages measured over males and females agree (line 5 of Figure 3); or obfuscate sensitive statistical information, such as hiding the most prevalent disease in their dataset (recall the example in Section 1). To facilitate such statistical constraints we support the calculation of conditional statistical operations (expectation, variance, standard deviation, and entropy) composed into arithmetic (\(+\), \(-\), \(*\), \(/\)) and logical (\(\wedge\), \(\vee\), \(<\), \(\leq\), \(>\), \(\geq\), \(=\), \(\neq\)) expressions. The calculation of the corresponding loss term consists of two steps: (i) differentiably calculating the value of each involved statistical expression, and (ii) as afterwards we are left with logical and arithmetical terms of reals, calculating the resulting loss term using t-norms and DL2 primitives [33]. For step (ii) we rely on prior work [33]; therefore, we only elaborate on the more involved step (i) below. Denote a conditional statistical operator as \(OP[f(\mathcal{S})|\phi]\), where \(f\) is a differentiable mathematical function over a subset of features \(\mathcal{S}\), and \(\phi\) is a row constraint-like condition. 
To differentiably compute the result, we first select all rows of the sample \(\hat{X}\) where \(\phi\) applies, using the technique for row constraints described in an earlier paragraph. Then, considering only this subset of the sample \(\hat{X}_{\phi}\subseteq\hat{X}\), we compute the normalized joint marginal of all features involved in \(\mathcal{S}\), \(\bar{\mu}(\mathcal{S},\,\hat{X}_{\phi})\), describing a valid probability distribution over \(f(\mathcal{S})\). As such, we can finally compute the value of the given statistical operation following its mathematical definition. **Downstream Constraints.** As the synthetic data is expected to be deployed to train machine learning models, we need to support specifications and constraints involving them. For instance, one may require that models trained on the synthetic data exhibit lower bias, or that no models can be trained on the data to predict a certain protected column (lines 6 and 7 of Figure 3). We achieve this by first training a differentiable surrogate classifier \(h_{\psi}\) on the defined prediction task at each iteration of fine-tuning \(g_{\theta}\). Once this surrogate model is trained, we "test" it on the reference dataset \(X_{r}\), and compute the statistic of interest \(SI\) (_e.g.,_ demographic parity distance \(\pi_{\mathcal{D}_{s}}\) for bias w.r.t. the protected feature \(\mathcal{D}_{s}\), or the cross entropy \(\mathcal{L}_{CE}\) for predictive objectives). We then update \(g_{\theta}\) to influence \(SI\) in our desired direction. Denote the synthetic sample generated at the current iteration as \(\hat{X}\), the features available to the surrogate model for prediction as \(\hat{X}[\texttt{features}]\), and the target features as \(\hat{X}[\texttt{target}]\). Then the loss term added to the fine-tuning objective can be defined as: \[\mathcal{L}_{\texttt{DOWNSTREAM}}(g_{\theta}(z),\,X_{r})\coloneqq s\cdot SI( h_{\psi^{*}}(X_{r}[\texttt{features}]),\,X_{r}[\texttt{target}]), \tag{4.2}\] with \[\psi^{*}\coloneqq\arg\min_{\psi}\,\,\mathcal{L}_{CE}(h_{\psi}(\hat{X}[\texttt{ features}]),\,\hat{X}[\texttt{target}]), \tag{4.3}\] where \(\mathcal{L}_{CE}\) denotes the cross-entropy loss, and \(s\in\{-1,\,1\}\) depending on whether we wish to maximize or minimize the computed statistic. Note that \(\psi^{*}\) depends (differentiably) on \(\theta\) through \(\hat{X}\), and as such Equation (4.2) exhibits a differentiable dependency on \(\theta\). ## 5 Experimental Evaluation In this section, we present our results demonstrating that ProgSyn can produce high-utility synthetic data which respects a wide range of constraints. **Experimental Setup.** For realizing \(g_{\theta}\) we use a fully connected neural network with residual connections. Non-private models are trained on all \(3\)-way marginals that involve the target label of the original prediction task of the dataset. Private models are trained by letting the algorithm select from all \(1\), \(2\), and \(3\)-way marginals. The constraint parameters are selected with the help of a hold-out validation dataset. Wherever possible, we report the mean and standard deviation of a given metric, measured over 5 retrainings and 5 samples from each model. For further details on the experimental setup please see Appendix A. For most of our evaluations we use the popular Adult dataset [16] containing US-census data, where the task is to predict whether the income of an individual is above \(\$50K\). 
To demonstrate the generalizability of ProgSyn, we also test it on the Health Heritage Prize dataset from Kaggle [43]. For evaluating the quality of the produced synthetic data w.r.t. the original dataset, we measure the test accuracy of an XGBoost [44] model trained on the synthetic data and tested on the real test data. We resort to this evaluation metric to keep the presentation compact, while providing a comprehensive measure of the usefulness of the generated data. As XGBoost is state-of-the-art on tabular classification problems, it allows us to capture fine-grained deviations in data quality [26]. We compare only to prior works with an available open source implementation. **Downstream Constraints: Eliminating Bias and Predictability.** We evaluate ProgSyn's performance on the task of generating a synthetic copy of the Adult dataset that is fair w.r.t. the sex feature, both in the non-private and private (DP) setting, using the constraint shown in line 6 of Figure 3. We compare ProgSyn to two recent non-private (DECAF [11] and TabFairGAN [13]) and one private (Prefair [15]) fair synthetic data generation methods. The statistic of interest is a low demographic parity distance on the sex feature of an XGBoost trained on the synthetic dataset and tested on the real testing dataset. In Table 1, we collect both our results in the non-private (top) and in the private (bottom, \(\epsilon=1\)) settings. Notice that ProgSyn attains the highest accuracy and lowest demographic parity distance in both settings, achieving a new state-of-the-art in both private and non-private fair synthetic data generation. Most notably, while the other methods were specifically developed for producing fair synthetic data, ProgSyn is general, with this being just one of the many specifications it supports. Further, it can often be useful to data owners to ensure that malicious actors cannot learn to predict certain personal features from the released synthetic data. Using the DOWNSTREAM constraint shown in line 7 of Figure 3, we synthesize Adult such that it cannot be used to train a classifier predicting the sex feature from other columns. Imposing this constraint, we reduce the balanced accuracy of an XGBoost on the sex feature from \(83.3\%\) to \(50.2\%\), _i.e.,_ to random guessing, while retaining \(84.4\%\) accuracy on the original task. **Statistical Properties.** Recall that ProgSyn allows direct manipulations of statistical properties of the generated datasets, using STATISTICAL constraints. We evaluate its effectiveness on this task with 3 statistical constraints on Adult: S1: set the average age across the dataset to \(30\) instead of the original \(\approx 37\); S2: set the average age of males and females equal (line 5 in Figure 3); and S3: set the correlation of sex and salary to zero, _i.e.,_ \(\frac{\mathbb{E}[\texttt{sex}\cdot\texttt{salary}]-\mathbb{E}[\texttt{sex}]\mathbb{E}[\texttt{salary}]}{\sqrt{\texttt{Var}(\texttt{sex})\,\texttt{Var}(\texttt{salary})}}=0\), which is easily expressible in ProgSyn. On S1, we achieve a mean age of \(30.2\) retaining \(84.6\%\) accuracy, while on S2 ProgSyn reduces the average age gap from \(2.3\) years to \(<0.1\) while maintaining \(85.1\%\) accuracy. Most interestingly, on S3, we reduce the correlation between sex and salary from \(-0.2\) to just \(-0.01\), and retain an impressive \(84.9\%\) accuracy. We provide more details in Appendix B. 
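To illustrate how a statistical target such as S3 can be evaluated differentiably from the one-hot sample, here is a small PyTorch sketch of the sex–salary correlation; the column indices and the reduction of each binary feature to a single one-hot column are hypothetical stand-ins, not ProgSyn's actual implementation.

```python
import torch

def binary_feature(X_hat, col):
    """Read a two-category feature as a {0,1} variable via one of its one-hot columns."""
    return X_hat[:, col]

def sex_salary_correlation(X_hat, sex_col, salary_col, eps=1e-8):
    """Differentiable Pearson correlation between two binary features of the sample."""
    sex = binary_feature(X_hat, sex_col)
    sal = binary_feature(X_hat, salary_col)
    cov = (sex * sal).mean() - sex.mean() * sal.mean()
    std = sex.var(unbiased=False).sqrt() * sal.var(unbiased=False).sqrt()
    return cov / (std + eps)

# S3-style penalty added to the fine-tuning loss: push the correlation towards zero
def s3_loss(X_hat, sex_col=10, salary_col=42):   # made-up column indices
    return sex_salary_correlation(X_hat, sex_col, salary_col).abs()
```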
Logical ConstraintsWe evaluate the performance of ProgSyn in enforcing logical row and implication constraints on the Adult dataset, using three implications (I1, I2, I3) and two row constraints (RC1, RC2). While RC2 and I2 correspond to lines 3 and 4 in Figure 3, we list the rest of the constraints in Appendix B. Note that the binary mask obtained for each constraint, as explained in Section 4.2, can easily be used for rejection sampling (RS) from ProgSyn. Therefore, in our comparison we distinguish between ProgSyn with just RS and ProgSyn fine-tuned on the given constraint and rejection sampled (FT + RS). In the private setting, we compare our performance also to AIM [6], where we encode the constraints in the graphical model as structural zeros (SZ). We summarize our results in Table 2, where in first two rows we show the constraint satisfaction rates (CSR) on the original dataset, and on the evaluated synthetic datasets (_i.e.,_ we compare the methods at \(100\%\) CSR). Observe that while other methods also yield competitive results on constraints that are easy to enforce, _i.e.,_ have high base satisfaction rate, as the constraint difficulty increases, fine-tuning becomes necessary, yielding superior results. Further, we tested ProgSyn in case _all_ 5 constraints are applied at once, resulting in \(84.0\%\) accuracy, demonstrating a strong performance in composability. These experiments show that ProgSyn is strongly effective in enforcing logical constraints. \begin{table} \begin{tabular}{l c c} \hline \hline & XGB Acc. [\%] & Dem. Parity sex \\ \hline True Data & \(85.4\pm 0.00\) & \(0.18\pm 0.000\) \\ \hline \(\blacksquare\) DECAF Dem. Parity [11] & \(66.8\pm 6.99\) & \(0.08\pm 0.072\) \\ \(\blacksquare\) TabFairGAN [13] & \(79.8\pm 0.48\) & \(0.02\pm 0.014\) \\ \(\blacksquare\) ProgSyn & \(\mathbf{82.1\pm 0.27}\) & \(\mathbf{0.01\pm 0.006}\) \\ \hline \hline Prefair Greedy [15] (\(\epsilon=1\)) & \(80.2\pm 0.35\) & \(0.04\pm 0.008\) \\ \(\blacksquare\) Prefair Optimal [15] (\(\epsilon=1\)) & \(75.7\pm 1.47\) & \(0.03\pm 0.023\) \\ \(\blacksquare\) ProgSyn (\(\epsilon=1\)) & \(\mathbf{80.9\pm 0.27}\) & \(\mathbf{0.01\pm 0.005}\) \\ \hline \hline \end{tabular} \end{table} Table 1: XGB accuracy [%] vs. demographic parity distance on the sex feature of various fair synthetic data generation algorithms compared to ProgSyn, both in a non-private (top) and private (\(\epsilon=1\)) settings (bottom). Stacking Constraints of Different TypesIn a significantly harder scenario, the user may impose several constraints of different types. To evaluate ProgSyn in this case, we selected at least one constraint from each of the previously examined ones, and combined them in a single ProgSyn program. The constraints we picked for this experiment are: (i) the constraint used to generate fair synthetic data w.r.t. sex; (ii) & (iii) S1 and S2 statistical constraints, setting the average age to thirty, and equating the average ages of males and females; and (iv) & (v) two logical implication constraints (I3 and I2) from Table 2. In Table 3 we show the effect of applying these constraints one-after-another, with each row in the table standing for one additional constraint imposed (marked in green). We can see that after sacrificing the expected \(\approx 3.4\%\) accuracy for achieving low bias, ProgSyn maintains stable accuracy, while satisfying all remaining constraints. This result shows that ProgSyn can effectively incorporate diverse constraints simultaneously. 
Experiments on Health HeritageWe demonstrate the generalizability of ProgSyn by training with constraints on the Health Heritage dataset. We tested an implication constraint (I) with \(18.3\%\) CSR, a row constraint (RC) with \(44.8\%\) CSR in the true data, and the statistical constraint shown in the introduction, increasing the entropy of the PrimaryConditionGroup feature. On the two logical constraints ProgSyn + RS achieves \(80.1\%\) and \(79.9\%\) accuracy, while ProgSyn + FT + RS achieves a \(80.1\%\) and \(80.0\%\) accuracy, reinforcing that fine-tuning helps, and that ProgSyn can effectively produce data respecting logical constraints. For our last experiment, we imposed a constraint over ProgSyn to maximize the entropy of the PrimaryConditionGroup feature, designating the disease of patients. The effect of the constraint on the marginal distribution of the feature is displayed in Figure 2, where we can observe that information about outlying categories is strongly obfuscated. Notably, the synthetic data maintains high quality with a downstream classifier accuracy of \(80.1\%\) compared to \(81.1\%\) of the real data. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Constraint & I1 & I2 & I3 & RC1 & RC2 \\ Real data CSR & \(93.6\%\) & \(100\%\) & \(60.4\%\) & \(32.4\%\) & \(40.5\%\) \\ All synthetic data CSR below & \(100\%\) & \(100\%\) & \(100\%\) & \(100\%\) & \(100\%\) \\ \hline \hline \(\Pi\)ProgSyn + RS & \(\mathbf{85.1\pm 0.12}\) & \(\mathbf{85.1\pm 0.14}\) & \(\mathbf{85.1\pm 0.16}\) & \(82.9\pm 0.83\) & \(84.5\pm 0.14\) \\ \(\Pi\)ProgSyn + FT + RS & \(\mathbf{85.1\pm 0.10}\) & \(85.0\pm 0.18\) & \(\mathbf{85.1\pm 0.15}\) & \(\mathbf{84.7\pm 0.13}\) & \(\mathbf{84.8\pm 0.15}\) \\ \hline \hline \(\blacksquare\) AIM + SZ [6] (\(\epsilon=1\)) & \(\mathbf{84.2\pm 0.23}\) & \(\mathbf{84.1\pm 0.26}\) & \(83.7\pm 0.25\) & \(73.9\pm 0.75\) & \(67.6\pm 1.38\) \\ \(\Pi\)ProgSyn + RS (\(\epsilon=1\)) & \(83.7\pm 0.16\) & \(83.7\pm 0.18\) & \(83.7\pm 0.19\) & \(81.0\pm 0.86\) & \(\mathbf{83.5\pm 0.20}\) \\ \(\blacksquare\)ProgSyn + FT + RS (\(\epsilon=1\)) & \(83.8\pm 0.15\) & \(83.7\pm 0.18\) & \(\mathbf{83.9\pm 0.12}\) & \(\mathbf{83.1\pm 0.18}\) & \(83.4\pm 0.17\) \\ \hline \hline \end{tabular} \end{table} Table 2: XGB accuracy [%] of synthetic data at \(100\%\) constraint satisfaction rate (CSR) on three implication constraints (I1 - I3) and two row constraints, applied separately, both in a **non-private** (top) and **private** (\(\epsilon=1\)) setting (bottom). RS: rejection sampling, FT: fine-tuning, and SZ: structural zeros. ProgSyn + FT + RS is consistent across all settings, maintaining high data quality throughout. \begin{table} \begin{tabular}{l c c c c c} \hline \hline XGB Acc. [\%] & Dem. Parity sex & \(\Delta\) Avg. Age to \(30\) & \(\Delta\) M-F Avg. Age & I3 Sat. [\%] & I2 Sat. 
[\%] \\ \hline \(85.1\pm 0.16\) & \(0.19\pm 0.005\) & \(37.3\pm 0.05\) & \(2.3\pm 0.17\) & \(59.3\pm 0.85\) & \(98.5\pm 0.09\) \\ \hline \(81.7\pm 0.25\) & \(0.02\pm 0.007\) & \(37.3\pm 0.05\) & \(2.1\pm 0.19\) & \(57.6\pm 0.78\) & \(96.7\pm 0.09\) \\ \(82.5\pm 0.76\) & \(0.06\pm 0.053\) & \(30.2\pm 0.04\) & \(1.3\pm 0.14\) & \(57.0\pm 0.84\) & \(96.4\pm 0.25\) \\ \(82.0\pm 0.50\) & \(0.04\pm 0.036\) & \(30.2\pm 0.03\) & \(0.0\pm 0.10\) & \(56.9\pm 1.11\) & \(96.5\pm 0.19\) \\ \(81.3\pm 0.34\) & \(0.01\pm 0.006\) & \(30.2\pm 0.04\) & \(0.0\pm 0.12\) & \(100.0\pm 0.00\) & \(95.5\pm 0.16\) \\ \(81.6\pm 0.29\) & \(0.02\pm 0.011\) & \(30.2\pm 0.04\) & \(0.1\pm 0.12\) & \(100.0\pm 0.00\) & \(100.0\pm 0.00\) \\ \hline \hline \end{tabular} \end{table} Table 3: ProgSyn’s performance on \(5\) different constraints applied together, progressively adding more constraints. In each row the **active constraints** are **highlighted in green**. The constraints are: the constraint used for fair data; statistical constraints S1 and S2, setting the average age to \(30\), and equating the average ages of males and females; and two implications. ProgSyn demonstrates strong composability, satisfying all constraints while maintaining competitive accuracy. Conclusion We presented ProgSyn, a new method for programmable synthetic data generation. The key idea was to pretrain a generative model, and then fine-tune it to satisfy constraints provided by the data owner. The fine-tuning is performed by converting the constraints into a differentiable loss. ProgSyn differs from prior work because it allows data owners to programmatically declare logical and statistical constraints necessary for their own use case. We evaluated ProgSyn on a variety of practical specifications, most of them not supported by prior work, and obtained strong results. Moreover, for the constraints supported by prior work we either match or exceed (e.g. for fairness constraints) their performance. Our work shows that it is possible to generate synthetic data that satisfies a variety of constraints, thus opening doors for its wider adoption.
2304.04990
Future Constraints on Dark Matter with Gravitationally Lensed Fast Radio Bursts Detected by BURSTT
Understanding dark matter is one of the most urgent questions in modern physics. A very interesting candidate is primordial black holes (PBHs; Carr2016). For the mass ranges of $< 10^{-16} M_{\odot}$ and $> 100 M_{\odot}$, PBHs have been ruled out. However, they are still poorly constrained in the mass ranges of $10^{-16} - 100 M_{\odot}$ (Belotsky et al. 2019). Fast radio bursts (FRBs) are millisecond flashes of radio light of unknown origin mostly from outside the Milky Way. Due to their short timescales, gravitationally lensed FRBs, which are yet to be detected, have been proposed as a useful probe for constraining the presence of PBHs in the mass window of $< 100M_{\odot}$ (Mu\~noz et al. 2016). Up to now, the most successful project in finding FRBs has been CHIME. Due to its large field of view (FoV), CHIME is detecting at least 600 FRBs since 2018. However, none of them is confirmed to be gravitationally lensed (Leung et al. 2022). Taiwan plans to build a new telescope, BURSTT dedicated to detecting FRBs. Its survey area will be 25 times greater than CHIME. BURSTT can localize all of these FRBs through very-long-baseline interferometry (VLBI). We estimate the probability to find gravitationally lensed FRBs, based on the scaled redshift distribution from the latest CHIME catalog and the lensing probability function from Mu\~noz et al. (2016). BURSTT-2048 can detect ~ 24 lensed FRBs out of ~ 1,700 FRBs per annum. With BURSTT's ability to detect nanosecond FRBs, we can constrain PBHs to form a part of dark matter down to $10^{-4}M_{\odot}$.
Simon C. -C. Ho, Tetsuya Hashimoto, Tomotsugu Goto, Yu-Wei Lin, Seong Jin Kim, Yuri Uno, Tiger Y. -Y. Hsiao
2023-04-11T05:20:58Z
http://arxiv.org/abs/2304.04990v1
# Future Constraints on Dark Matter with Gravitationally Lensed Fast Radio Bursts Detected by BURSTT ###### Abstract Understanding dark matter is one of the most urgent questions in modern physics. A very interesting candidate is primordial black holes (PBHs; Carr et al., 2016). For the mass ranges of \(<10^{-16}M_{\odot}\) and \(>100M_{\odot}\), PBHs have been ruled out. However, they are still poorly constrained in the mass ranges of \(10^{-16}-100M_{\odot}\)(Belotsky et al., 2019). Fast radio bursts (FRBs) are millisecond flashes of radio light of unknown origin mostly from outside the Milky Way. Due to their short timescales, gravitationally lensed FRBs, which are yet to be detected, have been proposed as a useful probe for constraining the presence of PBHs in the mass window of \(<100M_{\odot}\)(Munoz et al., 2016). Up to now, the most successful project in finding FRBs has been CHIME. Due to its large field of view (FoV), CHIME is detecting at least 600 FRBs since 2018. However, none of them is confirmed to be gravitationally lensed (Leung et al., 2022). Taiwan plans to build a new telescope, BURSTT dedicated to detecting FRBs. Its survey area will be 25 times greater than CHIME. BURSTT can localize all of these FRBs through very-long-baseline interferometry (VLBI). We estimate the probability to find gravitationally lensed FRBs, based on the scaled redshift distribution from the latest CHIME catalog and the lensing probability function from Munoz et al. (2016). BURSTT-2048 can detect \(\sim 24\) lensed FRBs out of \(\sim 1,700\) FRBs per annum. With BURSTT's ability to detect nanosecond FRBs, we can constrain PBHs to form a part of dark matter down to \(10^{-4}M_{\odot}\). transients: fast radio bursts - gravitational lensing: micro - (cosmology:) dark matter 0000-0002-0002]Simon C.-C. Ho 0000-0002-3882-7888]Tetsuya Hashimoto 0000-0002-3181-5885]Tomotsugu Goto 0000-0002-1881-5885]Seong Jin Kim 0000-0002-1881-5885]Yuri Uno 0000-0002-3181-5885]Tiger Y.-Y. Hsiao ## 1 Introduction Understanding the nature of dark matter is one of the most interesting questions in modern physics. Broadly, dark matter searches have focused on the subatomic matter (Weakly Interacting Massive Particles or WIMPs) and objects in the astrophysical mass domain (Massive Compact Halo Objects, or MACHOs). In the search for WIMPs, considerable efforts have been made. WIMPs are hypothetical particles that can be considered a candidate for dark matter. Recent sub-atomic experiments such as the LUX-ZEPLIN (LZ; Aalbers et al., 2022) experiment, DarkSide-50 experiment (The DarkSide-50 Collaboration et al., 2022) and XENONnT experiment (XENON Collaboration et al., 2023) carried by particle physicists have placed constraints on the parameter space of WIMPs (Kimball & Budker, 2023). Nevertheless, no conclusive evidence for WIMPs as a significant contributor to DM has been found to date. A particular method of gravitational microlensing has been used in recent decades to search for dark matter around our Milky Way galaxy in the form of MACHOS (Alcock et al., 2000). For example, microlensing surveys have derived constraints on mass ranges in the range of \(10^{-7}\) to \(10~{}M_{\odot}\)(e.g., Alcock et al., 2000; Tisserand et al., 2007). Moreover, constraints from the cosmic microwave background exclude masses \(>\)100 \(M_{\odot}\)(Ali-Haimoud and Kamionkowski, 2017). 
In MACHO and Experience pour la Recherche d'Objets Sombres (EROS) (Alcock et al., 2000; Tisserand et al., 2007), steady background sources are monitored on timescales of days to weeks while their apparent brightness is monitored over time. Primordial Black Hole (PBH) distributions within the Local Group can thus be constrained by these searches. On the other hand, gravitational wave observations from mergers of compact binaries (Abbott et al., 2016) have recently revived interest in the possibility that dark matter is mainly compact matter (Laha, 2020), such as PBHs for which the most promising mass range for dark matter is 10-100 \(M_{\odot}\). One of the methods for detecting dark compact objects is through gravitational lensing of radio transients, such as fast radio bursts (FRBs) (Munoz et al., 2016; Eichler, 2017; Katz et al., 2020; Wucknitz et al., 2021). FRBs are mysterious millisecond radio signals mostly emitted by extragalactic sources (Cordes and Chatterjee, 2019). Although we are not yet able to explain how these millisecond bursts originate or emit, their cosmological distance and abundance make them a good candidate for time-domain gravitational lensing searches. (Munoz et al., 2016; Eichler, 2017; Katz et al., 2020; Wucknitz et al., 2021). As the FRB propagates around a foreground mass, coherent multiple images of the FRB are generated with small differences in the timing, which can be resolved in the time domain as interference fringes (Kader et al., 2022). Meanwhile, FRBs have been found in other galaxies (e.g., Chatterjee et al., 2017; Ravi et al., 2019; Bannister et al., 2019). Using FRBs as probes, one could constrain cosmological abundances of PBHs rather than local abundances in the future. The current challenge in searching for gravitationally lensed FRBs is that the detection rate of FRBs is still too low. Since the first discovery of FRBs more than a decade ago (Lorimer et al., 2007), astronomers have observed more and more FRBs every year. Among all of the search projects, the most successful project in finding FRBs has been CHIME (CHIME/FRB Collaboration et al., 2018). Due to its large field of view (FoV), CHIME has outperformed other radio arrays detecting about 600 FRBs since 2018 (Masui and Chime/Frb Collaboration, 2021), which is about 5/6 of the current observed number of FRBs. However, with this number of FRBs, no gravitationally lensed FRB has been confirmed. Leung et al. (2022); Kader et al. (2022) have observed no lenses from 172 bursts of 114 independent events of the CHIME catalog. They have placed an upper limit on the constraint of dark matter made of compact objects, such as PBHs. Laser Interferometer Gravitational-Wave Observatory (LIGO)/VIRGO interferometer has suggested a hint to the dark matter fraction (Abbott et al., 2016). Based on LIGO's result, Bird et al. (2016) suggest the black hole merger rate is consistent with all dark matter being \(\sim\) 30 \(M_{\odot}\) black holes. If this is the case, about 1\(-\)2% of all FRBs will pass close enough to such a PBH to be microlensed (Munoz et al., 2016) and 10\({}^{4}\) FRBs would yield about 100\(-\)200 microlensed FRBs. If, on the other hand, no lenses are found in 10\({}^{4}\) FRBs, the dark matter fraction in PBHs can be constrained to be below 1% (Munoz et al., 2016). Taiwan plans to build a new radio telescope, the Bustling Universe Radio Survey Telescope in Taiwan (BURSTT; Lin et al., 2022) which is dedicated to detecting FRBs with an accurate localization capability (\(\sim\) 1"). 
BURSTT will be the next frontier telescope in FRB science after CHIME. Because of the telescope's unique fisheye design, it will have a massive 1 steradian(sr) FoV, meaning its survey area will be 25 times greater than CHIME, covering more FRB-like events. BURSTT will have a 256-antenna array in the first phase and it will be extended to a 2048-antenna array in the second phase. This improves the sensitivity by about 8 times. Moreover, BURSTT will conduct long-term monitoring observations to prevent missing any repeating FRBs. Regarding the time resolution, BURSTT has a ring buffer to record the voltage data, which the timing resolution is adjustable to be about a nanosecond (H.-H. Lin, 2022, private communication). If we detect any gravitationally lensed nanosecond FRB, we can use it to constrain dark matter down to 10\({}^{-4}M_{\odot}\) of the lensing mass (see Eq. 2 for more detail). The proposed BURSTT opens up the exciting possibility for the searches of gravitationally lensed FRBs which can help us constrain the fraction of dark matter better. In this work, we focus on the future prospect of BURSTT-2048. We aim to give an estimation of how many lensed FRBs will be detected by BURSTT-2048. This paper is composed as follows; we describe the gravitational lensing model in Section 2. The predicted number of lensed FRBs to be detected by BURSTT and the calculation are presented in Section 3. Discussion in 4 and conclusion are in Section 5. Throughout this paper, we adopt the AB magnitude system and assume a cosmology with H\({}_{0}\) = 70 km s\({}^{-1}\)Mpc\({}^{-1}\), \(\Omega_{\Lambda}\) = 0.7, and \(\Omega_{\rm M}\) = 0.3 (Spergel et al., 2003). ## 2 Gravitational lensing model In this section, for strong lensing by compact objects, we calculate the optical depth by calculating microlensing effects on a given FRB. These results allow us to calculate the number of lensed FRBs that can be ex pected when all dark matter is PBHs with a combination of different redshift distributions. We consider compact objects that may be modeled using a point-mass lens, which is the simplest kind of lens model from Munoz et al. (2016), in which a PBH of mass \(M_{L}\). We do not consider a mass spectrum but only fixed mass cases for \(M_{L}\), which can be modeled as point lenses around the Einstein radius: \[\theta_{E}=2\sqrt{\frac{GM_{L}}{c^{2}}\frac{D_{LS}}{D_{S}D_{L}}}. \tag{1}\] The (angular-diameter) distances from the source, to the lens, and between the source and the lens are represented by \(D_{S},D_{L}\), and \(D_{LS}\), respectively (Takahashi and Nakamura, 2003). With a point lens, two images are formed at positions \(\theta_{\pm}=\left(\beta\pm\sqrt{\beta^{2}+4\theta_{E}^{2}}\right)/2\), where \(\beta\) corresponds to impact angle (Munoz et al., 2016). The time delay between these two images is \[\Delta t=\frac{4GM_{L}}{c^{3}}\left(1+z_{L}\right)\left[\frac{y}{2}\sqrt{y^{2 }+4}+\log\left(\frac{\sqrt{y^{2}+4}+y}{\sqrt{y^{2}+4}-y}\right)\right], \tag{2}\] where \(y\equiv\beta/\theta_{E}\) represents the normalized impact parameter, and the lens redshift is \(z_{L}\)(Munoz et al., 2016). In this equation, the time delay, \(\Delta t\) is mainly affected by the lensing mass, \(M_{L}\). In addition, we follow Munoz et al. (2016) to define the \(R_{f}\) as the flux ratio of the magnifications of both images \(\mu_{+}\) and \(\mu_{-}\); i.e., \[R_{f}\equiv\left|\frac{\mu_{+}}{\mu_{-}}\right|=\frac{y^{2}+2+y\sqrt{y^{2}+4} }{y^{2}+2-y\sqrt{y^{2}+4}}>1. \tag{3}\] Following Munoz et al. 
(2016), an FRB must meet three conditions to be considered strongly lensed. First, the brighter image between the two pulses has a signal-to-noise ratio of 10 or higher (Petroff et al., 2014). Second, the observed time delay is longer than some reference time \(\overline{\Delta t}\) (e.g., observational resolution), and therefore a lower bound will be placed on the impact parameter \(y>y_{\rm min}\left(M_{L},z_{L}\right)\), as calculated by Eq. 2. Finally, we require that the flux ratio \(R_{f}\) is lower than a critical value for \(\bar{R}_{f}\) (which we take to be redshift independent) so that both events are observed. By doing this, the impact parameter is forced to be smaller than (Munoz et al., 2016): \[y_{\rm max}=\left[\left(1+\bar{R}_{f}\right)/\sqrt{\bar{R}_{f}}-2\right]^{1/2}. \tag{4}\] Munoz et al. (2016) calculated the probability for an FRB to be lensed as follows. The lensing optical depth of a source at redshift \(z_{S}\) is given by \[\tau\left(M_{L},z_{S}\right)=\int_{0}^{z_{S}}d\chi\left(z_{L}\right)\left(1+z _{L}\right)^{2}n_{L}\sigma\left(M_{L},z_{L}\right). \tag{5}\] Here, \(\chi(z)\) represents the comoving distance at redshift \(z\), \(n_{L}\) is the comoving number density of lenses comoving, and \(\sigma\) is the lensing cross-section of a point lens with mass \(M_{L}\), expressed as an annulus between the maximum and minimum impact parameters by the amount of mass. The equation is as follows \[\sigma\left(M_{L},z_{L}\right)=\frac{4\pi GM_{L}}{c^{2}}\frac{D_{L}D_{LS}}{D_{S }}\left[y_{\rm max}^{2}-y_{\rm min}^{2}\left(M_{L},z_{L}\right)\right]. \tag{6}\] Using the Hubble parameter both at the redshift of the lens, \(H\left(z_{L}\right)\), and present \(H_{0}\), we can recast Eq. 5 as \[\tau\left(M_{L},z_{S}\right)= \frac{3}{2}f_{\rm DM}\Omega_{c}\int_{0}^{z_{S}}dz_{L}\frac{H_{0}^ {2}}{cH\left(z_{L}\right)}\frac{D_{L}D_{LS}}{D_{S}} \tag{7}\] \[\times\left(1+z_{L}\right)^{2}\left[y_{\rm max}^{2}-y_{\rm min}^{ 2}\left(M_{L},z_{L}\right)\right],\] where \(\Omega_{c}=0.24\) is the cold-dark-matter density at present and \(\Omega_{m}=\Omega_{c}+\Omega_{b}\). The only remaining dependence on the lens mass \(M_{L}\) is through \(y_{\rm min}\). Given a normalized observed distribution function \(N(z)\) for FRBs with redshift, \(z\), (Masui and Chime/Frb Collaboration, 2021; Hashimoto et al., 2022), we can calculate their integrated optical depth \(\bar{\tau}\left(M_{L}\right)\), due to compact objects of mass \(M_{L}\), as \[\bar{\tau}\left(M_{L}\right)=\int dz\,\tau\left(z,M_{L}\right)N(z). \tag{8}\] Here, \(\tau\) is included because it becomes approximately the same as the lensing probability when \(\tau\ll 1\). We show this quantity in Fig. 1 for different time delays with a redshift cutoff at \(z_{\rm cut}=0.5\) and \(f_{\rm DM}=1\) (assuming all the dark matter was composed of PBHs). Munoz et al. (2016) adopted a \(z=0.5\) cutoff redshift because their redshift distribution function fits well with the observed redshift distribution of FRB samples (FRBCAT, Petroff et al., 2016) if \(z=0.5\) is chosen. We applied the redshift cutoff at \(z_{\rm cut}=0.5\) as Munoz et al. (2016) since the redshift distributions from both works peak at \(z\sim 0.5\). In this figure, \(N(z)=1\) is assumed. We take an empirically derived \(N(z)\) from CHIME FRBs into account in Section 3.2. It is shown that the distribution shifts to a smaller lensing mass as the time delay reduces. 
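To make Eqs. (2)–(7) concrete, the following is a rough numerical sketch using astropy for the cosmological distances. The bisection for \(y_{\rm min}\), the trapezoidal integration, the fiducial flux-ratio cut \(\bar{R}_{f}=5\), and all function names are our own illustrative choices rather than the authors' code.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from astropy import units as u, constants as const

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)      # cosmology adopted in this paper

def time_delay(M_L, z_L, y):
    """Eq. 2: delay between the two images of a point-mass lens (M_L in solar masses), in seconds."""
    pref = 4 * const.G * M_L * const.M_sun / const.c**3 * (1 + z_L)
    geom = 0.5 * y * np.sqrt(y**2 + 4) + np.log((np.sqrt(y**2 + 4) + y) /
                                                (np.sqrt(y**2 + 4) - y))
    return (pref * geom).to(u.s).value

def y_min(M_L, z_L, dt_ref, iters=60):
    """Smallest impact parameter whose delay reaches dt_ref (simple bisection)."""
    lo, hi = 1e-6, 10.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if time_delay(M_L, z_L, mid) < dt_ref else (lo, mid)
    return hi

def optical_depth(M_L, z_S, dt_ref, f_DM=1.0, R_f_bar=5.0, Omega_c=0.24, n=200):
    """Eq. 7: lensing optical depth of a source at z_S for point lenses of mass M_L."""
    y_max2 = (1 + R_f_bar) / np.sqrt(R_f_bar) - 2          # Eq. 4, squared
    z_L = np.linspace(1e-3, z_S - 1e-3, n)
    D_L = cosmo.angular_diameter_distance(z_L)
    D_S = cosmo.angular_diameter_distance(z_S)
    D_LS = cosmo.angular_diameter_distance_z1z2(z_L, np.full_like(z_L, z_S))
    y_min2 = np.array([y_min(M_L, z, dt_ref) for z in z_L]) ** 2
    integrand = (cosmo.H0**2 / (const.c * cosmo.H(z_L)) * D_L * D_LS / D_S
                 * (1 + z_L)**2 * np.clip(y_max2 - y_min2, 0.0, None))
    return 1.5 * f_DM * Omega_c * np.trapz(integrand.to(u.dimensionless_unscaled).value, z_L)

# e.g. a 30 M_sun lens population, a source at z_S = 0.5, and a 1 ms delay threshold:
# optical_depth(30.0, 0.5, 1e-3)
```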
It is important to note that, even if all the dark matter consisted of a single mass \(M_{L}\), there will be no unique value for the lensing time delays induced on FRBs because of the different redshifts of lenses and impact parameters. ## 3 Modelling the FRB events to be detected by BURSTT BURSTT is a proposed instrument tailored for detecting FRB-like events with accurate arcsecond localization and high cadence (Lin et al., 2022). BURSTT will observe 25 times more sky than CHIME, because of its unique fisheye design and FoV of 1 sr. BURSTT's modular nature allows it to be expanded by adding additional antennas or outrigger stations. The main station could be upgraded from 256 antennas to 2048 antennas in the next phase. The system equivalent flux density (SEFD) will improve about 8 times from \(\sim\) 5,000 Jy in the first phase to \(\sim\) 600 Jy in the second phase. This would improve the sensitivity to detect more bursts, which improves the detection rate. At the same time, BURSTT is an interferometer with baselines of over 100 km and has an astrometric accuracy for FRB localizations of better than 1", more precise than that of CHIME (2- and 25-arcsec precision between the CHIME and the CHIME Pathfinder; Leung et al., 2021), ASKAP (arcsecond accuracy, Bhandari et al., 2020), and Deep Synoptic Array (DSA, \(\pm\)2.5" accuracy; Ravi et al., 2019). This allows us to discover a large sample of bright FRB-like events with more accurate localization, including those located close to the Earth. In this section, we use parameters from BURSTT-2048 and CHIME to calculate a predicted number of gravitationally lensed FRBs to be detected by the proposed BURSTT-2048. ### Predicted number of FRBs to be detected by BURSTT The specifications of BURSTT and CHIME relevant to the gravitational lensing search are summarised in Table 1. Using the FoV and the SEFD of BURSTT and CHIME, we can estimate a value for the event rate of FRBs to be detected by BURSTT. For simplicity, we assume that the expected number of FRBs, \(R\), is proportional to FoV and inversely proportional to the 3/2 power of SEFD, as \[R\propto\frac{\text{FoV}}{(\text{SEFD})^{3/2}} \tag{9}\] where 3/2 is the power-law index of the cumulative fluence distribution of CHIME FRBs. Observationally, the power is consistent with 3/2 within error according to the CHIME catalog paper (CHIME/FRB Collaboration et al., 2018), and it is also expected if there is no redshift evolution of FRB number density within Euclidean space. In our case, BURSTT looks at the nearby Universe. Therefore, this assumption is likely to hold. For CHIME, the FoV is \(\sim\) 0.04 sr, and the SEFD = 100 Jy. For BURSTT-2048, the FoV is \(\sim\) 1 sr and the SEFD is \(\sim\) 600 Jy. Therefore, when we refer to 1,000 FRBs per year based on CHIME's predicted number (CHIME/FRB Collaboration et al., 2018), we estimate that BURSTT-2048 will detect about 1,700 FRBs per annum, which outperforms CHIME in terms of FRB event rate. It should also be noted that BURSTT can localize all of these FRBs through very-long-baseline interferometry (VLBI). ### Predicted number of lensed FRBs to be detected by BURSTT Recalling Eq. 8, given a redshift distribution function for FRBs, we can calculate the integrated optical depth due to a compact object of mass \(M_{L}\), as \[\bar{\tau}\left(M_{L}\right)=\int dz\tau\left(z,M_{L}\right)N(z).\] In our case, we obtained a redshift distribution from the latest CHIME catalog (Masui and Chime/Frb Collaboration, 2021). In this catalog, there are 536 FRBs. 
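As a quick check of this scaling and of the \(N(z)\) weighting in Eq. 8, here is a short Python sketch; the binned redshift distribution is a made-up placeholder, not the actual CHIME-derived \(N(z)\).

```python
import numpy as np

# Eq. 9: event-rate scaling R ∝ FoV / SEFD^(3/2), referenced to CHIME's ~1,000 FRBs/yr
fov_chime, sefd_chime = 0.04, 100.0        # sr, Jy
fov_burstt, sefd_burstt = 1.0, 600.0       # sr, Jy (BURSTT-2048)
rate_burstt = 1000 * (fov_burstt / fov_chime) * (sefd_chime / sefd_burstt) ** 1.5
print(f"expected BURSTT-2048 rate: ~{rate_burstt:.0f} FRBs per year")   # ~1,700

# Eq. 8: integrated optical depth, weighting tau(z, M_L) by a normalized N(z)
def integrated_tau(tau_of_z, z_centers, n_of_z):
    weights = np.asarray(n_of_z, dtype=float)
    weights /= weights.sum()
    return float(sum(w * tau_of_z(z) for z, w in zip(z_centers, weights)))

# placeholder four-bin N(z); tau_of_z could be the optical_depth() sketch above
z_centers, n_of_z = [0.3, 0.8, 1.4, 1.9], [300, 150, 60, 26]
```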
Hashimoto et al. (2022) estimated the redshifts of these FRBs using their Dispersion Measure (DM). To estimate the fraction of FRBs that can be detected by BURSTT, we need to estimate the nominal fluence detection threshold, \(F_{0}\), of BURSTT. We use Eq. 10 from James et al. (2019), \[F_{0}=\frac{\sigma_{th}\text{SEFD}}{\sqrt{2(t_{int})\Delta\nu}} \tag{10}\] where \(\sigma_{th}\) represents the standard deviation of the detection threshold, \(t_{int}\) represents the searching time resolution of the telescope, and \(\Delta\nu\) represents the bandwidth of the telescope. We use \(\sigma_{th}=10\) and we estimate \(F_{0}\) of BURSTT-2048 to be approximately 12 Jy ms. We put this threshold in the 536-FRB CHIME catalog and scale it with the FoV difference between CHIME and BURSTT (25 times larger for BURSTT-2048). The estimated redshift distribution of BURSTT-detected FRBs is then recast. The fluence and redshift distributions of CHIME and BURSTT are shown in Fig. 2. To calculate the integrated optical depth for each redshift bin, we calculate the optical depth \(\tau\) for each FRB in the 536-FRB CHIME catalog. Recalling Eq. 7, \(\tau\) is negative when \(y_{\rm max}^{2}<y_{\rm min}^{2}\). This occurs when the lensing time delay is small compared with the observational time resolution, \(\bar{\Delta t}\). We cannot detect such signals and hence there is no physical solution. In this case, the event is treated as a non-detection. After that, we sum the individual optical depths \(\tau\) together in each redshift bin, so we obtain the integrated optical depth in each redshift bin. These numbers are equivalent to the number of lensed FRBs in each redshift bin to be detected by BURSTT. We show BURSTT-2048's total number of FRBs and lensed FRBs per annum in four redshift bins from \(z=0\) to \(z=2.2\) in Fig. 3. In this plot, we show the number of detectable lensed FRBs with \(M_{L}\)=0.001\(M_{\odot}\) and \(\bar{\Delta t}=1\) ns, which is the highest time resolution BURSTT-2048 can reach (H.-H. Lin 2022, private communication). Figure 1: Integrated optical depth, with a cutoff at \(z_{\rm cut}=0.5\). We apply the same \(z_{\rm cut}\) as Muñoz et al. (2016). For the different colors of the solid curve, we require a time delay of \(\tilde{\Delta t}=1\) ms, 0.1 ms, 10 \(\mu\)s, 1 \(\mu\)s, 0.1 \(\mu\)s, 10 ns, and 1 ns, respectively. In all cases, \(f_{\rm DM}=1\). The x-axis is in logarithmic scale. \begin{table} \begin{tabular}{c c c} Parameter & Values & \\ \hline \hline & BURSTT & CHIME \\ \hline Effective area & 40-200 m\({}^{2}\) (BURSTT-256) & 8,000 m\({}^{2}\) \\ & 320-1600 m\({}^{2}\) (BURSTT-2048) & \\ \hline FoV & \(\sim\) 1 sr & \(\sim\) 0.04 sr \\ & \(\sim 100^{\circ}\) (E-W) & \(2.5-1.3^{\circ}\) (E-W) \\ & \(\sim 100^{\circ}\) (N-S) & \(\sim 100^{\circ}\) (N-S) \\ \hline Bandwidth & 400 MHz & 400 MHz \\ \hline Frequency range & 300-800 MHz & 400-800 MHz \\ \hline SEFD & \(\sim\) 5,000 Jy (BURSTT-256) & 100 Jy \\ & \(\sim\) 600 Jy (BURSTT-2048) & \\ \hline Time resolution & \(\sim\) 0.3 ms (adjustable to 1 ns) & \(\sim\) 2.56 \(\mu\)s (adjustable to 1 ns) \\ \hline Localization accuracy & \(\sim\) 1” & \(\sim\) 2” \\ & & \(\sim\) 25” (Pathfinder) \\ \hline Polarization & Dual & Linearly dual \\ \hline E-W baseline & 800 km (Northern Taiwan to Hawaii) & 100 m \\ \hline N-S baseline & 300 km (Northern to Southern Taiwan) & 80 m \\ \end{tabular} \end{table} Table 1: Specifications of BURSTT and CHIME relevant to the gravitational lensing search. 
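The fluence cut and the per-bin summation just described amount to the following short calculation; the function names and the equal-width bin edges are illustrative choices of ours.

```python
import numpy as np

# Eq. 10 with the BURSTT-2048 values from Table 1 (sigma_th = 10, SEFD ~ 600 Jy,
# t_int ~ 0.3 ms, bandwidth 400 MHz); the text quotes the result as ~12 Jy ms.
sigma_th, sefd_jy, t_int_s, bw_hz = 10, 600.0, 0.3e-3, 400e6
F0 = sigma_th * sefd_jy / np.sqrt(2 * t_int_s * bw_hz)
print(F0)   # ~12

# Expected lensed FRBs per redshift bin: sum the per-FRB optical depths, treating a
# negative tau (y_max^2 < y_min^2 in Eq. 7) as a non-detection.
def lensed_counts_per_bin(frb_redshifts, frb_taus, bin_edges):
    taus = np.clip(np.asarray(frb_taus, dtype=float), 0.0, None)
    counts, _ = np.histogram(frb_redshifts, bins=bin_edges, weights=taus)
    return counts

bin_edges = np.linspace(0.0, 2.2, 5)   # four bins between z = 0 and 2.2, for illustration
```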
In our estimation, we assume if all the dark matter is composed of PBHs (\(f_{\rm DM}=1\)), BURSTT-2048 can detect \(\sim\) 24 lensed FRBs out of a total number \(\sim\) 1,700 FRBs per annum. We also show the lensed number with \(M_{L}\) = 30 \(M_{\odot}\) and \(\bar{\Delta t}=1\) ms in blue. These are the parameters used in Munoz et al. (2016). In this estimation, BURSTT-2048 can detect \(\sim\) 5 lensed FRBs out of a total number of \(\sim\) 1,700 FRBs per annum. Poisson errors propagated from the CHIME catalog (Masui and Chime/Frb Collaboration, 2021) are included in the plot. ### The constraint of PBHs using lensed FRBs to be detected by BURSTT In figure 4, we show the regions in the parameter space of \(f_{\rm DM}-M_{L}\) with a cutoff at \(z_{\rm cut}=0.5\). Assuming the one-year operation of BURSTT-2048, we analyze the likelihood of at least one event for lensing time delays, \(\bar{\Delta t}\) of, 1 ns, 10 ns, 0.1 \(\mu\)s, 1 \(\mu\)s, 10 \(\mu\)s, 0.1 ms, and 1 ms. In the same figure, we also show the current constraints to \(f_{\rm DM}\) of the EROS Collaboration(Tisserand et al., 2007), MACHO Collaboration (Alcock et al., 2000), the COBE Far Infrared Absolute Spectrophotometer (FIRAS) data (Fixsen and Mather, 2002), the three-year Wilkinson Microwave Anisotropy Probe (WMAP3) (Ricotti et al., 2008), Subaru Hyper Suprime-Cam(HSC; Niikura et al., 2019), and wide-binary (WB) disruption (Quinn et al., 2010). With BURSTT's ability to detect nanosecond bursts (H.-H. Lin 2022, private communication), we can constrain PBHs as dark matter down to \(10^{-4}M_{\odot}\). ## 4 Discussion ### The number prediction of FRBs to be detected by BURSTT Referring to Section 3.2, if we used the same parameters as Munoz et al. (2016), BURSTT-2048 can detect 5 lensed FRBs (lensed by 30 \(M_{\odot}\) PBHs) out of 2200 FRBs with a time delay longer than 1 ms. The fraction of lensed FRBs is \(\sim\) 0.2% which is in the same order as Munoz et al. (2016)'s result (0.6%). On the other hand, with the best time resolution (1ns) that BURSTT-2048 can reach, it can detect \(\sim\) 24 lensed FRBs out of a total number \(\sim\) 1,700 FRBs per annum if all the dark matter is in the form of 0.001\(M_{\odot}\) PBHs. Figure 3: The number of lensed FRBs and the total number of FRBs for one-year operations of BURSTT-2048 in four redshift bins from \(z=0\) to \(z=2.2\). Each data point shows the bin center of the redshift bin. The orange line, green line, and red line show the redshift distribution of the total number of FRBs for one-year operations of BURSTT-2048, the redshift distribution of the number of lensed FRBs with \(M_{L}=0.001\)\(M_{\odot}\) and \(\bar{\Delta t}=1\) ns, and the redshift distribution of the number of lensed FRBs with \(M_{L}=30\)\(M_{\odot}\) and \(\bar{\Delta t}=1\) ms, respectively. Figure 2: Fluence distribution of the CHIME FRB catalog (Masui and Chime/Frb Collaboration, 2021) is shown in the upper panel. The nominal fluence detection threshold of BURSTT-2048 is marked with an orange dashed line in the upper panel. Redshift distribution of CHIME (blue), and BURSTT-2048 (orange) are shown in the lower panel. . This increases the fraction of lensed FRBs to \(\sim 1.5\%\). This fraction could be overestimated if we assume that BURSTT-2048 would detect FRBs at the very nearby Universe due to its sensitivity (SEFD \(\sim 600\) Jy). However, we should note that there are sources with high DM and high fluence in the CHIME FRB samples (Masui & Chime/Frb Collaboration, 2021). 
We show the distribution of observed DM versus fluence for the 536-FRB CHIME catalog (Masui & Chime/Frb Collaboration, 2021) in Fig. 5. In Fig. 5, there are 35 FRBs with DM \(>500\) pc cm\({}^{-3}\) and fluence \(>12\) Jy ms. These FRBs correspond to \(\sim 48\%\) of FRBs with fluence \(>12\) Jy ms. These are the extragalactic population of sources in the distant Universe (\(z\gtrapprox 0.3-0.4\)) that contribute to the lensing optical depth in our estimation. As a result, the lensing fraction of 1.5% would still be reasonable in our analysis based on the empirically derived redshift distribution of FRBs (Fig. 2). ### The effect of plasma scattering Leung et al. (2022) discuss the effect of plasma scattering on the expected lensing event rate of FRBs. They constructed a two-screen model consisting of a gravitational lens and a plasma lens or scattering screen (Leung et al., 2021). Fig. 3 of Leung et al. (2022) shows the expected lensing event rate for 114 FRB events assuming that all dark matter is composed of PBHs with different lens masses. They show the expected lensing event rate changes for different effective distances of the plasma screen from the FRB source. The effect is maximal when the lens mass is in the range of about \(10^{2}-10^{3}M_{\odot}\). According to the results of Leung et al. (2022), we can roughly calculate the decrease in the expected number of lensed FRBs for this paper. In the case of lensing mass \(=0.001M_{\odot}\), the expected number of lensed FRBs decreases by \(\sim 5\%\), \(\sim 20\%\), \(\sim 60\%\), and \(\sim 90\%\) for 0.1 pc, 1 pc, 10 pc, and 100 pc of the plasma screen from the FRB source, respectively. This corresponds to about 33 (\(\sim 1.5\%\)), 27 (\(\sim 1.2\%\)), 14 (\(\sim 0.6\%\)), and 3 (\(\sim 0.1\%\)) lensed FRBs (lensed fractions) per year, respectively. Figure 4: Fraction \(f_{\rm DM}\) of dark matter in the form of point lenses of mass \(M_{L}\), where the FRBs have a constant comoving density with a cutoff at \(z_{\rm cut}=0.5\). Assuming a one-year operation, we show our constraints when we require a time delay \(\Delta t\) of 1 ns, 10 ns, 0.1 \(\mu\)s, 1 \(\mu\)s, 10 \(\mu\)s, and 1 ms. ## 5 Conclusion In this work, it has been demonstrated that coherent FRB lensing can constrain the constituents of the cosmological dark matter, e.g., PBHs, with the parameters of BURSTT-2048. We consider compact objects which can be modeled by a point-mass lens (Munoz et al., 2016). Each FRB in the 536-FRB CHIME catalog is evaluated for optical depth \(\tau\). To obtain the integrated optical depth in each redshift bin, we add the individual optical depth \(\tau\) into each bin. Each of these numbers represents the number of lensed FRBs detected by BURSTT for each redshift bin. In our estimation, we assume that if all the dark matter is composed of PBHs (\(f_{\rm DM}=1\)), BURSTT-2048 can detect up to \(\sim 24\) lensed FRBs out of a total number of \(\sim 1,\!700\) FRBs per annum. With this amount of lensed FRBs and BURSTT's ability to detect nanosecond bursts, we can constrain PBHs as dark matter in the \(10^{-4}-100M_{\odot}\) range. ## Acknowledgments We are very grateful to the anonymous referee for many insightful comments. TG and TH acknowledge the support of the National Science and Technology Council (NSTC) of Taiwan through grants 108-2628-M-007-004-MY3 and 110-2112-M-005-013-MY3/110-2112-M-007-034-, respectively. The BURSTT Project is funded by a grant from the NSTC (111-2123-M-001-008-). We greatly appreciate Dr. Hsiu-Hsien Lin and Dr. 
Shotaro Yamasaki for providing useful comments.
2305.01519
BCEdge: SLO-Aware DNN Inference Services with Adaptive Batching on Edge Platforms
As deep neural networks (DNNs) are being applied to a wide range of edge intelligent applications, it is critical for edge inference platforms to have both high-throughput and low-latency at the same time. Such edge platforms with multiple DNN models pose new challenges for scheduler designs. First, each request may have different service level objectives (SLOs) to improve quality of service (QoS). Second, the edge platforms should be able to efficiently schedule multiple heterogeneous DNN models so that system utilization can be improved. To meet these two goals, this paper proposes BCEdge, a novel learning-based scheduling framework that takes adaptive batching and concurrent execution of DNN inference services on edge platforms. We define a utility function to evaluate the trade-off between throughput and latency. The scheduler in BCEdge leverages maximum entropy-based deep reinforcement learning (DRL) to maximize utility by 1) co-optimizing batch size and 2) the number of concurrent models automatically. Our prototype implemented on different edge platforms shows that the proposed BCEdge enhances utility by up to 37.6% on average, compared to state-of-the-art solutions, while satisfying SLOs.
Ziyang Zhang, Huan Li, Yang Zhao, Changyao Lin, Jie Liu
2023-05-01T02:56:43Z
http://arxiv.org/abs/2305.01519v1
# BCEdge: SLO-Aware DNN Inference Services ###### Abstract As deep neural networks (DNNs) are being applied to a wide range of edge intelligent applications, it is critical for edge inference platforms to have both high-throughput and low-latency at the same time. Such edge platforms with multiple DNN models pose new challenges for scheduler designs. First, each request may have different service level objectives (SLOs) to improve quality of service (QoS). Second, the edge platforms should be able to efficiently schedule multiple heterogeneous DNN models so that system utilization can be improved. To meet these two goals, this paper proposes BCEdge, a novel learning-based scheduling framework that takes adaptive batching and concurrent execution of DNN inference services on edge platforms. We define a utility function to evaluate the trade-off between throughput and latency. The scheduler in BCEdge leverages maximum entropy-based deep reinforcement learning (DRL) to maximize utility by 1) co-optimizing batch size and 2) the number of concurrent models automatically. Our prototype implemented on different edge platforms shows that the proposed BCEdge enhances utility by up to 37.6% on average, compared to state-of-the-art solutions, while satisfying SLOs. Edge Computing, Inference Service, Scheduling, Reinforcement Learning, Service Level Objective (SLO). ## I Introduction Model inference service systems deployed on cloud servers typically provide multiple trained deep neural networks (DNNs) for users. These systems are usually multi-tenant, meaning hosting one or more model instances per DNN model to serve multiple inference applications, while making better use of the abundant computing resources of servers. For instance, the multi-instance GPU (MIG) in NVIDIA Ampere architecture enables the partitioning of a single NVIDIA A100 GPU into up to seven independent GPU instances that can run concurrently. In this way, the GPU achieves up to 7\(\times\) utilization with guaranteed quality of service (QoS). Furthermore, a single inference request often leads to inefficient utilization. Therefore, prior works [1, 2, 3] batch requests to better exploit the parallelism of GPUs. Batching refers to aggregating arriving requests into a batch within a given time window, and DNN service systems process the entire batch at a particular time, thereby improving throughput (e.g., requests per second, rps). In batch systems, throughput can be improved by increasing batch size. The more requests in a batch, the longer the waiting time to be processed, latency, therefore, is inevitably increased. Increasing computational and memory capabilities open a new opportunity to deploy model inference systems on edge accelerators (e.g., graphics processing unit (GPU), tensor processing unit (TPU), and vision processing unit (VPU), etc.). This emerging computing paradigm provides guarantee for edge intelligent applications [4] with low-latency requirements, including the object detection in autonomous driving [5], recommendation systems in smartphones [6], and the metaverse in wearables [7], etc. On the other hand, various lightweight techniques [8] (such as pruning, compression, quantization, knowledge distillation, etc.) for DNN models enable batching and concurrent inference of multiple model instances at the edge. To better understand the performance implications of batching and concurrent inference, we perform an experimental study, using YOLO-v5 [9] on NVIDIA Xavier NX edge platform. 
Due to resource constraints on edge platforms, we leverage TensorRT [10] to accelerate the original DNN models. Fig. 1 reports the throughput and latency with various batch sizes and number of concurrent models. We have the following critical observations that motivate this work: _both batch size and number of concurrent models affect throughput and latency, but larger batch size or number of models is not always better._ Fig. 1 illustrates that higher-throughput Fig. 1: The effects of batching and concurrent inference on (a) system throughput and (b) end-to-end latency. Throughput is measured as requests-per-second (rps). The x-axis represents the number of concurrent models, and the y-axis represents batch size. We use YOLO-v5 [9] on NVIDIA Xavier NX edge platform with 8GB RAM. and lower-latency appear in moderate batch size and number of concurrent models. Due to resource contention caused by model interference, excessive batch size and number of concurrent models significantly reduce throughput and increase latency or even cause memory overflow, especially when batch size and number of concurrent models are large, e.g., batch size is 128, model number is 8. Therefore, it is crucial for the scheduler in DNN service system to 1) trade off throughput and latency for optimal performance and 2) accurately predict the interference between models. Designing such a GPU-based inference servers must address different challenges from batching- and concurrent-oriented processing. First, inference requests have service level objective (SLO) to achieve quality of service (QoS) with low-latency. Second, edge platforms usually deploy multiple heterogeneous DNN models to improve throughput and resource utilization. Therefore, it is critical to design an efficient scheduler to achieve the optimization goal for both batching and concurrent inference on edge platforms. Conventional heuristic-based methods are inefficient for multi-objective optimization [11]. In contrast, deep reinforcement learning (DRL) combines the powerful representation of deep learning with the adaptive property of reinforcement learning, which is capable of efficiently solving the above problems. Therefore, we leverage DRL to transform a multi-objective (i.e., throughput and latency) problem into a scheduling problem of batch size and number of concurrent models. Motivated by the above observation, we propose BCEdge, a learnable, adaptive, and multi-tenant scheduling framework for SLO-aware DNN inference services. BCEdge aims to automatically find a global optimum by adjusting both batch size and number of concurrent models for throughput and latency tradeoffs. The search space for scheduling becomes two-dimensional with batch size and number of concurrent models, unlike the prior work with one-dimensional (i.g., batch size) searches. The batching-concurrent scheduling can significantly improves the SLO-preserved throughput. For each DNN model, its computational properties are measured and registered into BCEdge. Based on the profile information of each DNN model, the maximum entropy reinforcement learning-based scheduler in BCEdge automatically adjust batch size and number of concurrent models, while maintaining the SLO. Furthermore, BCEdge leverages a lightweight neural network (NN)-based interference prediction model to reduce the impact of concurrent inference. Table I provides a summarized comparison of our work to related DNN service frameworks. 
All previous studies are capable of adaptively adjusting the batch size at runtime, either automatically or manually, to achieve higher throughput. As more DNN inference services are consolidated onto edge-based GPU servers, scheduling multiple instances of the same model becomes increasingly important for making full use of edge servers with constantly growing computing resources; however, although some previous studies provided multiple heterogeneous DNN models for multi-tenancy, they did not enable scheduling multiple instances of the same model in a DNN service system. Regarding inter-model resource contention among multiple tenants and requests with SLOs, accurately predicting interference and guaranteeing SLOs can better guide the scheduler to automatically adjust the batch size and the number of concurrent models, achieving moderate system throughput while reducing latency. Only TF-Serving [2] considers model interference, and its scheduler uses hedged backup requests to mitigate latency spikes caused by inter-request or inter-model interference. On the other hand, except for Clipper [1] and DeepRT [12], none of the other previous works considers requests with strict time constraints (i.e., SLOs). Importantly, our study addresses all of these challenges: executing multiple instances concurrently, guaranteeing SLOs, and predicting potential interference among multiple models.

We evaluated the proposed DNN inference framework on edge platforms with three heterogeneous edge GPUs, using the six DNN models in Table IV, which cover both CV and NLP applications such as object detection, image classification, and speech recognition. The evaluation shows that the proposed scheduling technique with batching and concurrent model instances can improve the trade-off under SLO constraints by 37.6%, compared to the state-of-the-art solutions. The main contributions of this paper are as follows:

* Through a motivational case study based on real-world batching and concurrent inference of DNN models on edge platforms, we demonstrate that the trade-off between throughput and latency should leverage both batching and concurrent inference.
* We present BCEdge, a learnable scheduling framework with adaptive batching and concurrent model instances for inference services on edge platforms. The scheduler in BCEdge leverages maximum entropy reinforcement learning to automatically adjust the batch size and the number of concurrent models to trade off throughput and latency.
* For accurate performance prediction, the lightweight NN-based prediction model with negligible overhead in BCEdge reduces the effect of interference among models during concurrent DNN inference.

The rest of the paper is organized as follows: Section II presents related work. Section III describes the system model and problem formulation. Section IV illustrates our framework design in detail. Section V reports experimental results. Section VI concludes our work.

## II Related work

### _Model-level DNN Inference Service_

Prior works treated the DNN model as an indivisible whole and proposed a series of edge inference serving frameworks to improve the quality of DNN inference services [1, 2, 12], [14, 15, 16, 17, 18, 19, 20]. Clipper [1], TensorFlow-Serving [2], MArk [15], DeepRT [12], and BATCH [20] adopt traditional adaptive batching that uses a time window for efficient DNN inference. None of these existing frameworks offers concurrent operation of model instances to further improve throughput. There are also some prior works on SLO-aware DNN inference serving.
Gpulet [16] leverages spatio-temporal sharing of computing resources for multiple heterogeneous DNN models with SLO constraints. Clockwork [19] exploits predictable execution times to achieve tight request-level SLOs. INFaaS [18] achieves lower cost, better throughput, and fewer SLO violations by choosing an adequate variant of a model. PSLO [17] is a preempting SLO-aware scheduler based on minimum average expected latency for edge platforms, which aims to trade off response time, system throughput, and SLOs. Different from the above works, we focus on reducing the SLO violation rate caused by interference among multiple models. In addition, some edge inference frameworks involve privacy protection [21, 22] and edge-cloud collaboration [23, 24]. These works are orthogonal to BCEdge and can alleviate privacy and resource constraints.

### _Operator-level DNN Inference Service_

There is some prior research on optimizing the operator scheduling of DNN models to improve the quality of model serving [13, 25, 26, 27]. REEF [25] proposes a parallel mechanism based on dynamic kernel padding to improve the overall throughput. VELTAIR [27] proposed adaptive operator-level compilation and scheduling to guarantee resource usage efficiency and reduce interference-induced performance loss for multi-tenant DNN services. PREMA [13] is a predictive multi-task scheduling algorithm for preemptible neural processing units to achieve high throughput. Abacus [26] leverages overlap-aware latency prediction and deterministic scheduling of overlapped DNN operators, which improves throughput while maintaining QoS for multi-tenant DNN services. Since BCEdge exploits the computing power of accelerators on edge platforms through batching and concurrent inference, these works are also orthogonal to BCEdge and can be combined with it to enable even higher throughput and lower latency.

### _Multi-tenant Scheduling on Edge Platforms_

Multi-tenant scheduling is more challenging on resource-constrained edge platforms than in cloud computing. TVW-RL [28] exploits various temporal resource usage patterns of time-varying workloads based on a deep reinforcement learning (DRL) approach to improve utilization on real production traces. Likewise, KaiS [29], A3C-R2N2 [11], MILP [30], A3C-DO [31], and MFRL [32] proposed different multi-agent reinforcement learning-based scheduling strategies in edge-cloud clusters to optimize throughput, latency, energy consumption, cost, etc. MCDS [33] uses a tree-based search strategy and a DNN-based prediction model to optimize QoS in edge-cloud testbeds. Similar to MILP [30], DeEdge [34] proposed D-Deads, a distributed greedy scheduling algorithm with task deadlines in edge computing, which maximizes throughput while minimizing latency. Note that the above works only schedule individual tasks one by one, ignoring the benefits of batching and concurrent inference. Inspired by these works, BCEdge can also be extended to an edge-cloud collaborative inference framework to optimize specific objectives.

## III System Model and Problem formulation

In this section, we first formulate the system model, including the request, scheduling, computing, and networking models. Next, we present the optimization problem formulation to show the trade-off between throughput and latency.

### _System Model_

#### III-A1 Request Model

We assume that IoT devices (e.g., cameras, drones, and smartphones) share the computing resources of edge platforms.
Before task scheduling, the IoT devices generate a series of inference requests with different DNN model types \(\mathrm{m}_{\mathrm{t}}^{\mathrm{i}}\), input types \(\mathrm{d}_{\mathrm{t}}^{\mathrm{i}}\) (i.e., image or text), input shapes \(\mathrm{d}_{\mathrm{s}}^{\mathrm{i}}\), and service level objectives \(\mathrm{SLO}_{\mathrm{i}}\). The \(\mathrm{i}\)-th request \(r_{i}\), therefore, can be denoted as \(\mathrm{r}_{\mathrm{i}}=\{\mathrm{m}_{\mathrm{t}}^{\mathrm{i}},\mathrm{d}_{\mathrm{t}}^{\mathrm{i}},\mathrm{d}_{\mathrm{s}}^{\mathrm{i}},\mathrm{SLO}_{\mathrm{i}}\}\). Note that requests arrive at BCEdge online at random following a Poisson distribution. BCEdge maintains a request queue for each model and supports dynamic batching by aggregating multiple inference requests for the same model into the corresponding request queue \(\mathrm{seq}_{\mathrm{b}}=\{\mathrm{r}_{1},\mathrm{r}_{2},\ldots,\mathrm{r}_{\mathrm{b}}\}\), where b is the batch size of the DNN model. Meanwhile, BCEdge constructs multiple instances \(\mathrm{m}_{\mathrm{c}}(\mathrm{c}=1,2,\ldots\mathrm{,n})\) for each model (i.e., concurrent instances), which is critical for edge platforms with GPUs, since batching and concurrent inference can effectively improve the throughput. The model zoo in the BCEdge backend executes DNN inference from the request queue and returns the prediction result \(\mathrm{O}_{\mathrm{b}}=\{\mathrm{o}_{1},\mathrm{o}_{2},\ldots,\mathrm{o}_{\mathrm{b}}\}\).
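For concreteness, the sketch below shows one way the request tuple \(\mathrm{r}_{\mathrm{i}}\) and the per-model request queues could be represented in code. The class and field names are illustrative rather than BCEdge's actual data structures, and the SLO-based ordering in `pop_batch` anticipates the priority rule used by the dynamic batching module in Section IV-C.

```
"""Minimal sketch of the request model r_i = (model type, input type, input shape, SLO)."""
from dataclasses import dataclass, field
from typing import Any, Dict, List, Tuple


@dataclass(order=True)
class Request:
    slo_ms: float                                  # shorter SLO = higher priority (only compared field)
    model_type: str = field(compare=False)
    input_type: str = field(compare=False)         # "image" or "text"
    input_shape: Tuple[int, ...] = field(compare=False)
    payload: Any = field(compare=False, default=None)


class RequestQueues:
    """One queue per DNN model; batches are later drawn from these queues."""

    def __init__(self) -> None:
        self.queues: Dict[str, List[Request]] = {}

    def push(self, req: Request) -> None:
        self.queues.setdefault(req.model_type, []).append(req)

    def pop_batch(self, model_type: str, batch_size: int) -> List[Request]:
        # Sort by SLO so the tightest deadlines come first; Python's stable sort keeps
        # arrival order for equal SLOs. Then cut a batch of the scheduler-chosen size.
        q = self.queues.get(model_type, [])
        q.sort()
        batch, self.queues[model_type] = q[:batch_size], q[batch_size:]
        return batch
```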
#### III-A2 Scheduling Model

Since the SLO is different for each request, a fixed scheduling time slot is not suitable. On the other hand, it is not feasible to specify scheduling time slots for each request individually, which would significantly increase system overhead. Therefore, we set the \(\mathrm{i}\)-th scheduling time slot \(\mathrm{t}_{\mathrm{i}}\) as the ratio of the sum of \(\mathrm{SLO}_{\mathrm{i}}\) over a batch of requests to the number of concurrent models, which is denoted as

\[\mathrm{t}_{\mathrm{i}}=\sum_{\mathrm{i}=1}^{\mathrm{b}}\mathrm{SLO}_{\mathrm{i}}/\mathrm{m}_{\mathrm{c}} \tag{1}\]

In this way, BCEdge is capable of guaranteeing the SLO of each request and providing efficient inference services with batching and concurrent inference. Moreover, BCEdge starts the next scheduling round immediately after finishing the current one to reduce GPU idle time.

#### III-A3 Computing and Networking Model

The end-to-end latency involves the communication between IoT devices and edge platforms, as well as the model inference time, and consists of the following components:

* _request transmission time_ \(\mathrm{t}_{\mathrm{t}}^{\mathrm{i}}\): the time for an IoT device to send the \(\mathrm{i}\)-th inference request (e.g., an image or text) to the edge platform through the network, which depends on the communication bandwidth and the size of the input data.
* _request serialization time_ \(\mathrm{t}_{\mathrm{s}}^{\mathrm{i}}\): the time to aggregate multiple inference requests for the same model into a single request queue at the edge platform, for batching and concurrent inference of model instances.
* _request queuing time_ \(\mathrm{t}_{\mathrm{w}}^{\mathrm{i}}\): the time that the request is blocked in the request queue until it is scheduled, which relates to the batch size and the number of concurrent models.
* _DNN inference time_ \(\mathrm{t}_{\mathrm{m}}^{\mathrm{i}}\): the time that the edge platform takes to execute model inference. Once inference is complete, the current request is removed from the queue.
* _result transmission time_ \(\mathrm{t}_{\mathrm{o}}^{\mathrm{i}}\): the time that the edge platform takes to send the \(\mathrm{i}\)-th inference result back to the IoT device through the network, which is related to the network bandwidth and, since the result size is small, is usually negligible.

Thus, the overall latency \(\mathrm{t}_{\mathrm{r}}^{\mathrm{i}}\) can be denoted as:

\[\mathrm{t}_{\mathrm{r}}^{\mathrm{i}}=\mathrm{t}_{\mathrm{t}}^{\mathrm{i}}+\mathrm{t}_{\mathrm{s}}^{\mathrm{i}}+\mathrm{t}_{\mathrm{w}}^{\mathrm{i}}+\mathrm{t}_{\mathrm{m}}^{\mathrm{i}}+\mathrm{t}_{\mathrm{o}}^{\mathrm{i}} \tag{2}\]

### _Problem Formulation_

Our objective is to co-optimize both throughput and latency for each DNN model by automatically exploring the feasible set of batch sizes and numbers of concurrent models, while guaranteeing SLOs. Inspired by the co-adaptive scheduler Pollux [35], we present a _utility function_ \(\mathrm{U}\) in Eq. (3) to evaluate the trade-off between throughput and latency.

\[\mathrm{U}=\log(\mathrm{T}_{\mathrm{t}_{\mathrm{i}}}(\mathrm{b},\mathrm{m}_{\mathrm{c}})/\frac{\mathrm{L}_{\mathrm{t}_{\mathrm{i}}}(\mathrm{b},\mathrm{m}_{\mathrm{c}})}{(\sum_{\mathrm{j=1}}^{\mathrm{b}}\mathrm{r}_{\mathrm{j}})/\mathrm{m}_{\mathrm{c}}}) \tag{3}\]

where \(\mathrm{b}\) is the batch size and \(\mathrm{m}_{\mathrm{c}}\) is the number of concurrent models. The throughput in the \(\mathrm{i}\)-th scheduling time slot \(\mathrm{t}_{\mathrm{i}}\) is denoted as \(\mathrm{T}_{\mathrm{t}_{\mathrm{i}}}(\mathrm{b},\mathrm{m}_{\mathrm{c}})\), and \(\mathrm{L}_{\mathrm{t}_{\mathrm{i}}}(\mathrm{b},\mathrm{m}_{\mathrm{c}})\) represents the actual latency of the \(\mathrm{i}\)-th request. \((\sum_{\mathrm{j=1}}^{\mathrm{b}}\mathrm{r}_{\mathrm{j}})/\mathrm{m}_{\mathrm{c}}\) denotes the ratio of the sum of SLOs of the batched requests to the number of concurrent models. Notably, \(\frac{\mathrm{L}_{\mathrm{t}_{\mathrm{i}}}(\mathrm{b},\mathrm{m}_{\mathrm{c}})}{(\sum_{\mathrm{j=1}}^{\mathrm{b}}\mathrm{r}_{\mathrm{j}})/\mathrm{m}_{\mathrm{c}}}\in(0,1]\) avoids request scheduling failure as much as possible while ensuring real-time performance. The scheduler must consider the memory capacity of the edge platform \(\mathrm{M}_{\mathrm{i}}\) and the SLO constraints \(\mathrm{SLO}_{\mathrm{i}}\) when batching and concurrently executing requests. Therefore, the optimization objective with the above requirements is formulated as

\[\mathrm{max.}\ \mathrm{U} \tag{4}\]
\[\mathrm{s.t.}\ \mathrm{m}_{\mathrm{i}}\leq\mathrm{M}_{\mathrm{i}}\]
\[\mathrm{L}_{\mathrm{i}}\leq\mathrm{SLO}_{\mathrm{i}}\]

where \(\mathrm{m}_{\mathrm{i}}\) is the actual memory used by the \(\mathrm{i}\)-th request, and \(\mathrm{L}_{\mathrm{i}}\) is the end-to-end latency of the \(\mathrm{i}\)-th request. Table II provides the main symbol definitions and corresponding descriptions.
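The following short sketch spells out how Eq. (2), the utility in Eq. (3), and the feasibility check in Eq. (4) can be evaluated for a candidate (batch size, concurrency) configuration. It is purely illustrative; the function and variable names mirror the symbols above rather than BCEdge internals, and the numbers in the example are arbitrary.

```
"""Illustrative evaluation of Eq. (2), Eq. (3), and the constraints of Eq. (4)."""
import math
from typing import Sequence


def end_to_end_latency(t_tx: float, t_ser: float, t_wait: float,
                       t_infer: float, t_out: float) -> float:
    """Eq. (2): sum of transmission, serialization, queuing, inference, and result times."""
    return t_tx + t_ser + t_wait + t_infer + t_out


def utility(throughput: float, latency: float, slos: Sequence[float],
            num_models: int) -> float:
    """Eq. (3): log of throughput over latency normalized by (sum of SLOs) / m_c."""
    norm = sum(slos) / num_models
    return math.log(throughput / (latency / norm))


def feasible(latency: float, slo: float, mem_used: float, mem_cap: float) -> bool:
    """Eq. (4): a configuration is admissible only if it fits in memory and meets the SLO."""
    return mem_used <= mem_cap and latency <= slo


# Example: 40 rps at 85 ms latency, a batch of 8 requests with 150 ms SLOs, 2 instances.
u = utility(throughput=40.0, latency=0.085, slos=[0.150] * 8, num_models=2)
print(round(u, 3))  # higher is better
```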
## IV BCEdge Design

### _System Overview_

The goal of BCEdge is to devise a scheduling framework for multi-model DNN inference serving, which aims to allocate a moderate batch size and number of concurrent models for each incoming inference request, while maintaining SLOs. To this end, the scheduling of DNN inference requests with SLO requirements must consider two aspects: batching and concurrent model instances. Unlike prior work that considers only a subset of these two dimensions [1, 12], we propose a scheduler that fully explores both dimensions to find the most effective scheduling point. Fig. 2 presents the overall architecture of our proposed scheduling framework, namely BCEdge. The framework is composed of a learning-based scheduler (Section IV-B), a dynamic batching module (Section IV-C), a concurrent instance module (Section IV-D), a performance analyzer (Section IV-E), and an SLO-aware interference predictor (Section IV-F). BCEdge first maintains a request queue for each DNN model. Requests for different DNN models generated by IoT devices are merged into the corresponding request queues. The performance profiler periodically collects information (e.g., utilization, SLO, system throughput, and end-to-end latency for a pair of batch size and number of concurrent models) for each DNN model. Meanwhile, the SLO-aware interference predictor analyzes the potential interference overhead caused by concurrent model instances, which guides the scheduler to make more robust decisions. The learning-based scheduler then finds the best batch size and number of concurrent models by leveraging the profiled information, and feeds the decisions back to the dynamic batching module and the concurrent instance module, respectively. The executor in the backend finally executes the DNN inference service with the chosen batch size and number of concurrent models on the edge platform.

Fig. 2: Overview of the scheduling framework for DNN service.

### _Learning-based Scheduler_

_Search Space Challenge_: The learning-based scheduler is the critical component of BCEdge. Note that the scheduling in BCEdge is more complex compared with prior work (e.g., TF-Serving [2], Clipper [1], and DeepRT [12]), since it involves batching as well as concurrent inference. The challenge of the two-dimensional scheduling space (batch size and number of concurrent models) for BCEdge is that the scheduling decision is affected by several variables that depend on each other. Specifically, the best batch size and number of concurrent models depend on the computing requirements of different DNN models, the properties of the input, the SLO constraints, as well as the available computing resources of edge platforms. Therefore, the optimal trade-off configuration sits at a sweet spot in the search space built upon the two dimensions, which creates a huge search space. To this end, we tailor a learning-based scheduling algorithm for BCEdge. Compared with traditional heuristic methods, deep reinforcement learning (DRL) has great advantages in handling complex policy decisions and can be applied to high-dimensional action spaces. Thus, we design a novel DRL-based scheduler for efficient DNN inference serving with batching and concurrent inference, in order to trade off throughput and latency. Since the batch size and the number of concurrent models are discrete, we present a learnable online scheduling algorithm with maximum entropy, based on the discrete soft actor-critic [36] framework, which, unlike traditional DRL approaches, maximizes the reward while also maximizing the entropy of the visited states. The introduction of entropy gives our proposed scheduling algorithm the following benefits:

* It enables the agent in DRL to learn more near-optimal actions and accelerate training (i.e., the output is a policy distribution), compared with deterministic policy-based DRL (i.e., the output is a single action).
* It gives the agent a stronger ability to explore the environment and avoid falling into local optima.
* It makes the system more robust.

Now we focus on how the scheduler in BCEdge finds the batch size and number of concurrent models for each inference request that optimize the trade-off in Eq. (3).
We describe the details as follows:

#### III-B1 Markov Decision Process Formulation

Firstly, we model the batching and concurrent scheduling of inference requests as a Markov decision process (MDP). It can be denoted as a five-tuple (\(\mathcal{S}\), \(\mathcal{A}\), \(\pi\), \(p\), \(r\)):

* _State_: \(\mathcal{S}\) is the discrete state space. At each scheduling time slot \(t_{i}\), the agent in DRL constructs a state \(s_{t}(s_{t}\in\mathcal{S})\) to periodically collect request information and the resource utilization of edge platforms. \(s_{t}\) consists of five parts: (I) The DNN model type \(m_{t}^{i}\). (II) The input type \(d_{t}^{i}\) and input shape \(d_{s}^{i}\). (III) The SLO of each request. (IV) The available computing resources of the edge platform \(m_{i}\). (V) The information of the request queue \(seq_{b}\).
* _Action_: \(\mathcal{A}\) is the discrete action space. The action of the agent in DRL is to find the best batch size \(b\) and number of concurrent models \(m_{c}\). The action \(a_{t}(a_{t}\in\mathcal{A})\) at scheduling time slot \(t_{i}\) can be denoted as \(a_{t}=(b,m_{c})\). For instance, if a DNN model has \(\mathcal{M}\) optional batch sizes and \(\mathcal{N}\) optional numbers of concurrent models, the size of the discrete action space \(\mathcal{A}\) is \(\mathcal{M}\times\mathcal{N}\).
* _Policy_: The policy \(\pi\left(a_{t}\mid s_{t}\right)\) is the function by which the agent decides the next action \(a_{t}\) according to the environment state \(s_{t}\) at timestamp \(t\). In the maximum entropy-based DRL algorithm, we maximize both the reward and the entropy of the visited states. The optimal policy \(\pi^{*}\) is denoted as follows: \[\pi^{*}=\mathrm{argmax}_{\pi}\sum_{t=0}^{T}E_{(s_{t},a_{t})\sim\rho_{\pi}}[\gamma^{t}r(s_{t},a_{t})+\alpha\mathcal{H}(\pi(\cdot\mid s_{t}))] \tag{5}\] where \(\gamma\in[0,1]\) is the discount factor, \(\rho_{\pi}\) represents the distribution of trajectories generated by the policy \(\pi\), and \(\alpha\) is a temperature parameter that expresses the relative importance of reward and entropy. \(\mathcal{H}\left(\pi\left(\cdot\mid s_{t}\right)\right)=-\log\pi\left(\cdot\mid s_{t}\right)\) is the entropy at state \(s_{t}\).
* _State transition probability_: \(p\left(s_{t}^{\prime}\mid s_{t},a_{t}\right)\) is the state transition probability, indicating the probability of transitioning to the next state \(s_{t}^{\prime}\) after taking an action \(a_{t}\) in the current state \(s_{t}\) at timestamp \(t\), satisfying \(\sum_{s^{\prime}\in\mathcal{S}}p\left(s_{t}^{\prime}\mid s_{t},a_{t}\right)=1\).
* _Reward_: \(r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is the reward function. The agent in DRL aims to maximize the accumulated expected reward \(\mathbb{E}\left[\sum_{t=0}^{T}\gamma^{t}r_{t}\right]\), where \(r_{t}\) is the instant reward when the agent selects the batch size and number of concurrent models at each scheduling time slot \(t_{i}\). Since our objective is to maximize the trade-off between throughput and latency, we adopt the objective in Eq. (3) as the reward function: \[r_{t}=U \tag{6}\]

In this way, the traditional scheduling problem is converted into the maximization of reward in DRL. We further utilize an efficient learning-based algorithm to reduce the complexity.

#### III-B2 Maximum Entropy DRL-based scheduling algorithm

We leverage soft policy iteration [37] to maximize both reward and entropy.
To be more specific, soft policy iteration consists of policy evaluation and policy improvement, which alternate during training.

* _Soft Q-Function (Critic Network)_: We first compute the soft Q-value \(Q(s_{t},a_{t})\) in the policy evaluation step. The soft Q-function is denoted as follows [37]: \[\mathcal{T}^{\pi}Q\left(\mathbf{s}_{t},\mathbf{a}_{t}\right)\triangleq r\left(\mathbf{s}_{t},\mathbf{a}_{t}\right)+\gamma\mathbb{E}_{\mathbf{s}_{t+1}\sim p}\left[V\left(\mathbf{s}_{t+1}\right)\right] \tag{7}\] where \(\mathcal{T}^{\pi}\) is the modified Bellman backup operator. The soft state value function \(V(s_{t})\) with the policy \(\pi\) in the discrete state space is: \[V\left(s_{t}\right):=\pi\left(s_{t}\right)^{T}\left[Q\left(s_{t}\right)-\alpha\log\left(\pi\left(s_{t}\right)\right)\right] \tag{8}\] We train the soft Q-value in Eq. (7) by minimizing the soft Bellman residual, and the loss function is: \[J_{Q}(\theta)=E_{(s_{t},a_{t})\sim D}\Big{[}\frac{1}{2}\big{(}Q_{\theta}(s_{t},a_{t})-(r(s_{t},a_{t})+\gamma E_{s_{t+1}\sim p(s_{t},a_{t})}[V_{\theta}(s_{t+1})])\big{)}^{2}\Big{]} \tag{9}\]
* _Policy (Actor Network)_: The policy improvement step is used to update the policy network, which is denoted as follows: \[\pi_{\text{new}}\ =\arg\min_{\pi^{\prime}\in\Pi}\mathrm{D_{KL}}\left(\pi^{\prime}\left(\cdot\mid\mathbf{s}_{t}\right)\Big{\|}\frac{\exp\left(\frac{1}{\alpha}Q^{\pi_{\text{old}}}\left(\mathbf{s}_{t},\cdot\right)\right)}{Z^{\pi_{\text{old}}}\left(\mathbf{s}_{t}\right)}\right) \tag{10}\] where \(\mathrm{D_{KL}}\) is the Kullback-Leibler (KL) divergence, and \(Z^{\pi_{\text{old}}}\) is the partition function. We minimize the KL divergence in Eq. (11) to update the parameters of the policy network: \[J_{\pi}(\phi)=E_{s_{t}\sim D}\left[\pi_{t}\left(s_{t}\right)^{T}\left[\alpha\log\left(\pi_{\phi}\left(s_{t}\right)\right)-Q_{\theta}\left(s_{t}\right)\right]\right] \tag{11}\]
* _Temperature parameter_: We automatically adjust the temperature parameter \(\alpha\) in Eq. (5), according to [37]: \[J(\alpha)=E_{a_{t}\sim\pi_{t}}\left[-\alpha\left(\log\pi_{t}\left(a_{t}\mid s_{t}\right)+\bar{H}\right)\right] \tag{12}\] where \(\bar{H}\) is a constant equal to the target-entropy hyperparameter.

Algorithm 1 provides the overall procedure of scheduling concurrent model instances with dynamic batching. The scheduler first receives the information of the DNN model and the resource utilization for each inference request. Before each scheduling time slot, it initializes all networks, including the soft Q-networks, target soft Q-networks, policy network, and temperature network, with their corresponding parameters. Note that we use two soft Q-networks and take the minimum of the two to alleviate overestimation of the soft Q-value. For each scheduling time slot, the scheduler first checks each request queue. If the request queue is empty, it pushes incoming requests into the request queue (_line 7_). The scheduler then takes an action (i.e., determines the best batch size and number of concurrent models for each request) based on Eq. (5), and the agent in DRL obtains an instant reward equal to the utility (_line 9_). Meanwhile, the state changes from \(s_{t}\) to \(s_{t+1}\), and the current state, action, reward, and next state are stored as a transition in the replay buffer \(\mathcal{D}\). The scheduler pulls the request sequence from the batching slot (Section IV-C) when the batch requests in the current request sequence are executed (_line 12_).
The scheduler finally updates the parameters of all networks and repeats the above process (_lines 14\(\sim\)18_) until the end of the iteration.

```
Input : The information (\(m_{t}^{i}\), \(d_{t}^{i}\), \(d_{s}^{i}\), \(SLO_{i}\)) of request \(r_{i}\), resource utilization \(u\)
Output : batch size \(b\), number of concurrent models \(m_{c}\)
1  Initialize all neural networks \(Q_{\theta_{1}},Q_{\theta_{2}},\hat{Q}_{\theta_{1}},\hat{Q}_{\theta_{2}},\pi_{\phi},T_{\alpha}\);
2  Randomly initialize network parameters \(\theta_{1},\theta_{2},\phi,\alpha\);
3  Initialize an empty replay buffer \(\mathcal{D}\leftarrow\emptyset\);
4  foreach scheduling time slot \(t_{i}\) do
5    foreach environment step do
6      if request queue \(seq_{b}==\emptyset\) then
7        Push requests \(r_{i}\) to request queue \(seq_{b}\);
8
9      end if
10     Take an action \(a_{t}(b,m_{c})\) based on policy \(\pi\) and get reward \(r_{t}\left(a_{t}\mid s_{t}\right)\) using Eq. (6);
11     \(s_{t+1}\sim p\left(s_{t+1}\mid s_{t},a_{t}\right)\);
12     \(\mathcal{D}\leftarrow\mathcal{D}\cup\left\{\left(s_{t},a_{t},r\left(s_{t},a_{t}\right),s_{t+1}\right)\right\}\);
13     Pull current request queue \(seq_{b}\) from batching slot \(s_{i}\);
14
15   end for
16   foreach gradient step do
17     Update critic and target networks \(\theta_{i}\) for \(i\in\{1,2\}\) using Eq. (9);
18     Update actor network \(\phi\) using Eq. (11);
19     Update temperature network \(\alpha\) using Eq. (12);
20
21   end for
22
23 end for
```

**Algorithm 1** Learning-based scheduling algorithm
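To make the gradient-step updates of Algorithm 1 more concrete, the sketch below shows a condensed PyTorch implementation of the discrete soft actor-critic losses in Eqs. (8), (9), (11), and (12). It is an illustrative sketch, not the BCEdge code: the state encoding, replay buffer, and environment loop are stubbed out; the action dimension stands for the \(\mathcal{M}\times\mathcal{N}\) grid of (batch size, instance count) choices; and the 128/64-unit ReLU networks, Adam optimizer, and learning rate of \(10^{-3}\) follow the training details reported in Section V.

```
"""Condensed PyTorch sketch of the discrete soft actor-critic updates (illustrative only)."""
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS, GAMMA, LR = 16, 32, 0.99, 1e-3   # N_ACTIONS = |batch sizes| x |instance counts|


def mlp(out_dim: int) -> nn.Sequential:
    return nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                         nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, out_dim))


actor = mlp(N_ACTIONS)                                  # policy pi_phi, Eq. (11)
q1, q2 = mlp(N_ACTIONS), mlp(N_ACTIONS)                 # soft Q-networks, Eq. (9)
q1_t, q2_t = mlp(N_ACTIONS), mlp(N_ACTIONS)             # target Q-networks
q1_t.load_state_dict(q1.state_dict()); q2_t.load_state_dict(q2.state_dict())
log_alpha = torch.zeros(1, requires_grad=True)          # temperature alpha, Eq. (12)
target_entropy = 0.98 * torch.log(torch.tensor(float(N_ACTIONS)))  # illustrative target
opt_actor = torch.optim.Adam(actor.parameters(), lr=LR)
opt_q = torch.optim.Adam(list(q1.parameters()) + list(q2.parameters()), lr=LR)
opt_alpha = torch.optim.Adam([log_alpha], lr=LR)


def update(s, a, r, s2):
    """One gradient step on a batch of transitions (s, a, r, s2)."""
    alpha = log_alpha.exp()
    # --- critic: soft Bellman residual, Eqs. (7)-(9) ---
    with torch.no_grad():
        probs2 = F.softmax(actor(s2), dim=-1)
        logp2 = torch.log(probs2 + 1e-8)
        min_q2 = torch.min(q1_t(s2), q2_t(s2))
        v2 = (probs2 * (min_q2 - alpha * logp2)).sum(dim=-1)      # Eq. (8)
        target = r + GAMMA * v2
    q1_a = q1(s).gather(1, a.unsqueeze(1)).squeeze(1)
    q2_a = q2(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss_q = F.mse_loss(q1_a, target) + F.mse_loss(q2_a, target)  # Eq. (9)
    opt_q.zero_grad(); loss_q.backward(); opt_q.step()
    # --- actor: KL-style policy improvement, Eq. (11) ---
    probs = F.softmax(actor(s), dim=-1)
    logp = torch.log(probs + 1e-8)
    min_q = torch.min(q1(s), q2(s)).detach()
    loss_pi = (probs * (alpha.detach() * logp - min_q)).sum(dim=-1).mean()
    opt_actor.zero_grad(); loss_pi.backward(); opt_actor.step()
    # --- temperature: entropy constraint, Eq. (12) ---
    with torch.no_grad():
        entropy = -(probs * logp).sum(dim=-1)
    loss_alpha = (log_alpha.exp() * (entropy - target_entropy)).mean()
    opt_alpha.zero_grad(); loss_alpha.backward(); opt_alpha.step()
    return loss_q.item(), loss_pi.item(), loss_alpha.item()


# Smoke test with random transitions (a real system would sample from the replay buffer D).
s = torch.randn(8, STATE_DIM); a = torch.randint(0, N_ACTIONS, (8,))
r = torch.randn(8); s2 = torch.randn(8, STATE_DIM)
print(update(s, a, r, s2))
```

A full implementation would additionally refresh the target networks (e.g., with soft updates) and sample mini-batches from the replay buffer \(\mathcal{D}\), as in Algorithm 1.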
### _Dynamic Batching_

BCEdge enables batched inference by allowing individual inference requests to specify a batch of inputs. Inference for a batch of inputs is performed at the same time, which is especially important for GPUs since it can greatly increase inference throughput. As illustrated in Fig. 3, dynamic batching maintains a separate request queue for requests of different models, and the batch size of each queue depends on the learning-based scheduler in BCEdge. The dynamically created batches are distributed to all model instances configured for the model, and the dynamic batching module then concurrently executes multiple batched request queues for each model. To be more specific, dynamic batching first adds each request to the corresponding request queue based on the order of arrival. Meanwhile, it sorts the requests in each queue by SLO-based priority: the shorter the SLO, the higher the priority. Dynamic batching then merges multiple requests into a single large request, and assigns the batched requests in the queue to multiple slots of the corresponding models at runtime. Note that batched requests are scheduled in the order of arrival if they have the same priority.

Fig. 3: Dynamic batching module.

### _Model Instance Concurrency_

BCEdge enables multiple models and multiple instances of the same model to execute in parallel on a single GPU or multiple GPUs, and DNN models executed on the CPU are handled similarly by BCEdge. Fig. 4 shows the pipeline of executing model instances with batched requests in parallel for three DNN models, where each model is assigned two instances. We assume that BCEdge is not currently processing any requests. When the first three requests arrive at the same time, each instance of the three models processes a corresponding request. BCEdge then immediately dispatches all of them to the GPU, and the GPU's hardware scheduler begins working on the three inferences in parallel. Note that the first three inference requests are immediately executed in parallel, and the last three inference requests must wait until one of the first three requests completes before beginning. In particular, if multiple inference requests for the same model arrive at the same time, BCEdge serializes their execution by scheduling only one at a time.

### _Performance Profiler_

The profiler in BCEdge periodically collects performance information online, including the current utilization of the CPU, GPU, and memory, as well as the system throughput and end-to-end latency. Specifically, the profiler records the performance information of each batched request with different input shapes on the edge platform, and feeds the information back to the scheduler. The scheduler learns from this information to schedule the next batched request, i.e., determining the best batch size and number of concurrent models to maximize the utility. Meanwhile, BCEdge can avoid system overload and improve resource utilization with the help of the performance profiler, which reflects the potential advantage of BCEdge for dynamic resource management and allocation.

### _SLO-Aware Interference Predictor_

Concurrent inference of multiple models, or of multiple instances of a single model, can process more requests simultaneously to improve throughput. However, an important challenge is the performance interference caused by concurrent inference of multiple models on a single GPU. As shown in Fig. 1, we observed that concurrent inference significantly increases latency compared to executing a single model independently, as multiple models compete for the shared resources on the edge platform, especially the memory. In such a case, model interference may cause the scheduler to make incorrect scheduling decisions and may violate the SLO. A key challenge in mitigating interference is to predict latency increases when multiple inferences are executed concurrently on the same GPU. To confine the interference effect, we utilize a lightweight two-layer neural network (NN) with negligible overhead as the predictive model, which directly learns the interference latency of concurrently executing multiple inferences on a single GPU. As shown in Fig. 5, the simple yet effective NN-based interference-prediction model uses the currently available computing resources (i.e., memory, CPU, and GPU) and the number of concurrent models chosen by the scheduler as the input of the neural network. We then compare the estimated latency output by the neural network with the actual latency based on performance feedback provided by the performance profiler, and the neural network is trained by minimizing the deviation between the real and estimated values. The trained neural network aims to improve the stability of the scheduler and reduce the SLO violation rate.
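A minimal sketch of such a two-layer predictor is shown below. The layer width, the mean-squared-error loss (used here as a stand-in for the deviation minimization described above), and the synthetic profiling data are illustrative assumptions, not details taken from the BCEdge implementation.

```
"""Sketch of the lightweight two-layer interference predictor of Section IV-F (illustrative)."""
import torch
import torch.nn as nn
import torch.nn.functional as F


class InterferencePredictor(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        # 4 features: free memory, CPU utilization, GPU utilization, number of concurrent models
        self.net = nn.Sequential(nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)          # estimated latency (e.g., in ms)


def train_step(model, opt, features, measured_latency):
    """One update: compare the prediction with profiler-measured latency and regress."""
    pred = model(features)
    loss = F.mse_loss(pred, measured_latency)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()


# Toy usage with random samples standing in for recorded interference profiles.
model = InterferencePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 4)                            # normalized resource features
y = 10 + 50 * x[:, 3] + torch.randn(64)          # synthetic: latency grows with concurrency
for _ in range(5):
    train_step(model, opt, x, y)
```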
## V Evaluation

### _Experiment setup_

_BCEdge Prototype:_ We implement the prototype of BCEdge as a runtime backend for Triton [3], an inference serving system from NVIDIA. Table III provides a detailed description of the evaluated inference system and the GPU specifications used. The table also provides the versions of the operating system, CUDA, runtime, and machine learning framework. As Fig. 6 shows, we use two IMX cameras and a microphone as IoT devices. The request arrival rate is set to 30 requests per second (rps) and follows a Poisson random distribution. Unless otherwise indicated, all evaluations are reported on an NVIDIA Xavier NX edge GPU.

_DNN Models:_ We use six DNN models from three popular DNN families to process image and speech data. Specifically, we use YOLO-v5 [9] for the object detection task; MobileNet-v3 [38], ResNet-18 [39], EfficientNet-B0 [40], and Inception-v3 [41] for image classification tasks; and TinyBERT [42] for the speech recognition task. We use TensorRT [10] to reduce the memory footprint for better batching and concurrent execution. Due to the limited computing power of the Xavier NX, we downsample the input images to 224\(\times\)224 resolution. All images are colored frames with 3 RGB channels. The corresponding SLO latency of each model is listed in Table IV.

Fig. 4: Concurrent instance module.

Fig. 5: SLO-aware interference predictor based on neural network.

Fig. 6: BCEdge prototype implemented on NVIDIA Xavier NX edge platform. We use two IMX cameras and a microphone as IoT devices.

_Training Details:_ Our proposed Algorithm 1 is based on the SAC [36] framework. All networks are trained using the Adam optimizer with a learning rate of \(10^{-3}\). Each network is a two-layer ReLU neural network with 128 and 64 hidden units, respectively, and the buffer size is fixed to \(10^{6}\). We trained it offline on an off-the-edge device using four NVIDIA GeForce GTX 3080 GPUs with a mini-batch size of 512 for 500 epochs. We then deploy the trained algorithm online to the edge platform.

### _Baselines_

#### IV-B1 **Edge inference service framework**

We compare BCEdge with two SOTA edge inference service frameworks:

* _DeepRT_ [12]: A soft real-time scheduler that adopts dynamic batching with an earliest-deadline-first (EDF) scheduling algorithm to execute batched requests.
* _Triton with Actor-Critic (TAC)_ [3]: Since Triton only supports manually setting a fixed batch size and number of concurrent models, we combine Triton with an Actor-Critic scheduler without entropy to compare with BCEdge.

#### IV-B2 **Various scheduling algorithms**

We ported traditional heuristics and other reinforcement learning methods into the BCEdge inference service framework to compare with our scheduling algorithm, including:

* _Genetic Algorithm (GA)_ [43]: As a search algorithm for optimization problems, the main idea of GA is "survival of the fittest" from the theory of biological evolution. We use our proposed utility as the fitness function in GA.
* _Proximal Policy Optimization (PPO)_ [44]: PPO is an on-policy (the optimization policy and behavior policy of the agent in the learning process are the same policy) DRL algorithm based on the Actor-Critic architecture.
* _Double Deep Q Network (DDQN)_ [45]: As an off-policy (the optimization policy and behavior policy of the agent in the learning process are different policies) DRL algorithm, DDQN eliminates overestimation by decoupling the selection of actions from the calculation of the target Q-value.

### _Trade-off Performance_

#### IV-C1 **Comparison of Edge Inference Frameworks**

We first evaluate the performance of BCEdge in terms of the trade-off between throughput and latency. Fig. 7 reports the normalized utility for the six DNN models in Table IV. Our proposed BCEdge consistently outperforms TAC and DeepRT for all models. The lower utility of DeepRT is caused by the lack of concurrent inference. Although TAC leverages a learning-based approach for batching and concurrent scheduling, its agent lacks the entropy term that enables comprehensive exploration of the environment.
In contrast, BCEdge introduces entropy into our learning-based scheduling algorithm, so that the agent in DRL has stronger exploration ability and obtains higher utility. As a result, BCEdge provides a better trade-off between throughput and latency through efficient scheduling as well as SLO-aware interference prediction. To be more specific, BCEdge offers higher utility than both DeepRT and TAC, by an average of 37% and 25%, respectively.

#### IV-C2 **Comparison of Convergence Performance**

We compare the convergence of three learning-based (PPO, DDQN, and Ours) and heuristic-based (GA) scheduling algorithms in the BCEdge DNN inference service framework. As shown in Fig. 10, our proposed scheduling algorithm converges 1.8\(\times\)\(\sim\)3.7\(\times\) faster than the baseline methods. This is because the entropy enables the agent in DRL to learn more near-optimal actions. That is, there may be multiple actions that are optimal in some states, and our proposed scheduling algorithm gives these actions the same probability of being selected. Thus, the maximum entropy can effectively speed up the learning process. Note that the genetic algorithm (GA) suffers from premature convergence; that is, GA has limited ability to explore the environment and therefore inevitably converges to a locally optimal solution. Moreover, GA involves a large number of calculations, such as crossover and mutation, resulting in slower convergence.

### _Evaluation of Scalability_

To evaluate scalability beyond our platform in Table III, we additionally select two edge platforms with GPUs (NVIDIA Jetson Nano and TX2). Table V provides the specific parameters of the two heterogeneous edge platforms compared with the Xavier NX. We evaluate the scalability of BCEdge using the object detection (YOLO-v5), image classification (ResNet-18), and speech recognition (TinyBERT) DNN models. Fig. 11 reports the utility of BCEdge on the Jetson Nano/TX2 edge platforms compared with the baselines. We can see that BCEdge outperforms the baselines on both heterogeneous edge platforms. Since the image classification DNN model requires the least computing resources, ResNet-18 has more batch sizes and numbers of concurrent models that can be configured than YOLO-v5 and TinyBERT; BCEdge therefore achieves a better trade-off for ResNet-18. Similar results can be seen in Fig. 7. Even for the Jetson Nano with the weakest computing power, the utility of BCEdge increases by 30% and 19% compared with DeepRT and TAC, respectively. Since the Jetson TX2 has more computing resources and can be configured with larger batch sizes and more concurrent models, BCEdge can achieve higher performance, and its utility is 39% and 27% higher than DeepRT and TAC, respectively. Fig. 12 shows the throughput and latency of the three edge platforms corresponding to the utility in Fig. 11. The left y-axis in Fig. 12 represents peak throughput, and the right y-axis represents the corresponding average latency. As observed in Fig. 12, BCEdge also shows more significant performance improvements for DNN models that require fewer computing resources and for edge platforms with more abundant computing resources. Even for the weakest Jetson Nano, BCEdge can fully utilize computing resources for different DNN models to optimize the trade-off between throughput and latency. In summary, BCEdge exhibits flexible scalability and can be adapted to heterogeneous resource-constrained edge platforms.
### _Evaluation of Interference Model_

In this experiment, we investigate the effect of the proposed interference prediction model in BCEdge on the SLO violation rate under different request rates (requests per second, rps). The interference prediction model records a total of 2000 inference interference samples with a one-second period for each DNN model. Among the 2000 collected samples, we randomly select 1600 execution samples as training data and 400 samples for validation. Fig. 13 presents the cumulative distribution of the prediction error of our NN-based interference model, compared with the linear regression model [16, 46]. The proposed model can predict up to 90% of cases within a 2.69% error rate, and up to 95% if a 3.25% error is allowed, which halves the error rate compared to the linear regression model. Since the model interference we observed in Fig. 1 is not a simple linear relationship, the linear regression model has a higher prediction error. In contrast, our proposed NN-based interference model considers the resource utilization of edge platforms and the actual latency of DNN models, and can therefore accurately predict the interference latency.

Fig. 9: Comparison of average latency with six DNN models. The scheduling duration of each model is 3,000 seconds.

Fig. 10: Comparison of training loss for DRL-based (PPO, DDQN and Ours) and heuristic (GA) scheduling algorithms.

Fig. 11: The utility of heterogeneous edge platforms. The more computing resources of the edge platform, the higher the utility.

Fig. 14 shows the cumulative distribution of the SLO violation rate at 30 rps for BCEdge with/without the interference prediction model. We analyzed the SLO violation rate for a scheduling duration of 3000 seconds in Fig. 8 and Fig. 9. The proposed model reduces the SLO violation rate of BCEdge from 9.2% to 4.1% compared to BCEdge without the interference prediction model. This illustrates that the interference prediction model can improve the robustness of BCEdge and effectively reduce the SLO violation rate. As shown in Fig. 15, we measure the SLO violation rate by gradually increasing the requests per second (rps). It can be seen that BCEdge has the lowest SLO violation rate for all rps, which is 53% and 25% lower than DeepRT and TAC on average, respectively. The SLO violation rate of BCEdge does not exceed 5% even at 40 rps. Since the soft real-time scheduler in DeepRT [12] is only suitable for DNN models without strict SLO constraints, it has the highest SLO violation rate under strict SLO constraints. In addition, TAC does not consider the impact of model interference; its SLO violation rate is therefore higher than that of BCEdge.

### _Scheduling Overhead_

To measure the runtime overhead imposed by the scheduler, we compare BCEdge with DeepRT and TAC in terms of the average scheduling latency. Fig. 16 depicts these scheduling overheads. As observed, BCEdge has a low scheduling overhead because its scheduler introduces maximum entropy, which learns more near-optimal actions to speed up learning and thereby reduce overhead. Specifically, the average scheduling overhead of BCEdge is 26% and 43% lower than that of DeepRT and TAC, respectively. This demonstrates that BCEdge can efficiently schedule batched and concurrent requests with extremely low overhead. Note that we did not evaluate the overhead of the performance profiler and the interference prediction model, as they are negligible.
Fig. 12: The throughput and average latency of heterogeneous edge platforms. The more computing resources of the edge platform, the higher the throughput and the lower the average latency.

Fig. 13: Cumulative distribution of relative error rate. Our proposed NN-based model can predict up to 95% of cases with less than 3.25% error rate.

Fig. 14: Cumulative distribution of SLO violation rate with 30 rps. Our proposed interference prediction model can achieve an SLO violation rate within 4%, compared to up to 9.2% without the interference prediction model.

Fig. 15: Comparison of service level objective (SLO) violation rate with six real-world DNN model benchmarks under different rps. Benefiting from the interference prediction model, BCEdge has the lowest SLO violation rate.

## VI Conclusion

In this work, we present BCEdge, an adaptive, SLO-aware, and multi-tenant DNN-serving scheduling framework. BCEdge enables batching and concurrent inference for edge intelligent applications on edge platforms to achieve both high throughput and low latency. The key to BCEdge is a maximum entropy deep reinforcement learning-based scheduler, which automatically co-optimizes the batch size and the number of concurrent models. Compared to the state-of-the-art solutions, BCEdge achieves up to 37.6% average utility improvement while satisfying SLOs.
2304.01630
Minimal $L^2$ integrals for the Hardy spaces and the Bergman spaces
In this article, we consider the minimal $L^2$ integrals for the Hardy spaces and the Bergman spaces, and we present some relations between them, which can be regarded as the solutions of the finite points versions of Saitoh's conjecture for conjugate Hardy kernels. As applications, we give optimal $L^2$ extension theorems for the Hardy spaces, and characterizations for the holding of the equality in the optimal $L^2$ extension theorems.
Qi'an Guan, Zheng Yuan
2023-04-04T08:38:55Z
http://arxiv.org/abs/2304.01630v1
# Minimal \(L^{2}\) integrals for the Hardy spaces and the Bergman spaces ###### Abstract. In this article, we consider the minimal \(L^{2}\) integrals for the Hardy spaces and the Bergman spaces, and we present some relations between them, which can be regarded as the solutions of the finite points versions of Saitoh's conjecture for conjugate Hardy kernels. As applications, we give optimal \(L^{2}\) extension theorems for the Hardy spaces, and characterizations for the holding of the equality in the optimal \(L^{2}\) extension theorems. Key words and phrases:Hardy space, Bergman space, minimal \(L^{2}\) integral, product manifold, concavity property 2020 Mathematics Subject Classification: 30H10 30H20 31C12 30E20 ## 1. Introduction Let \(D\) be a planar regular region with finite boundary components, which are analytic Jordan curves (see [18, 22]). **Definition 1.1** (see [18, 12]).: _We call a holomorphic function \(f\) on \(D\) belongs to Hardy space \(H^{2}(D)\), if \(|f(z)|^{2}\) have harmonic majorants \(U(z)\), i.e., \(|f(z)|^{2}\leq U(z)\) on \(D\)._ Each function \(f(z)\in H^{2}(D)\) has Fatou's nontangential boundary value a.e. on \(\partial D\) belonging to \(L^{2}(\partial D)\) (see [1]), and we also denote the nontangential boundary value by \(f\) for simplicity. The conjugate Hardy \(H^{2}\) kernel \(\hat{K}_{t}(z,\overline{w})\) is defined as follow: \[f(w)=\frac{1}{2\pi}\int_{\partial D}f(z)\overline{\hat{K}_{t}(z,\overline{w}) }\left(\frac{\partial G_{D}(z,t)}{\partial v_{z}}\right)^{-1}|dz|\] holds for any \(f\in H^{2}(D)\), where \(G_{D}(z,t)\) is the Green function on \(D\), and \(\partial/\partial v_{z}\) denotes the derivative along the outer normal unit vector \(v_{z}\). Fixed \(t\in D\), \(\frac{\partial G_{D}(z,t)}{\partial v_{z}}\) is positive and continuous on \(\partial D\) because of the analyticity of the boundary (see [18], [5]). When \(t=w=z\), \(\hat{K}(z)\) denotes \(\hat{K}_{t}(z,\overline{w})\) for simplicity. In [5], Guan proved the following theorem, which was conjectured by Saitoh (see [18]): **Theorem 1.2** ([5]).: _If \(D\) is not simple connected, then \(\hat{K}(z)>\pi B(z)\), where \(B(z)\) is the Bergman kernel on \(D\)._ By discussing the weighted kernel functions, we [12] gave a weighted version of Saitoh's conjecture and a weighted version of Saitoh's conjecture for higher derivatives. In [13], we considered two classes of weighted Hardy spaces on products of planar domains. Let us recall their definitions. Let \(D_{j}\) be a planar region bounded by finite analytic Jordan curves for any \(1\leq j\leq n\). Let \(M=\prod_{1\leq j\leq n}D_{j}\) be a bounded domain in \(\mathbb{C}^{n}\). Let \(M_{j}=\prod_{1\leq l\leq n,l\neq j}D_{l}\), then \(M=D_{j}\times M_{j}\) and \(\partial M=\cup_{1\leq j\leq n}\partial D_{j}\times\overline{M_{j}}\). Let \(\rho\) be a Lebesgue measurable function on \(\partial M\) such that \(\inf_{\partial M}\rho>0\). Now, we recall the Hardy space \(H^{2}_{\rho}(M,\partial M)\) (see [13]). Note that \(\partial M=\cup_{j=1}^{n}\partial D_{j}\times\overline{M_{j}}\). Let \(d\mu_{j}\) be the Lebesgue measure on \(M_{j}\) for any \(1\leq j\leq n\) and \(d\mu\) is a measure on \(\partial M\) defined by \[\int_{\partial M}hd\mu=\sum_{1\leq j\leq n}\frac{1}{2\pi}\int_{M_{j}}\int_{ \partial D_{j}}h(w_{j},\hat{w}_{j})|dw_{j}|d\mu_{j}(\hat{w}_{j})\] for any \(h\in L^{1}(\partial M)\), where \(\hat{w}_{j}:=(w_{1},\ldots,w_{j-1},w_{j+1},\ldots,w_{n})\in M_{j}\). 
For any \(f\in H^{2}(D_{j})\), \(\gamma_{j}(f)\) denotes the nontangential boundary value of \(f\) a.e. on \(\partial D_{j}\). **Definition 1.3** ([13]).: _Let \(f\in L^{2}(\partial M,\rho d\mu)\). We call \(f\in H^{2}_{\rho}(M,\partial M)\) if there exists \(f^{*}\in\mathcal{O}(M)\) such that for any \(1\leq j\leq n\), \(f^{*}(\cdot,\hat{w}_{j})\in H^{2}(D_{j})\) for any \(\hat{w}_{j}\in M_{j}\) and \(f=\gamma_{j}(f^{*})\) a.e. on \(\partial D_{j}\times M_{j}\)._ \(H^{2}_{\rho}(M,\partial M)\) is a Hilbert space (see [13]) equipped with the norm \(\ll\cdot,\cdot\gg_{\partial M,\rho}\), which is defined by \[\ll f,g\gg_{\partial M,\rho}:=\int_{\partial M}f\overline{g}\rho d\mu.\] Denote that \(P_{\partial M}(f)=f^{*}\) for any \(f\in H^{2}_{\rho}(M,\partial M)\). \(P_{\partial M}\) is a linear injective map from \(H^{2}(M,\partial D_{j}\times M_{j})\) to \(\mathcal{O}(M)\) (see [13]). When \(n=1\), \(P_{\partial M}=\gamma_{1}^{-1}\), thus \(H^{2}_{\rho}(M,\partial M)\) can be seen as a weighted generalization on product spaces of \(H^{2}(D)\). Denote that \(S:=\prod_{1\leq j\leq n}\partial D_{j}\). Let \(\lambda\) be a Lebesgue measurable function on \(S\) such that \(\inf_{S}\lambda>0\). Let us recall another class of Hardy space \(H^{2}_{\lambda}(M,S)\). **Definition 1.4** ([13]).: _Let \(f\in L^{2}(S,\lambda d\sigma)\), where \(d\sigma:=\frac{1}{(2\pi)^{n}}|dw_{1}|\ldots|dw_{n}|\). We call \(f\in H^{2}_{\lambda}(M,S)\) if there exists \(\{f_{m}\}_{m\in\mathbb{Z}_{\geq 0}}\subset\mathcal{O}(M)\cap C(\overline{M}) \cap L^{2}(S,\lambda d\sigma)\) such that \(\lim_{m\to+\infty}\|f_{m}-f\|^{2}_{S,\lambda}=0\), where \(\|g\|_{S,\lambda}:=\left(\int_{S}|g|^{2}\lambda d\sigma\right)^{\frac{1}{2}}\) for any \(g\in L^{2}(S,\lambda d\sigma)\)._ Denote that \[\ll f,g\gg_{S,\lambda}=\frac{1}{(2\pi)^{n}}\int_{S}f\overline{g}\lambda|dw_{1 }|\ldots|dw_{n}|\] for any \(f,g\in L^{2}(S,\lambda d\sigma)\), then \(H^{2}_{\lambda}(M,S)\) is a Hilbert space equipped with the inner product \(\ll\cdot,\cdot\gg_{S,\lambda}\) (see [13]). There exists a linear injective map \(P_{S}:H^{2}_{\lambda}(M,S)\to\mathcal{O}(M)\) satisfying that \(P_{S}(f)=f\) for any \(f\in\mathcal{O}(M)\cap C(\overline{M})\cap L^{2}(S,\lambda d\sigma)\) (see [13]). When \(n=1\), \(P_{S}=\gamma_{1}^{-1}\), thus \(H^{2}_{\lambda}(M,S)\) can also be seen as a weighted generalization on product spaces of \(H^{2}(D)\). In [13], we discussed some properties and kernel functions for the spaces \(H^{2}_{\rho}(M,\partial M)\) and \(H^{2}_{\lambda}(M,S)\), and we discussed the relations between them and the weighted Bergman kernels on \(M\), which can be regarded as the solutions of the product versions of Saitoh's conjecture. Note that the above mentioned kernel functions for the Hardy spaces and the Bergman spaces can be seen as the reciprocal of some minimal \(L^{2}\) integrals related to one point, such as: \[\hat{K}(z)=\frac{1}{\inf\left\{\frac{1}{2\pi}\int_{\partial D}|f(z)|^{2}\left( \frac{\partial G_{D}(z,t)}{\partial v_{z}}\right)^{-1}|dz|:f\in H^{2}(D)\, \&\,f(z)=1\right\}},\] \[B(z)=\frac{1}{\inf\left\{\int_{D}|f|^{2}:f\in\mathcal{O}(D)\,\&\,f(z)=1\right\}}.\] In this article, we consider more general minimal \(L^{2}\) integrals for the Hardy spaces and the Bergman spaces, and we give some relations between them. As applications, we give optimal \(L^{2}\) extension theorems for the Hardy spaces, and characterizations for the holding of equality in the optimal \(L^{2}\) extension theorems. 
### Minimal \(L^{2}\) integrals on a planar region Let \(D\) be a planar region bounded by finite analytic Jordan curves, and let \(Z_{0}:=\{z_{1},\ldots,z_{m}\}\subset D\), where \(m\) is a positive integer. Let \(\psi\) be a Lebesgue measurable function on \(\overline{D}\), which satisfies that \(\psi\) is subharmonic on \(D\), \(\psi\equiv 0\) on \(\partial D\) and the Lelong number \(v(dd^{c}\psi,z_{j})>0\) for any \(z_{j}\in Z_{0}\), where \(d^{c}=\frac{\partial-\partial}{2\pi\sqrt{-1}}\). Assume that \(\psi\in C^{1}(U\cap\overline{D})\) and \(\frac{\partial\psi}{\partial v_{z}}\) is positive on \(\partial D\), where \(U\) is an open neighborhood of \(\partial D\) and \(\partial/\partial v_{z}\) denotes the derivative along the outer normal unit vector \(v_{z}\). Let \(k_{j}\) be a nonnegative integer for \(1\leq j\leq m\). Let \(\varphi\) be a Lebesgue measurable function on \(\overline{D}\) satisfying that \(\varphi+2\psi\) is subharmonic on \(D\), the Lelong number \[v(dd^{c}(\varphi+2\psi),z_{j})\geq 2(k_{j}+1)\] for any \(1\leq j\leq n\), and \(\varphi\) is continuous at \(z\) for any \(z\in\partial D\). Besides, we assume that one of the following two statements holds: (1) \((\psi-p_{j}G_{D}(\cdot,z_{j}))(z_{j})>-\infty\), where \(p_{j}=v(dd^{c}(\psi),z_{j})>0\) for any \(1\leq j\leq m\); (2) for any \(1\leq j\leq m\), there exists \(a_{j}\in[0,1)\) such that \(\varphi+2a_{j}\psi\) is subharmonic near \(z_{j}\). Let \(c\) be a positive Lebesgue measurable function on \([0,+\infty)\) satisfying that \(c(t)e^{-t}\) is decreasing on \([0,+\infty)\), \(\lim_{t\to 0+0}c(t)=c(0)=1\) and \(\int_{0}^{+\infty}c(t)e^{-t}dt<+\infty\). Denote that \[\tilde{\rho}:=e^{-\varphi}c(-2\psi),\] and assume that \(\tilde{\rho}\) has a positive lower bound on any compact subset of \(D\backslash Z\), where \(Z\subset\{\psi=-\infty\}\) is a discrete subset of \(D\). Denote that \[\rho:=e^{-\varphi}\left(\frac{\partial\psi}{\partial v_{z}}\right)^{-1}\] on \(\partial D\). Let us consider the following two minimal integrals. Let \(\mathfrak{a}=(a_{j,l})\) (\(1\leq j\leq m,0\leq l\leq k_{j}\)), where \(a_{j,l}\in\mathbb{C}\) such that \(\sum_{1\leq j\leq m}\sum_{0\leq l\leq k_{j}}|a_{j,l}|\neq 0\). Denote that \[M(Z_{0},\mathfrak{a},\tilde{\rho}):=\inf\bigg{\{} \int_{D}|f|^{2}\tilde{\rho}:f\in\mathcal{O}(D)\] s.t. \[f^{(l)}(z_{j})=l!a_{j,l}\text{ for any }0\leq l\leq k_{j} \text{ and any }1\leq j\leq m\bigg{\}}.\] and \[M_{H}(Z_{0},\mathfrak{a},\rho):=\inf\bigg{\{} \frac{1}{2\pi}\int_{\partial D}|f|^{2}\rho|dz|:f\in H^{2}(D)\] s.t. \[f^{(l)}(z_{j})=l!a_{j,l}\text{ for any }0\leq l\leq k_{j} \text{ and any }1\leq j\leq m\bigg{\}}.\] We recall some notations (see [3], see also [14, 9, 6]). Let \(p:\Delta\to D\) be the universal covering from unit disc \(\Delta\) to \(D\). we call the holomorphic function \(f\) on \(\Delta\) a multiplicative function, if there is a character \(\chi\), which is the representation of the fundamental group of \(D\), such that \(g^{\star}f=\chi(g)f\), where \(|\chi|=1\) and \(g\) is an element of the fundamental group of \(D\). It is known that for any harmonic function \(u\) on \(D\), there exists a \(\chi_{u}\) and a multiplicative function \(f_{u}\in\mathcal{O}^{\chi_{u}}(D)\), such that \(|f_{u}|=p^{\star}\left(e^{u}\right)\). Recall that for the Green function \(G_{D}(z,z_{j})\), there exist a \(\chi_{z_{j}}\) and a multiplicative function \(f_{z_{j}}\in\mathcal{O}^{\chi_{z_{j}}}(D)\), such that \(|f_{z_{j}}(z)|=p^{\star}\left(e^{G_{D}(z,z_{j})}\right)\) (see [22, 21]). 
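Before relating the two quantities, it may help to record a simple, classical model case (stated here only for orientation). Take \(D=\Delta\) the unit disc, \(m=1\), \(z_{1}=0\), \(k_{1}=0\), \(a_{1,0}=1\), \(\varphi\equiv 0\), \(\psi=G_{\Delta}(\cdot,0)=\log|z|\) and \(c\equiv 1\), so that \(\tilde{\rho}\equiv 1\) on \(\Delta\), \(\rho\equiv 1\) on \(\partial\Delta\) (since \(\frac{\partial G_{\Delta}(z,0)}{\partial v_{z}}=1\) on \(|z|=1\)) and \(\int_{0}^{+\infty}c(t)e^{-t}dt=1\). By the mean value property and the Cauchy-Schwarz inequality, the constant function \(f\equiv 1\) minimizes both extremal problems, hence
\[M(Z_{0},\mathfrak{a},\tilde{\rho})=\int_{\Delta}1=\pi\ \ \text{and}\ \ M_{H}(Z_{0},\mathfrak{a},\rho)=\frac{1}{2\pi}\int_{\partial\Delta}1\,|dz|=1,\]
so that \(M_{H}(Z_{0},\mathfrak{a},\rho)=\frac{M(Z_{0},\mathfrak{a},\tilde{\rho})}{\pi\int_{0}^{+\infty}c(t)e^{-t}dt}\) in this simply connected case.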
We present a relation between \(M_{H}(Z_{0},\mathfrak{a},\rho)\) and \(M(Z_{0},\mathfrak{a},\tilde{\rho})\) as follows: **Theorem 1.5**.: _Assume that \(M(Z_{0},\mathfrak{a},\tilde{\rho})<+\infty\). Then_ \[M_{H}(Z_{0},\mathfrak{a},\rho)\leq\frac{M(Z_{0},\mathfrak{a},\tilde{\rho})}{ \pi\int_{0}^{+\infty}c(t)e^{-t}dt} \tag{1.1}\] _holds, and the equality holds if and only if the following statements hold:_ (1)_\(\varphi+2\psi=2\sum_{1\leq j\leq m}(k_{j}+1)G_{D}(\cdot,z_{j})+2u\), where \(u\) is a harmonic function on \(D\);_ (2)_\(\psi=\sum_{1\leq j\leq m}p_{j}G_{D}(\cdot,z_{j})\), where \(p_{j}=v(dd^{c}(\psi),z_{j})>0\);_ (3)_\(\chi_{-u}=\prod_{1\leq j\leq m}\chi_{z_{j}}^{k+1}\), where \(\chi_{-u}\) and \(\chi_{z_{j}}\) are the characters associated to the functions \(-u\) and \(G_{D}(\cdot,z_{j})\) respectively;_ (4) _For any \(1\leq j\leq m\), \(\lim_{z\to z_{j}}\frac{p_{*}\left(f_{*}\left(\prod_{1\leq j\leq m}f_{z_{j}}^{ k_{j}+1}\right)\left(\sum_{1\leq j\leq m}p_{j}\frac{df_{z_{j}}}{f_{z_{j}}} \right)\right)}{c_{0}dz}=c_{0}a_{j,k_{j}}\) and \(a_{j,l}=0\) for any \(l<k_{j}\), where \(c_{0}\neq 0\) is a constant independent of \(j\)._ When \(m=1\), Theorem 1.5 is a solution of the weighted version of Saitoh's conjecture for higher derivatives, which can be referred to [12]. **Remark 1.6**.: _Assume that the four statements in Theorem 1.5 hold, then we know \(\frac{p_{*}\left(f_{*}\left(\prod_{1\leq j\leq m}f_{z_{j}}^{k_{j}+1}\right) \left(\sum_{1\leq j\leq m}p_{j}\frac{df_{z_{j}}}{f_{z_{j}}}\right)\right)}{c_ {0}dz}\) is a (single-valued) holomorphic function on \(D\), and we denote it by \(F_{0}\). Then \(F_{0}^{(l)}(z_{j})=l!a_{j,l}\) for any \(0\leq l\leq k_{j}\) and any \(1\leq j\leq m\), and there exists \(f_{0}\in H^{2}(D)\) such that \(f_{0}^{*}=F_{0}\),_ \[M(Z_{0},\mathfrak{a},\tilde{\rho})=\int_{D}|F_{0}|^{2}\tilde{\rho}\;\;\text{ and}\;\;M_{H}(Z_{0},\mathfrak{a},\rho)=\frac{1}{2\pi}\int_{\partial D}|f_{0}|^{2}\rho|dz|.\] _We prove the remark in Section 3._ Let \(Z_{0}:=\{z_{j}:1\leq k\leq m\}\) be a subset of \(D\). Let \(\lambda\) be a positive continuous function on \(\partial D\). By solving Dirichlet problem, there exists a positive continuous function on \(\overline{D}\) denoted also by \(\lambda\), such that \(\log\lambda\) is harmonic on \(D\). Let \(c_{\beta}(z)\) be the logarithmic capacity (see [19]) on \(D\), which is locally defined by \[c_{\beta}(z):=\exp\lim_{\tilde{z}\to z}(G_{D}(\tilde{z},z)-\log|\tilde{z}-z|).\] Using Theorem 1.5, we present the following optimal \(L^{2}\) extension theorem for the Hardy space, and give a characterization for the holding of the equality in this extension theorem. **Corollary 1.7**.: _Let \(k_{j}\) be a nonnegative integer, and let \(a_{j}\in\mathbb{C}\) for any \(j\). Assume that \(\sum_{1\leq j\leq m}\frac{2|a_{j}|^{2}t_{j}}{(k_{j}+1)c_{\beta}(z_{j})^{2(k_{j} +1)}}\lambda(z_{j})\in(0,+\infty)\). 
_Then there exists \(f\in H^{2}(D)\) such that \(f^{(l)}(z_{j})=0\) for \(0\leq l<k_{j}\) and \(f^{(k_{j})}(z_{j})=k_{j}!a_{j}\) for any \(1\leq j\leq m\), and_

\[\frac{1}{2\pi}\int_{\partial D}|f|^{2}\lambda\left(\frac{\partial\psi}{\partial v_{z}}\right)^{-1}|dz|\leq\sum_{1\leq j\leq m}\frac{2|a_{j}|^{2}t_{j}}{(k_{j}+1)c_{\beta}(z_{j})^{2(k_{j}+1)}}\lambda(z_{j}),\]

_where \(\psi:=\sum_{1\leq j\leq m}(k_{j}+1)G_{D}(\cdot,z_{j})\) and \(t_{j}:=e^{-2\sum_{1\leq j_{1}\leq m,j_{1}\neq j}(k_{j_{1}}+1)G_{D}(z_{j},z_{j_{1}})}\)._

_Moreover, denote that \(M_{H}:=\inf\{\frac{1}{2\pi}\int_{\partial D}|f|^{2}\lambda\left(\frac{\partial\psi}{\partial v_{z}}\right)^{-1}|dz|:f\in H^{2}(D)\) such that \(f^{(l)}(z_{j})=0\) for \(0\leq l<k_{j}\) and \(f^{(k_{j})}(z_{j})=k_{j}!a_{j}\) for any \(1\leq j\leq m\}\). Then the equality_

\[M_{H}=\sum_{1\leq j\leq m}\frac{2|a_{j}|^{2}t_{j}}{(k_{j}+1)c_{\beta}(z_{j})^{2(k_{j}+1)}}\lambda(z_{j})\]

_holds if and only if the following statements hold:_

1. _\(\chi_{\frac{1}{2}\log\lambda}=\prod_{1\leq j\leq m}\chi_{z_{j}}^{k_{j}+1}\);_

2. _For any \(1\leq j\leq m\),_

\[\lim_{z\to z_{j}}\frac{p_{*}\left(f_{-\frac{1}{2}\log\lambda}\left(\prod_{1\leq j\leq m}f_{z_{j}}^{k_{j}+1}\right)\left(\sum_{1\leq j\leq m}(k_{j}+1)\frac{df_{z_{j}}}{f_{z_{j}}}\right)\right)}{(z-z_{j})^{k_{j}}dz}=c_{0}a_{j},\]

_where \(c_{0}\neq 0\) is a constant independent of \(j\)._

Corollary 1.7 implies the following result.

**Corollary 1.8**.: _Let \(k\) be a nonnegative integer. Then there is a constant \(C\) (depending on \(k\)), such that for any \(a_{j,l}\in\mathbb{C}\), where \(1\leq j\leq m\) and \(0\leq l\leq k\), there exists \(f\in H^{2}(D)\) such that \(f^{(l)}(z_{j})=a_{j,l}\) for any \(1\leq j\leq m\) and \(0\leq l\leq k\), and_

\[\frac{1}{2\pi}\int_{\partial D}|f|^{2}|dz|\leq C\sum_{1\leq j\leq m}\sum_{0\leq l\leq k}|a_{j,l}|^{2}.\]

### Minimal \(L^{2}\) integrals for the Hardy space \(H^{2}_{\rho}(M,\partial M)\)

Let \(D_{j}\) be a planar region bounded by finitely many analytic Jordan curves for any \(1\leq j\leq n\). Let \(M=\prod_{1\leq j\leq n}D_{j}\) be a bounded domain in \(\mathbb{C}^{n}\). Let \(Z_{j}=\{z_{j,1},z_{j,2},...,z_{j,m_{j}}\}\subset D_{j}\) for any \(j\in\{1,2,...,n\}\), where \(m_{j}\) is a positive integer. Denote that

\[Z_{0}:=\prod_{1\leq j\leq n}Z_{j}\subset M.\]

Let \(\psi=\max_{1\leq j\leq n}\{\sum_{1\leq k\leq m_{j}}p_{j,k}G_{D_{j}}(\cdot,z_{j,k})\}\) on \(M\), where \(p_{j,k}\) is a positive real number for any \(1\leq j\leq n\) and \(1\leq k\leq m_{j}\). Let \(V_{z_{j,k}}\Subset D_{j}\) be a neighborhood of \(z_{j,k}\) satisfying \(V_{z_{j,k}}\cap V_{z_{j,k^{\prime}}}=\emptyset\) for any \(j\) and \(k\neq k^{\prime}\). Denote that \(I_{1}:=\{(\beta_{1},\beta_{2},...,\beta_{n}):1\leq\beta_{j}\leq m_{j}\) for any \(j\in\{1,2,...,n\}\}\), \(V_{\beta}:=\prod_{1\leq j\leq n}V_{z_{j,\beta_{j}}}\) and \(z_{\beta}:=(z_{1,\beta_{1}},z_{2,\beta_{2}},\ldots,z_{n,\beta_{n}})\in M\) for any \(\beta=(\beta_{1},\beta_{2},...,\beta_{n})\in I_{1}\). Let \(\varphi_{j}\) be a subharmonic function on \(D_{j}\), which satisfies that \(\varphi_{j}\) is continuous at \(z\) for any \(z\in\partial D_{j}\). Denote that

\[\varphi(w_{1},\ldots,w_{n}):=\sum_{1\leq j\leq n}\varphi_{j}(w_{j})\]

on \(M\). Let \(f_{0}\) be a holomorphic function on \(\cup_{\beta\in I_{1}}V_{\beta}\). For any \(\beta\in I_{1}\), let \(J_{\beta}\) be an ideal of \(\mathcal{O}_{z_{\beta}}\) satisfying \(\mathcal{I}(\varphi+2\psi)_{z_{\beta}}\subset J_{\beta}\).
Note that for any \(\tilde{z}\in D_{j}\), \(\frac{\partial G_{D_{j}}(z,\tilde{z})}{\partial v_{z}}\) is a positive continuous function on \(\partial D_{j}\) by the analyticity of the boundary (see [18],[5]), where \(\partial/\partial v_{z}\) denotes the derivative along the outer normal unit vector \(v_{z}\). Let \(\rho\) be a Lebesgue measurable function on \(\partial M\) such that \[\rho(w_{1},\dots,w_{n})=\left(\sum_{1\leq k\leq m_{j}}p_{j,k}\frac{\partial G_{ D_{j}}(w_{j},z_{j,k})}{\partial v_{w_{j}}}\right)^{-1}\times\prod_{1\leq l\leq n}e^{- \varphi_{l}(w_{l})}\] on \(\partial D_{j}\times M_{j}\). Let \(c\) be a positive function on \([0,+\infty)\), which satisfies that \(c(t)e^{-t}\) is decreasing on \([0,+\infty)\), \(\lim_{t\to 0+0}c(t)=c(0)=1\) and \(\int_{0}^{+\infty}c(t)e^{-t}dt<+\infty\). Denote that \[\tilde{\rho}=c(-2\psi)\prod_{1\leq j\leq n}e^{-\varphi_{j}}\] on \(M\). Let us consider the following two minimal integrals. Denote that \[M(Z_{0},J,\tilde{\rho}):=\inf\bigg{\{}\int_{M}|f|^{2}\tilde{\rho}:f\in\mathcal{ O}(D)\text{ s.t. }(f-f_{0},z_{\beta})\in J_{\beta}\text{ for any }\beta\in I_{1}\bigg{\}}\] and \[M_{H}(Z_{0},J,\rho):=\inf\bigg{\{}\|f\|_{\partial M,\rho}^{2}:f \in H_{\rho}^{2}(M,\partial M)\] \[\text{ s.t. }(f^{*}-f_{0},z_{\beta})\in J_{\beta}\text{ for any }\beta\in I_{1} \bigg{\}}.\] Denote that \[G(t):=\inf\bigg{\{}\int_{\{2\psi<-t\}}|f|^{2}\tilde{\rho}: f\in\mathcal{O}(\{2\psi<-t\})\] \[\text{ s.t. }(f-f_{0},z_{\beta})\in J_{\beta}\text{ for any }\beta\in I _{1}\bigg{\}}\] for any \(t\geq 0\). Note that \(\tilde{\rho}=c(-2\psi)\prod_{1\leq j\leq n}e^{-\varphi_{j}}\) and \(G(0)=M(Z_{0},J_{\beta},\tilde{\rho})\). As \(\mathcal{I}(\varphi+2\psi)_{z_{\beta}}\subset J_{\beta}\) for any \(\beta\in I_{1}\), it follows from Theorem 2.23 that \(G(h^{-1}(r))\) is concave, where \(h(t)=\int_{t}^{+\infty}c(s)e^{-s}ds\). We present a relation between \(M_{H}(Z_{0},J,\rho)\) and \(M(Z_{0},J,\tilde{\rho})\). **Theorem 1.9**.: _Assume that \(M(Z_{0},J,\tilde{\rho})<+\infty\). Then_ \[M_{H}(Z_{0},J,\rho)\leq\frac{M(Z_{0},J,\tilde{\rho})}{\pi\int_{0}^{+\infty}c( t)e^{-t}dt} \tag{1.2}\] _holds, and equality holds if and only if \(G(h^{-1}(r))\) is linear on \([0,\int_{0}^{+\infty}c(t)e^{-t}dt]\) and there exists \(f\in H_{\rho}^{2}(M,\partial M)\), such that \((f^{*}-f_{0},z_{\beta})\in J_{\beta}\) for any \(\beta\in I_{1},\)\(M_{H}(Z_{0},J,\rho)=\|f\|_{\partial M,\rho}^{2}\) and \(M(Z_{0},J,\tilde{\rho})=\int_{M}|f^{*}|^{2}\tilde{\rho}\)._ **Remark 1.10**.: _Let \(\hat{\rho}\) be any Lebesgue measurable function on \(\overline{M}\), which satisfies that \(\inf_{\overline{M}}\hat{\rho}>0\), \(-\log\hat{\rho}\) is plurisubharmonic on \(M\) and \(\hat{\rho}(w_{j},\hat{w}_{j})\leq\liminf_{w\to w_{j}}\hat{\rho}(w,\hat{w}_{j})\) for any \((w_{j},\hat{w}_{j})\in\partial D_{j}\times M_{j}\subset\partial M\) and any \(1\leq j\leq n\), where \(M_{j}=\prod_{l\neq j}D_{l}\). Let \(\rho(w_{1},\dots,w_{n})=\left(\sum_{1\leq k\leq m_{j}}p_{j,k}\frac{\partial G_ {D_{j}}(w_{j},z_{j,k})}{\partial v_{w_{j}}}\right)^{-1}\times\hat{\rho}\) on \(\partial D_{j}\times M_{j}\), and let \(\tilde{\rho}=c(-2\psi)\hat{\rho}\) on \(M\). Inequality (1.2) in Theorem 1.9 also holds for this case (We prove the remark in the Step 1 of the proof of Theorem 1.9)._ Using Theorem 1.5 and Theorem 2.29 (a characterization for the concavity of \(G(h^{-1}(r))\) degenerating to linearity), we obtain the following theorem. 
**Theorem 1.11**.: _Assume that \(J_{\beta}=\mathcal{I}(2\psi)_{z_{\beta}}\) for any \(\beta\in I_{1}\), and \(f_{0}=\prod_{1\leq j\leq n}(w_{j}-z_{j,1})^{\tilde{\alpha}_{j}}\) on \(V_{\beta^{*}}\), where \(\beta^{*}=(1,1,...,1)\in I_{1}\). Then equality_

\[M_{H}(Z_{0},J,\rho)=\frac{M(Z_{0},J,\tilde{\rho})}{\pi\int_{0}^{+\infty}c(t)e^{-t}dt}\]

_holds if and only if the following statements hold:_

\((1)\) _\(\varphi_{j}=2\log|g_{j}|+2u_{j}\) for any \(j\in\{1,2,...,n\}\), where \(u_{j}\) is a harmonic function on \(D_{j}\) and \(g_{j}\) is a holomorphic function on \(\mathbb{C}\) satisfying \(g_{j}(z_{j,k})\neq 0\) for any \(k\in\{1,2,...,m_{j}\}\);_

\((2)\) _There exists a nonnegative integer \(\gamma_{j,k}\) for any \(j\in\{1,2,...,n\}\) and \(k\in\{1,2,...,m_{j}\}\), which satisfies that \(\Pi_{1\leq k\leq m_{j}}\chi_{j,z_{j,k}}^{\gamma_{j,k}+1}=\chi_{j,-u_{j}}\) and \(\sum_{1\leq j\leq n}\frac{\gamma_{j,\beta_{j}}+1}{p_{j,\beta_{j}}}=1\) for any \(\beta\in I_{1}\), where \(\chi_{j,-u_{j}}\) and \(\chi_{j,z_{j,k}}\) are the characters associated to the functions \(-u_{j}\) and \(G_{D_{j}}(\cdot,z_{j,k})\) respectively;_

\((3)\) _\(f_{0}=c_{\beta}\Pi_{1\leq j\leq n}(w_{j}-z_{j,\beta_{j}})^{\gamma_{j,\beta_{j}}}+g_{\beta}\) on \(V_{\beta}\) for any \(\beta\in I_{1}\), where \(c_{\beta}\) is a constant and \(g_{\beta}\) is a holomorphic function on \(V_{\beta}\) such that \((g_{\beta},z_{\beta})\in\mathcal{I}(\psi)_{z_{\beta}}\);_

\((4)\) _\(\lim_{z\to z_{\beta}}\frac{c_{\beta}\Pi_{1\leq j\leq n}(w_{j}-z_{j,\beta_{j}})^{\gamma_{j,\beta_{j}}}\,dw_{1}\wedge dw_{2}\wedge...\wedge dw_{n}}{\wedge_{1\leq j\leq n}g_{j}(P_{j})_{*}\left(f_{u_{j}}\left(\Pi_{1\leq k\leq m_{j}}f_{z_{j,k}}^{\gamma_{j,k}+1}\right)\left(\sum_{1\leq k\leq m_{j}}p_{j,k}\frac{df_{z_{j,k}}}{f_{z_{j,k}}}\right)\right)}=c_{0}\) for any \(\beta\in I_{1}\), where \(c_{0}\in\mathbb{C}\backslash\{0\}\) is a constant independent of \(\beta\), \(P_{j}:\,\Delta\to D_{j}\) is the universal covering, \(f_{u_{j}}\) is a holomorphic function on \(\Delta\) such that \(|f_{u_{j}}|=P_{j}^{*}(e^{u_{j}})\) and \(f_{z_{j,k}}\) is a holomorphic function on \(\Delta\) such that \(|f_{z_{j,k}}|=P_{j}^{*}\left(e^{G_{D_{j}}(\cdot,z_{j,k})}\right)\) for any \(j\in\{1,2,...,n\}\) and \(k\in\{1,2,...,m_{j}\}\)._

When \(m_{j}=1\) for any \(1\leq j\leq n\), the above theorem is a solution of the product version of Saitoh's conjecture, which can be referred to [13].

**Remark 1.12**.: _Assume that the four statements in Theorem 1.11 hold, then we know that \(\frac{\wedge_{1\leq j\leq n}g_{j}(P_{j})_{*}\left(f_{u_{j}}\left(\Pi_{1\leq k\leq m_{j}}f_{z_{j,k}}^{\gamma_{j,k}+1}\right)\left(\sum_{1\leq k\leq m_{j}}p_{j,k}\frac{df_{z_{j,k}}}{f_{z_{j,k}}}\right)\right)}{c_{0}\,dw_{1}\wedge dw_{2}\wedge...\wedge dw_{n}}\) is a (single-valued) holomorphic function on \(M\), and we denote it by \(F_{0}\). Then \((F_{0}-f_{0},z_{\beta})\in\mathcal{I}(2\psi)_{z_{\beta}}\) for any \(\beta\in I_{1}\), and there exists \(\tilde{F}_{0}\in H_{\rho}^{2}(M,\partial M)\) such that \(\tilde{F}_{0}^{*}=F_{0}\),_

\[M(Z_{0},J,\tilde{\rho})=\int_{M}|F_{0}|^{2}\tilde{\rho}\ \ \text{and}\ \ M_{H}(Z_{0},J,\rho)=\|\tilde{F}_{0}\|_{\partial M,\rho}^{2}.\]

_We prove the remark in Section 4._

Denote that \(E_{\beta}:=\left\{(\alpha_{1},\alpha_{2},...,\alpha_{n}):\sum_{1\leq j\leq n}\frac{\alpha_{j}+1}{p_{j,\beta_{j}}}=1\,\&\,\alpha_{j}\in\mathbb{Z}_{\geq 0}\right\}\) for any \(\beta\in I_{1}\), and assume that \(f_{0}=\sum_{\alpha\in E_{\beta}}d_{\beta,\alpha}\prod_{1\leq j\leq n}(w_{j}-z_{j,\beta_{j}})^{\alpha_{j}}\) on \(V_{\beta}\). Denote that

\[c_{j,k}:=\exp\lim_{z\to z_{j,k}}\left(\frac{\sum_{1\leq k_{1}\leq m_{j}}p_{j,k_{1}}G_{D_{j}}(z,z_{j,k_{1}})}{p_{j,k}}-\log|w_{j,k}(z)|\right)\]

for any \(j\in\{1,2,...,n\}\) and \(k\in\{1,2,...,m_{j}\}\).
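As a simple illustration of the combinatorial condition appearing in statement (2) above and in the definition of \(E_{\beta}\) (an elementary example, not used later), let \(n=2\), \(m_{1}=m_{2}=1\), \(p_{1,1}=4\) and \(p_{2,1}=2\). For the unique \(\beta=(1,1)\), the set \(E_{\beta}\) consists of the nonnegative integer solutions of \(\frac{\alpha_{1}+1}{4}+\frac{\alpha_{2}+1}{2}=1\), i.e., \(\alpha_{1}+2\alpha_{2}=1\), so \(E_{\beta}=\{(1,0)\}\), and the only numerically admissible choice in statement (2) is \((\gamma_{1,1},\gamma_{2,1})=(1,0)\). If instead \(p_{1,1}=p_{2,1}=2\), then \(E_{\beta}=\{(0,0)\}\) and necessarily \(\gamma_{1,1}=\gamma_{2,1}=0\).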
Using Theorem 1.11, we obtain the following optimal \(L^{2}\) extension theorem for the Hardy space on product spaces, and give a characterization of when the equality in this extension theorem holds.

**Corollary 1.13**.: _Assume that \(\sum_{\beta\in I_{1}}\sum_{\alpha\in E_{\beta}}\frac{|d_{\beta,\alpha}|^{2}2^{n}\pi^{n-1}e^{-\varphi(z_{\beta})}}{\Pi_{1\leq j\leq n}(\alpha_{j}+1)c_{j,\beta_{j}}^{2\alpha_{j}+2}}\in(0,+\infty)\). Then there exists \(f\in H^{2}_{\rho}(M,\partial M)\), satisfying that \((f^{*}-f_{0},z_{\beta})\in\mathcal{I}(2\psi)_{z_{\beta}}\) for any \(\beta\in I_{1}\) and_

\[\|f\|_{\partial M,\rho}^{2}\leq\sum_{\beta\in I_{1}}\sum_{\alpha\in E_{\beta}}\frac{|d_{\beta,\alpha}|^{2}2^{n}\pi^{n-1}e^{-\varphi(z_{\beta})}}{\Pi_{1\leq j\leq n}(\alpha_{j}+1)c_{j,\beta_{j}}^{2\alpha_{j}+2}}.\]

_Moreover, assume that \(f_{0}=\prod_{1\leq j\leq n}(w_{j}-z_{j,1})^{\tilde{\alpha}_{j}}\) on \(V_{\beta^{*}}\), where \(\beta^{*}=(1,1,...,1)\in I_{1}\). Then the equality_

\[M_{H}(Z_{0},\mathcal{I}(2\psi),\rho)=\sum_{\beta\in I_{1}}\sum_{\alpha\in E_{\beta}}\frac{|d_{\beta,\alpha}|^{2}2^{n}\pi^{n-1}e^{-\varphi(z_{\beta})}}{\Pi_{1\leq j\leq n}(\alpha_{j}+1)c_{j,\beta_{j}}^{2\alpha_{j}+2}}\]

_holds if and only if the following statements hold:_

\((1)\) _\(\varphi_{j}=2\log|g_{j}|+2u_{j}\) for any \(j\in\{1,2,...,n\}\), where \(u_{j}\) is a harmonic function on \(D_{j}\) and \(g_{j}\) is a holomorphic function on \(\mathbb{C}\) satisfying \(g_{j}(z_{j,k})\neq 0\) for any \(k\in\{1,2,...,m_{j}\}\);_

\((2)\) _there exists a nonnegative integer \(\gamma_{j,k}\) for any \(j\in\{1,2,...,n\}\) and \(k\in\{1,2,...,m_{j}\}\), which satisfies that \(\Pi_{1\leq k\leq m_{j}}\chi_{j,z_{j,k}}^{\gamma_{j,k}+1}=\chi_{j,-u_{j}}\) and \(\sum_{1\leq j\leq n}\frac{\gamma_{j,\beta_{j}}+1}{p_{j,\beta_{j}}}=1\) for any \(\beta\in I_{1}\);_

\((3)\) _\(f_{0}=c_{\beta}\Pi_{1\leq j\leq n}(w_{j}-z_{j,\beta_{j}})^{\gamma_{j,\beta_{j}}}+g_{\beta}\) on \(V_{\beta}\) for any \(\beta\in I_{1}\), where \(c_{\beta}\) is a constant and \(g_{\beta}\) is a holomorphic function on \(V_{\beta}\) such that \((g_{\beta},z_{\beta})\in\mathcal{I}(\psi)_{z_{\beta}}\);_

\((4)\) _\(\lim_{z\to z_{\beta}}\frac{c_{\beta}\Pi_{1\leq j\leq n}(w_{j}-z_{j,\beta_{j}})^{\gamma_{j,\beta_{j}}}\,dw_{1}\wedge dw_{2}\wedge...\wedge dw_{n}}{\wedge_{1\leq j\leq n}g_{j}(P_{j})_{*}\left(f_{u_{j}}\left(\Pi_{1\leq k\leq m_{j}}f_{z_{j,k}}^{\gamma_{j,k}+1}\right)\left(\sum_{1\leq k\leq m_{j}}p_{j,k}\frac{df_{z_{j,k}}}{f_{z_{j,k}}}\right)\right)}=c_{0}\) for any \(\beta\in I_{1}\), where \(c_{0}\in\mathbb{C}\backslash\{0\}\) is a constant independent of \(\beta\)._

Denote that \(L_{k}:=\{\alpha=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{Z}_{\geq 0}^{n}:\sum_{1\leq j\leq n}\alpha_{j}\leq k\}\).

**Corollary 1.14**.: _Let \(k\) be a nonnegative integer.
Then there is a constant \(C\) (depending on \(k\) and \(Z_{0}\)), such that for any \(a_{\beta,\alpha}\in\mathbb{C}\), where \(\beta\in I_{1}\) and \(\alpha\in L_{k}\), there exists \(f\in H^{2}_{\rho}(M,\partial M)\) such that \(\partial^{\alpha}f^{*}(z_{\beta})=a_{\beta,\alpha}\) for any \(\beta\in I_{1}\) and \(\alpha\in L_{k}\), and_ \[\|f\|_{\partial M,\rho}^{2}\leq C\sum_{\beta\in I_{1},\alpha\in L_{k}}|a_{ \beta,\alpha}|^{2},\] _where \(\partial^{\alpha}=\left(\frac{\partial}{\partial w_{1}}\right)^{\alpha_{1}} \ldots\left(\frac{\partial}{\partial w_{n}}\right)^{\alpha_{n}}\)._ ### Minimal \(L^{2}\) integrals for the Hardy space \(H^{2}_{\lambda}(M,S)\) Let \(D_{j}\) be a planar regular region with finite boundary components which are analytic Jordan curves for any \(1\leq j\leq n\). Let \(M=\prod_{1\leq j\leq n}D_{j}\) be a bounded domain in \(\mathbb{C}^{n}\). Denote that \(S:=\prod_{1\leq j\leq n}\partial D_{j}\). Let \(Z_{j}=\{z_{j,1},z_{j,2},...,z_{j,m_{j}}\}\subset D_{j}\) for any \(j\in\{1,2,...,n\}\), where \(m_{j}\) is a positive integer. Denote that \[Z_{0}:=\prod_{1\leq j\leq n}Z_{j}\subset M.\] Let \(\psi=\max_{1\leq j\leq n}\{\sum_{1\leq k\leq m_{j}}2G_{D_{j}}(\cdot,z_{j,k})\}\). Let \(V_{z_{j,k}}\Subset D_{j}\) be a neighborhood of \(z_{j,k}\) satisfying \(V_{z_{j,k}}\cap V_{z_{j,k^{\prime}}}=\emptyset\) for any \(j\) and \(k\neq k^{\prime}\). Denote that \(I_{1}:=\{(\beta_{1},\beta_{2},...,\beta_{n}):1\leq\beta_{j}\leq m_{j}\text{ for any }j\in\{1,2,...,n\}\}\), \(V_{\beta}:=\prod_{1\leq j\leq n}V_{z_{j,\beta_{j}}}\) and \(z_{\beta}:=(z_{1,\beta_{1}},z_{2,\beta_{2}},\ldots,z_{n,\beta_{n}})\in M\) for any \(\beta=(\beta_{1},\beta_{2},...,\beta_{n})\in I_{1}\). Let \(\varphi_{j}\) be a subharmonic function on \(D_{j}\), which satisfies that \(\varphi_{j}\) is continuous at \(z\) for any \(z\in\partial D_{j}\). Denote that \[\varphi(w_{1},\ldots,w_{n}):=\sum_{1\leq j\leq n}\varphi_{j}(w_{j})\] on \(M\). Let \(f_{0}\) be a holomorphic function \(\cup_{\beta\in I_{1}}V_{\beta}\). Let \(\rho\) be a Lebesgue measurable function on \(\partial M\) such that \[\rho(w_{1},\ldots,w_{n})=\left(\sum_{1\leq k\leq m_{j}}2\frac{\partial G_{D_{j }}(w_{j},z_{j,k})}{\partial v_{w_{j}}}\right)^{-1}\times\prod_{1\leq l\leq n}e^ {-\varphi_{l}(w_{l})}\] on \(\partial D_{j}\times M_{j}\). Let \(c\) be a positive function on \([0,+\infty)\), which satisfies that \(c(t)e^{-t}\) is decreasing on \([0,+\infty)\), \(\lim_{t\to 0+0}c(t)=c(0)=1\) and \(\int_{0}^{+\infty}c(t)e^{-t}dt<+\infty\). Let \[\lambda(w_{1},\ldots,w_{n})=\prod_{1\leq j\leq n}\left(\sum_{1\leq k\leq m_{j} }2\frac{\partial G_{D_{j}}(w_{j},z_{j,k})}{\partial v_{w_{j}}}\right)^{-1}e^{ -\varphi_{j}(w_{j})}\] on \(S=\prod_{1\leq j\leq n}\partial D_{j}\). Note that \(\lambda\) is continuous on \(S\). Let \(h_{j}\) be a holomorphic function on a neighborhood of \(Z_{j}\) for any \(1\leq j\leq n\) satisfying that there exists \(k\in\{1,\ldots,m_{j}\}\) such that \(h_{j}(z_{j,k})\neq 0\). Denote that \(h_{0}=\prod_{1\leq j\leq n}h_{j}\). Let us consider the following minimal integral. Let \(J_{\beta}\) be the maximal ideal of \(\mathcal{O}_{z_{\beta}}\) for any \(\beta\in I_{1}\). Denote that \[M_{S}(Z_{0},J,\lambda):=\inf\bigg{\{} \|f\|_{S,\lambda}^{2}:f\in H_{\lambda}^{2}(M,S)\] \[\text{s.t. }f^{*}(z_{\beta})=h_{0}(z_{\beta})\text{ for any }\beta\in I_{1} \bigg{\}}\] and \[M_{H}(Z_{0},J,\rho):=\inf\bigg{\{} \|f\|_{\partial M,\rho}^{2}:f\in H_{\rho}^{2}(M,\partial M)\] \[\text{s.t. 
}f^{*}(z_{\beta})=h_{0}(z_{\beta})\text{ for any }\beta\in I_{1}\bigg\}.\]

We present a relation between \(M_{S}(Z_{0},J,\lambda)\) and \(M_{H}(Z_{0},J,\rho)\).

**Theorem 1.15**.: _Assume that \(M_{H}(Z_{0},J,\rho)<+\infty\). Then_

\[M_{S}(Z_{0},J,\lambda)\leq\frac{M_{H}(Z_{0},J,\rho)}{n\pi^{n-1}} \tag{1.3}\]

_holds, and equality holds if and only if the following three statements hold_

(1) _\(\varphi_{j}=2u_{j}\) for any \(1\leq j\leq n\), where \(u_{j}\) is a harmonic function on \(D_{j}\);_

(2) _\(\prod_{1\leq k\leq m_{j}}\chi_{j,z_{j,k}}=\chi_{j,-u_{j}}\) for any \(1\leq j\leq n\);_

(3) _For any \(j\), there exists a constant \(c_{j}\neq 0\) such that_

\[\lim_{z\to z_{j,k}}\frac{(P_{j})_{*}\left(f_{u_{j}}\left(\prod_{1\leq k\leq m_{j}}f_{z_{j,k}}\right)\left(\sum_{1\leq k\leq m_{j}}\frac{df_{z_{j,k}}}{f_{z_{j,k}}}\right)\right)}{h_{j}dw_{j}}=c_{j}\]

_holds for any \(1\leq k\leq m_{j}\)._

## 2. Preparations

In this section, we do some preparations.

### Some results on the Hardy space \(H^{2}(D)\)

Let \(D\) be a planar regular region with finitely many boundary components which are analytic Jordan curves (see [18], [22]). In this section, we recall some properties related to the Hardy space \(H^{2}(D)\). Let \(H^{2}(D)\) (see [18]) denote the analytic Hardy class on \(D\) defined as the set of all analytic functions \(f(z)\) on \(D\) such that the subharmonic functions \(|f(z)|^{2}\) have harmonic majorants \(U(z)\):

\[|f(z)|^{2}\leq U(z)\text{ on }D.\]

Then each function \(f(z)\in H^{2}(D)\) has Fatou's nontangential boundary value a.e. on \(\partial D\) belonging to \(L^{2}(\partial D)\) (see [1]). It is well known (see [17]) that if a subharmonic function has a harmonic majorant in \(D\), then there exists a least harmonic majorant. Denote the least harmonic majorant of \(|f|^{2}\) by \(u_{f}\). Let \(z_{0}\in D\). Let \(L^{2}(\partial D,\rho)\) be the space of complex-valued measurable functions \(h\) on \(\partial D\), normed by

\[\|h\|^{2}_{\partial D,\rho}=\frac{1}{2\pi}\int_{\partial D}|h|^{2}\rho|dz|,\]

where \(\rho=\frac{\partial G_{D}(z,z_{0})}{\partial v_{z}}\) is a positive continuous function on \(\partial D\) by the analyticity of \(\partial D\), \(G_{D}(z,z_{0})\) is the Green function on \(D\), and \(\partial/\partial v_{z}\) denotes the derivative along the outer normal unit vector \(v_{z}\).

The following lemma gives some properties related to the Hardy space \(H^{2}(D)\).

**Lemma 2.1** ([17]).: \((a)\) _If \(f\in H^{2}(D)\), there is a function \(f_{*}\) on \(\partial D\) such that \(f\) has nontangential boundary value \(f_{*}\) almost everywhere on \(\partial D\). The map \(\gamma:f\mapsto f_{*}\) is an injective linear map from \(H^{2}(D)\) into \(L^{2}(\partial D,\rho)\) and_

\[\|f_{*}\|^{2}_{\partial D,\rho}=u_{f}(z_{0})\]

_holds for any \(f\in H^{2}(D)\), where \(u_{f}\) is the least harmonic majorant of \(|f|^{2}\)._

\((b)\) _\(g\in\gamma(H^{2}(D))\) if and only if_

\[\int_{\partial D}g(z)\phi(z)dz=0\]

_holds for any holomorphic function \(\phi\) on a neighborhood of \(\overline{D}\)._

\((c)\) _The inverse of \(\gamma\) is given by_

\[f(w)=\frac{1}{2\pi\sqrt{-1}}\int_{\partial D}\frac{f_{*}(z)}{z-w}dz \tag{2.1}\]

_for any \(w\in D\)._

Equality (2.1) in Lemma 2.1 implies the following lemma.
**Lemma 2.2**.: _If \(\lim_{n\to+\infty}\|\gamma(f_{n})\|_{\partial D,\rho}=0\) for \(f_{n}\in H^{2}(D)\), then \(f_{n}\) uniformly converges to \(0\) on any compact subset of \(D\)._ **Lemma 2.3**.: _For any compact set \(V\subset D\) and nonnegative integer \(k\), there is a constant \(C>0\), such that_ \[|f^{(k)}(w)|^{2}\leq C\int_{\partial D}|f_{*}|^{2}|dz|\] _for any \(w\in V\) and \(f\in H^{2}(D)\)._ Proof.: By equality (2.1), we have \[f^{(k)}(w)=\frac{(-1)^{k+1}k!}{2\pi\sqrt{-1}}\int_{\partial D}\frac{f_{*}(z)}{(z-w )^{k+1}}dz\] for any \(w\in D\) and \(f\in H^{2}(D)\). Hence, there exists a constant \(C>0\), such that \(|f^{(k)}(w)|^{2}\leq C\int_{\partial D}|f_{*}|^{2}|dz|\) for any \(w\in V\). **Lemma 2.4** ([17]).: \(H^{2}(D)\) _is a Hilbert space equipped with the inner product_ \[\ll f,g\gg_{\partial D,\rho}=\frac{1}{2\pi}\int_{\partial D}f_{*}\overline{g_ {*}}\rho|dz|,\] _where \(\rho=\frac{\partial G_{D}(z,z_{0})}{\partial v_{z}}\). Moreover, \(\mathcal{O}(D)\cap C(\overline{D})\) is dense in \(H^{2}(D)\)._ **Lemma 2.5**.: _Let \(f_{n}\in H^{2}(D)\) for any \(n\in\mathbb{Z}_{>0}\). Assume that \(f_{n}\) uniformly converges to \(0\) on any compact subset of \(D\) and there exists \(f\in L^{2}(\partial D,\rho)\) such that \(\lim_{n\to+\infty}\|\gamma(f_{n})-f\|_{\partial D,\rho}=0\). Then we have \(f=0\)._ Proof.: It follows from Lemma 2.1 and Lemma 2.4 that there exists \(f_{0}\in H^{2}(D)\) such that \(\gamma(f_{0})=f\). Using Lemma 2.2, we get that \(f_{n}-f_{0}\) uniformly converges to \(0\) on any compact subset of \(D\), i.e. \(f_{0}=0\), which implies that \(f=0\). Let \(\{D_{k}\}_{k\in\mathbb{Z}_{>0}}\) be an increasing sequence of domains with analytic boundaries, such that \(z_{0}\in D_{1}\) and \(\cup_{k=1}^{+\infty}D_{k}=D\). Let \(G_{D_{k}}(\cdot,z_{0})\) be the Green function of \(D_{k}\). **Lemma 2.6** (see [17]).: \(\|f_{*}\|_{\partial D,\rho}^{2}=\lim_{k\to+\infty}\frac{1}{2\pi}\int_{ \partial D_{k}}|f|^{2}\frac{\partial G_{D_{k}}(z,z_{0})}{\partial v_{z}}|dz|\) _holds for any \(f\in H^{2}(D)\)._ We recall a well-known property for the Green function \(G_{D}(\cdot,z_{j})\) on \(D\). **Lemma 2.7** (see [10]).: _Let \(Z^{\prime}_{0}:=\{z_{j}:j\in\mathbb{Z}_{\geq 1}\,\&\,j<\gamma\}\) be a discrete subset of \(D\), where \(\gamma\in\mathbb{Z}_{\geq 1}\cup\{+\infty\}\). Let \(\psi\) be a negative subharmonic function on \(D\) such that \(\frac{1}{2}v(dd^{c}\psi,z_{j})\geq p_{j}>0\) for any \(j\), where \(p_{j}\) is a constant. Then \(2\sum_{1\leq j<\gamma}p_{j}G_{D}(\cdot,z_{j})\) is a subharmonic function on \(D\) satisfying that \(2\sum_{1\leq j<\gamma}p_{j}G_{D}(\cdot,z_{j})\geq\psi\) and \(2\sum_{1\leq j<\gamma}p_{j}G_{D}(\cdot,z_{j})\) is harmonic on \(D\backslash Z^{\prime}_{0}\)._ We recall the following basic formula. **Lemma 2.8** (see [13]).: \(\frac{\partial\psi}{\partial v_{z}}=\left(\left(\frac{\partial\psi}{\partial x }\right)^{2}+\left(\frac{\partial\psi}{\partial y}\right)^{2}\right)^{\frac{1} {2}}\) _on \(\partial D\), where \(\partial/\partial v_{z}\) denotes the derivative along the outer normal unit vector \(v_{z}\)._ Let \(\psi=\sum_{1\leq j\leq m}p_{j}G_{D}(\cdot,z_{j})\), where \(p_{j}>0\) and \(\{z_{j}\}\subset D\) satisfying \(z_{j}\neq z_{k}\) for \(j\neq k\). Then there exist a neighborhood \(U\) of \(\partial D\) and \(r_{0}\in(0,1)\) such that \(\{z\in D:\psi(z)\geq\log r_{0}\}\Subset U\) and \(dG_{D}(\cdot,z_{j})\neq 0\) on \(U\cap\overline{D}\) for any \(j\). The following lemma will be used in the proof of Theorem 1.5. 
**Lemma 2.9**.: _Let \(\varphi\) be a positive Lebesgue measurable function on \(U\cap\overline{D}\) satisfying that \(\lim_{z\to\tilde{z}}\varphi(z)=\varphi(\tilde{z})\) for any \(\tilde{z}\in\partial D\). Then_ \[\int_{\partial D}|f|^{2}\varphi|dz|=\lim_{r\to 1-0}\int_{\partial D_{r}}|f|^{2} \varphi|dz| \tag{2.2}\] _holds for any \(f\in H^{2}(D)\), where \(D_{r}=\{z\in\overline{D}:\psi(z)<\log r\}\) for \(r\in[r_{0},1]\)._ Proof.: Following from Lemma 2.7, we know that \(\psi-\log r=\sum_{1\leq j\leq m}p_{j}G_{D_{r}}(\cdot,z_{j})\) on \(D_{r}\). Thus, using Lemma 2.6, we get that \[\begin{split}\lim_{r\to 1-0}\int_{\partial D_{r}}|\tilde{f}|^{2} \frac{\partial\psi}{\partial v_{z}}|dz|&=\sum_{1\leq j\leq m}p_{j }\lim_{r\to 1-0}\int_{\partial D_{r}}|\tilde{f}|^{2}\frac{\partial G_{D_{r}}(z, z_{j})}{\partial v_{z}}|dz|\\ &=\sum_{1\leq j\leq m}p_{j}\int_{\partial D}|\tilde{f}|^{2}\frac{ \partial G_{D}(z,z_{j})}{\partial v_{z}}|dz|\\ &=\int_{\partial D}|\tilde{f}|^{2}\frac{\partial\psi}{\partial v _{z}}|dz|\end{split} \tag{2.3}\] holds for any \(\tilde{f}\in H^{2}(D)\). As \(\lim_{z\to\tilde{z}}\varphi(z)=\varphi(\tilde{z})\) for any \(\tilde{z}\in\partial D_{1}\) and \(dG_{D}(\cdot,z_{j})\neq 0\) on \(U\cap\overline{D}\) for any \(j\), there exists a positive number \(L_{1}\) such that \[\frac{1}{L_{1}}<\inf_{\{z\in\overline{D}:\psi(z)\geq\log r_{0}\}}\min\{| \bigtriangledown\psi|,\varphi\}\leq\sup_{\{z\in\overline{D}:\psi(z)\geq\log r _{0}\}}\max\{|\bigtriangledown\psi|,\varphi\}<L_{1},\] where \(|\bigtriangledown|^{2}=\big{(}\frac{\partial\cdot}{\partial x}\big{)}^{2}+ \Big{(}\frac{\partial\cdot}{\partial y}\Big{)}^{2}\). By Lemma 2.4, there exists \(\{f_{n}\}_{n\in\mathbb{Z}_{>0}}\subset\mathcal{O}(D)\cap C(\overline{D})\) such that \[\lim_{n\to+\infty}\int_{\partial D}|f_{n}-f|^{2}\varphi|dz|=0. \tag{2.4}\] It follows from equality (2.3) and Lemma 2.8 that \[\begin{split}\limsup_{r\to 1-0}\int_{\partial D_{r}}|f_{n}-f|^{2} \varphi|dz|&\leq L_{1}^{2}\limsup_{r\to 1-0}\int_{\partial D_{r}}|f_{n}-f|^{2} \frac{\partial\psi}{\partial v_{z}}|dz|\\ &\leq L_{1}^{2}\int_{\partial D}|f_{n}-f|^{2}\frac{\partial\psi} {\partial v_{z}}|dz|\\ &\leq L_{1}^{4}\int_{\partial D}|f_{n}-f|^{2}\varphi|dz|.\end{split} \tag{2.5}\] Using the dominated convergence theorem, we know that \[\lim_{r\to 1-0}\int_{\partial D_{r}}|f_{n}|^{2}\varphi|dz|=\int_{\partial D}|f _{n}|^{2}\varphi|dz| \tag{2.6}\] holds for any \(n\in\mathbb{Z}_{>0}\). Following from equality (2.4), inequality (2.5) and equality (2.6), we have \[\begin{split}&\limsup_{r\to 1-0}\left(\int_{\partial D_{r}}|f|^{2} \varphi|dz|\right)^{\frac{1}{2}}\\ \leq&\liminf_{n\to+\infty}\left(\limsup_{r\to 1-0} \left(\int_{\partial D_{r}}|f_{n}|^{2}\varphi|dz|\right)^{\frac{1}{2}}+\limsup _{r\to 1-0}\left(\int_{\partial D_{r}}|f_{n}-f|^{2}\varphi|dz|\right)^{\frac{1} {2}}\right)\\ \leq&\liminf_{n\to+\infty}\left(\left(\int_{\partial D }|f_{n}|^{2}\varphi|dz|\right)^{\frac{1}{2}}+L_{1}^{2}\left(\int_{\partial D}|f_ {n}-f|^{2}\varphi|dz|\right)^{\frac{1}{2}}\right)\\ =&\left(\int_{\partial D}|f|^{2}\varphi|dz|\right)^{ \frac{1}{2}}.\end{split}\] By Fatou's Lemma, we have \[\liminf_{r\to 1-0}\left(\int_{\partial D_{r}}|f|^{2}\varphi|dz|\right)^{\frac{1}{ 2}}\geq\left(\int_{\partial D}|f|^{2}\varphi|dz|\right)^{\frac{1}{2}}.\] Thus, equality (2.2) holds. Let \(Z_{0}:=\{z_{j}:1\leq j\leq m\}\) be a subset of \(D\). Let \(\rho\) be a positive continuous function on \(\partial D\). 
Let \(\mathfrak{a}=(a_{j,l})\) (\(1\leq j\leq m,0\leq l\leq k_{j}\)), where \(a_{j,l}\in\mathbb{C}\) such that \(\sum_{1\leq j\leq m}\sum_{0\leq l\leq k_{j}}|a_{j,l}|\neq 0\). Denote that \[M(Z_{0},\mathfrak{a},\tilde{\rho}):=\inf\bigg{\{}\int_{D}|f|^{2} \tilde{\rho}:f\in\mathcal{O}(D)\\ \text{s.t. }f^{(l)}(z_{j})=l!a_{j,l}\text{ for any }0\leq l \leq k_{j}\text{ and any }1\leq j\leq m\bigg{\}}.\] and \[M_{H}(Z_{0},\mathfrak{a},\rho):=\inf\bigg{\{}\frac{1}{2\pi}\int_ {\partial D}|f|^{2}\rho|dz|:f\in H^{2}(D)\\ \text{s.t. }f^{(l)}(z_{j})=l!a_{j,l}\text{ for any }0\leq l \leq k_{j}\text{ and any }1\leq j\leq m\bigg{\}}.\] **Lemma 2.10**.: _If \(M_{H}(Z_{0},\mathfrak{a},\rho)<+\infty\), then there exists a unique \(f\in H^{2}(D)\) such that \(M_{H}(Z_{0},\mathfrak{a},\rho)=\frac{1}{2\pi}\int_{\partial D}|f|^{2}\rho|dz|\), and \(f^{(l)}(z_{j})=l!a_{j,l}\) for any \(0\leq l\leq k_{j}\) and any \(1\leq j\leq m\)._ Proof.: Firstly, we prove the existence of \(f\). As \(M_{H}(Z_{0},\mathfrak{a},\rho)<+\infty\), then there is \(\{f_{s}\}_{s\in\mathbb{Z}_{>0}}\subset H^{2}(D)\) such that \[\lim_{s\to+\infty}\frac{1}{2\pi}\int_{\partial D}|f_{s}|^{2}\rho|dz|=M_{H}(Z_{ 0},\mathfrak{a},\rho),\] and \(f_{s}^{(l)}(z_{j})=l!a_{j,l}\) for any \(0\leq l\leq k_{j}\) and any \(1\leq j\leq m\). Thus, there exists a subsequence of \(\{f_{s}\}_{s\in\mathbb{Z}_{>0}}\) (denoted also by \(\{f_{s}\}_{s\in\mathbb{Z}_{>0}}\)), which satisfies that \(\{f_{s}\}_{s\in\mathbb{Z}_{>0}}\) weakly converges to a function \(g\in L^{2}(\partial D,\rho)\) in the Hilbert space \(L^{2}(\partial D,\rho)\) and \(\{f_{s}\}_{s\in\mathbb{Z}_{>0}}\) uniformly converges to a function \(f\in\mathcal{O}(D)\) on any compact subset of \(D\). Then we have \[\frac{1}{2\pi}\int_{\partial D}|g|^{2}\rho|dz|\leq\lim_{s\to+\infty}\frac{1}{2 \pi}\int_{\partial D}|f_{s}|^{2}\rho|dz|=M_{H}(Z_{0},\mathfrak{a},\rho). \tag{2.7}\] By Lemma 2.1, we have \[\int_{\partial D}g(z)\phi(z)dz =\int_{\partial D}g(z)\left(\phi(z)\frac{dz}{|dz|\rho(z)}\right) \rho(z)|dz|\] \[=\lim_{s\to+\infty}\int_{\partial D}f_{s}(z)\left(\phi(z)\frac{ dz}{|dz|\rho(z)}\right)\rho(z)|dz|\] \[=\lim_{s\to+\infty}\int_{\partial D}f_{s}(z)\phi(z)dz\] \[=0\] for any holomorphic function \(\phi\) on a neighborhood of \(\overline{D}\), and \[f(w) =\lim_{s\to+\infty}f_{s}(w)\] \[=\lim_{s\to+\infty}\frac{1}{2\pi\sqrt{-1}}\int_{\partial D}\frac{f _{s}(z)}{z-w}dz\] \[=\lim_{s\to+\infty}\int_{\partial D}f_{s}(z)\left(\frac{dz}{|dz|(z -w)\rho(z)}\right)\rho(z)|dz|\] \[=\frac{1}{2\pi\sqrt{-1}}\int_{\partial D}\frac{g(z)}{z-w}dz.\] Thus, it follows from Lemma 2.1 that \(f\in H^{2}(D)\) and \(\gamma(f)=g\). By inequality (2.7) and the definition of \(M_{H}(Z_{0},\mathfrak{a},\rho)\), we get \[\frac{1}{2\pi}\int_{\partial D}|f|^{2}\rho|dz|=M_{H}(Z_{0},\mathfrak{a},\rho).\] Thus, we obtain the existence of \(f\). Secondly, we prove the uniqueness of \(f\) by contradiction: if not, there exist two different \(g_{1}\in H^{2}(D)\) and \(g_{2}\in H^{2}(D)\) satisfying that \(\frac{1}{2\pi}\int_{\partial D}|g_{s}|^{2}\rho|dz|=M_{H}(Z_{0},\mathfrak{a},\rho)\), and \(g_{s}^{(l)}(z_{j})=l!a_{j,l}\) for any \(0\leq l\leq k_{j}\) and any \(1\leq j\leq m\), where \(s=1,2\). 
Note that \[\frac{1}{2\pi}\int_{\partial D}|\frac{g_{1}+g_{2}}{2}|^{2}\rho|dz|+\frac{1}{2 \pi}\int_{\partial D}|\frac{g_{1}-g_{2}}{2}|^{2}\rho|dz|=M_{H}(Z_{0},\mathfrak{ a},\rho),\] hence we obtain that \[\frac{1}{2\pi}\int_{\partial D}|\frac{g_{1}+g_{2}}{2}|^{2}\rho|dz|<M_{H}(Z_{0},\mathfrak{a},\rho),\] which contradicts the definition of \(M_{H}(Z_{0},\mathfrak{a},\rho)\). Thus, Lemma 2.10 has been proved. In the following, let \(\psi\) be as in Theorem 1.5. The following lemma will be used in the proof of Lemma 2.13. **Lemma 2.11** ([12]).: _Let \(f\) be a holomorphic function on \(D\). Assume that_ \[\liminf_{r\to 1-0}\frac{\int_{\{z\in D:\psi(z)\geq\log r\}}|f(z)|^{2}}{1-r}<+\infty,\] _then we have \(f\in H^{2}(D)\)._ We recall the following coarea formula. **Lemma 2.12** (see [2]).: _Suppose that \(\Omega\) is an open set in \(\mathbb{R}^{n}\) and \(u\in C^{1}(\Omega)\). Then for any \(g\in L^{1}(\Omega)\),_ \[\int_{\Omega}g(x)|\bigtriangledown u(x)|dx=\int_{\mathbb{R}}\left(\int_{u^{-1} (t)}g(x)dH_{n-1}(x)\right)dt,\] _where \(H_{n-1}\) is the \((n-1)\)-dimensional Hausdorff measure._ Let \(\tilde{\rho}\) be a Lebesgue measurable function on \(\overline{D}\), which satisfies that \(\inf_{\overline{D}}\tilde{\rho}>0\) and \(\tilde{\rho}(z)\leq\liminf_{w\to z}\tilde{\rho}(w)\) for any \(z\in\partial D\). Denote that \[\rho=\left(\frac{\partial\psi}{\partial v_{z}}\right)^{-1}\tilde{\rho}\] on \(\partial D\). In the following, we give a sufficient condition for \(f\in H^{2}(D)\). **Lemma 2.13**.: _Let \(f\) be a holomorphic function on \(D\). Assume that_ \[\liminf_{r\to 1-0}\frac{\int_{\{z\in D:\psi(z)\geq\log r\}}|f(z)|^{2}\tilde{ \rho}}{1-r}<+\infty,\] _then we have \(f\in H^{2}(D)\) and_ \[\int_{\partial D}|f|^{2}\rho|dz|\leq\liminf_{r\to 1-0}\frac{\int_{\{z\in D:\psi(z) \geq\log r\}}|f(z)|^{2}\tilde{\rho}}{1-r}. \tag{2.8}\] Proof.: Note that \(\inf_{\overline{D}}\tilde{\rho}>0\), then Lemma 2.11 tells us that \(f\in H^{2}(D)\). Thus, it suffices to prove inequality (2.8). Note that \(f\) has Fatou's nontangential boundary value on \(\partial D\). By Lemma 2.8, we have \[\int_{\partial D}|f|^{2}\rho|dz|=\int_{\partial D}|f|^{2}\left(\frac{\partial \psi}{\partial v_{z}}\right)^{-1}\tilde{\rho}|dz|=\int_{\partial D}|f|^{2} \tilde{\rho}\left|\bigtriangledown\psi\right|^{-1}|dz|.\] As \(\frac{\partial\psi}{\partial v_{z}}>0\) on \(\partial D\), \(\psi=0\) on \(\partial D\) and \(\tilde{\rho}(z)\leq\liminf_{w\to z}\tilde{\rho}(w)\) for any \(z\in\partial D\), it follows from Fatou's Lemma and Lemma 2.12 that \[\int_{\partial D}|f|^{2}\tilde{\rho}\left|\bigtriangledown\psi \right|^{-1}|dz|\] \[\leq \liminf_{r\to 1-0}\frac{\int_{\log r}^{0}\left(\int_{\{z\in D:\psi(z )=t\}}|f|^{2}\rho\left|\bigtriangledown\psi\right|^{-1}|dz|\right)dt}{-\log r}\] \[= \liminf_{r\to 1-0}\frac{\int_{\{z\in D:\psi(z)\geq\log r\}}|f|^{2} \tilde{\rho}}{1-r}\times\frac{1-r}{-\log r}\] \[= \liminf_{r\to 1-0}\frac{\int_{\{z\in D:\psi(z)\geq\log r\}}|f|^{2} \tilde{\rho}}{1-r}.\] Thus, inequality (2.8) holds. ### The Hardy space over \(\partial M\) Let \(D_{j}\) be a planar regular region with finite boundary components which are analytic Jordan curves for any \(1\leq j\leq n\). Let \[M=\prod_{1\leq j\leq n}D_{j}\] be a bounded domain in \(\mathbb{C}^{n}\). In this section, we recall and give some properties on the Hardy space over \(\partial M\), which will be used in the proofs of the main theorems. #### 2.2.1. 
Some results on \(H^{2}_{\rho}(M,\partial D_{j}\times M_{j})\) Let \(M_{j}=\prod_{1\leq l\leq n,l\neq j}D_{l}\), then \(M=D_{j}\times M_{j}\). Let \(z_{j}\in D_{j}\) for any \(1\leq j\leq n\). Recall that \(H^{2}(D_{j})\) denotes the Hardy space on \(D_{j}\) and there exists a norm-preserving linear map \(\gamma_{j}:H^{2}(D_{j})\to L^{2}(\partial D_{j},\frac{\partial G_{D_{j}}(z,z_ {j})}{\partial v_{z}})\) (see Section 2.1) satisfying that \(\gamma_{j}(f)\) denotes the nontangential boundary value of \(f\) a.e. on \(\partial D_{j}\) for any \(f\in H^{2}(D_{j})\), where \(G_{D_{j}}(\cdot,z_{j})\) is the Green function on \(D_{j}\). Let \(d\mu_{j}\) be the Lebesgue measure on \(M_{j}\) for any \(1\leq j\leq n\), and let \(d\mu\) be a measure on \(\partial M\) defined by \[\int_{\partial M}hd\mu=\sum_{1\leq j\leq n}\frac{1}{2\pi}\int_{M_{j}}\int_{ \partial D_{j}}h(w_{j},\hat{w}_{j})|dw_{j}|d\mu_{j}(\hat{w}_{j})\] for any \(h\in L^{1}(\partial M)\), where \(\hat{w}_{j}:=(w_{1},\ldots,w_{j-1},w_{j+1},\ldots,w_{n})\). For simplicity, denote \(d\mu|_{\partial D_{j}\times M_{j}}\) by \(d\mu\). Let us consider a space over \(\partial D_{j}\times M_{j}\). Denote \[\{f\in L^{2}(\partial D_{j}\times M_{j},d\mu):\exists f^{*}\in \mathcal{O}(M),\,\text{s.t. }f^{*}(\cdot,\hat{w}_{j})\in H^{2}(D_{j})\text{ for any }\hat{w}_{j}\in M_{j}\] \[\&\,f=\gamma_{j}(f^{*})\text{ a.e. on }\partial D_{j}\times M_{j}\}\] by \(H^{2}(M,\partial D_{j}\times M_{j})\). In [13], we proved that there exists a unique linear injective map \(P_{\partial M,j}\) from \(H^{2}(M,\partial D_{j}\times M_{j})\) to \(\mathcal{O}(M)\) such that \(P_{\partial M,j}(f)\) (denoted by \(f^{*}\) for simplicity) satisfies the following conditions for any \(f\in H^{2}(M,\partial D_{j}\times M_{j})\): (1) \(P_{\partial M,j}(f)(\cdot,\hat{w}_{j})\in H^{2}(D_{j})\) for any \(\hat{w}_{j}\in M_{j}\); (2) \(f=\gamma_{j}(P_{\partial M,j}(f))\) a.e. on \(\partial D_{j}\times M_{j}\). Let \(\rho\) be a Lebesgue measurable function on \(\partial M\) such that \(\inf_{\partial M}\rho>0\). Denote \[\ll f,g\gg_{\partial D_{j}\times M_{j},\rho}:=\frac{1}{2\pi}\int_{M_{j}}\int_ {\partial D_{j}}f(w_{j},\hat{w}_{j})\overline{g(w_{j},\hat{w}_{j})}\rho|dw_{j} |d\mu_{j}(\hat{w}_{j})\] for any \(f,g\in L^{2}(\partial D_{j}\times M_{j},\rho d\mu)\subset L^{2}(\partial D_{ j}\times M_{j},d\mu)\). Denote that \[H^{2}_{\rho}(M,\partial D_{j}\times M_{j}):=\{f\in H^{2}(M,\partial D_{j} \times M_{j}):\|f\|_{\partial D_{j}\times M_{j},\rho}<+\infty\}.\] \(H^{2}_{\rho}(M,\partial D_{j}\times M_{j})\) is a Hilbert space equipped with the inner product \(\ll\cdot\), \(\cdot\gg_{\partial D_{j}\times M_{j},\rho}\) (see [13]). We recall the following lemma. **Lemma 2.14** ([13]).: _For any compact subset \(K\) of \(M\), there exists a positive constant \(C_{K}\) such that_ \[|f^{*}(z)|\leq C_{K}\|f\|_{\partial D_{j}\times M_{j},\rho}\] _holds for any \(z\in K\) and \(f\in H^{2}_{\rho}(M,\partial D_{j}\times M_{j})\)._ In the following, assume that \(\rho|_{\partial D_{1}\times M_{1}}=\rho_{1}\times\lambda_{1}\), where \(\rho_{1}\) is a positive Lebesgue measurable function on \(\partial D_{1}\), and \(\lambda_{1}\) is a positive Lebesgue measurable function on \(M_{1}\) such that \(A^{2}(M_{1},\lambda_{1}):=\{f\in\mathcal{O}(M_{1}):\int_{M_{1}}|f|^{2}\lambda_ {1}<+\infty\}\) is a Hilbert space with the inner \(\ll f,g\gg_{M_{1},\lambda_{1}}:=\int_{M_{1}}f\overline{g}\lambda_{1}\), i.e., \(\lambda_{1}\) is an admissible weight on \(M_{1}\) (see Section 2.4). 
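For orientation, consider the simplest product case (an elementary example consistent with the above assumptions; it is not used in the proofs): \(n=2\), \(D_{1}=D_{2}=\Delta\), \(z_{1}=z_{2}=0\), and \(\rho\equiv 1\) on \(\partial\Delta\times\Delta\), so that one may take \(\rho_{1}\equiv 1\) on \(\partial\Delta\) and \(\lambda_{1}\equiv 1\) on \(M_{1}=\Delta\). For \(f\in H^{2}_{\rho}(M,\partial D_{1}\times M_{1})\) with \(f^{*}(w_{1},w_{2})=w_{1}^{a}w_{2}^{b}\) (\(a,b\) nonnegative integers), one computes

\[\|f\|_{\partial D_{1}\times M_{1},\rho}^{2}=\frac{1}{2\pi}\int_{\Delta}\int_{\partial\Delta}|w_{1}|^{2a}|w_{2}|^{2b}|dw_{1}|d\mu_{1}(w_{2})=\int_{\Delta}|w_{2}|^{2b}d\mu_{1}(w_{2})=\frac{\pi}{b+1}.\]

Moreover, \(\{w_{1}^{m-1}\}_{m\in\mathbb{Z}_{>0}}\) is a complete orthonormal basis for the classical Hardy space on \(\Delta\) with the norm \(\frac{1}{2\pi}\int_{\partial\Delta}|\cdot|^{2}|dw_{1}|\), and \(\{\sqrt{\frac{l}{\pi}}\,w_{2}^{l-1}\}_{l\in\mathbb{Z}_{>0}}\) is a complete orthonormal basis for the Bergman space \(A^{2}(\Delta,\lambda_{1})\); the products of these two families then form a complete orthonormal system in \(H^{2}_{\rho}(M,\partial D_{1}\times M_{1})\), in accordance with the lemma below.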
**Lemma 2.15** ([13]).: _Assume that \(H^{2}_{\rho}(M,\partial D_{1}\times M_{1})\neq\{0\}\). Then we have \(H^{2}_{\rho_{1}}(D_{1},\partial D_{1})\neq\{0\}\) and \(A^{2}(M_{1},\lambda_{1})\neq\{0\}\). Furthermore, \(\{e_{m}(z)\tilde{e}_{l}(w)\}_{m,l\in\mathbb{Z}_{>0}}\) is a complete orthonormal basis for \(H^{2}_{\rho}(M,\partial D_{1}\times M_{1})\), where \(\{e_{m}\}_{m\in\mathbb{Z}_{>0}}\) is a complete orthonormal basis for \(H^{2}_{\rho_{1}}(D_{1},\partial D_{1})\), and \(\{\tilde{e}_{m}\}_{m\in\mathbb{Z}_{>0}}\) is a complete orthonormal basis for \(A^{2}(M_{1},\lambda_{1})\)._ Let \(p_{j,k}\) be positive real number for any \(1\leq j\leq n\) and \(1\leq k\leq m_{j}\). Let \(\psi_{1}=\sum_{1\leq k\leq m_{1}}p_{1,k}G_{D_{1}}(\cdot,z_{1,k})\) on \(D_{1}\), and let \(\hat{\psi}_{1}=\max_{2\leq j\leq n}\{\sum_{1\leq k\leq m_{j}}p_{j,k}G_{D_{j}}( \cdot,z_{j,k})\}\) on \(M_{1}\). **Lemma 2.16**.: _Assume that \(\rho_{1}\) is a positive Lebesgue measurable function on \(U\cap\overline{D}_{1}\) satisfying that \(\lim_{z\to\tilde{z}}\rho_{1}(z)=\rho_{1}(\tilde{z})\) for any \(\tilde{z}\in\partial D_{1}\), where \(U\) is a neighborhood of \(\partial D_{1}\). Then_ \[\|f\|_{\partial D_{1}\times M_{1},\rho}=\frac{1}{2\pi}\lim_{r\to 1-0}\int_{M_{1 },r}\int_{\partial D_{1},r}|f^{*}(w_{1},\hat{w}_{1})|^{2}\rho_{1}(w_{1})|dw_{1} |\lambda_{1}(\hat{w}_{1})d\mu_{1}(\hat{w}_{1}) \tag{2.9}\] _holds for any \(f\in H^{2}_{\rho}(M,\partial D_{1}\times M_{1})\), where \(D_{1,r}=\{z\in\overline{D}_{1}:\psi_{1}(z)<\log r\}\) and \(M_{1,r}=\{z\in M_{1}:\hat{\psi}_{1}(z)<\log r\}\) for \(r\in[0,1]\)._ Proof.: Following from Lemma 2.7, we know that \[\psi_{1}-\log r=\sum_{1\leq k\leq m_{1}}p_{1,k}G_{D_{1,r}}(\cdot,z_{1,k}) \tag{2.10}\] on \(D_{1,r}\), where \(G_{D_{1,r}}(\cdot,z_{1,k})\) is the Green function on \(D_{1,r}\). For any holomorphic function \(h\) on \(D_{1}\) and any \(\tilde{z}\in D_{1}\), as \(|h|^{2}\) is subharmonic, we know that \(\int_{\partial D_{1,r}}|h(z)|^{2}\frac{G_{D_{1,r}}(z,\tilde{z})}{\partial v_{z }}|dz|\) is increasing with respect to \(r\). Combining Lemma 2.6, we have \[\lim_{r\to+\infty}\int_{\partial D_{1,r}}|h(z)|^{2}\frac{G_{D_{1,r}}(z,\tilde {z})}{\partial v_{z}}|dz|=\int_{\partial D_{1}}|h(z)|^{2}\frac{G_{D_{1}}(z, \tilde{z})}{\partial v_{z}}|dz|\] for \(h\in H^{2}(D_{1})\). Thus, it follows from Levi's Theorem and equality (2.10) that \[\lim_{r\to 1-0}\int_{M_{1,r}}\int_{\partial D_{1,r}}|\tilde{f}^{*}( w_{1},\hat{w}_{1})|^{2}\frac{\partial\psi_{1}}{\partial v_{z}}|dw_{1}|\lambda_{1}( \hat{w}_{1})d\mu_{1}(\hat{w}_{1})\] \[= \lim_{r\to 1-0}\sum_{1\leq k\leq m_{1}}p_{1,k}\int_{M_{1,r}}\int_{ \partial D_{1,r}}|\tilde{f}^{*}(w_{1},\hat{w}_{1})|^{2}\frac{\partial G_{D_{1, r}}(w_{1},z_{1,k})}{\partial v_{z}}|dw_{1}|\lambda_{1}(\hat{w}_{1})d\mu_{1}( \hat{w}_{1})\] \[= \sum_{1\leq k\leq m_{1}}p_{1,k}\int_{M_{1}}\int_{\partial D_{1}}| \tilde{f}(w_{1},\hat{w}_{1})|^{2}\frac{\partial G_{D_{1}}(w_{1},z_{1,k})}{ \partial v_{z}}|dw_{1}|\lambda_{1}(\hat{w}_{1})d\mu_{1}(\hat{w}_{1})\] \[= \int_{M_{1}}\int_{\partial D_{1}}|\tilde{f}(w_{1},\hat{w}_{1})|^{ 2}\frac{\partial\psi_{1}}{\partial v_{z}}\lambda_{1}(\hat{w}_{1})|dw_{1}|d\mu_ {1}(\hat{w}_{1}) \tag{2.11}\] holds for any \(\tilde{f}\in H^{2}_{\frac{\partial\psi_{1}}{\partial v_{z}}\lambda_{1}}(M, \partial D_{1}\times M_{1})\). 
There exist positive numbers \(L_{1}\) and \(r_{0}\in[0,1]\) such that \[\frac{1}{L_{1}}<\inf_{\{z\in\overline{D}_{1}:\psi_{1}(z)\geq\log r_{0}\}}\min \{|\bigtriangledown\psi_{1}|,\rho_{1}\}\leq\sup_{\{z\in\overline{D}_{1}:\psi_{ 1}(z)\geq\log r_{0}\}}\max\{|\bigtriangledown\psi_{1}|,\rho_{1}\}<L_{1}.\] By Lemma 2.15, there exist \(\{f_{l}\}_{l\in\mathbb{Z}_{>0}}\subset H^{2}_{\rho_{1}}(D_{1},\partial D_{1})\) and \(\{g_{l}\}_{l\in\mathbb{Z}_{>0}}\subset A^{2}(M_{1},\lambda_{1})\) such that \[f=\sum_{l=1}^{+\infty}f_{l}g_{l}. \tag{2.12}\] Denote that \(F_{m}:=\sum_{l=m+1}^{+\infty}f_{l}g_{l}\in H^{2}_{\rho}(M,\partial D_{1} \times M_{1})\). It follows from equality (2.11) and Lemma 2.8 that \[\begin{split}&\limsup_{r\to 1-0}\int_{M_{1,r}}\int_{ \partial D_{1,r}}|F^{*}_{m}(w_{1},\hat{w}_{1})|^{2}\rho_{1}(w_{1})|dw_{1}|\lambda _{1}(\hat{w}_{1})d\mu_{1}(\hat{w}_{1})\\ \leq& L_{1}^{2}\limsup_{r\to 1-0}\int_{M_{1,r}}\int_{ \partial D_{1,r}}|F^{*}_{m}(w_{1},\hat{w}_{1})|^{2}\frac{\partial\psi_{1}}{ \partial v_{z}}|dw_{1}|\lambda_{1}(\hat{w}_{1})d\mu_{1}(\hat{w}_{1})\\ \leq& L_{1}^{2}\int_{M_{1}}\int_{\partial D_{1}}|F_{m }(w_{1},\hat{w}_{1})|^{2}\frac{\partial\psi_{1}}{\partial v_{z}}\lambda_{1}( \hat{w}_{1})|dw_{1}|d\mu_{1}(\hat{w}_{1})\\ \leq& L_{1}^{4}\int_{M_{1}}\int_{\partial D_{1}}|F_{m }(w_{1},\hat{w}_{1})|^{2}\rho|dw_{1}|d\mu_{1}(\hat{w}_{1}).\end{split} \tag{2.13}\] Using Lemma 2.9, we have \[\limsup_{r\to 1-0}\left(\int_{M_{1,r}}\int_{\partial D_{1,r}}|\sum_{1 \leq l\leq m}f_{l}(w_{1})g_{l}(\hat{w}_{1})|^{2}\rho_{1}(w_{1})|dw_{1}|\lambda_{ 1}(\hat{w}_{1})d\mu_{1}(\hat{w}_{1})\right)^{\frac{1}{2}}\] \[\leq \limsup_{r\to 1-0}\sum_{1\leq l\leq m}\left(\int_{M_{1,r}}\int_{ \partial D_{1,r}}|f_{l}(w_{1})g_{l}(\hat{w}_{1})|^{2}\rho_{1}(w_{1})|dw_{1}| \lambda_{1}(\hat{w}_{1})d\mu_{1}(\hat{w}_{1})\right)^{\frac{1}{2}}\] \[= \limsup_{r\to 1-0}\sum_{1\leq l\leq m}\left(\int_{M_{1,r}}|g_{l}( \hat{w}_{1})|^{2}\lambda_{1}(\hat{w}_{1})d\mu_{1}(\hat{w}_{1})\int_{\partial D _{1,r}}|f_{l}(w_{1})|^{2}\rho_{1}(w_{1})|dw_{1}|\right)^{\frac{1}{2}}\] \[= \sum_{1\leq l\leq m}\left(\int_{M_{1}}|g_{l}(\hat{w}_{1})|^{2} \lambda_{1}(\hat{w}_{1})d\mu_{1}(\hat{w}_{1})\int_{\partial D_{1}}|f_{l}(w_{1} )|^{2}\rho_{1}(w_{1})|dw_{1}|\right)^{\frac{1}{2}}\] \[= \left(\int_{M_{1}}\int_{\partial D_{1}}|\sum_{1\leq l\leq m}f_{l} (w_{1})g_{l}(\hat{w}_{1})|^{2}\rho_{1}(w_{1})|dw_{1}|\lambda_{1}(\hat{w}_{1})d \mu_{1}(\hat{w}_{1})\right)^{\frac{1}{2}} \tag{2.14}\] for any \(m\in\mathbb{Z}_{>0}\). 
Following from equality (2.12), inequality (2.13) and (2.14), we have \[\limsup_{r\to 1-0}\left(\int_{M_{1,r}}\int_{\partial D_{1,r}}|f^{ *}(w_{1},\hat{w}_{1})|^{2}\rho_{1}(w_{1})|dw_{1}|\lambda_{1}(\hat{w}_{1})d\mu_ {1}(\hat{w}_{1})\right)^{\frac{1}{2}}\] \[\leq \liminf_{m\to+\infty}\left(\limsup_{r\to 1-0}\Big{(}\int_{M_{1,r}} \int_{\partial D_{1,r}}|\sum_{1\leq l\leq m}f_{l}g_{l}|^{2}\rho_{1}(w_{1})|dw_ {1}|\lambda_{1}(\hat{w}_{1})d\mu_{1}(\hat{w}_{1})\Big{)}^{\frac{1}{2}}\right.\] \[+\limsup_{r\to 1-0}\Big{(}\int_{M_{1,r}}\int_{\partial D_{1,r}}|F _{m}^{*}|^{2}\rho_{1}(w_{1})|dw_{1}|\lambda_{1}(\hat{w}_{1})d\mu_{1}(\hat{w}_{ 1})\Big{)}^{\frac{1}{2}}\Bigg{)}\] \[\leq \liminf_{m\to+\infty}\left(\Big{(}\int_{M_{1}}\int_{\partial D_{ 1}}|\sum_{1\leq l\leq m}f_{l}(w_{1})g_{l}(\hat{w}_{1})|^{2}\rho_{1}(w_{1})|dw_ {1}|\lambda_{1}(\hat{w}_{1})d\mu_{1}(\hat{w}_{1})\right)^{\frac{1}{2}}\] \[+\left(L_{1}^{4}\int_{M_{1}}\int_{\partial D_{1}}|F_{m}(w_{1}, \hat{w}_{1})|^{2}\rho|dw_{1}|d\mu_{1}(\hat{w}_{1})\right)^{\frac{1}{2}}\right)\] \[= (2\pi)^{\frac{1}{2}}\|f\|_{\partial D_{1}\times M_{1},\rho}.\] By Fatou's Lemma, we have \[2\pi\|f\|_{\partial D_{1}\times M_{1},\rho}^{2}\leq\liminf_{r\to 1-0}\int_{M_{1,r}} \int_{\partial D_{1,r}}|f^{*}(w_{1},\hat{w}_{1})|^{2}\rho_{1}(w_{1})|dw_{1}| \lambda_{1}(\hat{w}_{1})d\mu_{1}(\hat{w}_{1}).\] Thus, equality (2.9) holds. #### 2.2.2. Some results on \(H^{2}_{\rho}(M,\partial M)\) Let \(\rho\) be a Lebesgue measurable function on \(\partial M\) such that \(\inf_{\partial M}\rho>0\). Denote that \[\ll f,g\gg_{\partial M,\rho}:=\sum_{1\leq j\leq n}\frac{1}{2\pi}\int_{M_{j}} \int_{\partial D_{j}}f(w_{j},\hat{w}_{j})\overline{g(w_{j},\hat{w}_{j})}\rho| dw_{j}|d\mu_{j}(\hat{w}_{j})\] for any \(f,g\in L^{2}(\partial M,\rho d\mu)\subset L^{2}(\partial M,d\mu)\). The weighted Hardy space over \(\partial M\) is defined as follows: _For any \(f\in L^{2}(\partial M,\rho d\mu)\), we call \(f\in H^{2}_{\rho}(M,\partial M)\) if \(f\in H^{2}_{\rho}(M,\partial D_{j}\times M_{j})\) for any \(1\leq j\leq n\) and \(P_{\partial M,j}(f)=P_{\partial M,k}(f)\) for any \(j\neq k\)._ Denote that \(P_{\partial M}(f):=P_{\partial M,j}(f)\) for any \(f\in H^{2}_{\rho}(M,\partial M)\) (denote also by \(f^{*}\) for simplicity), and \(P_{\partial M}\) is a linear injective map from \(H^{2}_{\rho}(M,\partial M)\) to \(\mathcal{O}(M)\). \(H^{2}_{\rho}(M,\partial M)\) is a Hilbert space with the inner product \(\ll\cdot,\cdot\gg_{\partial M,\rho}\) (see [13]). Let \(Z_{0}\) be any subset of \(M\), and let \(J_{z}\) be an ideal of \(\mathcal{O}_{z}\) for any \(z\in Z_{0}\). Let \(f_{0}\) be a holomorphic function on a neighborhood of \(Z_{0}\). Denote that \[M_{H}(Z_{0},J,\rho):=\inf\bigg{\{} \|f\|_{\partial M,\rho}^{2}:f\in H^{2}_{\rho}(M,\partial M)\] \[\text{s.t. }(f^{*}-f_{0},z)\in J_{z}\text{ for any }z\in Z_{0} \bigg{\}}.\] The following Lemma will be used in the proof of Lemma 2.18. **Lemma 2.17** (see [4]).: _Let \(N\) be a submodule of \(\mathcal{O}^{q}_{\mathbb{C}^{n},o}\), \(1\leq q<+\infty\), and let \(f_{j}\in\mathcal{O}_{\mathbb{C}^{n}}(U)^{q}\) be a sequence of \(q-\)tuples holomorphic in an open neighborhood \(U\) of the origin \(o\). Assume that the \(f_{j}\) converge uniformly in \(U\) towards a \(q-\)tuple \(f\in\mathcal{O}_{\mathbb{C}^{n}}(U)^{q}\), assume furthermore that all germs \((f_{j},o)\) belong to \(N\). Then \((f,o)\in N\)._ **Lemma 2.18**.: _Assume that \(M_{H}(Z_{0},J,\rho)<+\infty\). 
Then there is a unique \(f\in H^{2}_{\rho}(M,\partial M)\) satisfying that \((f^{*}-f_{0},z)\in J_{z}\) for any \(z\in Z_{0}\) and \(M_{H}(Z_{0},J,\rho)=\|f\|_{\partial M,\rho}^{2}\)._ Proof.: Firstly, we prove the existence of \(f\). As \(M_{H}(Z_{0},J,\rho)<+\infty\), there is \(\{f_{j}\}_{j\in\mathbb{Z}_{>0}}\subset H^{2}_{\rho}(M,\partial M)\) such that \(\lim_{j\to+\infty}\|f_{j}\|_{\partial M,\rho}^{2}=M_{H}(Z_{0},J,\rho)<+\infty\) and \((f_{j}^{*}-f_{0},z)\in J_{z}\) for any \(z\in Z_{0}\) and any \(j\). Then there is a subsequence of \(\{f_{j}\}_{j\in\mathbb{Z}_{>0}}\) denoted also by \(\{f_{j}\}_{j\in\mathbb{Z}_{>0}}\), which weakly converges to an element \(f\in H^{2}_{\rho}(M,\partial M)\), i.e., \[\lim_{j\to+\infty}\ll f_{j},g\gg_{\partial M,\rho}=\ll f,g\gg_{\partial M,\rho} \tag{2.15}\] holds for any \(g\in H^{2}_{\rho}(M,\partial M)\). Hence we have \[\|f\|_{\partial M,\rho}^{2}\leq\lim_{j\to+\infty}\|f_{j}\|_{\partial M,\rho}^ {2}=M_{H}(Z_{0},J,\rho). \tag{2.16}\] It follows from Lemma 2.14 that there is a subsequence of \(\{f_{j}\}_{j\in\mathbb{Z}_{>0}}\) denoted also by \(\{f_{j}\}_{j\in\mathbb{Z}_{>0}}\), which satisfies that \(f_{j}^{*}\) uniformly converges to a holomorphic function \(g_{0}\) on \(M\) on any compact subset of \(M\). Following from Lemma 2.14, for any \(z\in M\), there exists \(g_{z}\in H^{2}_{\rho}(M,\partial M)\) such that \[\ll g,g_{z}\gg_{\partial M,\rho}=g(z) \tag{2.17}\] holds for any \(g\in H^{2}_{\rho}(M,\partial M)\). By equality (2.15) and (2.17), we get that \[\lim_{j\to+\infty}f_{j}^{*}(z)=f^{*}(z)\] for any \(z\in M\), hence we know that \(f^{*}=g_{0}\) and \(f_{j}^{*}\) uniformly converges to \(f^{*}\) on any compact subset of \(M\). Following from Lemma 2.17 and \((f_{j}^{*}-f_{0},z)\in J_{z}\) for any \(z\in Z_{0}\) and any \(j\), we get \[(f^{*}-f_{0},z)\in J_{z}\] for any \(z\in Z_{0}\). By definition of \(M_{H}(Z_{0},J,\rho)\) and inequality (2.16), we have \[\|f\|_{\partial M,\rho}^{2}=M_{H}(Z_{0},J,\rho).\] Thus, we obtain the existence of \(f\). Now, we prove the uniqueness of \(f\) by contradiction: if not, there exist two different \(g_{1}\in H_{\rho}^{2}(M,\partial M)\) and \(g_{2}\in H_{\rho}^{2}(M,\partial M)\) satisfying that \(\|g_{1}\|_{\partial M,\rho}^{2}=\|g_{1}\|_{\partial M,\rho}^{2}=M_{H}(Z_{0},J,\rho)\), \((g_{1}^{*}-f_{0},z)\in J_{z}\) and \((g_{2}^{*}-f_{0},z)\in J_{z}\) for any \(z\in Z_{0}\). It is clear that \[(\frac{g_{1}^{*}+g_{2}^{*}}{2}-f_{0},z)\in J_{z}.\] Note that \[\|\frac{g_{1}+g_{2}}{2}\|_{\partial M,\rho}^{2}+\|\frac{g_{1}-g_{2}}{2}\|_{ \partial M,\rho}^{2}=\frac{\|g_{1}\|_{\partial M,\rho}^{2}+\|g_{2}\|_{ \partial M,\rho}^{2}}{2}=M_{H}(Z_{0},J,\rho),\] then we obtain that \[\|\frac{g_{1}+g_{2}}{2}\|_{\partial M,\rho}^{2}<M_{H}(Z_{0},J,\rho),\] which contradicts the definition of \(M_{H}(Z_{0},J,\rho)\). Thus, Lemma 2.18 has been proved. Let \(Z_{j}=\{z_{j,1},z_{j,2},...,z_{j,m_{j}}\}\subset D_{j}\) for any \(j\in\{1,2,...,n\}\), where \(m_{j}\) is a positive integer. Denote that \[Z_{0}:=\prod_{1\leq j\leq n}Z_{j}\subset M.\] Let \(\psi=\max_{1\leq j\leq n}\{\sum_{1\leq k\leq m_{j}}p_{j,k}G_{D_{j}}(\cdot,z_{ j,k})\}\). Let \(\hat{\rho}\) be a Lebesgue measurable function on \(\overline{M}\), which satisfies that \(\inf_{\overline{M}}\hat{\rho}>0\) and \(\hat{\rho}(w_{j},\hat{w}_{j})\leq\liminf_{w\to w_{j}}\hat{\rho}(w,\hat{w}_{j})\) for any \((w_{j},\hat{w}_{j})\in\partial D_{j}\times M_{j}\subset\partial M\) and any \(1\leq j\leq n\), where \(M_{j}=\prod_{l\neq j}D_{l}\). 
Let \(\rho\) be a Lebesgue measurable function on \(\partial M\) such that \[\rho(w_{1},\ldots,w_{n}):=\left(\sum_{1\leq k\leq m_{j}}p_{j,k}\frac{ \partial G_{D_{j}}(w_{j},z_{j,k})}{\partial v_{w_{j}}}\right)^{-1}\hat{\rho}\] on \(\partial D_{j}\times M_{j}\) for any \(1\leq j\leq n\), thus we have \(\inf_{\partial M}\rho>0\). The following proposition gives an sufficient condition for \(f\in H_{\rho}^{2}(M,\partial M)\). **Proposition 2.19**.: _Let \(g\) be a holomorphic function on \(M\). Assume that_ \[\liminf_{r\to 1-0}\frac{\int_{\{z\in M:2\psi(z)\geq\log r\}}|g(z)|^{2}\hat{ \rho}}{1-r}<+\infty,\] _then there is \(f\in H_{\rho}^{2}(M,\partial M)\) such that \(f^{*}=g\) and_ \[\|f\|_{\partial M,\rho}^{2}\leq\frac{1}{\pi}\liminf_{r\to 1-0}\frac{\int_{\{z\in M :2\psi(z)\geq\log r\}}|g(z)|^{2}\hat{\rho}}{1-r}.\] Proof.: If \(Z_{0}\) is a single point set, Proposition 2.19 can be referred to [13]. Let \(\tilde{\psi}(w_{1},\ldots,w_{n})=\max_{1\leq j\leq n}\{2G_{D_{j}}(w_{j},z_{j,1 })\}\) on \(M\). There exist \(t_{0}>0\) and \(C\) such that \[\{z\in M:\tilde{\psi}(z)\geq-t\}\subset\{z\in M:\psi(z)\geq-Ct\}\] for any \(t\in(0,t_{0})\) (see [12]), which implies that \[\liminf_{r\to 1-0}\frac{\int_{\{z\in M:\tilde{\psi}(z)\geq\log r\}}|g(z)|^{2} \hat{\rho}}{1-r}<+\infty.\] As Proposition 2.19 holds when \(Z_{0}\) is a single point set, there is \(f\in H^{2}(M,\partial M)\) such that \(f^{*}=g\). In the following part, we will prove that \[\sum_{1\leq j\leq n}\int_{M_{j}}\int_{\partial D_{j}}|f|^{2}\rho|dw_{j}|d\mu_{j} (\hat{w}_{j})\leq 2\liminf_{r\to 1-0}\frac{\int_{\{z\in M:2\psi(z)\geq\log r \}}|g(z)|^{2}\hat{\rho}}{1-r}.\] Choose any compact subset \(K_{j}\) of \(M_{j}\) for any \(1\leq j\leq n\), and denote that \[\Omega_{j,r}:=\{z\in D_{j}:2\sum_{1\leq k\leq m_{j}}p_{j,k}G_{D_{j}}(z,z_{j,k}) \geq\log r\}\times K_{j}\subset\{z\in M:2\psi(z)\geq\log r\}\] for any \(1\leq j\leq n\). There exists \(r_{1}\in(0,1)\) such that \(\Omega_{j,r_{1}}\cap\Omega_{j^{\prime},r_{1}}=\emptyset\) for any \(j\neq j^{\prime}\) and \(|\bigtriangledown\psi_{j}|\neq 0\) on \(\{z\in D_{j}:2\psi_{j}\geq\log r\}\), where \(\psi_{j}=\sum_{1\leq k\leq m_{j}}p_{j,k}G_{D_{j}}(\cdot,z_{j,k})\) on \(D_{j}\). Note that \(f(\cdot,\hat{w}_{1})=\gamma_{1}(g(\cdot,\hat{w}_{1}))\) denotes the nontangential boundary value of \(g(\cdot,\hat{w}_{1})\) a.e. on \(\partial D_{1}\) for any \(\hat{w}_{1}\in M_{1}\). 
As \[\hat{\rho}(w_{1},\hat{w}_{1})\leq\liminf_{w\to w_{1}}\hat{\rho}(w,\hat{w}_{1})\] for any \((w_{1},\hat{w}_{1})\in\partial D_{1}\times M_{1}\), it follows from Fatou's lemma, Lemma 2.8 and Lemma 2.12 that \[\int_{K_{1}}\int_{\partial D_{1}}|f(w_{1},\hat{w}_{1})|^{2}\rho| dw_{1}|d\mu_{1}(\hat{w}_{1})\] \[= \int_{K_{1}}\left(\int_{\partial D_{1}}\frac{|\gamma_{1}(g(\cdot,\hat{w}_{1}))|^{2}}{\sum_{1\leq k\leq m_{j}}p_{1,k}\frac{\partial G_{D_{1}}( w_{1},z_{1,k})}{\partial v_{w_{1}}}}\hat{\rho}|dw_{1}|\right)d\mu_{1}(\hat{w}_{1})\] \[\leq \liminf_{r\to 1-0}\frac{\int_{\log r}^{0}\left(\int_{K_{1}} \left(\int_{\{\psi_{1}=s\}}\frac{|g|^{2}\hat{\rho}}{|\bigtriangledown\psi_{1} }|dw_{1}|\right)d\mu_{1}(\hat{w}_{1})\right)ds}{-\log r}\] \[= \liminf_{r\to 1-0}\frac{\int_{\Omega_{1,r}}|g|^{2}\hat{\rho}}{-\log r}\] \[= 2\liminf_{r\to 1-0}\frac{\int_{\Omega_{1,r}}|g|^{2}\hat{\rho}}{- \log r}.\] By similar discussion, we have \[\int_{K_{j}}\int_{\partial D_{j}}|f(w_{j},\hat{w}_{j})|^{2}\rho|dw_{j}|d\mu_{j }(\hat{w}_{j})\leq 2\liminf_{r\to 1-0}\frac{\int_{\Omega_{j,r}}|g|^{2}\hat{ \rho}}{-\log r} \tag{2.18}\] for any \(1\leq j\leq n\). As \(\Omega_{j,r}\cap\Omega_{j^{\prime},r}=\emptyset\) for any \(j\neq j^{\prime}\) and \(r\in(r_{1},1)\), following from the arbitrariness of \(K_{j}\) and inequality (2.18) that \[\sum_{1\leq j\leq n}\int_{M_{j}}\int_{\partial D_{j}}|f(w_{j}, \hat{w}_{j})|^{2}\rho|dw_{j}|d\mu_{j}(\hat{w}_{j})\] \[\leq 2\liminf_{r\to 1-0}\frac{\int_{\{z\in M:2\psi(z)\geq\log r \}}|g|^{2}\hat{\rho}}{-\log r}\] \[= 2\liminf_{r\to 1-0}\frac{\int_{\{z\in M:2\psi(z)\geq\log r \}}|g|^{2}\hat{\rho}}{1-r}\] \[<+\infty.\] Thus, Proposition 2.19 holds. The following lemma will be used in the proof of Lemma 2.21. **Lemma 2.20** ([20]).: _Let \(u\) is a subharmonic function on \(\Omega\). If \(v(dd^{c}u,z_{0})<1\), then \(e^{-2u}\) is \(L^{1}\) on a neighborhood of \(z_{0}\)._ Let \(\varphi_{j}\) be a subharmonic function on \(D_{j}\), which satisfies that \(\varphi_{j}\) is continuous at \(z\) for any \(z\in\partial D_{j}\). Let \(\rho=\prod_{1\leq j\leq n}e^{-\varphi_{j}}\) on \(\overline{M}\). **Lemma 2.21**.: _Assume that \(n>1\). Let \(f\in H^{2}_{\rho}(M,\partial M)\). Then for any compact subset \(K\) of \(M\), we have \(\int_{K}|f^{*}|^{2}e^{-\varphi}<+\infty\)._ Proof.: Since \(\varphi_{j}\) is continuous at \(z\) for any \(z\in\partial D_{j}\), for any \(j\), it follows from Weierstrass theorem (see [3]) and Siu's Decomposition Theorem that there exists a holomorphic function \(g_{j}\) on \(\mathbb{C}\) such that \(\varphi_{j}-2\log|g_{j}|\) is subharmonic on \(D_{j}\) and the Lelong number \[v(dd^{c}(\varphi_{j}-2\log|g_{j}|),z)\in[0,2)\] holds for any \(z\in D_{j}\). Lemma 2.20 shows that \(\prod_{1\leq j\leq n}e^{-(\varphi_{j}-2\log|g_{j}|)}\) is locally integrable on \(M\). Thus, it suffices to prove that \(\frac{f^{*}}{\prod_{1\leq j\leq n}g_{j}}\) is holomorphic. As \(f^{*}(\cdot,\hat{w}_{1})\in H^{2}(D_{1})\) for any \(\hat{w}_{1}\in M_{1}\) and \(\gamma_{1}(f^{*})=f\) a.e. on \(\partial D_{1}\times M_{1}\), it follows from Lemma 2.1 that for any \(K_{1}\Subset D_{1}\), there is \(C_{K_{1}}>0\) such that \[\sup_{w_{1}\in K_{1}}|f^{*}(w_{1},\hat{w}_{1})|^{2}\leq C_{K_{1}}\frac{1}{2\pi}\int_{ \partial D_{1}}|f(z_{1},\hat{w}_{1})|^{2}\,|dz_{1}|, \tag{2.19}\] holds for a.e. \(\hat{w}_{1}\in M_{1}\). Note that \(\inf_{M}\frac{\rho}{e^{-2\log|g_{1}|}}>0\). 
As \(f\in H^{2}_{\rho}(M,\partial M)\), we have \[\begin{split}&\int_{M_{j}}\int_{\partial D_{j}}\left|\frac{f^{*}}{g _{l}}\right|^{2}|dz_{j}|d\mu_{j}(\hat{w}_{j})\\ \leq& C_{0}\int_{M_{j}}\int_{\partial D_{j}}|f^{*}|^{ 2}\,e^{-2\log|g_{l}|}\frac{\rho}{e^{-2\log|g_{l}|}}|dz_{j}|d\mu_{j}(\hat{w}_{j })\\ \leq& C_{0}\|f\|_{\partial M,\rho}^{2}\\ <&+\infty\end{split} \tag{2.20}\] for any \(1\leq j,l\leq n\). Since \(g_{2}\not\equiv 0\), inequality (2.19) and (2.20) imply that \[\begin{split}&\int_{M_{1}}\int_{K_{1}}\left|\frac{f^{*}}{g_{2}}(w _{1},\hat{w}_{1})\right|^{2}\\ \leq& C_{1}\int_{M_{1}}\sup_{w_{1}\in K_{1}}\left| \frac{f^{*}}{g_{2}}(w_{1},\hat{w}_{1})\right|^{2}d\mu_{1}(\hat{w}_{1})\\ \leq& C_{1}C_{K_{1}}\int_{M_{1}}\left(\frac{1}{2\pi} \int_{\partial D_{1}}\left|\frac{f^{*}}{g_{2}}(z_{1},\hat{w}_{1})\right|^{2}| dz_{1}|\right)d\mu_{1}(\hat{w}_{1})\\ <&+\infty\end{split}\] for any \(K_{1}\), which implies that \(\frac{f^{*}}{g_{2}}\) is holomorphic on \(M\). Thus, \(\frac{f^{*}}{\prod_{1\leq j\leq n}g_{j}}\in\mathcal{O}(M)\) ### Concavity property of minimal \(L^{2}\) integrals In this section, we recall some results about the concavity property of minimal \(L^{2}\) integrals (see [9, 10, 11]). Let \(M\) be an \(n-\)dimensional Stein manifold, and let \(K_{M}\) be the canonical (holomorphic) line bundle on \(M\). Let \(\psi\) be a plurisubharmonic function on \(M\), and let \(\varphi\) be a Lebesgue measurable function on \(M\), such that \(\varphi+\psi\) is a plurisubharmonic function on \(M\). Take \(T=-\sup_{M}\psi>-\infty\). **Definition 2.22**.: _We call a positive measurable function \(c\) on \((T,+\infty)\) in class \(\mathcal{P}_{T}\) if the following two statements hold:_ (1)_\(c(t)e^{-t}\) is decreasing with respect to \(t\);_ (2) _there is a closed subset \(E\) of \(M\) such that \(E\subset\{z\in Z:\psi(z)=-\infty\}\) and for any compact subset \(K\subseteq M\backslash E\), \(e^{-\varphi}c(-\psi)\) has a positive lower bound on \(K\), where \(Z\) is some analytic subset of \(M\)._ Let \(Z_{0}\) be a subset of \(\{\psi=-\infty\}\) such that \(Z_{0}\cap Supp(\{\mathcal{O}/\mathcal{I}(\varphi+\psi)\})\neq\emptyset\). Let \(U\supseteq Z_{0}\) be an open subset of \(M\) and let \(f\) be a holomorphic \((n,0)\) form on \(U\). Let \(\mathcal{F}\supseteq\mathcal{I}(\varphi+\psi)|_{U}\) be an analytic subsheaf of \(\mathcal{O}\) on \(U\). Denote \[\inf\Bigg{\{}\int_{\{\psi<-t\}}|\tilde{f}|^{2}e^{-\varphi}c(-\psi) :(\tilde{f}-f)\in H^{0}(Z_{0}, (\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\] \[\&\,\tilde{f}\in H^{0}(\{\psi<-t\},\mathcal{O}(K_{M}))\Bigg{\}},\] by \(G(t;c)\) (without misunderstanding, we denote \(G(t;c)\) by \(G(t)\)), where \(t\in[T,+\infty)\), \(c\in\mathcal{P}_{T}\) satisfying \(\int_{T}^{+\infty}c(l)e^{-l}dl<+\infty\), \(|f|^{2}:=\sqrt{-1}^{n^{2}}f\wedge\tilde{f}\) for any \((n,0)\) form \(f\) and \((\tilde{f}-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\) means \((\tilde{f}-f,z_{0})\in(\mathcal{O}(K_{M})\otimes\mathcal{F})_{z_{0}}\) for all \(z_{0}\in Z_{0}\). We recall some results about the concavity for \(G(t)\). **Theorem 2.23** ([9]).: _Assume that \(G(T)<+\infty\). Then \(G(h^{-1}(r))\) is concave with respect to \(r\in(0,\int_{T}^{+\infty}c(l)e^{-l}dl)\), \(\lim_{t\to T+0}G(t)=G(T)\) and \(\lim_{t\to+\infty}G(t)=0\), where \(h(t)=\int_{t}^{+\infty}c(l)e^{-l}dl\)._ The following corollary gives a necessary condition for the concavity of \(G(h^{-1}(r))\) degenerating to linearity. 
**Corollary 2.24** ([9]).: _Assume that \(G(T)\in(0,+\infty)\). If \(G(h^{-1}(r))\) is linear with respect to \(r\in(0,\int_{T}^{+\infty}c(l)e^{-l}dl)\), where \(h(t)=\int_{t}^{+\infty}c(l)e^{-l}dl\), then there is a unique holomorphic \((n,0)\) form \(F\) on \(M\) satisfying \((F-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\) and \(G(t;c)=\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}c(-\psi)\) for any \(t\geq T\). Furthermore,_

\[\int_{\{-t_{1}\leq\psi<-t_{2}\}}|F|^{2}e^{-\varphi}a(-\psi)=\frac{G(T_{1};c)}{\int_{T_{1}}^{+\infty}c(l)e^{-l}dl}\int_{t_{2}}^{t_{1}}a(t)e^{-t}dt\]

_for any nonnegative measurable function \(a\) on \((T,+\infty)\), where \(+\infty\geq t_{1}>t_{2}\geq T\)._

We recall the existence and uniqueness of the holomorphic \((n,0)\) form related to \(G(t)\).

**Lemma 2.25** ([9]).: _Assume that \(G(t)<+\infty\) for some \(t\in[T,+\infty)\). Then there exists a unique holomorphic \((n,0)\) form \(F_{t}\) on \(\{\psi<-t\}\) satisfying \((F_{t}-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\) and \(\int_{\{\psi<-t\}}|F_{t}|^{2}e^{-\varphi}c(-\psi)=G(t)\). Furthermore, for any holomorphic \((n,0)\) form \(\hat{F}\) on \(\{\psi<-t\}\) satisfying \((\hat{F}-f)\in H^{0}(Z_{0},(\mathcal{O}(K_{M})\otimes\mathcal{F})|_{Z_{0}})\) and \(\int_{\{\psi<-t\}}|\hat{F}|^{2}e^{-\varphi}c(-\psi)<+\infty\), we have the following equality_

\[\int_{\{\psi<-t\}}|F_{t}|^{2}e^{-\varphi}c(-\psi)+\int_{\{\psi<-t\}}|\hat{F}-F_{t}|^{2}e^{-\varphi}c(-\psi)=\int_{\{\psi<-t\}}|\hat{F}|^{2}e^{-\varphi}c(-\psi).\]

In the following, we recall some characterizations for the concavity of \(G(h^{-1}(r))\) degenerating to linearity. Assume that \(M=\Omega\) is an open Riemann surface which admits a nontrivial Green function \(G_{\Omega}\). Let \(Z_{0}=\{z_{1},z_{2},\ldots,z_{m}\}\subset\Omega\) be a finite subset of \(\Omega\) satisfying that \(z_{j}\neq z_{k}\) for any \(j\neq k\). We recall some notations (see [3], see also [14, 9, 6]). Let \(p:\Delta\to\Omega\) be the universal covering from the unit disc \(\Delta\) to \(\Omega\). We call a holomorphic function \(f\) on \(\Delta\) a multiplicative function if there is a character \(\chi\), which is a representation of the fundamental group of \(\Omega\), such that \(g^{\star}f=\chi(g)f\), where \(|\chi|=1\) and \(g\) is an element of the fundamental group of \(\Omega\). It is known that for any harmonic function \(u\) on \(\Omega\), there exist a character \(\chi_{u}\) and a multiplicative function \(f_{u}\in\mathcal{O}^{\chi_{u}}(\Omega)\) such that \(|f_{u}|=p^{\star}\left(e^{u}\right)\). Recall that for the Green function \(G_{\Omega}(z,z_{j})\), there exist a character \(\chi_{z_{j}}\) and a multiplicative function \(f_{z_{j}}\in\mathcal{O}^{\chi_{z_{j}}}(\Omega)\) such that \(|f_{z_{j}}(z)|=p^{\star}\left(e^{G_{\Omega}(z,z_{j})}\right)\) (see [22, 21]). The following theorem gives a characterization of the concavity of \(G(h^{-1}(r))\) degenerating to linearity.

**Theorem 2.26** ([10], see also [7]).: _Let \(G(0)\in(0,+\infty)\) and \(p_{j}=\frac{1}{2}v(dd^{c}(\psi),z_{j})>0\) for any \(j\in\{1,2,\ldots,m\}\).
For any \(j\in\{1,2,\ldots,m\}\), assume that one of the following conditions holds:_ \((A)\)_\(\varphi+a\psi\) is subharmonic near \(z_{j}\) for some \(a\in[0,1)\);_ \((B)\)_\((\psi-2p_{j}G_{\Omega}(\cdot,z_{j}))(z_{j})>-\infty\)._ _Then \(G(h^{-1}(r))\) is linear with respect to \(r\) if and only if the following statements hold:_ \((1)\)_\(\psi=2\sum_{1\leq j\leq m}p_{j}G_{\Omega}(\cdot,z_{j})\);_ \((2)\)_\(\varphi+\psi=2\log|g|+2\sum_{1\leq j\leq m}G_{\Omega}(\cdot,z_{j})+2u\) and \(\mathcal{F}_{z_{j}}=\mathcal{I}(\varphi+\psi)_{z_{j}}\) for any \(j\in\{1,2,\ldots,m\}\), where \(g\) is a holomorphic function on \(\Omega\) such that \(ord_{z_{j}}(g)=ord_{z_{j}}(f)\) for any \(j\in\{1,2,\ldots,m\}\) and \(u\) is a harmonic function on \(\Omega\);_ \((3)\)_\(\prod_{1\leq j\leq m}\chi_{z_{j}}=\chi_{-u}\), where \(\chi_{-u}\) and \(\chi_{z_{j}}\) are the characters associated to the functions \(-u\) and \(G_{\Omega}(\cdot,z_{j})\) respectively;_ \((4)\)_\(\lim_{z\to z_{k}}\frac{f}{gp_{*}\left(f_{u}\left(\prod_{1\leq j\leq m}f_{z_{j}} \right)\left(\sum_{1\leq j\leq m}p_{j}\frac{df_{z_{j}}}{f_{z_{j}}}\right)\right) }=c_{0}\) for any \(k\in\{1,2,\ldots,m\}\), where \(c_{0}\in\mathbb{C}\backslash\{0\}\) is a constant independent of \(k\), \(f_{u}\) is a holomorphic function on \(\Delta\) such that \(|f_{u}|=p^{*}(e^{u})\) and \(f_{z_{j}}\) is a holomorphic function on \(\Delta\) such that \(|f_{z_{j}}|=p^{*}\left(e^{G_{\Omega}(\cdot,z_{j})}\right)\)_ **Remark 2.27** ([10]).: _When the four statements in Theorem 2.26 hold,_ \[c_{0}gp_{*}(f_{u}(\Pi_{1\leq j\leq m}f_{z_{j}})(\sum_{1\leq j\leq m}p_{j}\frac{ df_{z_{j}}}{f_{z_{j}}}))\] _is the unique holomorphic \((1,0)\) form \(F\) on \(\Omega\) such that \((F-f,z_{j})\in(\mathcal{O}(K_{\Omega}))_{z_{j}}\otimes\mathcal{F}_{z_{j}}\) for any \(j\in\{1,2,...,m\}\) and \(G(t)=\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}c(-\psi)\) for any \(t\geq 0\)._ We recall the following characterization for the holding of the equality in the optimal \(L^{2}\) extension problem. **Theorem 2.28** ([10]).: _Let \(k_{j}\) be a nonnegative integer for any \(j\in\{1,2,...,m\}\). Let \(\psi\) be a negative subharmonic function on \(\Omega\) satisfying that \(\frac{1}{2}v(dd^{c}\psi,z_{j})=p_{j}>0\) for any \(j\in\{1,2,...,m\}\). Let \(\varphi\) be a Lebesgue measurable function on \(\Omega\) such that \(\varphi+\psi\) is subharmonic on \(\Omega\), \(\frac{1}{2}v(dd^{c}(\varphi+\psi),z_{j})=k_{j}+1\) and \(\alpha_{j}:=(\varphi+\psi-2(k_{j}+1)G_{\Omega}(\cdot,z_{j}))(z_{j})>-\infty\) for any \(j\). Let \(c(t)\) be a positive measurable function on \((0,+\infty)\) satisfying \(c(t)e^{-t}\) is decreasing on \((0,+\infty)\) and \(\int_{0}^{+\infty}c(s)e^{-s}ds<+\infty\). Let \(a_{j}\) be a constant for any \(j\)._ _Let \(f\) be a holomorphic \((1,0)\) form on \(V_{0}\) satisfying that \(f=a_{j}w_{j}^{k_{j}}dw_{j}\) on \(V_{z_{j}}\). Then there exists a holomorphic \((1,0)\) form \(F\) on \(\Omega\) such that \((F-f,z_{j})\in(\mathcal{O}(K_{\Omega})\otimes\mathcal{I}(2(k_{j}+1)G_{\Omega} (\cdot,z_{j})))_{z_{j}}\) and_ \[\int_{\Omega}|F|^{2}e^{-\varphi}c(-\psi)\leq(\int_{0}^{+\infty}c(s)e^{-s}ds) \sum_{1\leq j\leq m}\frac{2\pi|a_{j}|^{2}e^{-\alpha_{j}}}{p_{j}c_{\beta}(z_{j })^{2(k_{j}+1)}}. 
\tag{2.21}\]

_Moreover, equality \((\int_{0}^{+\infty}c(s)e^{-s}ds)\sum_{1\leq j\leq m}\frac{2\pi|a_{j}|^{2}e^{-\alpha_{j}}}{p_{j}c_{\beta}(z_{j})^{2(k_{j}+1)}}=\inf\{\int_{\Omega}|\tilde{F}|^{2}e^{-\varphi}c(-\psi):\tilde{F}\) is a holomorphic \((1,0)\) form on \(\Omega\) such that \((\tilde{F}-f,z_{j})\in(\mathcal{O}(K_{\Omega})\otimes\mathcal{I}(2(k_{j}+1)G_{\Omega}(\cdot,z_{j})))_{z_{j}}\) for any \(j\)\(\}\) holds if and only if the following statements hold:_ \((1)\)_\(\psi=2\sum_{1\leq j\leq m}p_{j}G_{\Omega}(\cdot,z_{j})\);_ \((2)\)_\(\varphi+\psi=2\log|g|+2\sum_{1\leq j\leq m}(k_{j}+1)G_{\Omega}(\cdot,z_{j})+2u\), where \(g\) is a holomorphic function on \(\Omega\) such that \(g(z_{j})\neq 0\) for any \(j\in\{1,2,...,m\}\) and \(u\) is a harmonic function on \(\Omega\);_ \((3)\)_\(\Pi_{1\leq j\leq m}\chi_{z_{j}}^{k_{j}+1}=\chi_{-u}\), where \(\chi_{-u}\) and \(\chi_{z_{j}}\) are the characters associated to the functions \(-u\) and \(G_{\Omega}(\cdot,z_{j})\) respectively;_ \((4)\)_\(\lim_{z\to z_{k}}\frac{f}{gp_{*}(f_{u}(\Pi_{1\leq j\leq m}f_{z_{j}}^{k_{j}+1})(\sum_{1\leq j\leq m}p_{j}\frac{df_{z_{j}}}{f_{z_{j}}}))}=c_{0}\) for any \(k\in\{1,2,...,m\}\), where \(c_{0}\in\mathbb{C}\backslash\{0\}\) is a constant independent of \(k\)._

In the following, we consider the case where \(M\) is a product manifold of open Riemann surfaces. Let \(\Omega_{j}\) be an open Riemann surface, which admits a nontrivial Green function \(G_{\Omega_{j}}\) for any \(1\leq j\leq n\). Let \[M=\prod_{1\leq j\leq n}\Omega_{j}\] be an \(n-\)dimensional complex manifold, and let \(\pi_{j}\) be the natural projection from \(M\) to \(\Omega_{j}\). Let \(K_{M}\) be the canonical (holomorphic) line bundle on \(M\). Let \(\varphi_{j}\) be a subharmonic function on \(\Omega_{j}\), and let \[\varphi=\sum_{1\leq j\leq n}\pi_{j}^{*}(\varphi_{j}).\] Let \(Z_{j}=\{z_{j,1},z_{j,2},...,z_{j,m_{j}}\}\subset\Omega_{j}\) for any \(j\in\{1,2,...,n\}\), where \(m_{j}\) is a positive integer. Denote that \[Z_{0}:=\prod_{1\leq j\leq n}Z_{j}\subset M.\] Let \(\psi=\max_{1\leq j\leq n}\{\pi_{j}^{*}(2\sum_{1\leq k\leq m_{j}}p_{j,k}G_{\Omega_{j}}(\cdot,z_{j,k}))\}\), where \(p_{j,k}>0\) is a constant. Let \(\mathcal{F}_{z}=\mathcal{I}(\psi)_{z}\) for any \(z\in Z_{0}\). Let \(w_{j,k}\) be a local coordinate on a neighborhood \(V_{z_{j,k}}\Subset\Omega_{j}\) of \(z_{j,k}\in\Omega_{j}\) satisfying \(w_{j,k}(z_{j,k})=0\) for any \(j\in\{1,2,...,n\}\) and \(k\in\{1,2,...,m_{j}\}\), where \(V_{z_{j,k}}\cap V_{z_{j,k^{\prime}}}=\emptyset\) for any \(j\) and \(k\neq k^{\prime}\). Denote that \(I_{1}:=\{(\beta_{1},\beta_{2},...,\beta_{n}):1\leq\beta_{j}\leq m_{j}\) for any \(j\in\{1,2,...,n\}\}\), \(V_{\beta}:=\prod_{1\leq j\leq n}V_{z_{j,\beta_{j}}}\) for any \(\beta=(\beta_{1},\beta_{2},...,\beta_{n})\in I_{1}\) and \(w_{\beta}:=(w_{1,\beta_{1}},w_{2,\beta_{2}},...,w_{n,\beta_{n}})\) is a local coordinate on \(V_{\beta}\) of \(z_{\beta}:=(z_{1,\beta_{1}},z_{2,\beta_{2}},...,z_{n,\beta_{n}})\in M\). Let \(f_{0}\) be a holomorphic \((n,0)\) form on \(\cup_{\beta\in I_{1}}V_{\beta}\). Let \(c\) be a positive function on \([0,+\infty)\), which satisfies that \(c(t)e^{-t}\) is decreasing on \([0,+\infty)\), \(\lim_{t\to 0+0}c(t)=c(0)=1\) and \(\int_{0}^{+\infty}c(t)e^{-t}dt<+\infty\).

**Theorem 2.29** ([11]).: _Assume that \(G(0)\in(0,+\infty)\) and \(\varphi(z_{\beta})>-\infty\) for any \(\beta\in I_{1}\).
Then \(G(h^{-1}(r))\) is linear with respect to \(r\in(0,\int_{0}^{+\infty}c(s)e^{-s}ds]\) if and only if the following statements hold:_ (1)_\(\varphi_{j}=2\log|g_{j}|+2u_{j}\) for any \(j\in\{1,2,...,n\}\), where \(u_{j}\) is a harmonic function on \(\Omega_{j}\) and \(g_{j}\) is a holomorphic function on \(\Omega_{j}\) satisfying \(g_{j}(z_{j,k})\neq 0\) for any \(k\in\{1,2,...,m_{j}\}\);_ (2) _there exists a nonnegative integer \(\gamma_{j,k}\) for any \(j\in\{1,2,...,n\}\) and \(k\in\{1,2,...,m_{j}\}\), which satisfies that \(\Pi_{1\leq k\leq m_{j}}\chi_{j,z_{j,k}}^{\gamma_{j,k}+1}=\chi_{j,-u_{j}}\) and \(\sum_{1\leq j\leq n}\frac{\gamma_{j,\beta_{j}}+1}{p_{j,\beta_{j}}}=1\) for any \(\beta\in I_{1}\), where \(\chi_{-u_{j}}\) and \(\chi_{z_{j,k}}\) are the characters associated to the functions \(-u_{j}\) and \(G_{\Omega_{j}}(\cdot,z_{j,k})\) respectively;_ (3)_\(f=(c_{\beta}\Pi_{1\leq j\leq n}w_{j,\beta_{j}}^{\gamma_{j,\beta_{j}}}+g_{\beta})dw_{1,\beta_{1}}\wedge dw_{2,\beta_{2}}\wedge...\wedge dw_{n,\beta_{n}}\) on \(V_{\beta}\) for any \(\beta\in I_{1}\), where \(c_{\beta}\) is a constant and \(g_{\beta}\) is a holomorphic function on \(V_{\beta}\) such that \((g_{\beta},z_{\beta})\in\mathcal{I}(\psi)_{z_{\beta}}\);_ (4)_\(\lim_{z\to z_{\beta}}\frac{c_{\beta}\Pi_{1\leq j\leq n}w_{j,\beta_{j}}^{\gamma_{j,\beta_{j}}}dw_{1,\beta_{1}}\wedge dw_{2,\beta_{2}}\wedge...\wedge dw_{n,\beta_{n}}}{\wedge_{1\leq j\leq n}\pi_{j}^{*}(g_{j}(P_{j})_{*}(f_{u_{j}}(\Pi_{1\leq k\leq m_{j}}f_{z_{j,k}}^{\gamma_{j,k}+1})(\sum_{1\leq k\leq m_{j}}p_{j,k}\frac{df_{z_{j,k}}}{f_{z_{j,k}}})))}=c_{0}\) for any \(\beta\in I_{1}\), where \(c_{0}\in\mathbb{C}\backslash\{0\}\) is a constant independent of \(\beta\), \(P_{j}:\Delta\rightarrow\Omega_{j}\) is the universal covering, \(f_{u_{j}}\) is a holomorphic function on \(\Delta\) such that \(|f_{u_{j}}|=P_{j}^{*}(e^{u_{j}})\) and \(f_{z_{j,k}}\) is a holomorphic function on \(\Delta\) such that \(|f_{z_{j,k}}|=P_{j}^{*}(e^{G_{\Omega_{j}}(\cdot,z_{j,k})})\) for any \(j\in\{1,2,...,n\}\) and \(k\in\{1,2,...,m_{j}\}\)._

The following lemma will be used in the proof of Remark 2.31.

**Lemma 2.30** (see [11]).: _Let \(\psi=\max_{1\leq j\leq n}\{2p_{j}\log|w_{j}|\}\) be a plurisubharmonic function on \(\mathbb{C}^{n}\), where \(p_{j}>0\). Let \(f=\sum_{\alpha\in\mathbb{Z}_{\geq 0}^{n}}b_{\alpha}w^{\alpha}\) (Taylor expansion) be a holomorphic function on \(\{\psi<-t_{0}\}\), where \(t_{0}>0\). Then_ \[\int_{\{\psi<-t\}}|f|^{2}d\lambda_{n}=\sum_{\alpha\in\mathbb{Z}_{\geq 0}^{n}}e^{-\sum_{1\leq j\leq n}\frac{\alpha_{j}+1}{p_{j}}t}\frac{|b_{\alpha}|^{2}\pi^{n}}{\Pi_{1\leq j\leq n}(\alpha_{j}+1)}\] _holds for any \(t\geq t_{0}\)._

**Remark 2.31**.: _The requirement "\(\varphi(z_{\beta})>-\infty\) for any \(\beta\in I_{1}\)" in Theorem 2.29 can be removed._

Proof.: It suffices to prove that the linearity of \(G(h^{-1}(r))\) implies \(\varphi(z_{\beta})>-\infty\) for any \(\beta\in I_{1}\). Assume that \(G(h^{-1}(r))\) is linear with respect to \(r\in(0,\int_{0}^{+\infty}c(t)e^{-t}dt]\). It follows from Corollary 2.24 that there is a holomorphic \((n,0)\) form \(F\) on \(M\) such that \((F-f,z_{\beta})\in\mathcal{I}(\psi)_{z_{\beta}}\) for any \(\beta\in I_{1}\), and \[\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}=\frac{G(0)}{\int_{0}^{+\infty}c(t)e^{-t}dt}e^{-t} \tag{2.22}\] for any \(t\geq 0\). Firstly, we prove that \((F,z_{\beta})\not\in\mathcal{I}(\psi)_{z_{\beta}}\) for any \(\beta\in I_{1}\).
We prove this by contradiction: if not, there exists \(\beta_{0}\in I_{1}\) such that \((F,z_{\beta_{0}})\in\mathcal{I}(\psi)_{z_{\beta_{0}}}\). Then we have \[(f,z_{\beta_{0}})\in\mathcal{I}(\psi)_{z_{\beta_{0}}}.\] There exists \(t>0\) such that \(\{\psi<-t\}\cap V_{\beta_{0}}\Subset V_{\beta_{0}}.\) Corollary 2.24 tells us that \(F\) is the unique "minimal form" on any sublevel set of \(\psi\), thus we have \(F\equiv 0\) on \(\{\psi<-t\}\cap V_{\beta_{0}}\), which implies that \[F\equiv 0\] on \(M\). Then we get that \((f,z_{\beta})\in\mathcal{I}(\psi)_{z_{\beta}}\) for any \(\beta\in I_{1}\), which contradict to \(G(0)>0\). Now, we prove \(\varphi_{j}(z_{\beta})>-\infty\) for any \(\beta\in I_{1}\). Fixed any \(\beta\in I_{1}\), without loss of generality, assume that \(|w_{j}(z)|=e^{\sum_{1\leq k\leq m_{j}}\frac{p_{j,k}}{p_{j,\beta_{j}}}G_{\Omega _{j}}(z,z_{j,k})}\) on \(V_{z_{j,\beta_{j}}}\), hence \(\psi=\max_{1\leq j\leq n}\{2p_{j,\beta_{j}}\log|w_{j}|\}\) on \(V_{\beta}\). There is \(t_{0}>0\) such that \[\{\psi<-t_{0}\}\cap V_{\beta}\Subset V_{\beta}.\] Denote that \[c_{t}:=\sup_{\{\psi<-t\}\cap V_{\beta}}\varphi<+\infty\] for any \(t\geq t_{0}\). As \(\varphi=\sum_{1\leq j\leq n}\varphi_{j}\) is plurisubharmonic, we know that \[\lim_{t\to+\infty}c_{t}=\varphi(z_{\beta})=\sum_{1\leq j\leq n}\varphi_{j}(z_ {j,\beta_{j}}).\] Let \(F=\sum_{\alpha\in\mathbb{Z}_{\geq 0}^{n}}d_{\alpha}w^{\alpha}dw_{1}\wedge \ldots\wedge dw_{n}\) near \(z_{\beta}\). Denote that \(E_{\beta}=\{\alpha\in\mathbb{Z}_{\geq 0}^{n}:\sum_{1\leq j\leq n}\frac{ \alpha_{j}+1}{p_{j,\beta_{j}}}\leq 1\}\). Since \((F,z_{\beta})\not\in\mathcal{I}(\psi)_{z_{\beta}}\), we have \[\sum_{\alpha\in E_{\beta}}|d_{\alpha}|^{2}>0.\] Lemma 2.30 tells us that \[\begin{split}\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}& \geq e^{-c_{t}}\int_{\{\psi<-t\}\cap V_{\beta}}|F|^{2}\\ &=e^{-c_{t}}\sum_{\alpha\in\mathbb{Z}_{\geq 0}^{n}}e^{-\sum_{1\leq j \leq n}\frac{\alpha_{j}+1}{p_{j,\beta_{j}}}t}\frac{|d_{\alpha}|^{2}(2\pi)^{n} }{\Pi_{1\leq j\leq n}(\alpha_{j}+1)}\\ &\geq e^{-c_{t}}\sum_{\alpha\in E_{\beta}}e^{-\sum_{1\leq j\leq n }\frac{\alpha_{j}+1}{p_{j,\beta_{j}}}t}\frac{|d_{\alpha}|^{2}(2\pi)^{n}}{\Pi_ {1\leq j\leq n}(\alpha_{j}+1)}\end{split} \tag{2.23}\] for any \(t\geq t_{0}\). It follows from equality (2.22) and inequality (2.23) that \[\begin{split}\frac{G(0)}{\int_{0}^{+\infty}c(t)e^{-t}dt}& =\lim_{t\to+\infty}e^{t}\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}\\ &\geq\lim_{t\to+\infty}e^{-c_{t}}\sum_{\alpha\in E_{\beta}}e^{ \left(1-\sum_{1\leq j\leq n}\frac{\alpha_{j}+1}{p_{j},\beta_{j}}\right)t} \frac{|d_{\alpha}|^{2}(2\pi)^{n}}{\Pi_{1\leq j\leq n}(\alpha_{j}+1)},\end{split} \tag{2.24}\] Note that \(\frac{G(0)}{\int_{0}^{+\infty}c(t)e^{-t}dt}\in(0,+\infty)\) and \(1-\sum_{1\leq j\leq n}\frac{\alpha_{j}+1}{p_{j},\beta_{j}}\geq 0\) for any \(\alpha\in E_{\beta}\), inequality (2.24) shows that \[\lim_{t\to+\infty}c_{t}>-\infty,\] hence we have \(\varphi(z_{\beta})>-\infty\). Denote that \[c_{j,k}:=\exp\lim_{z\to z_{j,k}}\left(\frac{\sum_{1\leq k_{1}\leq m_{j}}p_{j, k_{1}}G_{\Omega_{j}}(z,z_{j,k_{1}})}{p_{j,k}}-\log|w_{j,k}(z)|\right)\] for any \(j\in\{1,2,...,n\}\) and \(k\in\{1,2,...,m_{j}\}\). 
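For orientation, we note a simple evaluation of these constants (an illustration added here, not taken from [11]). Let \(\Omega_{j}=\Delta\) be the unit disc, so that \(G_{\Delta}(z,w)=\log\left|\frac{z-w}{1-\bar{w}z}\right|\), and take \(m_{j}=2\), \(z_{j,1}=0\), \(z_{j,2}=a\in\Delta\backslash\{0\}\) and \(w_{j,1}(z)=z\). Since \(G_{\Delta}(z,0)=\log|z|\), we get

\[c_{j,1}=\exp\lim_{z\to 0}\left(\frac{p_{j,1}G_{\Delta}(z,0)+p_{j,2}G_{\Delta}(z,a)}{p_{j,1}}-\log|z|\right)=\exp\left(\frac{p_{j,2}}{p_{j,1}}G_{\Delta}(0,a)\right)=|a|^{\frac{p_{j,2}}{p_{j,1}}}.\]

In particular, if \(m_{j}=1\), then \(c_{j,1}=1\).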
**Remark 2.32** ([11]).: _When the four statements in Theorem 2.29 hold,_ \[c_{0}\wedge_{1\leq j\leq n}\pi_{j}^{*}(g_{j}(P_{j})_{*}(f_{u_{j}}(\Pi_{1\leq k \leq m_{j}}f_{z_{j,k}}^{\gamma_{j,k}+1})(\sum_{1\leq k\leq m_{j}}p_{j,k}\frac{ df_{z_{j,k}}}{f_{z_{j,k}}}))\] _is the unique holomorphic \((n,0)\) form \(F\) on \(M\) such that \((F-f,z_{\beta})\in(\mathcal{O}(K_{M}))_{z_{\beta}}\otimes\mathcal{I}(\psi)_{z_ {\beta}}\) for any \(\beta\in I_{1}\) and_ \[G(t)=\int_{\{\psi<-t\}}|F|^{2}e^{-\varphi}c(-\psi)=\left(\int_{t}^{+\infty}c( s)e^{-s}ds\right)\sum_{\beta\in I_{1}}\frac{|c_{\beta}|^{2}(2\pi)^{n}e^{- \varphi(z_{\beta})}}{\Pi_{1\leq j\leq n}(\gamma_{j,\beta_{j}}+1)c_{j,\beta_{j }}^{2\gamma_{j,\beta_{j}}+2}}\] _for any \(t\geq 0\)._ Denote that \(E_{\beta}:=\{(\alpha_{1},\alpha_{2},...,\alpha_{n}):\sum_{1\leq j\leq n}\frac {\alpha_{j}+1}{p_{j,\beta_{j}}}=1\,\&\,\alpha_{j}\in\mathbb{Z}_{\geq 0}\}\). Let \(f\) be a holomorphic \((n,0)\) form on \(\cup_{\beta\in I_{1}}V_{\beta}\) such that \(f=\sum_{\alpha\in E_{\beta}}d_{\beta,\alpha}w_{\beta}^{\alpha}dw_{1,\beta_{1} }\wedge dw_{2,\beta_{2}}\wedge...\wedge dw_{n,\beta_{n}}\) on \(V_{\beta}\) for any \(\beta\in I_{1}\). We recall the following characterization for the holding of the equality in the optimal \(L^{2}\) extension problem. **Theorem 2.33** ([11]).: _If \(\sum_{\beta\in I_{1}}\sum_{\alpha\in E_{\beta}}\frac{|d_{\beta,\alpha}|^{2}(2 \pi)^{n}e^{-\varphi(z_{\beta})}}{\Pi_{1\leq j\leq n}(\alpha_{j}+1)c_{j,\beta _{j}}^{2\alpha_{j}+2}}\in(0,+\infty)\), there exists a holomorphic \((n,0)\) form \(F\) on \(M\), which satisfies that \((F-f,z_{\beta})\in(\mathcal{O}(K_{M})\otimes\mathcal{I}(\psi))_{z_{\beta}}\) for any \(\beta\in I_{1}\) and_ \[\int_{M}|F|^{2}e^{-\varphi}c(-\psi)\leq(\int_{0}^{+\infty}c(s)e^{-s}ds)\sum_{ \beta\in I_{1}}\sum_{\alpha\in E_{\beta}}\frac{|d_{\beta,\alpha}|^{2}(2\pi)^{n }e^{-\varphi(z_{\beta})}}{\Pi_{1\leq j\leq n}(\alpha_{j}+1)c_{j,\beta_{j}}^{2 \alpha_{j}+2}}.\] _Moreover, assume that \(f=w_{\beta^{*}}^{\alpha_{\beta^{*}}}dw_{1,1}\wedge dw_{2,1}\wedge...\wedge dw _{n,1}\) on \(V_{\beta^{*}}\), where \(\beta^{*}=(1,1,...,1)\in I_{1}\), then equality \(\inf\{\int_{M}|\tilde{F}|^{2}e^{-\varphi}c(-\psi):\tilde{F}\in H^{0}(M, \mathcal{O}(K_{M}))\,\&\,(\tilde{F}-f,z_{\beta})\in(\mathcal{O}(K_{M}) \otimes\mathcal{I}(\max_{1\leq j\leq n}\{2\sum_{1\leq k\leq m_{j}}p_{j,k}\pi_{ j}^{*}(G_{\Omega_{j}}(\cdot,z_{j,k}))\}))_{z_{\beta}}\) for any \(\beta\in I_{1}\}=(\int_{0}^{+\infty}c(s)e^{-s}ds)\sum_{\beta\in I_{1}}\sum_{ \alpha\in E_{\beta}}\frac{|d_{\beta,\alpha}|^{2}(2\pi)^{n}e^{-\varphi(z_{\beta} )}}{\Pi_{1\leq j\leq n}(\alpha_{j}+1)c_{j,\beta_{j}}^{2\alpha_{j}+2}}\) holds if and only if the following statements hold:_ (1) \(\varphi_{j}=2\log|g_{j}|+2u_{j}\) _for any \(j\in\{1,2,...,n\}\), where \(u_{j}\) is a harmonic function on \(\Omega_{j}\) and \(g_{j}\) is a holomorphic function on \(\Omega_{j}\) satisfying \(g_{j}(z_{j,k})\neq 0\) for any \(k\in\{1,2,...,m_{j}\}\);_ (2) _there exists a nonnegative integer \(\gamma_{j,k}\) for any \(j\in\{1,2,...,n\}\) and \(k\in\{1,2,...,m_{j}\}\), which satisfies that \(\Pi_{1\leq k\leq m_{j}}\chi_{j,z_{j,k}}^{\gamma_{j,k}+1}=\chi_{j,-u_{j}}\) and \(\sum_{1\leq j\leq n}\frac{\gamma_{j,g_{j}}+1}{p_{j,\beta_{j}}}=1\) for any \(\beta\in I_{1}\);_ (3) \(f=(c_{\beta}\Pi_{1\leq j\leq n}w_{j,\beta_{j}}^{\gamma_{j,\beta_{j}}}+g_{\beta })dw_{1,\beta_{1}}\wedge dw_{2,\beta_{2}}\wedge...\wedge dw_{n,\beta_{n}}\) _on \(V_{\beta}\) for any \(\beta\in I_{1}\), where \(c_{\beta}\) is a constant 
and \(g_{\beta}\) is a holomorphic function on \(V_{\beta}\) such that \((g_{\beta},z_{\beta})\in\mathcal{I}(\psi)_{z_{\beta}}\);_ (4) \(\lim_{z\to z_{\beta}}\frac{c_{\beta}\Pi_{1\leq j\leq n}w_{j,\beta_{j}}^{\gamma_{j,\beta_{j}}}dw_{1,\beta_{1}}\wedge dw_{2,\beta_{2}}\wedge...\wedge dw_{n,\beta_{n}}}{\wedge_{1\leq j\leq n}\pi_{j}^{*}(g_{j}(P_{j})_{*}(f_{u_{j}}(\Pi_{1\leq k\leq m_{j}}f_{z_{j,k}}^{\gamma_{j,k}+1})(\sum_{1\leq k\leq m_{j}}p_{j,k}\frac{df_{z_{j,k}}}{f_{z_{j,k}}})))}=c_{0}\) _for any \(\beta\in I_{1}\), where \(c_{0}\in\mathbb{C}\backslash\{0\}\) is a constant independent of \(\beta\), \(f_{u_{j}}\) is a holomorphic function on \(\Delta\) such that \(|f_{u_{j}}|=P_{j}^{*}(e^{u_{j}})\) and \(f_{z_{j,k}}\) is a holomorphic function on \(\Delta\) such that \(|f_{z_{j,k}}|=P_{j}^{*}(e^{G_{\Omega_{j}}(\cdot,z_{j,k})})\) for any \(j\in\{1,2,...,n\}\) and \(k\in\{1,2,...,m_{j}\}\)._

### Some other required results

In this section, we recall and give some lemmas, which will be used in the proofs of the main theorems. Let \(U\subset\mathbb{C}^{n}\) be an open set. Let us recall the definition of admissible weight given in [15] and [16].

**Definition 2.34** (see [15, 16]).: _A nonnegative measurable function \(\rho\) on \(U\) is called an admissible weight, if for any \(z_{0}\in U\) the following condition is satisfied: there exists a neighborhood \(V_{z_{0}}\) in \(U\) and a constant \(C_{z_{0}}>0\) such that_ \[|f(z)|^{2}\leq C_{z_{0}}\int_{U}|f|^{2}\rho\] _holds for any \(z\in V_{z_{0}}\) and any holomorphic function \(f\) on \(U\)._

Let \(\rho\) be an admissible weight on \(U\). The weighted Bergman space \(A^{2}(U,\rho)\) is defined as follows: \[A^{2}(U,\rho):=\left\{f\in\mathcal{O}(U):\int_{U}|f|^{2}\rho<+\infty\right\}.\] Denote that \[\ll f,g\gg_{U,\rho}:=\int_{U}f\overline{g}\rho\] and \(||f||_{U,\rho}:=(\int_{U}|f|^{2}\rho)^{\frac{1}{2}}\) for any \(f,g\in A^{2}(U,\rho)\).

**Lemma 2.35** (see [15, 16]).: \(A^{2}(U,\rho)\) _is a separable Hilbert space equipped with the inner product \(\ll\cdot,\cdot\gg_{U,\rho}\)._

We recall a sufficient condition for a weight to be an admissible weight.

**Lemma 2.36** (see [13]).: _Let \(\rho\) be a nonnegative Lebesgue measurable function on \(U\), and let \(S\) be an analytic subset of \(U\). Assume that for any \(K\Subset U\backslash S\), there is \(a>0\) such that \(\int_{K}\rho^{-a}<+\infty\). Then \(\rho\) is an admissible weight on \(U\)._

Let \(U\subset\mathbb{C}^{n}\) and \(W\subset\mathbb{C}^{m}\) be two open sets. Let \(\rho_{1}\) and \(\rho_{2}\) be two nonnegative Lebesgue measurable functions on \(U\) and \(W\) respectively. Assume that for any relatively compact set \(U_{1}\Subset U\) (\(W_{2}\Subset W\)), there exists a real number \(a_{1}>0\) (\(a_{2}>0\)) such that \(\rho_{1}^{-a_{1}}\) (\(\rho_{2}^{-a_{2}}\)) is integrable on \(U_{1}\) (\(W_{2}\)). Let \(M:=U\times W\) and \(\rho=\rho_{1}\times\rho_{2}\). By Lemma 2.36, we know that \(\rho_{1}\), \(\rho_{2}\) and \(\rho\) are admissible weights on \(U\), on \(W\) and on \(M\) respectively. The following lemma gives a product property of Bergman spaces.

**Lemma 2.37** (see [8]).: _Let \(\{f_{i}(z)\}_{i\in\mathbb{Z}_{\geq 0}}\) and \(\{g_{j}(w)\}_{j\in\mathbb{Z}_{\geq 0}}\) be complete orthonormal bases of \(A^{2}(U,\rho_{1})\) and \(A^{2}(W,\rho_{2})\) respectively. Then \(\{f_{i}(z)g_{j}(w)\}_{i,j\in\mathbb{Z}_{\geq 0}}\) is a complete orthonormal basis of \(A^{2}(M,\rho)\)._

Let \(D_{j}\), \(M\), \(M_{j}\), \(Z_{j}\), \(Z_{0}\), \(I_{1}\) be as in Section 2.2.
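Before turning to the Hardy spaces over the distinguished boundary, we record a simple illustration of Definition 2.34 and Lemma 2.37 (added here for orientation, not taken from [8]). Take \(U=W=\Delta\) and \(\rho_{1}=\rho_{2}\equiv 1\). Since \(\int_{\Delta}|z^{k}|^{2}d\lambda=\frac{\pi}{k+1}\), the functions

\[e_{k}(z)=\sqrt{\frac{k+1}{\pi}}\,z^{k},\qquad k\in\mathbb{Z}_{\geq 0},\]

form a complete orthonormal basis of \(A^{2}(\Delta,1)\), and Lemma 2.37 states that \(\{e_{k}(z)e_{l}(w)\}_{k,l\in\mathbb{Z}_{\geq 0}}\) is a complete orthonormal basis of \(A^{2}(\Delta^{2},1)\).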
Denote that \[S:=\prod_{1\leq j\leq n}\partial D_{j}.\] Let us recall the definition and some properties of the Hardy space over \(S\). Let \(\lambda\) be a Lebesgue measurable function on \(S\) such that \(\inf_{S}\lambda>0\). Let \(f\in L^{2}(S,\lambda d\sigma)\), where \(d\sigma:=\frac{1}{(2\pi)^{n}}|dw_{1}|\ldots|dw_{n}|\). We call \(f\in H^{2}_{\lambda}(M,S)\) if there exists \(\{f_{m}\}_{m\in\mathbb{Z}_{\geq 0}}\subset\mathcal{O}(M)\cap C(\overline{M}) \cap L^{2}(S,\lambda d\sigma)\) such that \(\lim_{m\to+\infty}\|f_{m}-f\|_{S,\lambda}^{2}=0\), where \(\|g\|_{S,\lambda}:=\left(\int_{S}|g|^{2}\lambda d\sigma\right)^{\frac{1}{2}}\) for any \(g\in L^{2}(S,\lambda d\sigma)\). Denote that \[\ll f,g\gg_{S,\lambda}=\frac{1}{(2\pi)^{n}}\int_{S}f\overline{g}\lambda|dw_{1} |\ldots|dw_{n}|\] for any \(f,g\in L^{2}(S,\lambda d\sigma)\), then \(H^{2}_{\lambda}(M,S)\) is a Hilbert space equipped with the inner product \(\ll\cdot,\cdot\gg_{S,\lambda}\) (see [13]). There exists a linear injective map \(P_{S}:H^{2}_{\lambda}(M,S)\to\mathcal{O}(M)\) satisfying that \(P_{S}(f)=f\) for any \(f\in\mathcal{O}(M)\cap C(\overline{M})\cap L^{2}(S,\lambda d\sigma)\) (see [13]). For simplicity, denote \(P_{S}(f)\) by \(f^{*}\). We recall three lemmas about \(H^{2}_{\lambda}(M,S)\), which will be used in the proof of Lemma 2.42. **Lemma 2.38** ([13]).: _For any compact subset \(K\) of \(M\), there exists a positive constant \(C_{K}\) such that_ \[|f^{*}(z)|\leq C_{K}\|f\|_{S,\lambda}\] _holds for any \(z\in K\) and \(f\in H^{2}_{\lambda}(M,S)\)._ **Lemma 2.39** ([13]).: _Assume that \(M_{S}(Z_{0},J,\lambda)<+\infty\). Then there is a unique holomorphic function \(f\in H^{2}_{\lambda}(M,S)\) such that \(f^{*}(z_{\beta})=h_{0}(z_{\beta})\) for any \(\beta\in I_{1}\), and \(M_{S}(Z_{0},J,\lambda)=\|f\|_{S,\lambda}^{2}\)._ Let \(M_{a}=\prod_{1\leq j\leq n_{a}}D_{j}\) be a bounded domain in \(\mathbb{C}^{n_{a}}\), where \(D_{j}\) is planar regular region with finite boundary components which are analytic Jordan curves for any \(1\leq j\leq n_{a}\). Denote that \(S_{a}:=\prod_{1\leq j\leq n_{a}}\partial D_{j}\). Let \(M_{b}=\prod_{1\leq j\leq n_{b}}\tilde{D}_{j}\) be a bounded domain in \(\mathbb{C}^{n_{b}}\), where \(\tilde{D}_{j}\) is planar regular region with finite boundary components which are analytic Jordan curves for any \(1\leq j\leq n_{b}\). Denote that \(S_{b}:=\prod_{1\leq j\leq n_{b}}\partial\tilde{D}_{j}\). Denote that \(M:=M_{a}\times M_{b}\subset\mathbb{C}^{n_{a}+n_{b}}\) (\(n=n_{a}+n_{b}\)) and \(S:=S_{a}\times S_{b}\). **Lemma 2.40** ([13]).: _Let \(\lambda_{a}\) be a Lebesgue measurable function on \(S_{a}\) such that \(\inf_{S_{a}}\lambda_{a}>0\), and let \(\lambda_{a}\) be a Lebesgue measurable function on \(S_{b}\) such that \(\inf_{S_{b}}\lambda_{b}>0\). Denote that \(\lambda:=\lambda_{a}\lambda_{b}\) on \(S\). Assume that \(H^{2}_{\lambda}(M,S)\neq\{0\}\). Then we have \(H^{2}_{\lambda_{a}}(M_{a},S_{a})\neq\{0\}\) and \(H^{2}_{\lambda_{b}}(M_{b},S_{b})\neq\{0\}\). Furthermore, \(\{e_{m}(z)\tilde{e}_{l}(w)\}_{m,l\in\mathbb{Z}_{>0}}\) is a complete orthonormal basis for \(H^{2}_{\lambda}(M,S)\), where \(\{e_{m}\}_{m\in\mathbb{Z}_{>0}}\) is a complete orthonormal basis for \(H^{2}_{\lambda_{a}}(M_{a},S_{a})\), and \(\{\tilde{e}_{m}\}_{m\in\mathbb{Z}_{>0}}\) is a complete orthonormal basis for \(H^{2}_{\lambda_{b}}(M_{b},S_{b})\)._ In the following, we give three product properties, which will be used in the proof of Theorem 1.15. 
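Before doing so, we record the model case of Lemma 2.40 for intuition (an illustration added here, not taken from [13]). For \(M=\Delta\), \(S=\partial\Delta\) and \(\lambda\equiv 1\), one has

\[\frac{1}{2\pi}\int_{\partial\Delta}z^{k}\overline{z^{l}}\,|dz|=\frac{1}{2\pi}\int_{0}^{2\pi}e^{i(k-l)\theta}d\theta=\delta_{k,l},\]

so \(\{z^{k}\}_{k\in\mathbb{Z}_{\geq 0}}\) is a complete orthonormal basis of \(H^{2}_{1}(\Delta,\partial\Delta)\), the classical Hardy space of the disc. By Lemma 2.40, \(\{z^{k}w^{l}\}_{k,l\in\mathbb{Z}_{\geq 0}}\) is then a complete orthonormal basis of \(H^{2}_{1}(\Delta^{2},\partial\Delta\times\partial\Delta)\).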
Let \(Z_{j}=\{z_{j,1},z_{j,2},...,z_{j,m_{j}}\}\subset D_{j}\) for any \(j\in\{1,2,...,n\}\), where \(m_{j}\) is a positive integer. Denote that \[Z_{0}:=\prod_{1\leq j\leq n}Z_{j}\subset M.\] Denote that \(I_{1}:=\{(\beta_{1},\beta_{2},...,\beta_{n}):1\leq\beta_{j}\leq m_{j}\text{ for any }j\in\{1,2,...,n\}\}\), \(V_{\beta}:=\prod_{1\leq j\leq n}V_{z_{j,\beta_{j}}}\) and \(z_{\beta}:=(z_{1,\beta_{1}},z_{2,\beta_{2}},\dots,z_{n,\beta_{n}})\in M\) for any \(\beta=(\beta_{1},\beta_{2},...,\beta_{n})\in I_{1}\). Let \(h_{j}\) be a holomorphic function on a neighborhood of \(Z_{j}\) for any \(1\leq j\leq n\) satisfying that there exists \(k\in\{1,\dots,m_{j}\}\) such that \(h_{j}(z_{j,k})\neq 0\). Denote that \(h_{0}=\prod_{1\leq j\leq n}h_{j}\). Let \(\rho_{1}\) and \(\rho_{2}\) be two Lebesgue measurable functions on \(\partial D_{1}\) and \(S_{1}:=\prod_{2\leq j\leq n}\partial D_{j}\) respectively, which satisfy that \(\inf_{\partial D_{1}}\rho_{1}>0\) and \(\inf\,_{S_{1}}\rho_{2}>0\). Let \(\lambda_{1}\) and \(\lambda_{2}\) be two nonnegative Lebesgue measurable functions on \(D_{1}\) and \(M_{1}\) respectively, which satisfy that for any relatively compact subset \(R_{1}\Subset D_{1}\) (\(R_{2}\Subset M_{1}\)), there is \(a>0\) such that \(\lambda_{1}^{-a}\) (\(\lambda_{2}^{-a}\)) is integrable on \(R_{1}\) (\(R_{2}\)). By Lemma 2.36, we know that \(\lambda_{1}\) and \(\lambda_{2}\) are admissible weights on \(D_{1}\) and on \(M_{1}\) respectively. Let us consider the following minimal integrals. Let \(J_{\beta}\) be the maximal ideal of \(\mathcal{O}_{z_{\beta}}\) for any \(\beta\in I_{1}\). Denote that \[M_{H,1}(Z_{0},J,\rho_{1}\lambda_{2}):=\inf\bigg{\{} \|f\|_{\partial D_{1}\times M_{1},\rho_{1}\lambda_{2}}^{2}:f\in H _{\rho}^{2}(M,\partial D_{1}\times M_{1})\] \[\text{ s.t. }f^{*}(z_{\beta})=h_{0}(z_{\beta})\text{ for any }\beta\in I_{1} \bigg{\}},\] \[M_{\partial D_{1}}:=\inf\bigg{\{} \frac{1}{2\pi}\int_{\partial D_{1}} |f|^{2}\rho_{1}|dz_{1}|:f\in H^{2}(D_{1})\] \[\text{ s.t. }f(z_{j,k})=h_{1}(z_{j,k})\text{ for any }1\leq k \leq m_{1}\bigg{\}}\] and \[M_{M_{1}}:=\inf\bigg{\{} \int_{M_{1}}|f|^{2}\lambda_{2}:f\in\mathcal{O}(M_{1})\] \[\text{ s.t. }f(z_{\gamma})=\prod_{2\leq l\leq n}h_{l}(z_{l,\gamma_{l}}) \text{ for any }\gamma\in I_{1,1}\bigg{\}},\] where \(I_{1,1}:=\{\gamma=(\gamma_{2},\dots,\gamma_{n})\in\mathbb{Z}^{n-1}:1\leq\gamma _{l}\leq m_{l}\text{ for any }2\leq l\leq n\}\) and \(z_{\gamma}:=(z_{2,\gamma_{2}},\dots,z_{n,\gamma_{n}})\in M_{j}\) for any \(\gamma\in I_{1,1}\). **Lemma 2.41**.: \(M_{H,1}(Z_{0},J,\rho_{1}\lambda_{2})=M_{\partial D_{1}}\times M_{M_{1}}\)_._ Proof.: By definitions of \(M_{H,1}(Z_{0},J,\rho_{1}\lambda_{2})\), \(M_{\partial D_{1}}\) and \(M_{M_{1}}\), we have \[M_{H,1}(Z_{0},J,\rho_{1}\lambda_{2})\leq M_{\partial D_{1}}\times M_{M_{1}}.\] Thus, it suffices to prove \(M_{H,1}(Z_{0},J,\rho_{1}\lambda_{2})\geq M_{\partial D_{1}}\times M_{M_{1}}\). Without loss of generality, assume that \(M_{H,1}(Z_{0},J,\rho_{1}\lambda_{2})<+\infty\). There exists \(f_{0}\in H_{\rho_{1}\lambda_{2}}^{2}(M,\partial D_{1},\times M_{1})\) satisfying \(f_{0}^{*}(z_{\beta})=h_{0}(z_{\beta})\) for any \(\beta\in I_{1}\) and \[M_{H,1}(Z_{0},J,\rho_{1}\lambda_{2})=\|f_{0}\|_{\partial D_{1}\times M_{1},\rho _{1}\lambda_{2}}^{2}. \tag{2.25}\] As \(H^{2}_{\rho_{1}\lambda_{2}}(M,\partial D_{1}\times M_{1})\neq\emptyset\), by Lemma 2.15, \(H^{2}_{\rho_{1}}(D_{1},\partial D_{1})\neq\{0\}\) and \(A^{2}(M_{1},\lambda_{2})\neq\{0\}\). 
Let \(\{e_{l}\}_{l\in\mathbb{Z}_{>0}}\) be a complete orthonormal basis for \(H^{2}_{\rho_{1}}(D_{1},\partial D_{1})\), which satisfies that \(e_{l}(z_{1,l})\neq 0\) for \(1\leq l\leq m_{1}\) and \(e_{l}(z_{1,k})=0\) for \(0\leq k<l\). Denote that \[K_{1}:=\{e_{l}\}_{l>m_{1}}.\] We call \(\gamma<\tilde{\gamma}\) for \(\gamma,\tilde{\gamma}\in I_{1,1}\) if there exists \(s\in\{2,\ldots,n\}\) such that \(\gamma_{l}=\tilde{\gamma}_{l}\) when \(l<s\) and \(\gamma_{s}>\tilde{\gamma}_{s}\). Let \(\{\tilde{e}_{m}\}_{m\in\mathbb{Z}_{>0}}\) be a complete orthonormal basis for \(A^{2}(M_{1},\lambda_{2})\), which satisfies that there exists \(N_{1}\in\mathbb{Z}_{>0}\) such that \(\sum_{\gamma\in I_{1,1}}|\tilde{e}_{m}(z_{\gamma})|=0\) when \(m>N_{1}\) and \(\sum_{\gamma\in I_{1,1}}|\tilde{e}_{m}(z_{\gamma})|\neq 0\) when \(m\leq N_{1}\), and \(s_{m}\) is strictly increasing with respect to \(m\) when \(m\leq N_{1}\), where \(s_{m}:=\inf\{\gamma\in I_{1,1}:\tilde{e}_{m}(z_{\gamma})\neq 0\}\). Denote that \[K_{2}:=\{\tilde{e}_{m}\}_{m>N_{1}}.\] Lemma 2.15 shows that \(\{e_{m}(z)\tilde{e}_{l}(w)\}_{m,l\in\mathbb{Z}_{>0}}\) is a complete orthonormal basis for \(H^{2}_{\rho_{1}\lambda_{2}}(M,\partial D_{1}\times M_{1})\). Then we have \[f_{0}=\sum_{l,m\in\mathbb{Z}_{>0}}a_{l,m}e_{l}\tilde{e}_{m}.\] By Lemma 2.14, we know that \[f_{0}^{*}=\sum_{l,m\in\mathbb{Z}_{>0}}a_{l,m}e_{l}^{*}\tilde{e}_{m}\ \ \mbox{( compactly uniform convergence)}.\] Since there exists \(k\in\{1,\ldots,m_{1}\}\) such that \(h_{1}(z_{1,k})\neq 0\), without loss of generality, assume that \(h_{z}(z_{1,1})\neq 0.\) As \(f_{0}^{*}(z_{\beta})=h_{0}(z_{\beta})=\prod_{1\leq j\leq n}h_{j}(z_{j,\beta_{ j}})\) for any \(\beta\in I_{1}\), we obtain that \[\frac{\sum_{l,m\in\mathbb{Z}_{>0}}a_{l,m}e_{l}^{*}(z_{1,1})\tilde{e}_{m}(z_{ \gamma})}{h_{1}(z_{1,1})}=\prod_{2\leq j\leq n}h_{j}(z_{j,\gamma_{j}})\] for any \(\gamma\in I_{1,1}\). Note that \(\sum_{l,m\in\mathbb{Z}_{>0}}a_{l,m}e_{l}^{*}(z_{1,1})\tilde{e}_{m}\in\mathcal{ O}(M_{1})\) and \[\int_{M_{1}}|\sum_{l,m\in\mathbb{Z}_{>0}}a_{l,m}e_{l}^{*}(z_{1,1} )\tilde{e}_{m}|^{2}\lambda_{2}\] \[= \sum_{l,m\in\mathbb{Z}_{>0}}|a_{l,m}e_{l}^{*}(z_{1,1})|^{2}\] \[= |e^{*}(z_{1,1})|^{2}\|f_{0}\|_{\partial D_{1}\times M_{1},\rho_{ 1}\lambda_{2}}^{2}\] \[< +\infty.\] Thus, we have \(M_{M_{1}}<+\infty\). As \(H^{2}_{\rho_{1}}(D_{1},\partial D_{1})\neq\emptyset\) and \(D_{1}\) is a planar regular region bounded by finite analytic Jordan curves, we know that \(M_{\partial D_{1}}<+\infty\). Let \(f_{1}\in H^{2}_{\rho_{1}}(D_{1},\partial D_{1})\) satisfy \(f_{1}^{*}(z_{1,k})=h_{1}(z_{1,k})\) for any \(1\leq k\leq m_{1}\) and \[M_{\partial D_{1}}=\frac{1}{2\pi}\int_{\partial D_{1}}|f_{1}|^{2}\rho_{1}|dz_{1}|.\] Let \(f_{2}\in\mathcal{O}(M_{1})\) satisfy that \(f_{2}(z_{\gamma})=\prod_{2\leq j\leq n}h_{j}(z_{j,\gamma_{j}})\) for any \(\gamma\in I_{1,1}\) and \[M_{M_{1}}=\int_{M_{1}}|f_{2}|^{2}\lambda_{2}.\] Then we know that \[\int_{\partial D_{1}}f_{1}\overline{f}\rho_{1}|dz_{1}|=0 \tag{2.26}\] for any \(f\in K_{1}\), and \[\int_{M_{1}}f_{2}\overline{g}\lambda_{2}=0 \tag{2.27}\] for any \(g\in K_{2}\). Denote that \[F_{0}:=f_{0}-f_{1}f_{2}, \tag{2.28}\] then we have \(F_{0}\in H^{2}_{\rho_{1}\lambda_{2}}(M,\partial D_{1}\times M_{1})\) and \(F_{0}^{*}(z_{\beta})=0\) for any \(\beta\in I_{1}\). 
As \(\{e_{m}(z)\tilde{e}_{l}(w)\}_{m,l\in\mathbb{Z}_{>0}}\) is a complete orthonormal basis for \(H^{2}_{\rho_{1}\lambda_{2}}(M,\partial D_{1}\times M_{1})\), there exists \(\{b_{l,m}\}_{l,m\in\mathbb{Z}_{>0}}\subset\mathbb{C}\) such that \[F_{0}=\sum_{l,m\in\mathbb{Z}_{>0}}b_{l,m}e_{l}\tilde{e}_{m}=F_{1}+F_{2},\] where \(F_{1}:=\sum_{1\leq l\leq m_{1}}\sum_{1\leq m\leq N_{1}}b_{l,m}e_{l}\tilde{e}_ {m}\) and \(F_{2}:=\sum_{e_{l}\in K_{1}\text{ or }\tilde{e}_{m}\in K_{2}}b_{l,m}e_{l}\tilde{e}_ {m}\). Note that \(F_{2}^{*}(z_{\beta})=0\) for any \(\beta\in I_{1}\), then \(F_{1}^{*}(z_{\beta})=0\) for any \(\beta\in I_{1}\). By the construction of \(\{e_{l}\}_{1\leq l\leq m_{1}}\) and \(\{\tilde{e}_{m}\}_{1\leq m\leq N_{1}}\), we know \(b_{l,m}=0\) for \(1\leq l\leq m_{1}\) and \(1\leq m\leq N_{1}\), i.e., \[F_{1}\equiv 0.\] Note that \(F_{0}=F_{2}=\sum_{e_{l}\in K_{1}\text{ or }\tilde{e}_{m}\in K_{2}}b_{l,m}e_{l} \tilde{e}_{m}\), then it follows from equality (2.25), (2.26), (2.27) and (2.28) that \[M_{H,1}(Z_{0},J,\rho_{1}\lambda_{2}) =\|f_{0}\|_{\partial D_{1}\times M_{1},\rho_{1}\lambda_{2}}^{2}\] \[=\|F_{2}\|_{\partial D_{1}\times M_{1},\rho_{1}\lambda_{2}}^{2}+ \|f_{1}f_{2}\|_{\partial D_{1}\times M_{1},\rho_{1}\lambda_{2}}^{2}\] \[\geq M_{\partial D_{1}}\times M_{M_{1}}.\] Thus, Lemma 2.41 holds. Let us consider the following minimal integrals. Denote that \[M_{S}(Z_{0},J,\rho_{1}\rho_{2}):=\inf\bigg{\{} \|f\|_{S,\rho_{1}\rho_{2}}^{2}:f\in H^{2}_{\rho_{1}\rho_{2}}(M,S)\] s.t. \[f^{*}(z_{\beta})=h_{0}(z_{\beta})\text{ for any }\beta\in I_{1} \bigg{\}},\] and \[M_{S_{1}}:=\inf\bigg{\{} \|f\|_{S_{1},\rho_{2}}^{2}:f\in H^{2}_{\rho_{2}}(M_{1},S_{1})\] s.t. \[f^{*}(z_{\gamma})=\prod_{2\leq j\leq n}h_{j}(z_{j,\gamma_{j}}) \text{ for any }\gamma\in I_{1,1}\bigg{\}}.\] We give a product property as follows: **Lemma 2.42**.: \(M_{S}(Z_{0},J,\rho_{1}\rho_{2})=M_{\partial D_{1}}\times M_{S_{1}}\)_._ Proof.: The proof is similar to the proof of Lemma 2.41. By definitions of \(M_{S}(Z_{0},J,\rho_{1}\rho_{2})\), \(M_{\partial D_{1}}\) and \(M_{S_{1}}\), we have \[M_{S}(Z_{0},J,\rho_{1}\rho_{2})\leq M_{\partial D_{1}}\times M_{S_{1}}.\] Thus, it suffices to prove \(M_{S}(Z_{0},J,\rho_{1}\rho_{2})\geq M_{\partial D_{1}}\times M_{S_{1}}\). Without loss of generality, assume that \(M_{S}(Z_{0},J,\rho_{1}\rho_{2})<+\infty\). By Lemma 2.39, there exists \(f_{0}\in H^{2}_{\rho_{1}\rho_{2}}(M,S)\) satisfying \(f_{0}^{*}(z_{\beta})=h_{0}(z_{\beta})\) for any \(\beta\in I_{1}\) and \[M_{S}(Z_{0},J,\rho_{1}\rho_{2})=\|f_{0}\|_{S,\rho_{1}\rho_{2}}^{2}. \tag{2.29}\] As \(H^{2}_{\rho_{1}\rho_{2}}(M,S)\neq\emptyset\), by Lemma 2.40, \(H^{2}_{\rho_{1}}(D_{1},\partial D_{1})\neq\{0\}\) and \(H^{2}_{\rho_{2}}(M_{1},S_{1})\neq\{0\}\). Let \(\{e_{l}\}_{l\in\mathbb{Z}_{>0}}\) be a complete orthonormal basis for \(H^{2}_{\rho_{1}}(D_{1},\partial D_{1})\), which satisfies that \(e_{l}(z_{1,l})\neq 0\) for \(1\leq l\leq m_{1}\) and \(e_{l}(z_{1,k})=0\) for \(0\leq k<l\). Denote that \[K_{1}:=\{e_{l}\}_{l>m_{1}}.\] We call \(\gamma<\tilde{\gamma}\) for \(\gamma,\tilde{\gamma}\in I_{1,1}\) if there exists \(s\in\{2,\ldots,n\}\) such that \(\gamma_{l}=\tilde{\gamma}_{l}\) when \(l<s\) and \(\gamma_{s}>\tilde{\gamma}_{s}\). 
Let \(\{\tilde{e}_{m}\}_{m\in\mathbb{Z}_{>0}}\) be a complete orthonormal basis for \(H^{2}_{\rho_{2}}(M_{1},S_{1})\), which satisfies that there exists \(N_{1}\in\mathbb{Z}_{>0}\) such that \(\sum_{\gamma\in I_{1,1}}|\tilde{e}_{m}(z_{\gamma})|=0\) when \(m>N_{1}\) and \(\sum_{\gamma\in I_{1,1}}|\tilde{e}_{m}(z_{\gamma})|\neq 0\) when \(m\leq N_{1}\), and \(s_{m}\) is strictly increasing with respect \(m\) when \(m\leq N_{1}\), where \(s_{m}:=\inf\{\gamma\in I_{1,1}:\tilde{e}_{m}(z_{\gamma})\neq 0\}\). Denote that \[K_{2}:=\{\tilde{e}_{m}\}_{m>N_{1}}.\] Lemma 2.40 shows that \(\{e_{m}(z)\tilde{e}_{l}(w)\}_{m,l\in\mathbb{Z}_{>0}}\) is a complete orthonormal basis for \(H^{2}_{\rho_{1}\rho_{2}}(M,S)\). Then we have \[f_{0}=\sum_{l,m\in\mathbb{Z}_{>0}}a_{l,m}e_{l}\tilde{e}_{m}.\] By Lemma 2.38, we know that \[f_{0}^{*}=\sum_{l,m\in\mathbb{Z}_{>0}}a_{l,m}e_{l}^{*}\tilde{e}_{m}^{*}\ \ \mbox{( compactly uniform convergence)}.\] Since there exists \(k\in\{1,\ldots,m_{1}\}\) such that \(h_{1}(z_{1,k})\neq 0\), without loss of generality, assume that \(h_{z}(z_{1,1})\neq 0.\) Note that \(\sum_{l,m\in\mathbb{Z}_{>0}}|a_{l,m}e_{l}^{*}(z_{1,1})|^{2}<+\infty\), then we have \[\sum_{l,m\in\mathbb{Z}_{>0}}a_{l,m}e_{l}^{*}(z_{1,1})\tilde{e}_{m}\in H^{2}_{ \rho_{2}}(M_{1},S_{1}).\] As \(f^{*}(z_{\beta})=h_{0}(z_{\beta})=\prod_{1\leq j\leq n}h_{j}(z_{j,\beta_{j}})\) for any \(\beta\in I_{1}\), we obtain that \[\frac{\sum_{l,m\in\mathbb{Z}_{>0}}a_{l,m}e_{l}^{*}(z_{1,1})\tilde{e}_{m}^{*}(z _{\gamma})}{h_{1}(z_{1,1})}=\prod_{2\leq j\leq n}h_{j}(z_{j,\gamma_{j}})\] for any \(\gamma\in I_{1,1}\). Thus, we have \(M_{S_{1}}<+\infty\). Similarly, we have \(M_{\partial D_{1}}<+\infty\). Let \(f_{1}\in H^{2}_{\rho_{1}}(D_{1},\partial D_{1})\) satisfy \(f_{1}^{*}(z_{1,k})=h_{1}(z_{1,k})\) for any \(1\leq k\leq m_{1}\) and \[M_{\partial D_{1}}=\frac{1}{2\pi}\int_{\partial D_{1}}|f_{1}|^{2}\rho_{1}|dz_{1 }|.\] Let \(f_{2}\in H^{2}_{\rho_{2}}(M_{1},S_{1})\) satisfy that \(f_{2}^{*}(z_{\gamma})=\prod_{2\leq j\leq n}h_{j}(z_{j,\gamma_{j}})\) for any \(\gamma\in I_{1,1}\) and \[M_{M_{1}}=\|f_{2}\|_{S_{1},\rho_{2}}^{2}.\] Then we know that \[\int_{\partial D_{1}}f_{1}\overline{f}\rho_{1}|dz_{1}|=0 \tag{2.30}\] for any \(f\in K_{1}\), and \[\ll f_{2},g\gg_{S_{1},\rho_{2}}=0 \tag{2.31}\] for any \(g\in K_{2}\). Denote that \[F_{0}:=f_{0}-f_{1}f_{2}, \tag{2.32}\] then we have \(F_{0}\in H^{2}_{\rho_{1}\rho_{2}}(M,S)\) and \(F_{0}^{*}(z_{\beta})=0\) for any \(\beta\in I_{1}\). As \(\{e_{m}(z)\tilde{e}_{l}(w)\}_{m,l\in\mathbb{Z}_{>0}}\) is a complete orthonormal basis for \(H^{2}_{\rho_{1}\rho_{2}}(M,S)\), there exists \(\{b_{l,m}\}_{l,m\in\mathbb{Z}_{>0}}\subset\mathbb{C}\) such that \[F_{0}=\sum_{l,m\in\mathbb{Z}_{>0}}b_{l,m}e_{l}\tilde{e}_{m}=F_{1}+F_{2},\] where \(F_{1}:=\sum_{1\leq l\leq m_{1}}\sum_{1\leq m\leq N_{1}}b_{l,m}e_{l}\tilde{e}_{m}\) and \(F_{2}:=\sum_{e_{l}\in K_{1}\text{ or }\tilde{e}_{m}\in K_{2}}b_{l,m}e_{l} \tilde{e}_{m}\). Note that \(F_{2}^{*}(z_{\beta})=0\) for any \(\beta\in I_{1}\), then \(F_{1}^{*}(z_{\beta})=0\) for any \(\beta\in I_{1}\). 
By the construction of \(\{e_{l}\}_{1\leq l\leq m_{1}}\) and \(\{\tilde{e}_{m}\}_{1\leq m\leq N_{1}}\), we know \(b_{l,m}=0\) for \(1\leq l\leq m_{1}\) and \(1\leq m\leq N_{1}\), i.e., \[F_{1}\equiv 0.\] Note that \(F_{0}=F_{2}=\sum_{e_{l}\in K_{1}\text{ or }\tilde{e}_{m}\in K_{2}}b_{l,m}e_{l} \tilde{e}_{m}\), then it follows from equality (2.29), (2.30), (2.31) and (2.32) that \[M_{S}(Z_{0},J,\rho_{1}\rho_{2})=\|f_{0}\|_{S,\rho_{1}\rho_{2}}^{2}=\|F_{2}\|_{ S,\rho_{1}\rho_{2}}^{2}+\|f_{1}f_{2}\|_{S,\rho_{1}\rho_{2}}^{2}\geq M_{ \partial D_{1}}\times M_{S_{1}}.\] Thus, Lemma 2.42 holds. Let us consider the following minimal integrals. Denote that \[M_{D_{1}}:=\inf\bigg{\{} \int_{D_{1}}|f|^{2}\lambda_{1}:f\in\mathcal{O}(D_{1})\] s.t. \[f(z_{1,k})=h_{1}(z_{1,k})\text{ for any }1\leq k\leq m_{1}\bigg{\}},\] and \[M_{M}:=\inf\bigg{\{} \int_{M}|f|^{2}\lambda_{1}\lambda_{2}:f\in\mathcal{O}(M)\] s.t. \[f(z_{\beta})=h_{0}(z_{\beta})\text{ for any }\beta\in I_{1}\bigg{\}}.\] **Lemma 2.43**.: \(M_{M}=M_{D_{1}}\times M_{M_{1}}\)_._ Proof.: The proof is similar to the proof of Lemma 2.41. By definitions of \(M_{M}\), \(M_{D_{1}}\) and \(M_{M_{1}}\), we have \(M_{M}\leq M_{D_{1}}\times M_{M_{1}}\). Thus, it suffices to prove \(M_{M}\geq M_{D_{1}}\times M_{M_{1}}\). Without loss of generality, assume that \(M_{M}<+\infty\). There exists \(f_{0}\in\mathcal{O}(M)\) satisfying \(f_{0}(z_{\beta})=h_{0}(z_{\beta})\) for any \(\beta\in I_{1}\) and \[M_{M}=\int_{M}|f_{0}|^{2}\lambda_{1}\lambda_{2}. \tag{2.33}\] Let \(\{e_{l}\}_{l\in\mathbb{Z}_{>0}}\) be a complete orthonormal basis for \(A^{2}(D_{1},\lambda_{1})\), which there exists \(N_{1}\in\mathbb{Z}_{>0}\) such that \(\sum_{1\leq k\leq m_{1}}|e_{l}(z_{1,k})|=0\) when \(l>N_{1}\) and \(\sum_{1\leq k\leq m_{1}}|e_{l}(z_{1,k})|\neq 0\) when \(l\leq N_{1}\), and \(s_{l}\) is strictly increasing with respect \(l\) when \(l\leq N_{1}\), where \(s_{l}:=\inf\{k\in\{1,\dots,m_{1}\}:e_{l}(z_{1,k})\neq 0\}\). Denote that \[K_{1}:=\{e_{l}\}_{l>N_{1}}.\] Let \(\{\tilde{e}_{m}\}_{m\in\mathbb{Z}_{>0}}\) be a complete orthonormal basis for \(A^{2}(M_{1},\lambda_{2})\), which satisfies that there exists \(N_{2}\in\mathbb{Z}_{>0}\) such that \(\sum_{\gamma\in I_{1}}|\tilde{e}_{m}(z_{\gamma})|=0\) when \(m>N_{2}\) and \(\sum_{\gamma\in I_{1,1}}|\tilde{e}_{m}(z_{\gamma})|\neq 0\) when \(m\leq N_{2}\), and \(\tilde{s}_{m}\) is strictly increasing with respect \(m\) when \(m\leq N_{2}\), where \(\tilde{s}_{m}:=\inf\{\gamma\in I_{1,1}:\tilde{e}_{m}(z_{\gamma})\neq 0\}\). Denote that \[K_{2}:=\{\tilde{e}_{m}\}_{m>N_{2}}.\] Lemma 2.37 shows that \(\{e_{m}(z)\tilde{e}_{l}(w)\}_{m,l\in\mathbb{Z}_{>0}}\) is a complete orthonormal basis for \(A^{2}(M,\lambda_{1}\lambda_{2})\). Then we have \[f_{0}=\sum_{l,m\in\mathbb{Z}_{>0}}a_{l,m}e_{l}\tilde{e}_{m}.\] Since there exists \(k\in\{1,\ldots,m_{1}\}\) such that \(h_{1}(z_{1,k})\neq 0\), without loss of generality, assume that \(h_{z}(z_{1,1})\neq 0.\) As \(f_{0}(z_{\beta})=h_{0}(z_{\beta})=\prod_{1\leq j\leq n}h_{j}(z_{j,\beta_{j}})\) for any \(\beta\in I_{1}\), we obtain that \[\frac{\sum_{l,m\in\mathbb{Z}_{>0}}a_{l,m}e_{l}(z_{1,1})\tilde{e}_{m}(z_{ \gamma})}{h_{1}(z_{1,1})}=\prod_{2\leq j\leq n}h_{j}(z_{j,\gamma_{j}})\] for any \(\gamma\in I_{1,1}\). 
Note that \(\sum_{l,m\in\mathbb{Z}_{>0}}a_{l,m}e_{l}(z_{1,1})\tilde{e}_{m}\in\mathcal{O}( M_{1})\) and \[\int_{M_{1}}|\sum_{l,m\in\mathbb{Z}_{>0}}a_{l,m}e_{l}(z_{1,1}) \tilde{e}_{m}|^{2}\lambda_{2}\] \[= \sum_{l,m\in\mathbb{Z}_{>0}}|a_{l,m}e_{l}(z_{1,1})|^{2}\] \[= |e(z_{1,1})|^{2}\int_{M}|f_{0}|^{2}\lambda_{1}\lambda_{2}\] \[< +\infty.\] Thus, we have \(M_{M_{1}}<+\infty\). Similarly, we have \(M_{D_{1}}<+\infty\). Let \(f_{1}\in\mathcal{O}(D_{1})\) satisfy \(f_{1}(z_{1,k})=h_{1}(z_{1,k})\) for any \(1\leq k\leq m_{1}\) and \[M_{D_{1}}=\int_{D_{1}}|f_{1}|^{2}\lambda_{1}.\] Let \(f_{2}\in\mathcal{O}(M_{1})\) satisfy that \(f_{2}(z_{\gamma})=\prod_{2\leq j\leq n}h_{j}(z_{j,\gamma_{j}})\) for any \(\gamma\in I_{1,1}\) and \[M_{M_{1}}=\int_{M_{1}}|f_{2}|^{2}\lambda_{2}.\] Then we know that \[\int_{D_{1}}f_{1}\overline{f}\lambda_{1}=0 \tag{2.34}\] for any \(f\in K_{1}\), and \[\int_{M_{1}}f_{2}\overline{g}\lambda_{2}=0 \tag{2.35}\] for any \(g\in K_{2}\). Denote that \[F_{0}:=f_{0}-f_{1}f_{2}, \tag{2.36}\] then we have \(F_{0}\in A^{2}(M,\lambda_{1}\lambda_{2})\) and \(F_{0}(z_{\beta})=0\) for any \(\beta\in I_{1}\). As \(\{e_{m}(z)\tilde{e}_{l}(w)\}_{m,l\in\mathbb{Z}_{>0}}\) is a complete orthonormal basis for \(A^{2}(M,\lambda_{1}\lambda_{2})\), there exists \(\{b_{l,m}\}_{l,m\in\mathbb{Z}_{>0}}\subset\mathbb{C}\) such that \[F_{0}=\sum_{l,m\in\mathbb{Z}_{>0}}b_{l,m}e_{l}\tilde{e}_{m}=F_{1}+F_{2},\] where \(F_{1}:=\sum_{1\leq l\leq N_{1}}\sum_{1\leq m\leq N_{2}}b_{l,m}e_{l}\tilde{e}_{m}\) and \(F_{2}:=\sum_{e_{l}\in K_{1}\text{ or }\tilde{e}_{m}\in K_{2}}b_{l,m}e_{l}\tilde{e}_{m}\). Note that \(F_{2}(z_{\beta})=0\) for any \(\beta\in I_{1}\), then \(F_{1}(z_{\beta})=0\) for any \(\beta\in I_{1}\). By the construction of \(\{e_{l}\}_{1\leq l\leq N_{1}}\) and \(\{\tilde{e}_{m}\}_{1\leq m\leq N_{2}}\), we know \(b_{l,m}=0\) for \(1\leq l\leq N_{1}\) and \(1\leq m\leq N_{2}\), i.e., \[F_{1}\equiv 0.\] Note that \(F_{0}=F_{2}=\sum_{e_{l}\in K_{1}\text{ or }\tilde{e}_{m}\in K_{2}}b_{l,m}e_{l} \tilde{e}_{m}\), then it follows from equality (2.33), (2.34), (2.35) and (2.36) that \[M_{M}=\int_{M}|f_{0}|^{2}\lambda_{1}\lambda_{2}=\int_{M}|F_{2}|^{2}\lambda_{1 }\lambda_{2}+\int_{M}|f_{1}f_{2}|^{2}\lambda_{1}\lambda_{2}\geq M_{D_{1}}\times M _{M_{1}}.\] Thus, Lemma 2.43 holds. ## 3. Proofs of Theorem 1.5, Remark 1.6, Corollary 1.7 and Corollary 1.8 In this section, we prove Theorem 1.5, Remark 1.6, Corollary 1.7 and Corollary 1.8. ### Proof of Theorem 1.5 We prove Theorem 1.5 in three steps. _Step 1: proof of inequality (1.1)_ Denote \[\inf\bigg{\{} \int_{\{2\psi<-t\}}|f|^{2}\tilde{\rho}:f\in\mathcal{O}(\{2\psi<-t\})\] \[\text{s.t. }f^{(l)}(z_{j})=a_{j,l}\text{ for any }0\leq l\leq k_{j} \text{ and any }1\leq j\leq m\bigg{\}}\] by \(G(t)\) for \(t\geq 0\). Note that \(\tilde{\rho}=e^{-\varphi}c(-2\psi)\) and \(G(0)=M(Z_{0},\mathfrak{a},\tilde{\rho}).\) As \(v(dd^{c}(\varphi+2\psi),z_{j})\geq 2(k_{j}+1)\) for any \(1\leq j\leq m\), it follows from Theorem 2.23 that \(G(h^{-1}(r))\) is concave, where \(h(t)=\int_{t}^{+\infty}c(s)e^{-s}ds\). 
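As an aside for orientation (this remark is not part of the original argument), in the special case \(c\equiv 1\) we have

\[h(t)=\int_{t}^{+\infty}e^{-s}ds=e^{-t},\qquad h^{-1}(r)=-\log r,\]

so the concavity of \(G(h^{-1}(r))\) says precisely that \(r\mapsto G(-\log r)\) is concave on \((0,1)\).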
By Lemma 2.25, there exists a holomorphic function \(F_{0}\) on \(D\) such that \(f^{(l)}(z_{j})=a_{j,l}\) for any \(0\leq l\leq k_{j}\) and any \(1\leq j\leq m\), and \[G(0)=\int_{D}|F_{0}|^{2}\tilde{\rho}.\] By definition of \(G(t)\), we have \[G(-\log r)\leq\int_{\{2\psi<\log r\}}|F_{0}|^{2}\tilde{\rho}\] for any \(r\in(0,1]\), then combining the concavity of \(G(h^{-1}(r))\), we obtain that \[\frac{\int_{\{z\in D:2\psi(z)\geq\log r\}}|F_{0}(z)|^{2}\tilde{\rho}}{\int_{0 }^{-\log r}c(t)e^{-t}dt}\leq\frac{G(0)-G(-\log r)}{\int_{0}^{-\log r}c(t)e^{- t}dt}\leq\frac{G(0)}{\int_{0}^{+\infty}c(t)e^{-t}dt}<+\infty. \tag{3.1}\] Since \(\lim_{t\to 0+0}c(t)=c(0)=1\) and \(\lim_{w\to z}\varphi(w)=\varphi(z)\) for any \(z\in\partial D\), it follows from Lemma 2.13 and inequality (3.1) that \(F_{0}\in H^{2}(D)\) and \[\begin{split} M_{H}(Z_{0},\mathfrak{a},\rho)&\leq \frac{1}{2\pi}\int_{\partial D}|F_{0}|^{2}\rho|dz|\\ &\leq\frac{1}{2\pi}\liminf_{r\to 1-0}\frac{\int_{\{z\in D: \psi\geq\log r\}}|F_{0}|^{2}\tilde{\rho}}{1-r}\\ &=\frac{1}{2\pi}\liminf_{r\to 1-0}\frac{\int_{\{z\in D:2\psi\geq \log r\}}|F_{0}|^{2}\tilde{\rho}}{\int_{0}^{-\log r}c(t)e^{-t}dt}\times\frac{ \int_{0}^{-\log r}c(t)e^{-t}dt}{1-r^{\frac{1}{2}}}\\ &\leq\frac{1}{\pi\int_{0}^{+\infty}c(t)e^{-t}dt}M(Z_{0}, \mathfrak{a},\tilde{\rho})\end{split} \tag{3.2}\] Thus, inequality (1.1) holds. _Step 2: necessity of the characterization_ Assume that the equality \[M_{H}(Z_{0},\mathfrak{a},\rho)=\frac{M(Z_{0},\mathfrak{a},\tilde{\rho})}{\pi \int_{0}^{+\infty}c(t)e^{-t}dt} \tag{3.3}\] holds. Combining inequality (3.1) and inequality (3.2), we get that \[\liminf_{r\to 1-0}\frac{\int_{\{z\in D:2\psi(z)\geq\log r\}}|F_{0}(z)|^{2} \tilde{\rho}}{\int_{0}^{-\log r}c(t)e^{-t}dt}=\liminf_{r\to 1-0}\frac{G(0)-G(- \log r)}{\int_{0}^{-\log r}c(t)e^{-t}dt}=\frac{G(0)}{\int_{0}^{+\infty}c(t)e^{ -t}dt}.\] Since \(G(h^{-1}(r))\) is concave, we know that \(G(h^{-1}(r))\) is linear with respect to \(r\in(0,\int_{0}^{+\infty}c(t)e^{-t}dt)\). By Theorem 2.26, we get that (1) \(\varphi+2\psi=2\log|g_{1}|+2\sum_{1\leq j\leq m}G_{D}(\cdot,z_{j})+2u_{1}\), where \(g_{1}\) is a holomorphic function on \(D\) such that \(ord_{z_{j}}(g_{1})=\min\{l:a_{j,l}\neq 0\}\), and \(u_{1}\) is a harmonic function on \(D\); (2) \(\psi=\sum_{1\leq j\leq m}p_{j}G_{D}(\cdot,z_{j})\); (3) \(\chi_{-u_{1}}=\prod_{1\leq j\leq m}\chi_{z_{j}}\); (4) For any \(1\leq j\leq m\), \[\lim_{z\to z_{j}}\frac{g_{1}p_{*}\left(f_{u_{1}}\left(\prod_{1\leq j\leq m}f_{ z_{j}}\right)\left(\sum_{1\leq j\leq m}p_{j}\frac{df_{z_{j}}}{f_{z_{j}}} \right)\right)}{\sum_{0\leq l\leq k_{j}}a_{j,l}(z-z_{j})^{l}dz}=c_{0} \tag{3.4}\] holds, where \(c_{0}\neq 0\) is a constant independent of \(j\). As \(v(dd^{c}(\varphi+2\psi),z_{j})\geq 2(k_{j}+1)\), it follows from statements (1) and (4) above that, for any \(1\leq j\leq m\), \(ord_{z_{j}}(g_{1})=k_{j}\) and \(a_{j}=0\) for any \(l<k_{j}\). By Weierstrass theorem (see [3]), there exists a holomorphic function \(g_{0}\) on \(D\) such that \(dg_{0}\neq 0\) on \(\Omega\backslash Z_{0}\) and \(ord_{z_{j}}(g)=k_{j}\) for any \(j\). Denote that \[g_{2}:=\frac{g_{1}}{g_{0}}\quad\text{and}\quad u_{2}:=u_{1}+\log|g_{0}|-\sum_{ 1\leq j\leq m}k_{j}G_{D}(\cdot,z_{j})\] on \(D\). Note that \(u_{2}\) is harmonic on \(D\), and \(g_{2}\) is harmonic on \(D\) satisfying \(dg_{2}(z_{j})\neq 0\) for any \(1\leq j\leq m\). 
Combining statements (1) and (3) above, we have \[\varphi+2\psi=2\log|g_{2}|+2\sum_{1\leq j\leq m}(k_{j}+1)G_{D}(\cdot,z_{j})+2u_ {2}\] \[\chi_{-u_{2}}=\prod_{1\leq j\leq m}\chi_{z_{j}}^{k_{j}+1}.\] In the following, we prove that \(g_{2}\neq 0\) on \(D\). Then \(u:=\log|g_{2}|+u_{2}\) is harmonic on \(D\) and \(\chi_{-u}=\prod_{1\leq j\leq m}\chi_{z_{j}}^{k_{j}+1}\). Combining equality (3.4), \(\log|g_{1}|+u_{1}=\log|g_{0}|+\log|g_{2}|+u_{1}=u+\sum_{1\leq j\leq m}k_{j}G_{ D}(\cdot,z_{j})\) and \(a_{j,l}=0\) for any \(l<k_{j}\), we have \[\lim_{z\to z_{j}}\frac{p_{*}\left(f_{u}\left(\prod_{1\leq j\leq m}f_{z_{j}}^{ k_{j}+1}\right)\left(\sum_{1\leq j\leq m}p_{j}\frac{df_{z_{j}}}{f_{z_{j}}} \right)\right)}{a_{j,k_{j}}(z-z_{j})^{k_{j}}dz}=c_{0},\] thus the necessity of the characterization holds. Denote that \(h:=\varphi+2\psi-2\sum_{1\leq j\leq m}(k_{j}+1)G_{D}(\cdot,z_{j})\) on \(\overline{D}\). Then \(h\) is subharmonic on \(D\) and \(h\) is continuous at \(z\) for any \(z\in\partial D\). It suffices to prove that \(h\) is harmonic on \(D\). By solving Dirichlet problem, there is a continuous function \(\tilde{h}\) on \(\overline{D}\), which satisfies that \(\tilde{h}=h\) on \(\partial D\) and \(\tilde{h}\) is harmonic on \(D\). As \(h\) is subharmonic on \(D\), we have \[h\leq\tilde{h}\] on \(\overline{D}\). Denote that \[\tilde{\varphi}:=\varphi+\tilde{h}-h.\] Then we have \(\tilde{\varphi}|_{\partial D}=\varphi|_{\partial D}\) and \(\tilde{\varphi}+2\psi=2\sum_{1\leq j\leq m}(k_{j}+1)G_{D}(\cdot,z_{j})+\tilde{h}\). Denote that \[\tilde{\rho}_{1}:=e^{-\tilde{\varphi}}c(-2\psi)\] on \(\overline{D}\). Note that \(\tilde{\rho}_{1}\leq\tilde{\rho}\). By definition, we have \[M(Z_{0},\mathfrak{a},\tilde{\rho})\geq M(Z_{0},\mathfrak{a},\tilde{\rho}_{1}).\] Combining equality (3.3) and inequality (1.1), we have \[M_{H}(Z_{0},\mathfrak{a},\rho)\leq\frac{M(Z_{0},\mathfrak{a},\tilde{\rho}_{1} )}{\pi\int_{0}^{+\infty}c(t)e^{-t}dt}\leq\frac{M(Z_{0},\mathfrak{a},\tilde{ \rho})}{\pi\int_{0}^{+\infty}c(t)e^{-t}dt}=M_{H}(Z_{0},\mathfrak{a},\rho),\] which shows that \[M(Z_{0},\mathfrak{a},\tilde{\rho})=M(Z_{0},\mathfrak{a},\tilde{\rho}_{1}).\] As \(M(Z_{0},\mathfrak{a},\tilde{\rho})<+\infty\) and \(\sum_{1\leq j\leq m}\sum_{0\leq l\leq k_{j}}|a_{j,l}|\neq 0\), we have \(\tilde{\rho}_{1}=\tilde{\rho}\), which implies that \(2\log|g_{2}|\) is harmonic on \(D\), i.e. \(g_{2}\neq 0\) on \(D\). Thus, the necessity of characterization in Theorem 1.5 has been proved. _Step 3: sufficiency of the characterization_ Assume that the four statements \((1)-(4)\) in Theorem 1.5 hold. By Weierstrass theorem (see [3]), there exists a holomorphic function \(g_{0}\) on \(D\) such that \(dg_{0}\neq 0\) on \(\Omega\backslash Z_{0}\) and \(ord_{z_{j}}(g)=k_{j}\) for any \(j\). Denote that \(\tilde{u}=u+\sum_{1\leq j\leq m}k_{j}G_{D}(\cdot,z_{j})-\log|g_{0}|\) is a harmonic on \(\Omega\). Thus, we have \(\varphi+2\psi=2\log|g_{0}|+\sum_{1\leq j\leq m}2G_{D}(\cdot,z_{j})+2\tilde{u}\), \(\chi_{-\tilde{u}}=\chi_{-u}\prod_{1\leq j\leq m}\chi_{z_{j}}^{-k_{j}}=\prod_{1 \leq j\leq m}\chi_{z_{j}}\) and \[\lim_{z\to z_{j}}\frac{g_{0}p_{*}\left(f_{\tilde{u}}\left(\prod_{1\leq j\leq m }f_{z_{j}}\right)\left(\sum_{1\leq j\leq m}p_{j}\frac{df_{z_{j}}}{f_{z_{j}}} \right)\right)}{a_{j,k_{j}}(z-z_{j})^{k_{j}}dz}=c_{0}\] for any \(j\). Then, by Theorem 2.26, we know that \(G(h^{-1}(r))\) is linear with respect to \(r\in(0,\int_{0}^{+\infty}c(t)e^{-t}dt)\), where the definition of \(G(t)\) comes from Step 1. 
Using Corollary 2.24 and Remark 2.27, we obtain that \[G(t)=\int_{\{2\psi<-t\}}|F_{0}|^{2}\tilde{\rho} \tag{3.5}\] for any \(t\geq 0\) and \[\begin{split} F_{0}=&\frac{g_{0}p_{*}\left(f_{\tilde{ u}}\left(\prod_{1\leq j\leq m}f_{z_{j}}\right)\left(\sum_{1\leq j\leq m}p_{j} \frac{df_{z_{j}}}{f_{z_{j}}}\right)\right)}{c_{0}dz}\\ =&\frac{p_{*}\left(f_{u}\left(\prod_{1\leq j\leq m}f _{z_{j}}^{k_{j}+1}\right)\left(\sum_{1\leq j\leq m}p_{j}\frac{df_{z_{j}}}{f_{z_ {j}}}\right)\right)}{c_{0}dz},\end{split} \tag{3.6}\] where \(p\) is the universal covering from unit disc \(\Delta\) to \(D\), \(f_{\tilde{u}}\) is a holomorphic function on \(\Delta\) such that \(|f_{\tilde{u}}|=p^{*}(e^{\tilde{u}})\), \(f_{z_{0}}\) is a holomorphic function on \(\Delta\) such that \(|f_{z_{j}}|=p^{*}(e^{G_{D}(\cdot,z_{j})})\) for any \(j\), and \(f_{u}=\frac{f_{\tilde{u}}a_{0}}{\prod_{1\leq j\leq m}f_{z_{j}}^{k_{j}}}\) satisfies that \(|f_{u}|=p^{*}\left(e^{u}\right)\). Note that \(u=\frac{\varphi}{2}+\psi-\sum_{1\leq j\leq m}(k_{j}+1)G_{D}(\cdot,z_{j})\) can be extended to a continuous function on \(\overline{D}\), then we know \[|F_{0}|\in C(\overline{D}).\] Let \(f\in H^{2}(D)\) satisfying \(f^{(l)}(z_{j})=l!a_{j,l}\) for any \(1\leq l\leq k_{j}\) and any \(1\leq j\leq m\). Note that \((f-F_{0},z_{j})\in\mathcal{I}(\varphi+2\psi)_{z_{j}}\) for any \(j\), \(c(t)e^{-t}\) is decreasing and \(\{\psi<-t\}\Subset D\) for any \(t>0\), then it follows from \(\int_{D}|F_{0}|^{2}\tilde{\rho}<+\infty\) that \[\begin{split}\int_{\{2\psi<-t\}}|f|^{2}\tilde{\rho}& \leq 2\int_{\{2\psi<-t\}}|f-F_{0}|^{2}e^{-\varphi}c(-2\psi)+2\int_{D }|F_{0}|^{2}\tilde{\rho}\\ &\leq 2C\int_{\{2\psi<-t\}}|f-F_{0}|^{2}e^{-\varphi-2\psi}+2\int_{D }|F_{0}|^{2}\tilde{\rho}\\ &<+\infty\end{split}\] for any \(t>0\). Following from Lemma 2.25, we have \[\int_{\{2\psi<-t\}}|f|^{2}\tilde{\rho}= \int_{\{2\psi<-t\}}|F_{0}|^{2}\tilde{\rho}+\int_{\{2\psi<-t\}}|f -F_{0}|^{2}\tilde{\rho},\] which implies that \[\int_{\{2\psi<-t\}}F_{0}\overline{F_{0}-f}\tilde{\rho}=0\] for any \(t>0\). It follows from Lemma 2.12 and Lemma 2.8 that there exists \(r_{1}>0\) such that \[\int_{\{z\in D:\psi(z)=r\}}F_{0}\overline{F_{0}-f}e^{-\varphi} \left(\frac{\partial\psi}{\partial v_{z}}\right)^{-1}|dz|=0\] holds for any \(r\in(-r_{1},0)\), which implies that \[\int_{\{z\in D:\psi(z)=r\}}|f|^{2}e^{-\varphi}\left(\frac{\partial \psi}{\partial v_{z}}\right)^{-1}|dz|\geq\int_{\{z\in D:\psi(z)=r\}}|F_{0}|^{2 }e^{-\varphi}\left(\frac{\partial\psi}{\partial v_{z}}\right)^{-1}|dz|. \tag{3.7}\] As \(|F_{0}|\in C(\overline{D})\), it follows from the dominated convergence theorem, Lemma 2.9 and equality (3.7) that \[\int_{\partial D}|f|^{2}e^{-\varphi}\left(\frac{\partial\psi}{ \partial v_{z}}\right)^{-1}|dz|\geq\int_{\partial D}|F_{0}|^{2}e^{-\varphi} \left(\frac{\partial\psi}{\partial v_{z}}\right)^{-1}|dz|,\] then we have \[M_{H}(Z_{0},\mathfrak{a},\rho)=\frac{1}{2\pi}\int_{\partial D}|F_{0}|^{2}e^{-\varphi }\left(\frac{\partial\psi}{\partial v_{z}}\right)^{-1}|dz|. \tag{3.8}\] Note that \(\lim_{t\to 0+0}c(t)=c(0)=1\). 
It follows from equality (3.5), the dominated convergence theorem and Lemma 2.12 that \[\frac{M(Z_{0},\mathfrak{a},\tilde{\rho})}{\int_{0}^{+\infty}c(t)e ^{-t}dt} =\frac{G(0)}{\int_{0}^{+\infty}c(t)e^{-t}dt}\] \[=\lim_{r\to 1-0}\frac{\int_{\{z\in D:2\psi(z)\geq\log r\}}|F_{0}|^{ 2}\tilde{\rho}}{\int_{0}^{-\log r}c(t)e^{-t}dt}\] \[=\frac{1}{2}\int_{\partial D}|F_{0}|^{2}e^{-\varphi}\left(\frac{ \partial\psi}{\partial v_{z}}\right)^{-1}|dz|.\] Combining equality (3.8), we have \[M_{H}(Z_{0},\mathfrak{a},\rho)=\frac{M(Z_{0},\mathfrak{a},\tilde{\rho})}{\pi \int_{0}^{+\infty}c(t)e^{-t}dt}.\] Thus, Theorem 1.5 has been proved. ### Proof of Remark 1.6 Remark 1.6 holds by equality (3.5), (3.6) and (3.8) in the proof of Theorem 1.5. ### Proof of Corollary 1.7 In this section, we prove Corollary 1.7. Denote that \(M:=\inf\{\int_{D}|f|^{2}\lambda:f\in\mathcal{O}(D)\) such that \(f^{(l)}(z_{j})=0\) for \(0\leq l<k_{j}\) and \(f^{(k_{j})}(z_{j})=k_{j}!a_{j}\) for any \(1\leq j\leq m\}.\) Following from Theorem 1.5 and Theorem 2.28 (Taking \(c\equiv 1\)), we have \[M_{H}\leq\frac{M}{\pi}\leq\sum_{1\leq j\leq m}\frac{2|a_{j}|^{2}t_{j}}{(k_{j}+ 1)c_{\beta}(z_{j})^{2(k_{j}+1)}}\lambda(z_{j}). \tag{3.9}\] By Lemma 2.10, there exists \(f\in H^{2}(D)\) such that \(f^{(l)}(z_{j})=0\) for \(0\leq l<k_{j}\) and \(f^{(k_{j})}(z_{j})=k_{j}!a_{j}\) for any \(1\leq j\leq m\), and \[\frac{1}{2\pi}\int_{\partial D}|f|^{2}\lambda\left(\frac{\partial\psi}{ \partial v_{z}}\right)^{-1}|dz|\leq\sum_{1\leq j\leq m}\frac{2|a_{j}|^{2}t_{j }}{(k_{j}+1)c_{\beta}(z_{j})^{2(k_{j}+1)}}\lambda(z_{j}).\] In the following part, we prove the characterization of the holding of equality \(M_{H}=\sum_{1\leq j\leq m}\frac{2|a_{j}|^{2}t_{j}}{(k_{j}+1)c_{\beta}(z_{j}) ^{2(k_{j}+1)}}\lambda(z_{j}).\) Firstly, we prove the necessity. Assume that \(M_{H}=\sum_{1\leq j\leq m}\frac{2|a_{j}|^{2}t_{j}}{(k_{j}+1)c_{\beta}(z_{j})^ {2(k_{j}+1)}}\lambda(z_{j}),\) then by inequality (3.9), we have \[M_{H}=\frac{M}{\pi}.\] Using Theorem 1.5, we know the two statements in Corollary 1.7 hold. Secondly, we prove the sufficiency. Assume that the two statements in Corollary 1.7 hold. Theorem 1.5 shows that \(M_{H}=\frac{M}{\pi}\), and Theorem 2.28 shows that \(\frac{M}{\pi}=\sum_{1\leq j\leq m}\frac{2|a_{j}|^{2}t_{j}}{(k_{j}+1)c_{\beta} (z_{j})^{2(k_{j}+1)}}\lambda(z_{j})\). Then we have \(M_{H}=\sum_{1\leq j\leq m}\frac{2|a_{j}|^{2}t_{j}}{(k_{j}+1)c_{\beta}(z_{j})^ {2(k_{j}+1)}}\lambda(z_{j}).\) Thus, Corollary 1.7 holds. ### Proof of Corollary 1.8 We prove Corollary 1.8 by inductive method. If \(k=0\), it follows from Corollary 1.7 that Corollary 1.8 holds. Assume that \(k\geq 1\) and there is a constant \(C_{1}\), such that for any \(\tilde{a}_{j,l}\in\mathbb{C}\), where \(1\leq j\leq m\) and \(0\leq l\leq k-1\), there exists \(f\in H^{2}(D)\) such that \(f^{(l)}(z_{j})=\tilde{a}_{j,l}\) for any \(1\leq j\leq m\) and \(0\leq l\leq k-1\), and \(\frac{1}{2\pi}\int_{\partial D}|f|^{2}|dz|\leq C_{1}\sum_{1\leq j\leq m}\sum_{ 0\leq l\leq k-1}|\tilde{a}_{j,l}|^{2}\). Then there exists \(f_{1}\in H^{2}(D)\) such that \(f_{1}^{(l)}(z_{j})=a_{j,l}\) for any \(1\leq j\leq m\) and \(0\leq l\leq k-1\), and \[\frac{1}{2\pi}\int_{\partial D}|f_{1}|^{2}|dz|\leq C_{1}\sum_{1\leq j\leq m} \sum_{0\leq l\leq k-1}|a_{j,l}|^{2}. \tag{3.10}\] Following from Lemma 2.3 and inequality (3.10), we have \[\sum_{1\leq j\leq m}|f_{1}^{(k)}(z_{j})|^{2}\leq\frac{C_{2}}{2\pi}\int_{ \partial D}|f_{1}|^{2}|dz|\leq C_{1}C_{2}\sum_{1\leq j\leq m}\sum_{0\leq l\leq k -1}|a_{j,l}|^{2}. 
\tag{3.11}\] According to Corollary 1.7, there is \(f_{2}\in H^{2}(D)\) such that for any \(j\), \(f_{2}^{(l)}(z_{j})=0\) for \(0\leq l\leq k-1\) and \(f_{2}^{(k)}(z_{j})=a_{j,k}-f_{1}^{(k)}(z_{j})\), and \[\frac{1}{2\pi}\int_{\partial D}|f_{2}|^{2}|dz|\leq C_{3}\sum_{1\leq j\leq m}|a_{j,k}-f_{1}^{(k)}(z_{j})|^{2}, \tag{3.12}\] where \(C_{3}\) is a constant independent of \(a_{j,k}\). Denote that \[f:=f_{1}+f_{2},\] then we have \(f^{(l)}(z_{j})=a_{j,l}\) for \(1\leq j\leq m\) and \(0\leq l\leq k\). Combining inequality (3.10), (3.11) and (3.12), we have \[\frac{1}{2\pi}\int_{\partial D}|f|^{2}|dz|\] \[\leq \frac{1}{\pi}\int_{\partial D}|f_{1}|^{2}|dz|+\frac{1}{\pi}\int_{\partial D}|f_{2}|^{2}|dz|\] \[\leq C_{1}\sum_{1\leq j\leq m}\sum_{0\leq l\leq k-1}|a_{j,l}|^{2}+C_{3}\sum_{1\leq j\leq m}|a_{j,k}-f_{1}^{(k)}(z_{j})|^{2}\] \[\leq C_{1}\sum_{1\leq j\leq m}\sum_{0\leq l\leq k-1}|a_{j,l}|^{2}+2C_{3}\sum_{1\leq j\leq m}|a_{j,k}|^{2}+2C_{3}C_{1}C_{2}\sum_{1\leq j\leq m}\sum_{0\leq l\leq k-1}|a_{j,l}|^{2}.\] Take \(C=\max\{C_{1}+2C_{1}C_{2}C_{3},2C_{3}\}\), thus Corollary 1.8 holds by induction. ## 4. Proofs of Theorem 1.9, Theorem 1.11, Remark 1.12, Corollary 1.13 and Corollary 1.14 In this section, we prove Theorem 1.9, Theorem 1.11, Remark 1.12, Corollary 1.13 and Corollary 1.14. ### Proof of Theorem 1.9 We prove Theorem 1.9 in three steps. _Step 1: proof of inequality (1.2)_ Denote that \(\hat{\rho}:=\prod_{1\leq j\leq n}e^{-\varphi_{j}}\), then \(-\log\hat{\rho}\) is plurisubharmonic on \(M\) and \(\hat{\rho}(w_{j},\hat{w}_{j})\leq\liminf_{w\to w_{j}}\hat{\rho}(w,\hat{w}_{j})\) for any \((w_{j},\hat{w}_{j})\in\partial D_{j}\times M_{j}\subset\partial M\) and any \(1\leq j\leq n\). By Lemma 2.25, there exists a holomorphic function \(F_{0}\) on \(M\) such that \((F_{0}-f_{0},z_{\beta})\in J_{\beta}\) for any \(\beta\in I_{1}\), and \[G(0)=\int_{M}|F_{0}|^{2}\tilde{\rho}.\] By definition of \(G(t)\), we have \[G(-\log r)\leq\int_{\{2\psi<\log r\}}|F_{0}|^{2}\tilde{\rho}\] for any \(r\in(0,1]\), then combining the concavity of \(G(h^{-1}(r))\), we obtain that \[\frac{\int_{\{z\in M:2\psi(z)\geq\log r\}}|F_{0}(z)|^{2}\tilde{\rho}}{\int_{0}^{-\log r}c(t)e^{-t}dt}\leq\frac{G(0)-G(-\log r)}{\int_{0}^{-\log r}c(t)e^{-t}dt}\leq\frac{G(0)}{\int_{0}^{+\infty}c(t)e^{-t}dt}<+\infty. \tag{4.1}\] Since \(\lim_{t\to 0+0}c(t)=c(0)=1\) and \(\hat{\rho}(w_{j},\hat{w}_{j})\leq\liminf_{w\to w_{j}}\hat{\rho}(w,\hat{w}_{j})\) for any \((w_{j},\hat{w}_{j})\in\partial D_{j}\times M_{j}\subset\partial M\) and any \(1\leq j\leq n\), it follows from Proposition 2.19 and inequality (4.1) that there is \(\tilde{F}_{0}\in H^{2}_{\rho}(M,\partial M)\) such that \(\tilde{F}_{0}^{*}=F_{0}\) and \[\begin{split} M_{H}(Z_{0},J,\rho)&\leq\|\tilde{F}_{0}\|_{\partial M,\rho}^{2}\\ &\leq\frac{1}{\pi}\liminf_{r\to 1-0}\frac{\int_{\{z\in D:2\psi\geq\log r\}}|F_{0}|^{2}\tilde{\rho}}{1-r}\\ &=\frac{1}{\pi}\liminf_{r\to 1-0}\frac{\int_{\{z\in D:2\psi\geq\log r\}}|F_{0}|^{2}\tilde{\rho}}{\int_{0}^{-\log r}c(t)e^{-t}dt}\times\frac{\int_{0}^{-\log r}c(t)e^{-t}dt}{1-r}\\ &\leq\frac{M(Z_{0},J,\tilde{\rho})}{\pi\int_{0}^{+\infty}c(t)e^{-t}dt}\end{split} \tag{4.2}\] Thus, inequality (1.2) holds. _Step 2: necessity of the characterization_ Assume that the equality \[M_{H}(Z_{0},J,\rho)=\frac{M(Z_{0},J,\tilde{\rho})}{\pi\int_{0}^{+\infty}c(t)e^{-t}dt}\] holds. 
Combining inequality (4.1) and inequality (4.2), we get that \[\liminf_{r\to 1-0}\frac{\int_{\{z\in M:2\psi(z)\geq\log r\}}|F_{0}(z)|^{2} \tilde{\rho}}{\int_{0}^{-\log r}c(t)e^{-t}dt}=\liminf_{r\to 1-0}\frac{G(0)-G(- \log r)}{\int_{0}^{-\log r}c(t)e^{-t}dt}=\frac{G(0)}{\int_{0}^{+\infty}c(t)e^ {-t}dt}\] and \[M_{H}(Z_{0},J,\rho)=\|\tilde{F}_{0}\|_{\partial M,\rho}^{2}.\] Since \(G(h^{-1}(r))\) is concave, we know that \(G(h^{-1}(r))\) is linear with respect to \(r\in[0,\int_{0}^{+\infty}c(t)e^{-t}dt]\). _Step 3: sufficiency of the characterization_ Assume that \(G(h^{-1}(r))\) is linear with respect to \(r\in[0,\int_{0}^{+\infty}c(t)e^{-t}dt]\) and \[M_{H}(Z_{0},J,\rho)=\|\tilde{F}_{0}\|_{\partial M,\rho}^{2}. \tag{4.3}\] Using Corollary 2.24, we obtain that \[G(t)=\int_{\{2\psi<-t\}}|F_{0}|^{2}\tilde{\rho}\] for any \(t\geq 0\). Thus, inequality (4.1) becomes an equality, which shows that \[\frac{1}{\pi}\liminf_{r\to 1-0}\frac{\int_{\{z\in D:2\psi\geq\log r\}}|F_{0}|^{ 2}\tilde{\rho}}{1-r}=\frac{M(Z_{0},J,\tilde{\rho})}{\pi\int_{0}^{+\infty}c(t)e^ {-t}dt}. \tag{4.4}\] Note that \[\{z\in M:2\psi(z)=s\}=\cup_{1\leq j\leq m}\{w_{j}\in D_{j}:2\psi_{j}(w_{j})=s \}\times\{\hat{w}_{j}\in M_{j}:2\hat{\psi}_{j}(\hat{w}_{j})\leq s\},\] where \(s\in(-\infty,0)\), \(\hat{\psi}_{j}:=\max_{1\leq j^{\prime}\leq m,j^{\prime}\neq j}\{\sum_{1\leq k \leq m_{j}}p_{j^{\prime},k}G_{D_{j^{\prime}}}(\cdot,z_{j^{\prime},k})\}\) on \(M_{j}\) and \(\psi_{j}:=\sum_{1\leq k\leq m_{j}}p_{j,k}G_{D_{j}}(\cdot,z_{j,k})\) on \(D_{j}\). Denote that \[M_{j,s}:=\{\hat{w}_{j}\in M_{j}:2\hat{\psi}_{j}(\hat{w}_{j})\leq s\}\] and \[D_{j,s}:=\{w_{j}\in D_{j}:2\psi_{j}(w_{j})\leq s\}\] for \(1\leq j\leq n\). Following from Lemma 2.8, there exists \(r_{0}\in(0,1)\) such that \(\bigtriangledown\psi_{j}\neq 0\) on \(D_{j}\backslash D_{j,\log r_{0}}\) for any \(1\leq j\leq n\). By Lemma 2.12, we have \[\begin{split}&\int_{\{z\in M:2\psi(z)\geq\log r\}}|F_{0}|^{2} \tilde{\rho}\\ =&\sum_{1\leq j\leq n}\int_{\{z\in M:2\psi(z)\geq\log r \&\&\&\ \psi_{j}(z)>\hat{\psi}_{j}(z)\}}|F_{0}|^{2}\tilde{\rho}\\ =&\sum_{1\leq j\leq n}\int_{\log r}^{0}\int_{M_{j,s} }\int_{\partial D_{j,s}}\frac{|F_{0}(w_{j},\hat{w}_{j})|^{2}}{2|\bigtriangledown \psi_{j}|}|dw_{j}|d\mu_{j}(\hat{w}_{j})ds\\ =&\sum_{1\leq j\leq n}\int_{\log r}^{0}c(-s)\int_{M _{j,s}}\int_{\partial D_{j,s}}\frac{|F_{0}(w_{j},\hat{w}_{j})|^{2}}{2| \bigtriangledown\psi_{j}|}\times\prod_{1\leq l\leq n}e^{-\varphi_{l}}|dw_{j} |d\mu_{j}(\hat{w}_{j})ds.\end{split} \tag{4.5}\] for \(r\in(r_{0},1)\). By Lemma 2.16 and \(\tilde{F}_{0}^{*}=F_{0}\), \[\lim_{s\to 0}\sum_{1\leq j\leq n}\int_{M_{j,s}}\int_{\partial D_{j,s}}\frac{|F_{0} (w_{j},\hat{w}_{j})|^{2}}{|\bigtriangledown\psi_{j}|}\times e^{-\varphi}|dw_{j }|d\mu_{j}(\hat{w}_{j})ds=2\pi\|\tilde{F}_{0}\|_{\partial M,\rho}^{2}.\] As \(\lim_{s\to 0+0}c(s)=c(0)=1\), equality (4.5) implies that \[\frac{1}{\pi}\liminf_{r\to 1-0}\frac{\int_{\{z\in D:2\psi\geq\log r\}}|F_{0}|^{ 2}\tilde{\rho}}{1-r}=\|\tilde{F}_{0}\|_{\partial M,\rho}^{2}. \tag{4.6}\] Combining inequality (4.2), equality (4.3), (4.4) and (4.6), we have \(M_{H}(Z_{0},J,\rho)=\frac{M(Z_{0},J,\tilde{\rho})}{\pi\int_{0}^{+\infty}c(t)e^ {-t}dt}\). Thus, Theorem 1.9 has been proved. 
### Proof of Theorem 1.11 As \(\varphi_{j}\) is continuous at \(z\) for any \(z\in\partial D_{j}\), following from Weierstrass theorem (see [3]), statement (1) in Theorem 1.11 is equivalent to \(\varphi_{j}=2\log|g_{j}|+2u_{j}\) for any \(j\in\{1,2,...,n\}\), where \(u_{j}\) is a harmonic function on \(D_{j}\) and \(g_{j}\) is a holomorphic function on \(D_{j}\) satisfying \(g_{j}(z_{j,k})\neq 0\) for any \(k\in\{1,2,...,m_{j}\}\). Thus, using Theorem 2.29 and Remark 2.31, the four statements holds if and only if \(G(h^{-1}(r))\) is linear. We follow the notations in the proof of Theorem 1.9. By Theorem 1.9, we know that it suffices to prove that: if \(G(h^{-1}(r))\) is linear, then \[M_{H}(Z_{0},J,\rho)=\|\tilde{F}_{0}\|_{\partial M,\rho}^{2}, \tag{4.7}\] where \(F_{0}\) is a holomorphic function on \(M\) (introduced in the proof of Theorem 1.9) satisfying that \(\tilde{F}_{0}^{*}=F_{0}\), \[M(Z_{0},J,\tilde{\rho})=\int_{M}|F_{0}|^{2}\tilde{\rho} \tag{4.8}\] and \((F_{0}-f_{0},z_{\beta})\in\mathcal{I}(2\psi)_{z_{\beta}}\) for any \(\beta\in I_{1}\). In the following, assume that \(G(h^{-1}(r))\) is linear on \([0,\int_{0}^{+\infty}c(t)e^{-t}dt]\). Using Corollary 2.24, we obtain that \[G(t)=\int_{\{2\psi<-t\}}|F_{0}|^{2}\tilde{\rho} \tag{4.9}\] for any \(t\geq 0\). Let \(f\) be any element in \(H^{2}_{\rho}(M,\partial M)\) satisfying that \((f^{*}-F_{0},z_{\beta})\in\mathcal{I}(2\psi)_{z_{\beta}}\) for any \(\beta\in I_{1}\). By Theorem 2.29, we know that \(\varphi_{j}=2\log|g_{j}|+2u_{j}\), where \(u_{j}\) is a harmonic function on \(D_{j}\) and \(g_{j}\) is a holomorphic function on \(D_{j}\) satisfying \(g_{j}(z_{j,k})\neq 0\) for \(1\leq k\leq m_{j}\), thus \(\varphi_{j}\) is bounded near \(z_{j,k}\). Note that \(c(t)e^{-t}\) is decreasing on \((0,+\infty)\), then \(\int_{M}|F_{0}|^{2}e^{-\varphi}c(-2\psi)<+\infty\) implies that \[|f^{*}|^{2}e^{-\varphi}c(-2\psi)\leq C|f^{*}-F_{0}|^{2}e^{-2\psi}+2|F_{0}|^{2 }e^{-\varphi}c(-2\psi)\] is integrable near \(z_{\beta}\). For any \(z\in M\backslash\{z_{\beta}:\beta\in I_{1}\}\), as \(c(-2\psi)\) is bounded near \(z\), it follows from Lemma 2.21 that \(|f^{*}|^{2}e^{-\varphi}c(-2\psi)\) is integrable near \(z\). Thus, we obtain that \[\int_{\{2\psi<-t\}}|f^{*}|^{2}e^{-\varphi}c(-2\psi)<+\infty\] holds for any \(t>0\). By equality (4.9), we have \[\int_{\{2\psi<-t\}}(f^{*}-F_{0})\overline{F_{0}}\tilde{\rho}=0,\] which implies that \[\sum_{1\leq j\leq n}\int_{-\infty}^{t}c(-s)\int_{M_{j,*}}\int_{\partial D_{j, *}}\frac{(f^{*}-F_{0})\overline{F_{0}}}{|\bigtriangledown\psi_{j}|}\times \prod_{1\leq l\leq n}e^{-\varphi_{l}}|dw_{j}|d\mu_{j}(\hat{w}_{j})ds=0\] for any \(t>0\) according to Lemma 2.12, where the definitions of \(M_{j,s}\) and \(D_{j,s}\) can be seen in the proof of Theorem 1.9. Thus, \[\sum_{1\leq j\leq n}\int_{M_{j,*}}\int_{\partial D_{j,*}}\frac{(f^{*}-F_{0}) \overline{F_{0}}}{|\bigtriangledown\psi_{j}|}\times\prod_{1\leq l\leq n}e^{- \varphi_{l}}|dw_{j}|d\mu_{j}(\hat{w}_{j})=0,\] which shows that \[\sum_{1\leq j\leq n}\int_{M_{j,*}}\int_{\partial D_{j,*}}\frac{|f^{*}|^{2}e^{- \varphi}}{|\bigtriangledown\psi_{j}|}|dw_{j}|d\mu_{j}(\hat{w}_{j})\geq\sum_{1 \leq j\leq n}\int_{M_{j,*}}\int_{\partial D_{j,*}}\frac{|F_{0}|^{2}e^{-\varphi} }{|\bigtriangledown\psi_{j}|}|dw_{j}|d\mu_{j}(\hat{w}_{j}) \tag{4.10}\] for any \(s>0\). Combining equality (4.10) and Lemma 2.16, we know that \[\|f^{*}\|_{\partial M,\rho}\geq\|\check{F}_{0}\|_{\partial M,\rho},\] which implies that equality (4.7) holds. Thus, Theorem 1.11 has been proved. 
### Proof of Remark 1.12 When the four statements in Theorem 1.11 hold, \(G(h^{-1}(r))\) is linear. Then Remark 1.12 holds by Remark 2.32, equality (4.7) and (4.8). ### Proof of Corollary 1.13 Following from Theorem 1.9 and Theorem 2.33 (taking \(c\equiv 1\)), we have \[M_{H}(Z_{0},\mathcal{I}(2\psi),\rho)\leq\frac{M(Z_{0},\mathcal{I}(2\psi),\rho)}{\pi}\leq\sum_{\beta\in I_{1}}\sum_{\alpha\in E_{\beta}}\frac{|d_{\beta,\alpha}|^{2}2^{n}\pi^{n-1}e^{-\varphi(z_{\beta})}}{\Pi_{1\leq j\leq n}(\alpha_{j}+1)c_{j,\beta_{j}}^{2\alpha_{j}+2}}. \tag{4.11}\] By Lemma 2.18, there exists \(f\in H^{2}_{\rho}(M,\partial M)\) such that \((f^{*}-f_{0},z_{\beta})\in\mathcal{I}(2\psi)_{z_{\beta}}\) for any \(\beta\in I_{1}\), and \[\|f\|_{\partial M,\rho}^{2}\leq\sum_{\beta\in I_{1}}\sum_{\alpha\in E_{\beta}}\frac{|d_{\beta,\alpha}|^{2}2^{n}\pi^{n-1}e^{-\varphi(z_{\beta})}}{\Pi_{1\leq j\leq n}(\alpha_{j}+1)c_{j,\beta_{j}}^{2\alpha_{j}+2}}.\] In the following part, we prove the characterization of the holding of equality \[M_{H}(Z_{0},\mathcal{I}(2\psi),\rho)=\sum_{\beta\in I_{1}}\sum_{\alpha\in E_{\beta}}\frac{|d_{\beta,\alpha}|^{2}2^{n}\pi^{n-1}e^{-\varphi(z_{\beta})}}{\Pi_{1\leq j\leq n}(\alpha_{j}+1)c_{j,\beta_{j}}^{2\alpha_{j}+2}}. \tag{4.12}\] Firstly, we prove the necessity. Assume that equality (4.12) holds, then by inequality (4.11), we have \[M_{H}(Z_{0},\mathcal{I}(2\psi),\rho)=\frac{M(Z_{0},\mathcal{I}(2\psi),\rho)}{\pi}.\] Using Theorem 1.11, we know the four statements in Corollary 1.13 hold. Secondly, we prove the sufficiency. Assume that the four statements in Corollary 1.13 hold. Theorem 1.11 shows that \(M_{H}(Z_{0},\mathcal{I}(2\psi),\rho)=\frac{M(Z_{0},\mathcal{I}(2\psi),\rho)}{\pi}\), and Theorem 2.33 shows that \(\frac{M(Z_{0},\mathcal{I}(2\psi),\rho)}{\pi}=\sum_{\beta\in I_{1}}\sum_{\alpha\in E_{\beta}}\frac{|d_{\beta,\alpha}|^{2}2^{n}\pi^{n-1}e^{-\varphi(z_{\beta})}}{\Pi_{1\leq j\leq n}(\alpha_{j}+1)c_{j,\beta_{j}}^{2\alpha_{j}+2}}\), then equality (4.12) holds. Thus, Corollary 1.13 holds. ### Proof of Corollary 1.14 We prove Corollary 1.14 by induction. If \(k=0\), it follows from Corollary 1.13 that Corollary 1.14 holds. Assume that \(k\geq 1\) and there is a constant \(C_{1}\), such that for any \(\tilde{a}_{\beta,\alpha}\in\mathbb{C}\), where \(\beta\in I_{1}\) and \(\alpha\in L_{k-1}\), there exists \(f\in H^{2}_{\rho}(M,\partial M)\) such that \(\partial^{\alpha}f^{*}(z_{\beta})=\tilde{a}_{\beta,\alpha}\) for any \(\beta\in I_{1}\) and \(\alpha\in L_{k-1}\), and \(\|f\|_{\partial M,\rho}^{2}\leq C_{1}\sum_{\beta\in I_{1},\alpha\in L_{k-1}}|\tilde{a}_{\beta,\alpha}|^{2}.\) Then there exists \(f_{1}\in H^{2}_{\rho}(M,\partial M)\) such that \(\partial^{\alpha}f_{1}^{*}(z_{\beta})=a_{\beta,\alpha}\) for any \(\beta\in I_{1}\) and \(\alpha\in L_{k-1}\), and \[\|f_{1}^{*}\|_{\partial M,\rho}^{2}\leq C_{1}\sum_{\beta\in I_{1},\alpha\in L_{k-1}}|a_{\beta,\alpha}|^{2}. \tag{4.13}\] Following from Lemma 2.14 and inequality (4.13), we have \[\sum_{\beta\in I_{1},|\alpha|=k}|\partial^{\alpha}f_{1}^{*}(z_{\beta})|^{2}\leq C_{2}\|f_{1}^{*}\|_{\partial M,\rho}^{2}\leq C_{1}C_{2}\sum_{\beta\in I_{1},\alpha\in L_{k-1}}|a_{\beta,\alpha}|^{2}, \tag{4.14}\] where \(|\alpha|=\sum_{1\leq j\leq n}\alpha_{j}\). 
It follows from Corollary 1.13 (taking \(f_{0}=\sum_{|\alpha|=k}(a_{\beta,\alpha}-\partial^{\alpha}f_{1}^{*}(z_{\beta}))\prod_{1\leq j\leq n}(w_{j}-z_{j,\beta_{j}})^{\alpha_{j}}\) on \(V_{\beta}\) and \(\psi=2(n+k)\max\{\sum_{1\leq k\leq m_{j}}G_{D_{j}}(\cdot,z_{j,k})\}\)) that there is \(f_{2}\in H^{2}_{\rho}(M,\partial M)\) such that for any \(\beta\in I_{1}\), \(\partial^{\alpha}f_{2}^{*}(z_{\beta})=0\) for \(\alpha\in L_{k-1}\) and \(\partial^{\alpha}f_{2}^{*}(z_{\beta})=a_{\beta,\alpha}-\partial^{\alpha}f_{1}^{*}(z_{\beta})\) for \(|\alpha|=k\), and \[\|f_{2}\|_{\partial M,\rho}^{2}\leq C_{3}\sum_{\beta\in I_{1},|\alpha|=k}|a_{\beta,\alpha}-\partial^{\alpha}f_{1}^{*}(z_{\beta})|^{2}, \tag{4.15}\] where \(C_{3}\) is a constant independent of \(a_{\beta,\alpha}\). Denote that \[f:=f_{1}+f_{2},\] then we have \(\partial^{\alpha}f^{*}(z_{\beta})=a_{\beta,\alpha}\) for \(\beta\in I_{1}\) and \(\alpha\in L_{k}\). Combining inequality (4.13), (4.14) and (4.15), we have \[\|f\|_{\partial M,\rho}^{2}\] \[\leq 2\|f_{1}\|_{\partial M,\rho}^{2}+2\|f_{2}\|_{\partial M,\rho}^{2}\] \[\leq C_{1}\sum_{\beta\in I_{1},\alpha\in L_{k-1}}|a_{\beta,\alpha}|^{2}+C_{3}\sum_{\beta\in I_{1},|\alpha|=k}|a_{\beta,\alpha}-\partial^{\alpha}f_{1}^{*}(z_{\beta})|^{2}\] \[\leq C_{1}\sum_{\beta\in I_{1},\alpha\in L_{k-1}}|a_{\beta,\alpha}|^{2}+2C_{3}\sum_{\beta\in I_{1},|\alpha|=k}|a_{\beta,\alpha}|^{2}+2C_{3}C_{1}C_{2}\sum_{\beta\in I_{1},\alpha\in L_{k-1}}|a_{\beta,\alpha}|^{2}.\] Take \(C=\max\{C_{1}+2C_{1}C_{2}C_{3},2C_{3}\}\), thus Corollary 1.14 holds by induction. ## 5. Proof of Theorem 1.15 We prove Theorem 1.15 in three steps: Firstly, we prove inequality (1.3); Secondly, we prove the necessity of the characterization; Finally, we prove the sufficiency of the characterization. _Step 1._ By Lemma 2.18, there is a unique \(F_{0}\in H^{2}_{\rho}(M,\partial M)\) satisfying that \(F_{0}^{*}(z_{\beta})=h_{0}(z_{\beta})\) for any \(\beta\in I_{1}\) and \(M_{H}(Z_{0},J,\rho)=\|F_{0}\|_{\partial M,\rho}^{2}\). For any \(1\leq j\leq n\), denote that \[M_{H,j}(Z_{0},J,\rho):=\inf\bigg\{\|f\|_{\partial D_{j}\times M_{j},\rho}^{2}:f\in H^{2}_{\rho}(M,\partial D_{j}\times M_{j})\] \[\text{ s.t. }f^{*}(z_{\beta})=h_{0}(z_{\beta})\text{ for any }\beta\in I_{1}\bigg\}.\] By definitions of \(M_{H}(Z_{0},J,\rho)\) and \(M_{H,j}(Z_{0},J,\rho)\), we have \[M_{H}(Z_{0},J,\rho)=\|F_{0}\|_{\partial M,\rho}^{2}\geq\sum_{1\leq j\leq n}M_{H,j}(Z_{0},J,\rho). \tag{5.1}\] For any \(1\leq j\leq n\), denote that \[M_{D_{j}}:=\inf\bigg\{\int_{D_{j}}|f|^{2}e^{-\varphi_{j}}:f\in\mathcal{O}(D_{j})\] \[\text{s.t. }f(z_{j,k})=h_{j}(z_{j,k})\text{ for any }1\leq k\leq m_{j}\bigg\},\] \[M_{\partial D_{j}}:=\inf\bigg\{\frac{1}{2\pi}\int_{\partial D_{j}}|f|^{2}\bigg{(}\sum_{1\leq k\leq m_{j}}2\frac{\partial G_{D_{j}}(w_{j},z_{j,k})}{\partial v_{w_{j}}}\bigg{)}^{-1}e^{-\varphi_{j}}|dw_{j}|:\] \[f\in H^{2}(D_{j})\text{ s.t. }f(z_{j,k})=h_{j}(z_{j,k})\text{ for any }1\leq k\leq m_{j}\bigg\}\] and \[M_{M_{j}}:=\inf\bigg\{\int_{M_{j}}|f|^{2}\prod_{1\leq l\leq n,l\neq j}e^{-\varphi_{l}}:f\in\mathcal{O}(M_{j})\] \[\text{s.t. 
}f(z_{\gamma})=\prod_{1\leq l\leq n,l\neq j}h_{l}(z_{l, \gamma_{l}})\text{ for any }\gamma\in I_{1,j}\bigg{\}},\] where \(I_{1,j}:=\{\gamma=(\gamma_{1},\ldots,\gamma_{j-1},\gamma_{j+1},\ldots,\gamma_ {n})\in\mathbb{Z}^{n-1}:1\leq\gamma_{l}\leq m_{l}\text{ for any }l\neq j\}\) and \(z_{\gamma}:=(z_{1,\gamma_{1}},\ldots,z_{j-1,\gamma_{j-1}},z_{j+1,\gamma_{j+1} },\ldots,z_{n,\gamma_{n}})\in M_{j}\) for any \(\gamma\in I_{1,j}\). It follows from Theorem 1.5, Lemma 2.42, Lemma 2.41, Lemma 2.43 and inequality (5.1), that \[M_{H}(Z_{0},J,\rho) \geq\sum_{1\leq j\leq n}M_{H,j}(Z_{0},J,\rho)\] \[=\sum_{1\leq j\leq n}M_{\partial D_{j}}\times M_{M_{j}}\] \[=\sum_{1\leq j\leq m}M_{\partial D_{j}}\times\prod_{1\leq l\leq n,l\neq j}M_{D_{l}}\] \[\geq\] \[= n\pi^{n-1}M_{S}(Z_{0},J,\lambda). \tag{5.2}\] _Step 2._ Assume that equality \(M_{S}(Z_{0},J,\lambda)=\frac{M_{H}(Z_{0},J,\rho)}{n\pi^{n-1}}\) holds. As there exists \(k\in\{1,\ldots,m_{j}\}\) such that \(h_{j}(z_{j,k})\neq 0\), we know that \(M_{\partial D_{j}}>0\) and \(M_{D_{j}}>0\). Following from inequality (5.2) and Theorem 1.5, we get that \[M_{D_{j}}=\pi M_{\partial D_{j}}\] for any \(1\leq j\leq n\), and then the three statements in Theorem 1.15 hold. _Step 3._ Assume that the three statements in Theorem 1.15 hold. Theorem 1.5 tells us that \[M_{D_{j}}=\pi M_{\partial D_{j}} \tag{5.3}\] holds for any \(1\leq j\leq n\). For any \(1\leq j\leq n\), denote that \[F_{j}=\frac{P_{j}^{*}\left(f_{u_{j}}\left(\prod_{1\leq k\leq m_{j}}f_{z_{j,k} }\right)\left(\sum_{1\leq k\leq m_{j}}\frac{df_{z_{j,k}}}{f_{z_{j,k}}}\right) \right)}{c_{j}dz}\in\mathcal{O}(D_{j}).\] Following from Remark 1.6, there exists \(f_{j}\in H^{2}(D_{j})\) such that \(f_{j}^{*}=F_{j}\), and we have \[M_{D_{j}}=\int_{D_{j}}|F_{j}|^{2}e^{-\varphi_{j}} \tag{5.4}\] and \[M_{\partial D_{j}}=\frac{1}{2\pi}\int_{\partial D_{j}}|f_{j}|^{2}\bigg{(}\sum_{ 1\leq k\leq m_{j}}2\frac{\partial G_{D_{j}}(w_{j},z_{j,k})}{\partial v_{w_{j}}} \bigg{)}^{-1}e^{-\varphi_{j}}|dw_{j}|. \tag{5.5}\] Then there exists \(\tilde{F}_{0}\in H^{2}_{\rho}(M,\partial M)\) such that \(\tilde{F}_{0}=f_{j}\times\prod_{1\leq l\leq n,l\neq j}F_{l}\) on \(\partial D_{j}\times M_{j}\) for any \(1\leq j\leq n\), and \(\tilde{F}_{0}^{*}=\prod_{1\leq j\leq n}F_{j}\). By Lemma 2.41, Lemma 2.43, equality (5.4) and (5.5), we know that \[M_{H,j}(Z_{0},J,\rho)=\|\tilde{F}_{0}\|^{2}_{\partial D_{j}\times M_{j},\rho}. \tag{5.6}\] Note that \(F_{j}(z_{j,k})=h_{j}(z_{j,k})\) for any \(1\leq j\leq n\) and \(1\leq k\leq m_{j}\), hence \(\tilde{F}_{0}^{*}(z_{\beta})=h_{0}(z_{\beta})\) for any \(\beta\in I_{1}\). Inequality (5.6) implies that \[\sum_{1\leq j\leq n}M_{H,j}(Z_{0},J,\rho)=\sum_{1\leq j\leq n}\|\tilde{F}_{0} \|^{2}_{\partial D_{j}\times M_{j},\rho}\geq M_{H}(Z_{0},J,\rho).\] Combining inequality (5.1), we have \[\sum_{1\leq j\leq n}M_{H,j}(Z_{0},J,\rho)=M_{H}(Z_{0},J,\rho). \tag{5.7}\] Using inequality (5.2), equality (5.3) and (5.7), we get that \[M_{H}(Z_{0},J,\rho)=n\pi^{n-1}M_{S}(Z_{0},J,\lambda).\] Thus, Theorem 1.15 holds. _Acknowledgements._ The authors would like to thank Dr. Shijie Bao and Dr. Zhitong Mi for checking the manuscript and pointing out some typos. The first named author was supported by National Key R&D Program of China 2021YFA1003100, NSFC-11825101, NSFC-11522101 and NSFC-11431013.
2305.09145
Deep ReLU Networks Have Surprisingly Simple Polytopes
A ReLU network is a piecewise linear function over polytopes. Figuring out the properties of such polytopes is of fundamental importance for the research and development of neural networks. So far, either theoretical or empirical studies on polytopes only stay at the level of counting their number, which is far from a complete characterization of polytopes. To upgrade the characterization to a new level, here we propose to study the shapes of polytopes via the number of simplices obtained by triangulating the polytope. Then, by computing and analyzing the histogram of simplices across polytopes, we find that a ReLU network has relatively simple polytopes under both initialization and gradient descent, although these polytopes theoretically can be rather diverse and complicated. This finding can be appreciated as a novel implicit bias. Next, we use nontrivial combinatorial derivation to theoretically explain why adding depth does not create a more complicated polytope by bounding the average number of faces of polytopes with a function of the dimensionality. Our results concretely reveal what kind of simple functions a network learns and its space partition property. Also, by characterizing the shape of polytopes, the number of simplices be a leverage for other problems, \textit{e.g.}, serving as a generic functional complexity measure to explain the power of popular shortcut networks such as ResNet and analyzing the impact of different regularization strategies on a network's space partition.
Feng-Lei Fan, Wei Huang, Xiangru Zhong, Lecheng Ruan, Tieyong Zeng, Huan Xiong, Fei Wang
2023-05-16T03:51:34Z
http://arxiv.org/abs/2305.09145v1
# Deep ReLU Networks Have Surprisingly Simple Polytopes ###### Abstract A ReLU network is a piecewise linear function over polytopes. Figuring out the properties of such polytopes is of fundamental importance for the research and development of neural networks. So far, either theoretical or empirical studies on polytopes only stay at the level of counting their number, which is far from a complete characterization of polytopes. To upgrade the characterization to a new level, here we propose to study the shapes of polytopes via the number of simplices obtained by triangulating the polytope. Then, by computing and analyzing the histogram of simplices across polytopes, we find that a ReLU network has relatively simple polytopes under both initialization and gradient descent, although these polytopes theoretically can be rather diverse and complicated. This finding can be appreciated as a novel implicit bias. Next, we use nontrivial combinatorial derivation to theoretically explain why adding depth does not create a more complicated polytope by bounding the average number of faces of polytopes with a function of the dimensionality. Our results concretely reveal what kind of simple functions a network learns and its space partition property. Also, by characterizing the shape of polytopes, the number of simplices be a leverage for other problems, _e.g._, serving as a generic functional complexity measure to explain the power of popular shortcut networks such as ResNet and analyzing the impact of different regularization strategies on a network's space partition. ## 1 Introduction It was shown in a thread of studies Chu et al. (2018); Balestriero and Baraniuk (2020); Hanin and Rolnick (2019); Schonsheck et al. (2019) that a neural network with the piecewise linear activation is to partition the input space into many convex regions, mathematically referred to as polytopes, and each polytope is associated with a linear function (hereafter, we use convex regions, linear regions, and polytopes interchangeably). Hence, a neural network is essentially a piecewise linear function over polytopes. Based on this adorable result, the core idea of a variety of important theoretical advances and empirical findings is to turn the investigation of neural networks into the investigation of polytopes. Figuring out the properties of such polytopes can shed light on many critical problems, which can greatly expedite the research and development of neural networks. Let us use two representative examples to demonstrate the utility of characterizing polytopes: The first representative example is the explanation of the power of depth. In the era of deep learning, many studies (Mohri et al., 2018; Bianchini and Scarselli, 2014; Telgarsky, 2015; Arora et al., 2016) attempted to explain why a deep network can perform superbly over a shallow one. One explanation to this question is on the superior representation power of deep networks, _i.e._, a deep network can express a more complicated function but a shallow one with a similar size cannot (Cohen et al., 2016; Poole et al., 2016; Xiong et al., 2020). Their basic idea is to characterize the complexity of the function expressed by a neural network, thereby demonstrating that increasing depth can greatly maximize such a complexity measure compared to increasing width. Currently, the number of linear regions is one of the most popular complexity measures because it respects the functional structure of the widely-used ReLU networks. Pascanu et al. 
(2013) firstly proposed to use the number of linear regions as the complexity measure. By directly applying Zaslavsky's Theorem (Zaslavsky, 1997), Pascanu et al. (2013) obtained a lower bound \(\left(\prod_{l=0}^{L-1}\left\lfloor\frac{n_{l}}{n_{0}}\right\rfloor\right) \sum_{i=0}^{n_{0}}\binom{n_{L}}{i}\) for the maximum number of linear regions of a fully-connected ReLU network with \(n_{0}\) inputs and \(L\) hidden layers of widths \(n_{1},n_{2},\cdots,n_{L}\). Since this work, deriving the lower and upper bounds of the maximum number of linear regions becomes a hot topic (Montufar et al., 2014; Telgarsky, 2015; Montufar, 2017; Serra et al., 2018; Croce et al., 2019; Hu and Zhang, 2018; Xiong et al., 2020). All these bounds suggest the expressive ability of depth. The second interesting example is the finding of the high-capacity-low-reality phenomenon (Hu et al., 2021; Hanin and Rolnick, 2019), that the theoretical tight upper bound for the number of polytopes is much larger than what is actually learned by a network, _i.e._, deep ReLU networks have surprisingly few polytopes both at initialization and throughout the training. This counter-intuitive phenomenon can also be regarded as an implicit bias, which to some extent suggests why a deep network does not overfit. We observe that the current studies on polytopes suffer a critical limit. So far, either theoretical or empirical studies only stay at the level of counting the number of polytopes, which is far from a complete characterization of polytopes and the corresponding ReLU network. As we know, in a feed-forward network of \(L\) hidden layers, each polytope is encompassed by a group of hyperplanes, as shown in Figure 1(a), and each hyperplane is associated with a neuron. The details of how polytopes are formed in a ReLU network are in Appendix A. Hence, any polytope is created by at most \(\sum_{i=1}^{L}n_{i}\) and at least \(n_{0}+1\) hyperplanes, which is quite a large range. Thus, face numbers of polytopes can vary a lot. Unfortunately, the existing "counting" studies did not accommodate the differences among polytopes. _Can we upgrade the characterization of polytopes beyond counting to capture a more complete picture of a neural network?_ In this manuscript, 1) we propose to move one step further to study the shape of polytopes by seamlessly dividing each polytope into simplices in a triangulation of the polytope, as Figure 1(b) shows, and we describe the shape of polytopes by the minimum number of simplices to partition it. 2) We observe that polytopes formed by ReLU networks are surprisingly simple under both initialization and gradient descent, which is a fundamental characteristic of a ReLU network. Here, simplicity means that although theoretically quite diverse and complicated polytopes can be derived, deep networks tend to find a function with many simple polytopes. In addition, we underscore that the purported simplicity is relative to the possibly most complicated polytopes by a network in a given dimension. We do not compare polytopes in different input dimensions. Our results concretely reveal what kind of simple functions a network learns and its space partition property, which can be regarded as an implicit simplicity bias, explaining why deep networks may not overfit. 3) We establish a theorem that bounds the average face numbers of polytopes of a network to a small number. This theorem explains why depth does not make polytopes more complicated. 
The key idea is that increasing depth is to cut the existing polytope with a new hyperplane that cannot intersect with all faces of the existing polytope. Hence, the number of newly-generated faces is smaller than the current, and the average face number will not increase. To summarize, our contributions are threefold. Figure 1: The number of simplices a polytope contains can reveal the shape information of a polytope, with which one can dig out valuable information of a neural network. 1. We point out the limitation of counting #polytopes. To deepen our understanding of how a ReLU network partitions the space, we propose to investigate the shape of polytopes with the minimum number of simplices to partition it. Investigating polytopes of a network can lead to a more complete characterization of ReLU networks. 2. We empirically find that a ReLU network has surprisingly simple polytopes under both initialization and gradient descent. Such an interesting finding is a new implicit bias from the perspective of shapes of linear regions. Previously, Hanin and Rolnick (2019) showed that deep ReLU networks have few polytopes. Our discovery is that polytopes are simple, which is more fine-grained. _Our result and Hanin and Rolnick (2019) address two essentially different aspects: quantity and shape._ Compared to (Hanin and Rolnick, 2019), our result more convincingly illustrates a deep network learns a simple function. Showing the number of polytopes is few is insufficient to claim that a network learns a simple solution because a network can have a small number of very complicated polytopes. 3. To substantiate our empirical finding, we use combinatorial techniques to derive a tight upper bound for the average face number of polytopes, which not only offers a theoretical guarantee but also explains why increasing depth does not make polytopes more complicated. ## 2 Related Work **Studies on polytopes of a neural network.** Besides the aforementioned works (Pascanu et al., 2013; Xiong et al., 2020; Montufar et al., 2014; Hu and Zhang, 2018) that count the number of polytopes, there are increasingly many studies on polytopes of neural networks. Chu et al. (2018); Hanin and Rolnick (2019); Balestriero and Baraniuk (2020) showed that polytopes created by a network are convex. Zhang and Wu (2020) studied how different optimization techniques influence the local properties of polytopes, such as the inspheres, the directions of the corresponding hyperplanes, and the relevance of the surrounding regions. Gamba et al. (2020) showed that the angles between activation hyperplanes defined by convolutional layers are prone to be similar after training. Hu et al. (2020) studied the network using an arbitrary activation function. They first used a piecewise linear function to approximate the given activation function. Then, they monitored the change of #polytopes to probe if the network overfits. Park et al. (2021) proposed neural activation coding that maximizes the number of linear regions to enhance the model's performance. Our work goes beyond counting the number of polytopes to consider the shapes of polytopes, with the goal of delineating a more complete picture of neural networks. **Implicit bias of deep learning.** A network used in practice is highly over-parameterized compared to the number of training samples. A natural question is often asked: why do deep networks not overfit? To address this question, extensive studies have proposed that a network is implicitly regularized to learn a simple solution. 
Implicit regularization is also referred to as an implicit bias. Gradient descent algorithms are widely believed to play an essential role in capacity control even when it is not specified in the loss function (Gunasekar et al., 2018; Soudry et al., 2018; Arora et al., 2019; Sekhari et al., 2021; Lyu et al., 2021). Du et al. (2018); Woodworth et al. (2020) showed that the optimization trajectory of neural networks stays close to the initialization with the help of neural tangent kernel theory. A line of works Arora et al. (2019); Cao et al. (2019); Yang and Salman (2019); Choraria et al. (2022) have analyzed the bias of a deep network towards lower frequencies, which is referred to as the spectral bias. It was shown in Arora et al. (2018); Yu et al. (2017) that replacing weight matrices with low-rank matrices only deteriorates a network's accuracy very moderately. Ongie and Willett (2022); Le and Jegelka identified the low-rank bias in linear layers of neural networks with gradient flow. Both theoretical derivation Tu et al. (2016); Li et al. (2020) and empirical findings Jing et al. (2020); Huh et al. (2021); Galanti et al. (2023) suggested that gradient descent tends to find a low-rank solution. What's more, weight decay is a necessary condition to achieve the low-rank bias. In contrast, our investigation identifies a new implicit bias from the perspective of linear regions. Different from most implicit biases highlighting a certain property of a network, our implicit bias straightforwardly reveals what kind of simple functions a network learns. Our finding is relevant to the spectral bias. Since polytopes are both few and simple, a ReLU network does not produce a lot of oscillations in all directions, which roughly corresponds to a low-frequency solution. ## 3 Preliminaries Throughout this paper, we always assume that the input space of an NN is a \(d\)-dimensional hypercube \(C(d,B):=[-B,B]^{d}=\{\mathbf{x}=(x_{1},x_{2},\ldots,x_{d})\in\mathbb{R}^{d}:-B \leq x_{i}\leq B\}\) for some large enough constant \(B\). Furthermore, we need the following definition for linear regions (polytopes). **Definition 1** (Linear regions (polytopes) Hanin and Rolnick (2019)).: _Suppose that \(\mathcal{N}\) is a ReLU NN with \(L\) hidden layers and input dimension \(d\). An activation pattern of \(\mathcal{N}\) is a function \(\mathcal{P}\) from the set of neurons to the set \(\{1,-1\}\), i.e., for each neuron \(z\) in \(\mathcal{N}\), we have \(\mathcal{P}(z)\in\{1,-1\}\). Let \(\theta\) be a fixed set of parameters in \(\mathcal{N}\), and \(\mathcal{P}\) be an activation pattern. Then the region corresponding to \(\mathcal{P}\) and \(\theta\) is \(\mathcal{R}(\mathcal{P};\theta):=\{X\in C(d,B):z(X;\theta)\cdot\mathcal{P}(z)>0\}\), where \(z(X;\theta)\) is the pre-activation of a neuron \(z\) in \(\mathcal{N}\). A linear region of \(\mathcal{N}\) at \(\theta\) is a non-empty set \(\mathcal{R}(\mathcal{P},\theta)\neq\emptyset\) for some activation pattern \(\mathcal{P}\). Let \(R_{\mathcal{N},\theta}\) be the number of linear regions of \(\mathcal{N}\) at \(\theta\), i.e., \(R_{\mathcal{N},\theta}:=\#\{\mathcal{R}(\mathcal{P};\theta):\mathcal{R}( \mathcal{P};\theta)\neq\emptyset\) for some activation pattern \(\mathcal{P}\}\). 
Moreover, let \(R_{\mathcal{N}}:=\max_{\theta}R_{\mathcal{N},\theta}\) denote the maximum number of linear regions of \(\mathcal{N}\) when \(\theta\) ranges over \(\mathbb{R}^{\#weights+\#bias}\)._ In the following, Preliminary 1 shows that the polytopes created by a ReLU network are convex, which is the most important preliminary knowledge used in this manuscript. Since each polytope of a ReLU network is convex, as Figure 1 shows, one can further divide each polytope into simplices in a triangulation of the polytope to make it a simplicial complex (Preliminary 2), where a simplex is a fundamental unit. The number of simplices contained by a polytope can reflect the shape and complexity of the polytope. Then, Preliminary 3 introduces how to compute the vertices of polytopes. The detailed explanation of Preliminaries 1 and 3 can be seen in the Appendix. **Preliminary 1** (Polytopes of a neural network).: _A neural network with ReLU activation partitions the input space into many polytopes (linear regions), such that the function represented by this neural network becomes linear when restricted in each polytope (linear region). Each polytope corresponds to a collection of activation states of all neurons, and each polytope is convex [2]. In this paper, we mainly focus on \((n_{0}-1)\)-dim faces of an \(n_{0}\)-dim polytope. **For convenience, we simply use the terminology face to represent an \((n_{0}-1)\)-dim facet of an \(n_{0}\)-dim polytope.**_ **Preliminary 2** (Simplex and simplicial complex).: _A simplex is just a generalization of the notion of triangles or tetrahedrons to arbitrary dimensions. More precisely, a \(D\)-simplex \(S\) is a \(D\)-dimensional convex hull provided by convex combinations of \(D+1\) affinely independent vectors \(\{\mathbf{v}_{i}\}_{i=0}^{D}\subset\mathbb{R}^{D}\). In other words, \(S=\left\{\sum_{i=0}^{D}\xi_{i}\mathbf{v}_{i}\mid\xi_{i}\geq 0,\sum_{i=0}^{D}\xi_{i}=1\right\}\). The convex hull of any subset of \(\{\mathbf{v}_{i}\}_{i=0}^{D}\) is called a face of \(S\). A **simplicial complex** \(\mathcal{S}=\bigcup_{\alpha}S_{\alpha}\) is composed of a set of simplices \(\{S_{\alpha}\}\) satisfying: 1) every face of a simplex from \(\mathcal{S}\) is also in \(\mathcal{S}\); 2) the non-empty intersection of any two simplices \(S_{1},S_{2}\in\mathcal{S}\) is a face of both \(S_{1}\) and \(S_{2}\). A **triangulation of a polytope** \(P\) is a partition of \(P\) into simplices such that the union of all simplices equals \(P\), and the intersection of any two simplices is a common face or empty. The triangulation of a polytope results in a simplicial complex._ **Preliminary 3** (Computing simplices in a polytope).: _Given an \(L\)-hidden-layer ReLU network, neurons' activation states lead to a group of inequalities. A polytope with dimension \(n_{0}\) is defined as \(\{\mathbf{x}\in\mathbb{R}^{n_{0}}\mid\mathbf{a}_{k}\mathbf{x}^{\top}+b_{k}\leq 0,k\in[K]\}\), where \(K=\sum_{i=1}^{L-1}n_{i}\) and \(n_{i}\) is the number of neurons in the \(i\)-th layer. Given these inequalities, the vertices of the polytope are derived based on the vertex enumeration algorithm [1]. Then, we can apply triangulation to these vertices to compute the number of simplices constituting this polytope._ ## 4 Deep ReLU Networks Have Simple Linear Regions Here, by analyzing the number of simplices a polytope contains, we observe that linear regions formed by ReLU networks are surprisingly simple under both initialization and gradient descent. 
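The per-polytope computation described in Preliminary 3, which we use throughout this section, can be sketched as follows. This is our own minimal illustration rather than the released code: the toy two-hidden-layer architecture, the Gaussian weight scale, the bias value of 0.01, and the use of SciPy's `HalfspaceIntersection` and Delaunay triangulation (one concrete choice of triangulation) are assumptions made only for the sketch.

```python
# A minimal sketch of the pipeline in Preliminary 3 (our own illustration): sample inputs,
# group them by activation pattern, turn each pattern into half-space inequalities, enumerate
# the vertices of the resulting polytope, and count the simplices of one triangulation of it.
import numpy as np
from scipy.spatial import Delaunay, HalfspaceIntersection, QhullError

rng = np.random.default_rng(0)
d, B = 3, 1.0                                        # input dimension and bounding box [-B, B]^d
layers = [(rng.standard_normal((40, d)) / np.sqrt(d), np.full(40, 0.01)),
          (rng.standard_normal((20, 40)) / np.sqrt(40), np.full(20, 0.01))]  # hidden layers only

def region_of(x):
    """Half-spaces [a, b] (meaning a.x + b <= 0) and the activation pattern of the region of x."""
    A, c = np.eye(d), np.zeros(d)                    # the layer input as an affine map of x
    rows, pattern = [], []
    for W, bias in layers:
        A_pre, c_pre = W @ A, W @ c + bias           # pre-activations as affine maps of x
        s = A_pre @ x + c_pre > 0
        pattern.append(s)
        sign = np.where(s, -1.0, 1.0)                # active neuron: -(a.x + b) <= 0, else a.x + b <= 0
        rows.append(np.hstack([sign[:, None] * A_pre, (sign * c_pre)[:, None]]))
        A, c = A_pre * s[:, None], c_pre * s         # apply ReLU under the fixed pattern
    box = np.vstack([np.hstack([np.eye(d), -B * np.ones((d, 1))]),
                     np.hstack([-np.eye(d), -B * np.ones((d, 1))])])
    return np.vstack(rows + [box]), tuple(np.concatenate(pattern))

simplices_per_polytope = {}                          # activation pattern -> #simplices
for x in rng.uniform(-B, B, size=(8000, d)):
    halfspaces, key = region_of(x)
    if key in simplices_per_polytope:                # avoid counting the same polytope twice
        continue
    try:
        vertices = HalfspaceIntersection(halfspaces, x).intersections
        simplices_per_polytope[key] = len(Delaunay(vertices, qhull_options="QJ").simplices)
    except QhullError:                               # sample too close to a region boundary; skip it
        continue

counts = list(simplices_per_polytope.values())
print(len(counts), "polytopes found; average #simplices:", np.mean(counts))
```

A histogram of `counts` yields the kind of per-polytope statistics reported below; since the Delaunay count depends on the chosen triangulation, it upper-bounds the minimal number of simplices used in our definition.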
Although theoretically quite diverse linear regions can be derived, simple linear regions dominate. This is a high-capacity-low-reality phenomenon and a new implicit bias, which may explain why a deep learning model tends not to overfit. Combining this with the finding in [1], we can upgrade the conclusion: deep ReLU networks have surprisingly _few_ and _simple_ linear regions. We validate our findings comprehensively and consistently across different initialization methods, network depths, sizes of the outer bounding box, biases, the bottleneck, network architecture, and input dimensions (Appendices H, I, J, and K). Furthermore, we showcase that during the training, although the number of linear regions increases, linear regions keep their simplicity. To ensure the preciseness of the discovery, our experiments are primarily on low-dimensional inputs. ### Initialization We validate four popular initialization methods: Xavier uniform, Xavier normal, Kaiming, and orthogonal initialization [He et al., 2015]. For each initialization method, we use two different network architectures (3-40-20-1, 3-80-40-1). The bias values are set to 0.01 for all neurons. A total of 8,000 points are uniformly sampled from \([-1,1]^{3}\) to compute the polytope. At the same time, we check the activation states of all neurons to avoid counting some polytopes more than once. Each run is repeated five times. Footnote 2: [https://pytorch.org/docs/stable/nn.init.html](https://pytorch.org/docs/stable/nn.init.html) \(\bullet\)**Initialization methods**: Figure 2 shows the histogram of the #simplices each polytope has under different initialization methods. Hereafter, unless otherwise specified, the x-axis of all figures denotes the number of simplices a polytope has, and the y-axis denotes the count of polytopes with a certain number of simplices. Without loss of generality, suppose that in an experiment the maximum #simplices a polytope has is \(\Omega\); we deem a polytope with no more than \(\Omega/3\) simplices as simple. The highlight is that for all initialization methods and network structures, simple polytopes significantly dominate over complicated polytopes. We calculate that simple polytopes account for at least 57% and at most 76% of the total. In addition, among different initialization methods, the Xavier normal method tends to generate more uniform polytopes across the four architectures. The polytopes actually realized are far simpler than the theoretically most complicated polytope. \(\bullet\)**Depths**: Here, we evaluate if the simplicity of polytopes still holds for deeper networks. This question is non-trivial, since a deeper network can theoretically generate more complicated polytopes. Will depth break the simplicity? We choose four different widths (20, 40, 80, 160). For comprehensiveness, the network initialization methods are the Xavier uniform, Xavier normal, Kaiming, and orthogonal initialization. The depth is set to 5 and 8, respectively. The bias value is 0.01. Likewise, a total of 8,000 points are uniformly sampled from \([-1,1]^{3}\) to compute the polytope. At the same time, we check the activation states of all neurons to avoid counting some polytopes more than once. Each run is repeated five times. The results under the Xavier uniform initialization are shown in Figure 3 (results under other initialization methods are provided in Appendix E), from which we draw three highlights. First, we find that both going deep and going wide can increase the number of polytopes at different initializations.
But the effect of going deep is much more significant than that of going wide. Second, when the network goes deep, although the total number of polytopes increases, simple polytopes still dominate among all polytopes. Third, for different initialization methods and different depths, the dominating polytope is slightly different. For example, the dominating polytopes for the network 3-40-40-40-40-40-1 under Xavier normal initialization are those with 6\(\sim\)10 simplices, while the dominating polytopes for the network 3-20-20-20-20-1 under Xavier uniform initialization are those with 1\(\sim\)5 simplices. \(\bullet\)**Biases**: Here, we are curious about how the bias value of neurons will affect the distribution of polytopes. To address this issue, we set the bias values to \(0,0.01,0.05,0.1\), respectively for the network 3-80-40-1. The outer bounding box is \([-1,1]^{3}\). A total of 8,000 points are uniformly sampled from \([-1,1]^{3}\) to compute the polytope. At the same time, we check the activation states of all neurons to avoid counting some polytopes more than once. Each run is repeated five times. The initialization methods are the Xavier uniform, Xavier normal, Kaiming, and orthogonal initialization. Figure 4 is from the Xavier uniform, and figures from other initialization methods are put into Appendix G. We observe that as the bias value increases, more polytopes are produced. However, the number of Figure 2: Deep ReLU networks have simple linear regions at different initialization methods. simple polytopes still takes up the majority. It is worthwhile mentioning that when the bias equals 0, the simplicity is crystal clear. The bias=0 is the extremal case, where all hyperplanes of the first layer intersect at the original point, and much fewer faces in polytopes are created. ### Training Earlier, we show that at the initialization stage, deep networks exhibit simple linear regions. It is natural to ask _will the simplicity of linear regions be broken during training_? We answer this question by training a fully-connected network using ReLU activation function on a real-world problem and counting the simplices of each polytope. The task is to predict if a COVID-19 patient will be at high risk, given one's health status, living habits, and medical history. This prediction task has 388,878 raw samples, and each has 5 medical features including 'HIPERTENSION','CARDIOVASCULAR', 'OBESITY', 'RENAL CHRONIC', 'TOBACCO'. The labels are 'at risk' or 'no'. The detailed descriptions of data and this task can be referred to in Kaggle3. The data are preprocessed as follows: The discrete value is assigned to different attributes. If a patient has that pre-existing disease or habit, 1 will be assigned; otherwise, 0 will be assigned. Then, the data are randomly split into training and testing sets with a ratio of 0.8:0.2. We implement a network of 5-20-20-1. The optimizer is Adam with a learning rate of 0.1. The network is initialized by Xavier uniform. The loss function is the binary cross-entropy function. The epoch number is 400 to guarantee convergence. A total of 8,000 points are uniformly sampled from \([-1,1]^{3}\) to compute the polytope. The outer bounding box is \([-5,5]^{3}\) to ensure as many polytopes as possible are counted. 
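A rough sketch of this experimental setup is given below; it is our own illustration rather than the released code. The CSV path, the raw encoding of the Kaggle file, the label column name, the bias initialization of 0.01, and full-batch training are assumptions, while the architecture, optimizer, learning rate, loss, and epoch count follow the description above.

```python
# A rough sketch of the COVID-19 experiment setup described above (our own illustration).
# Assumptions: file path, raw encoding (1 = condition present), label column "AT_RISK",
# bias initialization of 0.01, and full-batch training.
import pandas as pd
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split

features = ["HIPERTENSION", "CARDIOVASCULAR", "OBESITY", "RENAL CHRONIC", "TOBACCO"]
df = pd.read_csv("covid_data.csv")                         # hypothetical path to the Kaggle file
X = (df[features] == 1).astype(float).values               # 1 if the disease/habit is present, else 0
y = (df["AT_RISK"] == 1).astype(float).values              # hypothetical binary risk label
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = nn.Sequential(nn.Linear(5, 20), nn.ReLU(), nn.Linear(20, 20), nn.ReLU(), nn.Linear(20, 1))
for m in model:
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)                  # Xavier uniform initialization
        nn.init.constant_(m.bias, 0.01)

optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()                           # binary cross-entropy on the raw logit
X_t = torch.tensor(X_train, dtype=torch.float32)
y_t = torch.tensor(y_train, dtype=torch.float32).unsqueeze(1)

for epoch in range(400):                                   # 400 epochs to reach convergence
    optimizer.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    optimizer.step()
```

The trained `model` is then fed into the same polytope-counting procedure sketched in Section 3 at several checkpoints during training.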
Footnote 3: [https://www.kaggle.com/code/meirnizri/covid-19-risk-prediction](https://www.kaggle.com/code/meirnizri/covid-19-risk-prediction) Figure 5 shows that as the training goes on, the total number of linear regions drops compared to the random initialization. Overall speaking, throughout the training, polytopes with no more than 1500 simplices take the majority. But the number of polytopes with 500-1000 simplices goes up, and the number of polytopes with fewer than 500 simplices goes down. It may suggest that the network may be primarily using them to fit data. Appendix L supplements results obtained from other initialization methods: Xavier normalization, Kaiming initialization, and orthogonal initialization. We also train networks on MNIST, following the same procedure in [10]. Here, we can not compute the polytopes in \(28\times 28\) dimensional space because the vertex enumeration algorithm suffers the curse of dimensionality. Therefore, we visualize the polytopes in the cross-section plane. We initialize a network of size 784-7-7-6-10 with Kaiming normalization. The batch size is 128. The network is trained with Adam with a learning rate of 0.001. The total epoch number is set to 480, which ensures the convergence of the network. Figure 4: The simplicity holds true for different bias values under Xavier initialization. Figure 3: The simplicity holds true for deep networks under Xavier uniform initialization. Figure 6 shows the cross-section of the function learned by a network at different epochs. A cross-section is a plane that passes through two randomly-selected images from MNIST. Figure 6 shows that as the training goes on, the number of polytopes increases. But almost all the polytopes are triangles or quadrilaterals. Although these polytopes are from a cross-section other than the whole landscape, one can indirectly sense the simplicity of these polytopes. ## 5 Theoretical Explanation (Simple Polytope Theorem) In this section, we seek to provide a theoretical explanation for the simple polytope phenomenon. We establish a theorem that bounds the average face numbers of polytopes of a network to a small number under some mild assumption, thereby substantiating our finding. Our theoretical derivation is twofold: initialization and after training. **Geometric heuristics of multi-layer networks.** Generically, we argue that a deep ReLU network should still have simple polytopes. We think that the simplicity of polytopes is given rise to two reasons. (1) First, since a ReLU network divides the space into many local polytopes, to yield a complicated polytope from a local polytope, two or more hyperplanes associated with neurons in the later layers should intersect within the given local polytope, which is hard because the area of polytopes is typically small. In [14], where a deep ReLU network was proved to have few polytopes because hyperplanes do not cross in a local polytope. Without crossing, complicated polytopes will not emerge either. As such, the complexity of polytopes probably only increases moderately as the network goes deeper. (2) Take a polytope \(P\) with \(k\) faces as an example. If we add one hyperplane to divide \(P\) into two new polytopes \(P_{1}\) and \(P_{2}\), then the total number of faces increases by \(k_{0}+2\) (the hyperplane itself should be counted twice), if the hyperplane intersects with \(k_{0}\) faces in \(P\). 
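The cross-section visualization used for the MNIST experiment can be sketched as follows. This is our own illustration: `hidden_layers` (the list of the trained network's hidden `nn.Linear` modules), the two flattened MNIST images `x0` and `x1`, the grid resolution, and the particular choice of in-plane direction are assumptions, not part of the paper.

```python
# A rough sketch of the cross-section visualization behind Figure 6 (our own illustration).
# Assumed to exist: hidden_layers (the hidden nn.Linear modules of the trained 784-7-7-6-10
# network) and x0, x1 (two flattened MNIST images as 1-D float tensors).
import numpy as np
import torch
import matplotlib.pyplot as plt

def activation_pattern(hidden_layers, x):
    """Concatenated 0/1 pattern of all hidden pre-activations for a batch of inputs x."""
    signs, h = [], x
    for layer in hidden_layers:
        z = layer(h)
        signs.append((z > 0).float())
        h = torch.relu(z)
    return torch.cat(signs, dim=1)

# One plane through x0 and x1: p(s, t) = x0 + s * (x1 - x0) + t * w, with w orthogonal to x1 - x0.
u = x1 - x0
w = torch.randn_like(u)
w = w - (w @ u) / (u @ u) * u
s, t = torch.meshgrid(torch.linspace(-0.5, 1.5, 300), torch.linspace(-1.0, 1.0, 300), indexing="ij")
grid = x0 + s.reshape(-1, 1) * u + t.reshape(-1, 1) * w

with torch.no_grad():
    patterns = activation_pattern(hidden_layers, grid).numpy()

# Assign each distinct activation pattern an integer id and display the induced partition.
_, region_id = np.unique(patterns, axis=0, return_inverse=True)
plt.imshow(region_id.reshape(300, 300), cmap="tab20")
plt.title("Linear regions on a plane through two MNIST images")
plt.show()
```

Each colored cell is the intersection of one linear region with the plane; counting the distinct ids per epoch gives the kind of per-epoch region statistics discussed next.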
Usually, \(k_{0}\) should be smaller than \(k\) because a hyperplane cannot cross all faces of \(P\), thus the average number of edges of \(P_{1}\) and \(P_{2}\) becomes \((k+k_{0}+2)/2\leq k\). By this idea, we can see that in most cases, the number of the average number of faces in polytopes of a general Figure 5: The results over a COVID dataset show that throughout the training, most polytopes are simple, despite that the number of linear regions drops during the training. Figure 6: A cross-sectional visualization of the polytopes learned by a network over MNIST at different epochs. Almost all the polytopes are triangles or quadrilaterals. multi-layer ReLU neural network \(\mathcal{N}\) will not increase when the number of layers gets deeper, since adding the hyperplanes one by one in a polytope will not increase the average number of faces in the new polytopes they divide. ### Initialization **Theorem 1** (Simple Polytope Theorem, One-hidden-layer NNs).: _Let \(\mathcal{N}\) be a one-hidden-layer fully-connected ReLU NN with \(d\) inputs and \(n\) hidden neurons, where \(d\) is a fixed positive integer. Suppose that \(n\) hyperplanes generated by \(n\) hidden neurons are in general position. Let \(C(d,B):=[-B,B]^{d}\) be the input space of \(\mathcal{N}\). Furthermore, assume that \(n\) and \(B\) are large enough, then the average number of faces in linear regions of \(\mathcal{N}\) is at most \(2d+1\)._ **Theorem 2** (Simple Polytope Theorem, Multi-layer NNs, \(d=2\)).: _Let \(\mathcal{N}\) be an \(L\)-layer fully-connected ReLU NN with \(d=2\) inputs and \(n_{i}\) hidden neurons in the \(i\)-th hidden layer. Let \(C(d,B):=[-B,B]^{d}\) be the input space of \(\mathcal{N}\). Furthermore, assume that \(n_{i}\) and \(B\) are large enough, then the average number of faces in linear regions of \(\mathcal{N}\) is at most \(2d=4\)._ **Theorem 3** (Simple Polytope Theorem, Multi-layer NNs with Zero Biases).: _Let \(\mathcal{N}\) be an \(L\)-layer fully-connected ReLU NN with \(d\) inputs and \(n_{i}\) hidden neurons in the \(i\)-th hidden layer where \(d\) is a fixed positive integer. Suppose that all the biases of \(\mathcal{N}\) are equal to zero. Let \(C(d,B):=[-B,B]^{d}\) be the input space of \(\mathcal{N}\). Furthermore, assume that the number of hidden neurons and \(B\) are large enough, then the average number of faces in linear regions of \(\mathcal{N}\) is at most \(3d-1\)._ Proof.: Please see Appendix B. **Interpretation of these bounds.** It is desirable that no matter how deep a network is, the average number of faces of polytopes produced by a network is only concerned with the input dimension. We highlight that this bound is an intrinsic quantity regarding a network. Considering that \(3d-1\) is a rather small bound, it can justify why simple polytopes dominate. If the dominating polytopes are complex polytopes, the average face number should surpass \(3d-1\) a lot. If simple polytopes only take up a small portion, the average face number will be larger than \(3d-1\), too. Although we assume that the network is wide in deriving the bound, based on our geometric heuristics, the average face number should also be small for narrow networks. We leave this question for future exploration. Theorem 3 and Theorems 1, 2 are built for cases of zero biases and non-zero biases, respectively. It is a general practice to initialize biases with 0 before training a network, _e.g._, biases are often set to 0 in Xavier initialization Glorot and Bengio (2010). 
Therefore, Theorem 3 aligns with reality well. In addition, a ReLU network with zero biases becomes homogeneous, _i.e._, \(\mathcal{N}(\alpha\boldsymbol{\theta};\cdot)=\alpha^{L}\mathcal{N}(\boldsymbol {\theta};\cdot)\), which is a widely-used setting when investigating implicit bias Lyu and Li (2020); Vardi et al. (2022). Non-zero biases are so complicated to give a general and complete theorem for arbitrary cases. We only make success for one-hidden-layer networks with an arbitrary dimension and multi-layer networks with \(d=2\). Deriving Theorems 1 and 3 is tricky. The basic idea can be divided into two parts: Firstly, we derive the upper bound of simplices depending on the observation that for each \((d-1)\)-dim face of a \(d\)-dim polytope, it can only be a face for one unique simplex in a triangulation of this polytope, thus the total number of simplices in triangulations of polytopes must be smaller than or equal to the total number of \((d-1)\)-dim faces in all polytopes. Therefore, we just need to derive the upper bound for the total number of \((d-1)\)-dim faces in all polytopes generated by a neural network \(\mathcal{N}\), which can be done by induction on the number of layers of \(\mathcal{N}\). Secondly, we derive the number of polytopes by the techniques and results from the classic hyperplane arrangement theories (see Stanley et al. (2004)). Finally, the quotient between the upper bound of simplices and the number of polytopes gives the upper bound for the average number of faces in linear regions of \(\mathcal{N}\). ### After Training: Low-Rank _Can we theoretically derive that polytopes remain simple after training?_ It was shown that gradient descent-based optimization learns weight matrices of low rank Galanti et al. (2023); Huh et al. (2021). Therefore, under the low-rank setting, We also investigate if the polytopes are simple after the training. We derive Theorems 4 and 5 to substantiate that after training, polytopes remain simple. **Theorem 4** (Simple Polytope Theorem, Multi-Layer NNs with Zero Biases and Low-rank Weight Matrices).: _Let \(\mathcal{N}\) be an \(L\)-layer fully-connected ReLU NN with \(d\) inputs and \(n_{i}\) hidden neurons in the \(i\)-th hidden layer where \(d\) is a fixed positive integer. Assume that the weight matrix \(W\in\mathbb{R}^{d\times n}\) has rank \(d_{0}\leq d\). Suppose that all the biases of \(\mathcal{N}\) are equal to zero. Let \(C(d,B):=[-B,B]^{d}\) be the input space of \(\mathcal{N}\). Furthermore, assume that the number of hidden neurons and \(B\) are large enough, then the average number of faces in linear regions of \(\mathcal{N}\) is at most \(2d_{0}+d-1\)._ **Theorem 5** (Simple Polytope Theorem, One-hidden-layer NNs, Low-rank Weight Matrices).: _Let \(\mathcal{N}\) be a one-hidden-layer fully-connected ReLU NN with \(d\) inputs and \(n\) hidden neurons, where \(d\) is a fixed positive integer. Assume that the weight matrix \(W\in\mathbb{R}^{d\times n}\) has rank \(d_{0}\leq d\). Suppose that any \(d_{0}\) hyperplanes generated by any \(d_{0}\) hidden neurons are in general position. Let \(C(d,B):=[-B,B]^{d}\) be the input space of \(\mathcal{N}\). Furthermore, assume that \(n\) and \(B\) are large enough, then the average number of faces in linear regions of \(\mathcal{N}\) is at most \(2d_{0}+1\)._ Proof.: For proofs of Theorems 4 and 5, please see Appendix B. 
According to Theorems 4 and 5, we can see that when the weight matrix has a lower rank \(d_{0}\), which is smaller than the input dimension \(d\), then the average number of faces in linear regions of \(\mathcal{N}\) is determined by \(d_{0}\) and irrelevant to the input dimension \(d\). This means that, after the training of a ReLU neural network, if the weight matrices become low-rank matrices (which is suggested by Galanti et al. (2023); Huh et al. (2021)), then the average number of faces in linear regions of \(\mathcal{N}\) would be much smaller, which means that the linear regions tend to be much simpler after training. ## 6 Discussion **Estimating the shape of polytopes by Monte Carlo samping**. _How to empirically estimate the shape of polytopes for high-dimensional inputs?_ Our theoretical derivation suggests that one can use the number of faces of a polytope as a proxy for #simplices to describe the shape of polytopes. Thus, we don't need to compute the vertices and triangulation that are time-consuming for high-dimensional space. Since a polytope with the dimension \(n_{0}\) is defined as \(\{\mathbf{x}\in\mathbb{R}^{n_{0}}\mid\mathbf{a}_{k}\mathbf{x}^{\top}+b_{k} \leq 0,k\in[K]\}\), computing the number of faces is equivalent to finding which inequalities are non-redundant. Non-redundant inequalities are the faces of a polytope. Determining the non-redundant inequalities can be done by the hit-and-run algorithms, a representative Monte Carlo sampling method. Appendix M shows the methodology and provides an example that estimates the number of faces of polytopes formed by a variant of LeNet-5 architectures for 784 dimensions. **Extension to CNNs**. Our results on the average number of faces in polytopes generated by multi-layer ReLU NNs can be extended to many kinds of architectures such as CNNs. Theoretically, both CNNs and fully-connected networks conform to the geometric restrictions. Since one hyperplane cannot cross all faces of a polytope, the average face number will not tend to increase. In Appendix M, using Monte Carlo estimation, experiments on a LeNet-5 variant discovered that the average number of faces polytopes have is much smaller than the maximum number of inequalities, suggesting that simple polytopes are also held by CNNs. **#Simplices as a leverage for other problems**. The focus of this draft is to use #simplices to study the shape of polytopes formed by a ReLU network. However, since #simplices that concerns the shape of polytopes is a more fine-grained characterization compared to #polytopes, one can also use #simplices as a complexity measure to describe the expressivity of a ReLU network. Appendix C lists the potential advantages of #simplices as a complexity measure over #polytopes in terms of uniqueness via modularization and applicability to classification networks. As a basic complexity measure, #simplices can help us understand the behavior of a network from a different angle such as explaining the power of popular shortcut networks such as ResNet and analyzing the impact of regularization on a network's space partition. For example, 1) in Appendix D, Theorems 6 and 7 summarize the upper and lower bounds of the maximum #simplices of a feedforward ReLU network and ResNet. It is found that the upper bound of the maximum #simplices of a feedforward ReLU network and ResNet are the same. In other words, the addition of residual connections may not increase the expressivity of a network a lot. 
The main advantage of residual connections lies in the optimization, because they can facilitate the flow of gradients. 2) In Appendix N, we use the #simplices to investigate the impact of weight decay on the shapes of learned polytopes. It is observed that as weight decay is applied more and more heavily, the number of polytopes goes down and, at the same time, the learned polytopes have fewer simplices, which means they are simpler.

## 7 Conclusion and Limitation

In this manuscript, we have advocated studying the properties of polytopes instead of just counting them, towards revealing other valuable properties of a neural network. Then, we observed that deep ReLU networks have simple linear regions, which is not only a fundamental characterization but also an implicit bias of ReLU networks that helps explain why deep networks do not overfit. Lastly, we have mathematically established a small bound for the average number of faces in polytopes, thereby supplying an explanation for the simple polytope phenomenon. An important limitation of our work is that we do not fully illustrate the relationship between our implicit bias and others. Just as in mathematics there can be more than one set of axioms for an axiom system, and different axiom systems can be mutually deduced, different implicit biases, all of which arise from gradient descent, should be related to one another. An important future direction will be to establish the relationships between different forms of implicit bias Galanti et al. (2023); doing so would greatly deepen our understanding of implicit biases.
2305.06276
Maximal Leakage of Masked Implementations Using Mrs. Gerber's Lemma for Min-Entropy
A common countermeasure against side-channel attacks on secret key cryptographic implementations is $d$th-order masking, which splits each sensitive variable into $d+1$ random shares. In this paper, maximal leakage bounds on the probability of success of any side-channel attack are derived for any masking order. Maximal leakage (Sibson's information of order infinity) is evaluated between the sensitive variable and the noisy leakage, and is related to the conditional ``min-entropy'' (Arimoto's entropy of order infinity) of the sensitive variable given the leakage. The latter conditional entropy is then lower-bounded in terms of the conditional entropies for each share using majorization inequalities. This yields a generalization of Mrs. Gerber's lemma for min-entropy in finite Abelian groups.
Julien BΓ©guinot, Yi Liu, Olivier Rioul, Wei Cheng, Sylvain Guilley
2023-05-10T16:07:32Z
http://arxiv.org/abs/2305.06276v1
# Maximal Leakage of Masked Implementations Using Mrs. Gerber's Lemma for Min-Entropy ###### Abstract A common countermeasure against side-channel attacks on secret key cryptographic implementations is \(d\)th-order masking, which splits each sensitive variable into \(d+1\) random shares. In this paper, maximal leakage bounds on the probability of success of any side-channel attack are derived for any masking order. Maximal leakage (Sibson's information of order infinity) is evaluated between the sensitive variable and the noisy leakage, and is related to the conditional "min-entropy" (Arimoto's entropy of order infinity) of the sensitive variable given the leakage. The latter conditional entropy is then lower-bounded in terms of the conditional entropies for each share using majorization inequalities. This yields a generalization of Mrs. Gerber's lemma for min-entropy in finite Abelian groups. ## I Introduction When a cryptographic device is operating, any kind of physical leakage (time, power, electromagnetic emanations, etc.) can be exploited by an attacker. The attacker queries the device multiple times, and measures the corresponding leakages to infer the secret key. The security of devices against side-channel attacks has become a major concern. To evaluate the probability of success for any side-channel attack, information-theoretic metrics turn out to be effective and have been used in many studies. Using conditional mutual information and Fano's inequality, de Cherisey et al. [6] established several universal bounds on the probability of success for a given number of queries, or equivalently, the minimum number of queries required to achieve a given level of success. This approach has been extended to conditional Sibson's \(\alpha\)-information by Liu et al. [15]. However, both [6] and [15] were restricted to unprotected cryptographic devices. _Masking_ is one of the most well-established countermeasures. The main issue in this context is the fact that a direct evaluation of the information leakage requires data and computational complexities that increase rapidly with the masking order [5]. Therefore, it is important to derive bounds in terms of the individual information leakages for each share. Duc et al. [7] conjectured a general form of such bounds. Rigorous bounds were obtained in two independent recent works by Ito et al. [13] and Masure et al. [18]. Even more recently, Beguinot et al. [3] improved these results using Mrs. Gerber's lemma [14, 27] to derive sharp bounds in terms of mutual information for masking in additive groups of order \(2^{n}\). In the case of unprotected implementations (without masking), it is shown by simulation in [15] that the probability of success of a side-channel attack is evaluated using Sibson's \(\alpha\)-information all the more accurately as \(\alpha\) increases. Therefore, the case of mutual information, which corresponds to \(\alpha=1\) is not optimal. This motivates the derivation of new bounds in the limiting case \(\alpha=+\infty\). The usual setup of masking countermeasures involves bit-wise XOR (exclusive or) operations, which are particularly well suited to symmetric cryptographic algorithms like AES. However, modern cryptography also relies on operations performed in groups of prime order, and masking can also be multiplicative [1] and not only additive [9]. For all these reasons, there is a strong incentive to extend the previous bounds to arbitrary finite Abelian groups. This motivates the generalization of Mrs. 
Gerber's lemma to any such Abelian group. Mrs. Gerber's lemma was initially derived by Wyner and Ziv [27] to lower bound the entropy of a modulo 2 addition of binary random variables in terms of the entropies of each summand. It was extended by Jog and Anatharam [14] to the case of additive groups of order \(2^{n}\), and by Hirche [10] to the case of Renyi entropy of binary variables. The general case of additive groups was only considered by Tao [23] for Shannon entropy and independent copies of two shares, in relation to sumset theory. While the original binary Mrs. Gerber's lemma was used to derive a binary version of the entropy power inequality [21], a generalization of the entropy power inequality to any prime cyclic additive group and Renyi entropy was investigated by Madiman et al. [16], but does not reduce to an explicit "Mrs. Gerber's lemma"-type inequality. Therefore, it appears that the case of min-entropy (Renyi entropy of order \(\infty\)) and additive groups of any order has not been investigated yet in our context. ### Contributions In this paper, we show that when evaluating the performance of side-channel attacks of masked implementations using conditional Sibson's \(\alpha\)-information, the exact performance of optimal maximum likelihood attacks is attained in the limiting case \(\alpha=+\infty\). This motivates the investigation of Mrs. Gerber's lemma for conditional min-entropy (Arimoto's conditional entropy of order \(\infty\)). We derive a variation of such Mrs. Gerber's lemma for any finite Abelian group and for any masking order. The remainder of this paper is organized as follows. Section II gives some notations and preliminaries on \(\alpha\)-informational quantities. Section III shows that the optimal evaluation of side-channel attack success by Fano's inequality is achieved in the limiting case \(\alpha=+\infty\) and derives the corresponding bound in terms of the information between the sensitive variable and the leakage, which is linear in the number of queries. Section IV derives Mrs. Gerber's lemma for min-entropy, first for two summands in any finite Abelian group, then extends it to the general case of \(d+1\) summands. Section V concludes and gives some perspectives. ## II Preliminaries and Notations ### _Framework and Notations_ Let \(K\) be the secret key and \(T\) be a public variable (usually plaintext or ciphertext) known to the attacker. It is assumed that \(T\) is independent of \(K\), and \(K\) is uniformly distributed over an Abelian group \(\mathcal{G}\) of order \(M\). The cryptographic algorithm operates on \(K\) and \(T\) to compute a sensitive variable \(X\), which takes values in the same group \(\mathcal{G}\) and is determined by \(K\) and \(T\), in such a way that \(X\) is also uniformly distributed over \(\mathcal{G}\). In a masking scheme of order \(d\), the sensitive variable \(X\) is randomly split into \(d+1\)_shares_\(X_{0}\), \(X_{1}\),..., \(X_{d}\) and cryptographic operations are performed on each share separately. Thus, \(X=X_{0}\oplus X_{1}\oplus\cdots\oplus X_{d}\), where each share \(X_{i}\) is a uniformly distributed random variable over \(\mathcal{G}\) and \(\oplus\) is the group operation in \(\mathcal{G}\). For this group operation, we let \(\ominus g\) denote the opposite of \(g\in\mathcal{G}\). A typical example is "Boolean masking", for which \(\oplus\equiv\ominus\) is the bitwise XOR operation. During computation, shares \(\mathbf{X}=(X_{0},X_{1},\ldots,X_{d})\) are leaking through some side channel. 
Noisy "traces," denoted by \(\mathbf{Y}=(Y_{0},Y_{1},\ldots,Y_{d})\), are measured by the attacker, where \(\mathbf{Y}\) is the output of a memoryless side channel with input \(\mathbf{X}\). Since masking shares are drawn uniformly and independently, both \(\mathbf{X}\) and \(\mathbf{Y}\) are i.i.d. sequences. The attacker measures \(m\) traces \(\mathbf{Y}^{m}=(\mathbf{Y}_{1},\mathbf{Y}_{2},\ldots,\mathbf{Y}_{m})\) corresponding to the i.i.d. text sequence \(T^{m}=(T_{1},T_{2},\ldots,T_{m})\), then exploits her knowledge of \(\mathbf{Y}^{m}\) and \(T^{m}\) to guess the secret key \(\hat{K}\). Again, since the side-channel is memoryless, both \(\mathbf{X}^{m}\) and \(\mathbf{Y}^{m}\) are i.i.d. sequences. Let \(\mathbb{P}_{s}=\mathbb{P}(K=\hat{K})\) be the probability of success of the attack upon observing \(T^{m}\) and \(\mathbf{Y}^{m}\). In theory, maximum success is obtained by the MAP (maximum a posteriori probability) rule with success probability denoted by \(\mathbb{P}_{s}=\mathbb{P}_{s}(K|\mathbf{Y}^{m},T^{m})\). The whole process is illustrated in Fig. 1. ### _Renyi's \(\alpha\)-Entropy and Arimoto's Conditional \(\alpha\)-Entropy_ Assume that either \(0<\alpha<1\) or \(1<\alpha<+\infty\) (the limiting values \(0,1,+\infty\) can be obtained by taking limits). We consider probability distributions \(P,Q\) with a dominating measure \(\mu\), with respect to which they follow densities denoted by the corresponding lower-case letters \(p,q\). We follow the notations of [15] in the following **Definition 1** (Renyi \(\alpha\)-Entropy and \(\alpha\)-Divergence): \[H_{\alpha}(P) =\tfrac{\alpha}{1-\alpha}\log\|p\|_{\alpha}\] (1) \[D_{\alpha}(P\|Q) =\tfrac{1}{\alpha-1}\log\!\left\langle p\|q\right\rangle_{\alpha}^ {\alpha}\] (2) with the special notation: \[\|p\|_{\alpha} =\left(\int|p|^{\alpha}d\mu\right)^{1/\alpha} \tag{3}\] \[\langle p\|q\rangle_{\alpha} =\left(\int\!p^{\alpha}q^{1-\alpha}d\mu\right)^{1/\alpha}. \tag{4}\] The usual Shannon entropy and Kullback-Leibler divergence are recovered by letting \(\alpha\to 1\). The \(\alpha\)-entropy is nonincreasing in \(\alpha\) and achieves its _min-entropy_\(H_{\infty}\) at the limit \(\alpha=\infty\): **Definition 2** (Min-Entropy): _For a probability distribution \(P\) over a finite alphabet, the min-entropy is_ \[H_{\infty}(P)=-\log(\max\ p). \tag{5}\] Many different definitions of conditional \(\alpha\)-entropy \(H_{\alpha}(X|Y)\) were proposed in the literature. We use Arimoto's definition, which is argued to be the most promising one [8]: **Definition 3** (Arimoto's Conditional \(\alpha\)-Entropy [2]): _The conditional \(\alpha\)-entropy of \(X\) given \(Y\) is defined as_ \[H_{\alpha}(X|Y)=\frac{\alpha}{1-\alpha}\log\mathbb{E}_{Y}\|p_{X|Y}\|_{\alpha}. \tag{6}\] _Assuming \(X\) takes values in a finite alphabet, the conditional min-entropy can be obtained by letting \(\alpha\to\infty\) in \(H_{\alpha}(X|Y)\):_ **Definition 4** (Conditional Min-Entropy [24]): \[H_{\infty}(X|Y)=-\log(\mathbb{E}_{Y}\max_{x}p_{X|Y})=-\log\mathbb{P}_{s}(X|Y)\] (7) _where \(\mathbb{P}_{s}(X|Y)\) is the maximum average probability of success in estimating \(X\) having observed \(Y\), by the MAP rule._ ### _Sibson's \(\alpha\)-Information and Liu et al.'s Conditional Version_ Again, several different definitions of \(\alpha\)-information \(I_{\alpha}(X;Y)\) have been proposed, and Sibson's \(\alpha\)-information is perhaps the most appropriate one because it satisfies several useful properties that other definitions do not [26]. 
**Definition 5** (Sibson's \(\alpha\)-Information [22, 26]): \[I_{\alpha}(X;Y) =\min_{Q_{Y}}D_{\alpha}(P_{X}\|P_{X}\times Q_{Y})\] (8) \[=\tfrac{\alpha}{\alpha-1}\log\mathbb{E}_{Y}\langle p_{X|Y}\|p_{X }\rangle_{\alpha}.\] (9) **Definition 6** (Max-Information [11, Thm. 4]): _Assuming \(X,Y\) are discrete random variables, one has_ \[I_{\infty}(X;Y)=\log\sum_{y}\sup_{x:p_{X}(x)>0}p_{Y|X}(y|x). \tag{10}\] _Max-information is also studied in [12] as maximal leakage._ Fig. 1: Side-channel analysis as a (unintended) β€œcommunication” channel. β€œCrypto” can be any sensitive computation (encryption or decryption). \(T\) is a public random variable (e.g., a plain or cipher text byte). Again, there are many different proposals for _conditional_\(\alpha\)-information. We use the following definition which seems most appropriate in the context of side-channel analysis [15]: **Definition 7** (Conditional \(\alpha\)-Information [15]): \[I_{\alpha}(X;Y|Z) =\min_{Q_{YZ}}D_{\alpha}(P_{XYZ}\|P_{X|Z}Q_{YZ})\] (11) \[=\frac{\alpha}{\alpha-1}\log\mathbb{E}_{YZ}\langle p_{X|YZ}\|p_{ X|Z}\rangle_{\alpha}.\] (12) ## III Fano's Equality for Order \(\infty\): Linear Bound ### _Fano Inequality for Conditional \(\alpha\)-Information as \(\alpha\to\infty\)_ Using conditional \(\alpha\)-information, Liu et al. [15] derived a universal bound on the probability of success as follows. **Theorem 1** (Generalized Fano's Inequality [15, Thm. 1]): \[I_{\alpha}(K;\mathbf{Y}^{m}|T^{m})\geq d_{\alpha}(\mathbb{P}_{s}(K|\mathbf{Y}^{m},T^{ m})\|(\mathbb{P}_{s}(K)))\] (13) _where \(d_{\alpha}(p\|q)\) is the binary \(\alpha\)-divergence:_ \[d_{\alpha}(p\|q)=\tfrac{1}{\alpha-1}\log(p^{\alpha}q^{1-\alpha}+(1-p)^{\alpha }(1-q)^{1-\alpha}). \tag{14}\] _When \(\alpha\to 1\), this bound recovers the previous bound in [6]. The simulation results in [15] show that (13) is tighter as \(\alpha\) increases._ In this section, we prove that Fano's inequality for conditional \(\alpha\)-information becomes an _equality_ in the limiting case \(\alpha=\infty\). Thus, conditional max-information can accurately characterize the probability of success. **Theorem 2** (Generalized Fano's Inequality at \(\alpha=+\infty\)): _For a uniformly distributed secret \(K\),_ \[\begin{split} I_{\infty}(K;\mathbf{Y}^{m}|T^{m})&=d_{ \infty}(\mathbb{P}_{s}(K|\mathbf{Y}^{m},T^{m})\|(\mathbb{P}_{s}(K)))\\ &=\log(M\mathbb{P}_{s})\end{split} \tag{15}\] _where \(d_{\infty}(p\|q)=\lim\limits_{\alpha\to\infty}d_{\alpha}(p\|q)=\log\max \limits_{x,q(x)>0}(p(x)/q(x))\), \(\mathbb{P}_{s}=\mathbb{P}_{s}(K|\mathbf{Y}^{m},T^{m})\) is the optimal probability of success, and \(\mathbb{P}_{s}(K)=1/M\) is the corresponding probability of success in the case of blind estimation (without any observation)._ To prove this theorem, we need the explicit expression of conditional max-information. **Proposition 1** (Conditional Max-Information): _Assuming \(X\) takes values in a finite alphabet, one has_ \[I_{\infty}(X;Y|Z)=\log\mathbb{E}_{Z}\int_{y}(\max\limits_{x:p_{X|Z}(x|z)>0}p_{ Y|XZ})\ d\mu_{Y}. \tag{16}\] _This result easily follows from the following Lemmas 1 and 2, which are proved in Appendices B and C respectively. 
In [12], conditional maximal leakage is defined as a maximum over \(Z\), while our conditional max-information is averaged over \(Z\)--which is less than or equal to the conditional maximal leakage of [12]._ **Lemma 1**: _Given any fixed \(y,z\), we have_ \[\lim\limits_{\alpha\to\infty}\ p_{Y|Z}\cdot\langle p_{X|YZ}\|p_{X|Z}\rangle_{ \alpha}=\max\limits_{x:p_{X|Z}(x|z)>0}p_{Y|XZ}. \tag{17}\] **Lemma 2**: \[\lim\limits_{\alpha\to\infty}\ \log\ \mathbb{E}_{YZ}\langle p_{X|YZ}\|p_{X|Z} \rangle_{\alpha}\\ =\log\mathbb{E}_{Z}\int_{y}\!\!\lim\limits_{\alpha\to\infty}\ p_{Y| Z}\cdot\langle p_{X|YZ}\|p_{X|Z}\rangle_{\alpha}.\] (18) **Proof of Theorem 2**: _Under the MAP rule, the probability of success writes_ \[\mathbb{P}_{s} =\mathbb{E}_{\mathbf{Y}^{m}T^{m}}(\max\limits_{k}\ p_{K|\mathbf{Y}^{m},T^ {m}})\] \[=\mathbb{E}_{T^{m}}\int_{\mathbf{y}^{m}}(\max\limits_{k}\ p_{\mathbf{Y}^{ m}|K,T^{m}}p_{K|T^{m}})d\mu_{\mathbf{Y}^{m}}. \tag{19}\] _Recall \(K\) is uniformly distributed and independent from \(T^{m}\). Therefore, (19) becomes_ \[\mathbb{P}_{s}=\frac{1}{M}\cdot\mathbb{E}_{T^{m}}\int_{\mathbf{y}^{m}}\!\!\!\left( \max\limits_{k}\ p_{\mathbf{Y}^{m}|K,T^{m}}\right)\!d\mu_{\mathbf{Y}^{m}}. \tag{20}\] Combining (16) and (20) we have \(I_{\infty}(K;\mathbf{Y}^{m}|T^{m})=\log(M\mathbb{P}_{s})\). Since \(\mathbb{P}_{s}\geq 1/M\), one has \(\mathbb{P}_{s}\cdot M\geq(1-\mathbb{P}_{s})\cdot M/(M-1)\) and \(d_{\infty}(\mathbb{P}_{s}(K|\mathbf{Y}^{m},T^{m})\|(\mathbb{P}_{s}(K)))=\log(M \mathbb{P}_{s})\), which proves (15). \(\blacksquare\) ### _Linear Bound Using Maximal Leakage \(I_{\infty}(X;\mathbf{Y})\)_ Evaluating \(I_{\infty}(K;\mathbf{Y}^{m}|T^{m})\) directly turns out to be cumbersome (see Remark 1 below). Instead we use the unconditional max-information measure, i.e., maximal leakage \(I_{\infty}(X;\mathbf{Y})\) to bound the probability of success, which is linear in the number \(m\) of measurements: **Theorem 3** (Linear Bound): \[\log(M\mathbb{P}_{s})\leq mI_{\infty}(X;\mathbf{Y}).\] (21) **Proof: _By Definition 6,_ \[I_{\infty}(K,T^{m};\mathbf{Y}^{m})=\log\int_{\mathbf{y}^{m}}\max\limits_{k,t^{m}}\ p_{\mathbf{Y}^{m}|K,T^{m}}d\mu_{\mathbf{Y}^{m}}. \tag{22}\] _Because \(\max\limits_{k,t^{m}}\ p_{\mathbf{Y}^{m}|K,T^{m}}\geq\mathbb{E}_{T^{m}}\ (\max\limits_{k}\ p_{\mathbf{Y}^{m}|K,T^{m}})\), by (15) and (16) we have_ \[I_{\infty}(K,T^{m};\mathbf{Y}^{m})\geq I_{\infty}(K;\mathbf{Y}^{m}|T^{m})=\log(M \mathbb{P}_{s}). \tag{23}\] _Because \((K,T^{m})\leftrightarrow X^{m}\leftrightarrow\mathbf{Y}^{m}\) forms a Markov chain, using the data processing inequality (DPI) for Sibson's \(\alpha\)-information [19, 20] we have_ \[I_{\alpha}(K,T^{m};\mathbf{Y}^{m})\leq I_{\alpha}(X^{m};\mathbf{Y}^{m}). \tag{24}\] _Also, when \(T^{m}\) is not observed, each component of \(X^{m}\) is i.i.d., and since the side-channel is memoryless, \((X^{m};\mathbf{Y}^{m})\) is an i.i.d. sequence. It easily follows from the definition that_ \[I_{\alpha}(X^{m};\mathbf{Y}^{m})=mI_{\alpha}(X;\mathbf{Y}). \tag{25}\] _Letting \(\alpha\to\infty\) in (24) and (25) we have \(I_{\infty}(K,T^{m};\mathbf{Y}^{m})\leq mI_{\infty}(X;\mathbf{Y})\). \(\blacksquare\)_ **Remark 1**: _For conditional \(\alpha\)-information we have the inequality \(I_{\alpha}(K;\mathbf{Y}^{m}|T^{m})\leq I_{\alpha}(X^{m};\mathbf{Y}^{m}|T^{m})\) similar to (24). 
However, one does not have an equality similar to (25) when \(T^{m}\) is observed._ _Remark 2:_ This proof cannot use the result in [12, Theorem 1] because in this theorem \(\mathbf{Y^{m}}\) is not on a finite alphabet. What's more, if we use Definition 1 and Theorem 1 in [12] we will have \[I_{\infty}(X^{m};\mathbf{Y}^{m},T^{m})\geq\log(M\cdot\mathbb{P}_{s}(K|\mathbf{Y}^{m},T^ {m})) \tag{26}\] but \(I_{\infty}(X^{m};\mathbf{Y}^{m})\) is less than \(I_{\infty}(X^{m};\mathbf{Y}^{m},T^{m})\). ## IV Mrs. Gerber's Lemma for Min-Entropy in Any Finite Abelian Group To benefit from Theorem 3 it remains to upper bound \(I_{\infty}(X;\mathbf{Y})\). Since \(X\) is uniformly distributed, it is easily seen from the definition that \(I_{\infty}(X;\mathbf{Y})=\log M-H_{\infty}(X|\mathbf{Y})\). Thus, it remains to lower bound the conditional min-entropy \(H_{\infty}(X|\mathbf{Y})\). This can be seen as an extension of Mrs. Gerber's lemma to min-entropy in finite additive groups. ### _Mrs. Gerber's Lemma for Two Random Variables_ Wyner and Ziv [27] lower bounded the entropy of a sum of binary random variables with the entropies of each summand. This is known as Mrs. Gerber's lemma. _Theorem 4 (Mrs. Gerber's Lemma [27]):_ Let \(X_{0},X_{1}\) be two independent \(\mathbb{Z}_{2}\)-valued random variables with side information \(\mathbf{Y}=(Y_{0},Y_{1})\) and sensitive bit \(X=X_{0}\oplus X_{1}\). Then \[H(X|\mathbf{Y})\geq h(h^{-1}(H(X_{0}|Y_{0}))\star h^{-1}(H(X_{1}|Y_{1}))) \tag{27}\] where \(h(p)=-p\log p-\bar{p}\log\bar{p}\), \(a\star b=a\bar{b}+\bar{a}b\) and \(\bar{x}=1-x\). Jog and Anatharam [14] extended Mrs. Gerber's lemma to additive groups of order \(2^{n}\). Hirche [10] extended Mrs. Gerber's lemma for binary random variables to the case of Renyi entropies. In particular for min-entropy, one has equality: _Theorem 5 (Christoph Hirche [10, Lem. IV.7]):_ Let \(X_{0},X_{1}\) be two independent \(\mathbb{Z}_{2}\)-valued random variables with side information \(\mathbf{Y}=(Y_{0},Y_{1})\) and \(X=X_{0}\oplus X_{1}\). Then \[H_{\infty}(X|\mathbf{Y})=h_{\infty}(h_{\infty}^{-1}(H_{\infty}(X_{0}|Y_{0})) \star h_{\infty}^{-1}(H_{\infty}(X_{1}|Y_{1}))) \tag{28}\] where \(h_{\infty}(p)=-\log\max\{p,\bar{p}\}\). In this section, Mrs. Gerber's lemma is extended for the min-entropy in any additive finite group: **Theorem 6**: _Let \(X_{0},X_{1}\) be two independent \(\mathcal{G}\)-valued random variables with side information \(\mathbf{Y}=(Y_{0},Y_{1})\) and sensitive variable \(X=X_{0}\oplus X_{1}\). Then for \(k=\max\{\lfloor p^{-1}\rfloor,\lfloor q^{-1}\rfloor\}\), one has the optimal bound_ \[\exp(\!-\!H_{\infty}\!(X|\mathbf{Y}))\!\leq\!\begin{cases}kpq+(1-kp)(1-kq)& \text{if }\frac{1}{k+1}\leq p,q\leq\!\frac{1}{k}\\ \min\{p,q\}&\text{otherwise},\end{cases} \tag{29}\] _where \(p=\exp(\!-\!H_{\infty}(X_{0}|Y_{0}))\) and \(q=\exp(\!-\!H_{\infty}(X_{1}|Y_{1}))\)._ _Remark 3:_ Since \(kpq+(1\!-\!kp)(1\!-\!kq)\!=\!\frac{1}{k+1}\!+\!\frac{k}{k+1}(\!(k\!+\!1)\!p\!- \!1)(\!(k\!+\!1)\!q\!-\!1)\), \(\frac{1}{k+1}\leq pq\leq\frac{1}{k}\) implies \(\frac{1}{k+1}\leq kpq+(1\!-\!kp)(1\!-\!kq)\leq\frac{1}{k}\). Thus, if both \(H_{\infty}(X_{0}|Y_{0})\) and \(H_{\infty}(X_{1}|Y_{1})\) lie in the interval \([\log k,\log(k+1)]\), then so does the corresponding bound on \(H_{\infty}(X|\mathbf{Y})\). We first prove the inequality in the unconditional case. 
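Before the proof, a quick numerical sanity check of the unconditional form of Theorem 6 may be useful. The sketch below (an illustration of ours, not part of the paper) draws random probability mass functions on the cyclic group \(\mathbb{Z}_{M}\), convolves them, and verifies that the maximum of the resulting distribution never exceeds the bound in (29).

```python
# Quick numerical sanity check (ours) of the unconditional form of Theorem 6 on the
# cyclic group Z_M: draw random PMFs for X0 and X1, form X = X0 + X1 (mod M), and
# verify that max_x P(X = x) never exceeds the bound in (29).
import numpy as np

rng = np.random.default_rng(1)
M, trials = 16, 5000

def mgl_bound(p, q):
    """Right-hand side of (29) for max-probabilities p and q."""
    k = max(int(np.floor(1.0 / p)), int(np.floor(1.0 / q)))
    if 1.0 / (k + 1) <= min(p, q) and max(p, q) <= 1.0 / k:
        return k * p * q + (1 - k * p) * (1 - k * q)
    return min(p, q)

worst = 0.0
for _ in range(trials):
    p0 = rng.dirichlet(rng.uniform(0.2, 3.0) * np.ones(M))   # random PMF of X0 on Z_M
    p1 = rng.dirichlet(rng.uniform(0.2, 3.0) * np.ones(M))   # random PMF of X1 on Z_M
    conv = np.zeros(M)
    for i in range(M):
        conv += p0[i] * np.roll(p1, i)   # conv[x] = sum_i p0[i] * p1[(x - i) mod M]
    worst = max(worst, conv.max() / mgl_bound(p0.max(), p1.max()))

print(f"largest ratio of exp(-H_inf(X0+X1)) to the bound over {trials} trials: {worst:.4f}")
# A ratio <= 1 in every trial is consistent with Theorem 6 (this checks only the
# unconditional statement, and only for the cyclic group Z_M).
```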
The probability mass function of \(X_{0}\oplus X_{1}\) is given by the convolution with respect to \(\mathcal{G}\) of the probability mass functions of \(X_{0}\) and \(X_{1}\). That is, for any \(x\in\mathcal{G}\), \[\mathbb{P}(X_{0}\oplus X_{1}=x)=\sum_{i\in\mathcal{G}}\mathbb{P}(X_{0}=x\oplus i )\mathbb{P}(X_{1}=\ominus i). \tag{30}\] In particular, \[\exp(\!-\!H_{\infty}(X_{0}\oplus X_{1}))=\max_{x\in\mathcal{G}}\sum_{i\in \mathcal{G}}\mathbb{P}(X_{0}=x\oplus i)\mathbb{P}(X_{1}=\ominus i). \tag{31}\] Hence the problem reduces to upper-bound \[\max_{x\in\mathcal{G}}\sum_{i\in\mathcal{G}}\mathbb{P}(X_{0}=x\oplus i) \mathbb{P}(X_{1}=\ominus i). \tag{32}\] Since \(\exp(\!-\!H_{\infty}(X_{0}\ominus x))=\exp(\!-\!H_{\infty}(X_{0}))\) we can assume without loss of generality that the maximum is reached for \(x=0\) and the problem reduces to the maximization of \[\sum_{i\in\mathcal{G}}\mathbb{P}(X_{0}=i)\mathbb{P}(X_{1}=\ominus i). \tag{33}\] Let \((1),\ldots,(M)\in\mathcal{G}\) be an ordering of the group elements so that \(\mathbb{P}(X_{0}=(1))\geq\mathbb{P}(X_{0}=(2))\geq\ldots\geq\mathbb{P}(X_{0}=(M))\). The problem is to maximize \[\sum_{i=1}^{M}\underbrace{\mathbb{P}(X_{0}=(i))}_{p_{(i)}}\underbrace{ \mathbb{P}(X_{1}=\ominus(i))}_{q_{(i)}}. \tag{34}\] The min-entropy of \(X_{1}\) is invariant under any permutation of its probability mass function. Furthermore, by the _rearrangement inequality_ (Lemma 5 in Appendix A) a permutation of the probability mass function of \(X_{1}\) maximizing the sum is such that \(\mathbb{P}(X_{1}\!=\!\ominus(1))\geq\mathbb{P}(X_{1}\!=\!\ominus(2))\geq\ldots \geq\mathbb{P}(X_{1}\!=\!\ominus(M))\). Finally the problem is reduced to \[\max_{\mathbf{p},\mathbf{q}}\phi(\mathbf{p},\mathbf{q})\!\triangleq\!\sum p_{(i )}q_{(i)} \tag{35}\] under the constraint that \(\exp(\!-\!H_{\infty}(X_{0}))=p_{(1)}=p\) and \(\exp(\!-\!H_{\infty}(X_{1}))=q_{(1)}=q\). Moreover, \(h\) is Schur-convex in \(\mathbf{p}\) when \(\mathbf{q}\) is fixed and vice-versa (see Lemma 3 in Appendix A). Hence the maximum in (35) is reached for the least spread out probability mass function under the min entropy constraints. That is (Lemma 4 in Appendix A), \[\begin{cases}(p_{(1)},\ldots,p_{(M)})=(p,\ldots,p,1-kp,0,\ldots,0)\\ (q_{(1)},\ldots,q_{(M)})=(q,\ldots,q,1-l\,q,0,\ldots,0)\end{cases} \tag{36}\] where \(k=\lfloor p^{-1}\rfloor\) and \(l=\lfloor q^{-1}\rfloor\). Hence we obtain the bound \[\exp(\!-\!H_{\infty}(X))\leq\begin{cases}kpq+(1-kp)(1-kq)&\text{if }k=l\\ \min\{p,q\}&\text{otherwise}.\end{cases} \tag{37}\] It remains to prove that (37) carries over to the conditional case. Note that the bound is concave in \(p\) for a fixed \(q\) and vice-versa. Indeed, let \(\frac{1}{k+1}\leq q\leq\frac{1}{k}\) be fixed. Then the inequality is piecewise linear in \(p\), equal to \[\begin{cases}p&\text{if }p\leq\frac{1}{k+1}\\ kpq+(1-kp)(1-kq)&\text{if }\frac{1}{k+1}\leq p\leq\frac{1}{k}\\ q&\text{otherwise}.\end{cases} \tag{38}\] The three successive slopes are \(1\), \(k(k+1)q-k\) and \(0\). Since \(k(k+1)q-k\in[0,1]\), these slopes are in decreasing order and the function is indeed concave. Therefore, applying Jensen's inequality (twice) proves (29). ### _Extension to \(d+1\) Summands_ Jog and Anatharam [14] extended their generalization of Mrs. Gerber's lemma (for Shannon entropy) for random variables in group of order \(2^{n}\) with two summands by repeating their inequality. 
In the same fashion, Theorem 6 is extended to \(d+1\) summands by repeated application of Theorem 6: **Theorem 7** (Extension to \(d+1\) summands): _Let \(p_{i}=\exp(\neg\!H_{\infty}(X_{i}|Y_{i}))\), without loss of generality assume \(p_{0}\leq p_{1}\leq\ldots\leq p_{d}\). Let \(k=\lfloor p_{0}^{-1}\rfloor\), \(r=\max\{i|p_{i}\leq\frac{1}{k}\}\). Then \(H_{d}=H_{\infty}(X|\mathbf{Y})\) is lower bounded as_ \[H_{d}\geq-\log\!\left(\frac{1}{k+1}+\frac{k^{r}}{k+1}\prod_{i=0}^{r}((k+1)p_{i }-1)\right)\!. \tag{39}\] Proof:: See Appendix D. In the side-channel context, it is particularly interesting to characterize the behavior of the inequality in the high entropy regime in terms of maximal leakage. This corresponds to the high noise regime of Theorem 3. **Theorem 8** (Asymptotic for High Noise): _Let \(I_{d}=I_{\infty}(X;\mathbf{Y})\) in bits, then as \(I_{\infty}(X_{i};Y_{i})\to 0\),_ \[I_{d}\leq C_{d}\prod_{i=0}^{d}I_{\infty}(X_{i};Y_{i})+o\!\left(\prod_{i=0}^{d} I_{\infty}(X_{i};Y_{i})\right) \tag{40}\] _where \(C_{d}=(M-1)^{d}(\ln 2)^{d}\)._ Proof:: See Appendix E. ### _Refined Unconditioned Extension to \(d+1\) Summands_ In contrast to Theorem 6, Theorem 7 is not guaranteed to be optimal when \(d>1\). The inequality can be improved by exploiting the structure of the sum of multiple random variables. We derive an improved bound which is optimal for entropies in the range \([\log(k\!-\!1),\log(k)]\) provided that there is a subgroup of \(\mathcal{G}\) of order \(k\). In particular, it is optimal in the high entropy regime \([\log(M\!-\!1),\log(M)]\) (since the group itself is a subgroup of order \(M\)). **Theorem 9** (Refined extension): _Let \(p_{i}=\exp(\neg\!H_{\infty}(X_{i}))\), without loss of generality we assume \(p_{0}\leq p_{1}\leq\ldots\leq p_{d}\). Let \(k=\lfloor p_{0}^{-1}\rfloor\), \(r=\max\{i|p_{i}\leq\frac{1}{k}\}\). Let \(H_{d}=H_{\infty}(X)\),_ \[H_{d}\!\geq\!\begin{cases}-\!\log\!\left(\frac{1}{k+1}+\frac{1}{k+1}\prod_{j= 0}^{r}\left((k\!+\!1)p_{i}\!-\!1\right)\right)\text{ if $r$ is even,}\\ -\!\log\!\left(\frac{1}{k+1}+\frac{k}{k+1}\prod_{j=0}^{r}\left((k\!+\!1)p_{i} \!-\!1\right)\right)\text{ if $r$ is odd.}\end{cases} \tag{41}\] Proof:: See Appendix F. Contrary to Theorem 7, Theorem 9 does not apply to conditional min-entropy in general. In fact, when all the variables are fixed except one, the bound inside the logarithm is piecewise linear but discontinuous in \(\frac{1}{k}\) when \(r\) is even. This discontinuity breaks the convexity of the inequality. Ensuring continuity for the desired convexity, we are led back to the expression of Theorem 7. However, under the assumption that \[\frac{1}{M}\leq\exp(\neg H_{\infty}(X_{i}|Y_{i}=y))\leq\frac{1}{M-1} \tag{42}\] for all \(i\) and \(y\), the bound of Theorem 9 inside the logarithm is linear and we do obtain a conditional inequality. Fortunately, assumption (42) makes sense in the side-channel context. In fact, a common leakage model is \(Y_{i}=f_{i}(X_{i})+\sigma\mathcal{N}(0,1)\) where \(f_{i}\) is a fixed (possibly unknown) leakage function, such as the Hamming weight or a linear combination of the bits of the variable \(X_{i}\). In particular (42) holds for large enough \(\sigma\) (high noise regime). 
Then we have the following **Theorem 10** (Taylor Expansion): _Assume (42) and let \(I_{d}=I_{\infty}(X;\mathbf{Y})\) in bits, then as \(I_{\infty}(X_{i};Y_{i})\to 0\),_ \[I_{d}\leq C_{d}\prod_{j=0}^{d}I_{\infty}(X_{i};Y_{i})+o\!\left(\prod_{j=0}^{d }I_{\infty}(X_{i};Y_{i})\right) \tag{43}\] _where_ \[C_{d}=\begin{cases}(\ln 2)^{d}&\text{if $d$ is even,}\\ (M-1)(\ln 2)^{d}&\text{if $d$ is odd.}\end{cases} \tag{44}\] Proof:: Taylor expansion of the exponential about \(0\) and of the logarithm about \(1\). Theorem 10 is particularly interesting because it suggests that, with respect to the _worst_ case leakage distribution, masking of _odd_ order \(d\) is not useful compared to masking with order \(d-1\) at high noise. In practice, however, for observed leakages this phenomenon may not apply. Theorem 10 is different from Theorem 8 as the constant \(C_{d}\) is improved largely. Though Theorem 10 requires the high noise assumption (42) to hold. Finally, combining Theorem 10 and Theorem 3 yields a bound on the probability of success **Corollary 1** (Bound on \(\mathbb{P}_{s}\)): _For \(m\) traces, as \(\mathbb{P}_{s}\to\frac{1}{M}\),_ \[\mathbb{P}_{s}\leq\frac{\exp(mI_{\infty}(X;\mathbf{Y}))}{M}\approx\frac{1}{M}+ \frac{mC_{d}}{M}\prod_{i=0}^{d}I_{\infty}(X_{i};Y_{i}). \tag{45}\] _This is to be compared with the bound of [3, Eqn. 8]:_ **Proposition 2**: _As \(\mathbb{P}_{s}\to\frac{1}{M}\),_ \[\mathbb{P}_{s}\leq\frac{1}{M}+\sqrt{m}A_{d}\!\left(\prod_{i=0}^{d}I(X_{i},Y_{i} )\right)^{\frac{1}{2}} \tag{46}\] _where \(A_{d}=\sqrt{M-1}(2\ln 2)^{\frac{d+1}{2}}M^{-1}\)._ Proof:: See Appendix G. As expected both bounds decrease exponentially in \(d\) to the minimum value \(\frac{1}{M}\). Although \(I\) and \(I_{\infty}\) are different metrics, we observe that * the constant factor \(C_{d}/M\) for \(I_{\infty}\) in (44) is exponentially lower in \(d\) than the factor \(A_{d}\) for \(I\); * the exponential decay in \(d\) is twice higher for \(I_{\infty}\); * the inequality scales better for \(I\) than for \(I_{\infty}\) in terms of number \(m\) of traces (since we compared both bounds for \(\mathbb{P}_{s}\approx\frac{1}{M}\), \(m\) is not necessarily taken large). Finally, we can contrast both bounds on a toy example. Let \(Y_{i}\) be uniformly distributed in \(\{x\in\mathcal{G}|x\neq X_{i}\}\). Then it is easily seen that \(I(X_{i},Y_{i})=I_{\infty}(X_{i},Y_{i})=\log(\frac{M}{M-1})\). In this case, the bound of this paper outperforms the bound of [3] in the high noise regime (\(\mathbb{P}_{s}\to\frac{1}{M}\)). Both bounds are compared numerically in Figs. 5 and 6 in Appendix I for \(d=1\) and \(2\), respectively, and \(M=256\). ## V Conclusion and Perspectives We have shown that maximal leakage for masked implementations can be used to bound the probability of success of any side-channel attack. Maximal leakage is bounded by an efficiently computable bound based on a new variation of Mrs. Gerber's lemma for min-entropy. The bound tightness is commented with some example groups and probability mass function with figures in Appendix H. Improving the inequality when there is no subgroup of order \(k+1\) in \(\mathcal{G}\) is an interesting perspective. Indeed, groups of prime order which have no subgroup except the trivial ones are of major interest for their application to masking in asymmetric cryptographic schemes (especially post-quantum schemes). Besides, it would also be of interest to check whether the parity of \(d\) does play a practical role in the efficiency of masked implementations.
2305.10414
Constraining the Thickness of the Atmosphere of TRAPPIST-1 b from its JWST Secondary Eclipse Observation
Recently, the first JWST measurement of thermal emission from a rocky exoplanet was reported. The inferred dayside brightness temperature of TRAPPIST-1 b at 15 $\mu$m is consistent with the planet having no atmosphere and therefore no mechanism by which to circulate heat to its nightside. In this Letter, we compare the measured secondary eclipse depth of TRAPPIST-1 b to predictions from a suite of self-consistent radiative-convective equilibrium models in order to quantify the maximum atmospheric thickness consistent with the observation. We find that plausible atmospheres (i.e., those that contain at least 100 ppm CO$_2$) with surface pressures greater than 0.01 bar (0.1 bar) are ruled out at 1$\sigma$ (3$\sigma$), regardless of the choice of background atmosphere. Thicker atmospheres of up to 10 bar (100 bar) at 1$\sigma$ (3$\sigma$) are only allowed if the atmosphere lacks any strong absorbers across the mid-IR wavelength range, a scenario that we deem unlikely. We additionally model the emission spectra for bare-rock planets of various compositions. We find that a variety of silicate surfaces match the measured eclipse depth to within 1$\sigma$, and the best-fit grey albedo is $0.02 \pm 0.11$. We conclude that planned secondary eclipse observations at 12.8 $\mu$m will serve to validate the high observed brightness temperature of TRAPPIST-1 b, but are unlikely to further distinguish among the consistent atmospheric and bare-rock scenarios.
Jegug Ih, Eliza M. -R. Kempton, Emily A. Whittaker, Madeline Lessard
2023-05-17T17:50:04Z
http://arxiv.org/abs/2305.10414v2
Constraining the Thickness of TRAPPIST-1 b's Atmosphere from its JWST Secondary Eclipse Observation at 15 \(\mu\)m ###### Abstract Recently, the first JWST measurement of thermal emission from a rocky exoplanet was reported. The inferred dayside brightness temperature of TRAPPIST-1 b at 15 \(\mu\)m is consistent with the planet having no atmosphere and therefore no mechanism by which to circulate heat to its nightside. In this Letter, we compare TRAPPIST-1 b's measured secondary eclipse depth to predictions from a suite of self-consistent radiative-convective equilibrium models in order to quantify the maximum atmospheric thickness consistent with the observation. We find that plausible atmospheres (i.e., those that contain at least 100 ppm CO\({}_{2}\)) with surface pressures greater than 0.3 bar are ruled out at 3\(\sigma\), regardless of the choice of background atmosphere, and a Mars-like thin atmosphere with surface pressure 6.5 mbar composed entirely of CO\({}_{2}\) is also ruled out at 3\(\sigma\). Thicker atmospheres of up to 10 bar (100 bar) are consistent with the data at 1\(\sigma\) (3\(\sigma\)) only if the atmosphere lacks _any_ strong absorbers across the mid-IR wavelength range -- a scenario that we deem unlikely. We additionally model the emission spectra for bare-rock planets of various compositions. We find that a basaltic, metal-rich, and Fe-oxidized surface best matches the measured eclipse depth to within 1\(\sigma\), and the best-fit grey albedo is \(0.02\pm 0.11\). We conclude that planned secondary eclipse observations at 12.8 \(\mu\)m will serve to validate TRAPPIST-1 b's high observed brightness temperature, but are unlikely to further distinguish among the consistent atmospheric and bare-rock scenarios. 0000-0002-8820-7885]Jegug In 0000-0002-4618-7885]Eliza M.-R. Kempton 0000-0002-4880-7885]Emily A. Whittaker 0000-0002-4880-7885]Madeline Lessard ## 1 Introduction We have now entered the era of JWST, and with it comes the potential to perform the first meaningful characterization of terrestrial (i.e., rocky) exoplanets. Among the possible rocky planet targets for JWST, those in the TRAPPIST-1 system are some of the most promising for atmospheric characterization due to their very favorable planet-to-star size ratios (Gillon et al., 2016). The system is also of extreme interest because it hosts multiple terrestrial planets, including several that reside in or near the habitable zone (Gillon et al., 2017). Recently, Greene et al. (2023) measured the thermal emission from the innermost planet, TRAPPIST-1 b, and found that its 15-\(\mu\)m brightness temperature is consistent with the planet being a bare rock, devoid of any atmosphere at all. Thermal emission measurements of presumed tidally-locked planets, such as those produced by Greene et al. (2023) for TRAPPIST-1 b, are a productive avenue for confirming whether rocky exoplanets possess atmospheres (Koll et al., 2019; Mansfield et al., 2019). By measuring the planet's dayside temperature via secondary eclipse observations, one can constrain the presence and thickness of the atmosphere in the following sense: atmospheres serve to lower the dayside emission temperature below what would be expected for a bare (and dark) rocky surface. Even moderately thick atmospheres transport considerable heat away from a tidally-locked planet's dayside (Koll, 2022). 
Reflective aerosols, another signpost of a planet possessing an atmosphere, also serve to lower the dayside temperature by reflecting incoming stellar radiation back to space (Mansfield et al., 2019). The maximal dayside effective temperature, corresponding to no atmosphere and a zero-albedo surface is: \[T_{max}=T_{*}\sqrt{\frac{R_{*}}{d}}\left(\frac{2}{3}\right)^{1/4} \tag{1}\] where \(T_{*}\) and \(R_{*}\) are the stellar effective temperature and radius, and \(d\) is the planet-star separation. For TRAPPIST-1 b, \(T_{max}=508\pm 6\) K, whereas the 15 \(\mu\)m brightness temperature reported by Greene et al. is \(503^{+26}_{-27}\) K, fully consistent with the no-atmosphere scenario. From a theoretical standpoint, it is unclear whether terrestrial planets orbiting M-dwarfs should be expected to possess atmospheres. There are studies that go both ways. Atmospheric loss processes should be efficient for planets orbiting active M-dwarf host stars, but some planets may be able to retain their atmospheres or renew them via outgassing following a decline in stellar activity with age (e.g. Zahnle and Catling, 2017; Turbet et al., 2020; Wordsworth and Kreidberg, 2022). Observationally, to-date there are no studies that definitively confirm the presence of an atmosphere on a rocky exoplanet. Flat transmission spectra are the norm (e.g. de Wit et al., 2018; Diamond-Lowe et al., 2018, 2020; Mugnai et al., 2021; Libby-Roberts et al., 2022; Lustig-Yaeger et al., 2023), and the few studies that have claimed detections of atmospheric spectral features for terrestrial exoplanets have been called into question or have ambiguous interpretation (e.g. Southworth et al., 2017; Swain et al., 2021; Moran et al., 2023). Thermal emission measurements of the planets LHS 3844b (Kreidberg et al., 2019) and GJ 1252b (Crossfield et al., 2022) have found dayside temperatures that are consistent with the no-atmosphere limit, the former by way of a full-orbit phase curve. It stands to reason that less irradiated planets should be less susceptible to atmospheric loss, but TRAPPIST-1 b is the coldest planet yet to be subjected to the thermal emission test for possessing an atmosphere, yielding the same result of no apparent sign of a gaseous envelope. In this Letter we quantify the range of atmospheres and surfaces that are consistent with the Greene et al. (2023) measurement of TRAPPIST-1 b's secondary eclipse depth at 15 \(\mu\)m. We show in what follows that thick atmospheres can be definitively ruled out by this single data point. Given the range of scenarios that we still find to be consistent with the data, we also predict the degree to which further observations, including planned measurements at 12.8 \(\mu\)m, will be able to distinguish among the remaining plausible atmospheres and surfaces. ## 2 Methods In this section, we describe our model and parameter choices. To calculate the eclipse spectrum of different surfaces and atmospheres, we use helios, an open-source 1D radiative transfer code that computes the thermal profile of a planetary atmosphere in radiative-convective equilibrium (Malik et al., 2017, 2019, 2020). Most of our approach closely follows Whittaker et al. (2022), which performed a similar analysis for the _Spitzer_ observation of LHS 3844 b, and we refer the readers to that work for more details of the modelling. 
One key detail worth mentioning here is that we calculate the heat redistribution factor (\(f\)) self-consistently with the radiative transfer using the analytical approximation in Koll (2022, equation 10). In the approximation, \(f\) depends on the equilibrium temperature, the surface pressure, and the longwave optical depth at the surface; helios has the ability to iterate to a value of \(f\) that satisfies global energy balance. We note a caveat that this method subtracts the approximated transported heat from the incident stellar flux to calculate the dayside energy budget, but does not consider the vertical dependence of the day-to-night heat flow; hence the redistribution could be construed to happen either uniformly or at the top of the atmosphere in our models. We model a range of surface pressures that is broad enough span full redistribution (\(f=1/4\)) to no redistribution (\(f=2/3\)), resulting in a surface pressure grid of \(10^{-4}\) bars to \(10^{2}\) bars, spaced at 1 dex. For the composition of the atmospheres, in addition to a 100% CO\({}_{2}\) atmosphere, we choose to vary the abundance of trace CO\({}_{2}\), at 1 ppm, 100 ppm, and 1%, against background gases of N\({}_{2}\), O\({}_{2}\), and H\({}_{2}\)O. Moreover, we also consider atmospheres containing a range of other trace gases plausible in secondary atmospheres (Turbet et al., 2020; Krissansen-Totton and Fortney, 2022; Whittaker et al., 2022), which may not necessarily absorb at 15 \(\mu\)m but may be detected via observations at other wavelengths. For this purpose, we adopt the same trace abundance grids (i.e. 1 ppm, 100 ppm, 1%) for CO, CH\({}_{4}\), H\({}_{2}\)O, and SO\({}_{2}\), against a background gas of N\({}_{2}\) for the former two and O\({}_{2}\) for the latter. SO\({}_{2}\) is unique in that it has broad infrared absorption features just outside the 15-\(\mu\)m bandpass, which produce interesting implications for observations at 15 \(\mu\)m; we discuss this further in Section 3. For all models, we assume an intrinsic temperature of \(T_{\rm int}=0\)K. For all of the atmosphere models, we adopt a surface albedo of 0 (i.e. a true blackbody), to produce the maximum limit on the atmospheric pressure consistent with the observation; any value of non-zero albedo will dilute the energy budget and decrease the eclipse depth, thereby making a model at a given atmospheric pressure even less consistent with the observation. Given that TRAPPIST-1 b's dayside temperature is consistent with the no-atmosphere limit, we also explore a number of bare surface models that have no atmospheres at all. Here the eclipse spectrum instead arises due to the wavelength-dependent albedo spectrum of the surfaces. We consider six surfaces that are plausible, given the level of irradiation received by TRAPPIST-1 b: basaltic, ultramafic, feldspathic, metal-rich, Fe-oxidized, and granitoid (Hu et al., 2012; Mansfield et al., 2019). We also run a number of grey albedo surfaces at \(A=0.2,0.4,0.6,0.8,0.95\). We adopt the stellar and planetary parameters as obtained in Agol et al. (2021). We use the sphix stellar model spectrum grid (Iyer et al., 2023) interpolated to TRAPPIST-1 parameters assuming solar composition to calculate the thermal profile and the eclipse depth of the planet. sphix models are expected to better model the stellar spectra at such low temperature ranges than the typical phoenix models, using updated line lists (Iyer et al., 2023). 
Indeed, we find that the sphix model reproduces the observed stellar flux at 15 \(\mu\)m better than the phoenix model (to within 7% versus 13%; see Methods of Greene et al., 2023) After obtaining the eclipse spectra, we calculate the binned depth at the photometric band of F1500W; we integrate the planetary flux weighted by the bandpass function, then integrate the stellar flux weighted by the same function, and then obtain the ratio of the two. We perform the same calculation for F1280W to make predictions for upcoming observations. The F1280W bandpass lies outside the CO\({}_{2}\) absorption feature, and the difference between the two bandpasses serves as a metric to constrain either atmospheric pressure, CO\({}_{2}\) abundance, or both (Deming et al., 2009). We calculate the brightness temperature (\(T_{\rm b}\)) in the F1500W filter by determining the temperature of the blackbody whose eclipse depth (obtained via identical weighting and integrating as for the planetary flux) matches the observed eclipse depth. We note that this calculation differs slightly from the procedure followed by Greene et al. (2023), who found the temperature of the blackbody whose per-frequency flux evaluated at the "effective" filter wavelength matched the observed per-frequency planetary flux. Our calculation leads to a best-fit brightness temperature of \(T_{\rm b}=505\pm 27\)K, rather than the \(T_{\rm b}=503^{+26}_{-27}\)K reported in Greene et al. (2023). Given the uncertainty, this minor discrepancy will not impact our analysis. ## 3 Results ### Atmospheric Thickness and Surface Composition Our results support the general conclusion from Greene et al. (2023) that TRAPPIST-1 b does not possess a thick atmosphere. We will present the maximum atmospheric thickness consistent with the observed eclipse depth of \(861\pm 99\) ppm for each set of model composition and also highlight interesting behaviors from a theoretical perspective. We show the eclipse spectra for selected atmospheric and surface models in Figure 1 and the binned eclipse depths for all of the atmospheric models in Figure 2, varying the composition and the surface pressure. The accompanying temperature-pressure (T-P) profiles for each of the atmosphere models are shown in Figure 3. #### 3.1.1 Atmospheres with CO\({}_{2}\) We posit that TRAPPIST-1 b should realistically have at least moderate amounts of CO\({}_{2}\) if it does possess an atmosphere. This statement is in line with theoretical studies of the atmosphere of TRAPPIST-1 b and in general of rocky exoplanets receiving a comparable degree of irradiation (Lincowski et al., 2018; Hu et al., 2020; Turbet et al., 2020). CO\({}_{2}\) is robustly expected to be present in non-hydrogen-dominated atmospheres (e.g., as indicated for TRAPPIST-1 b from its transmission spectrum; de Wit et al., 2016), and the gas is robust against various escape processes, although photodissociation can deplete its abundance. Pure CO\({}_{2}\) atmospheres are 1-\(\sigma\) consistent with the eclipse measurement for surface pressures up to 0.4 mbar and 3-\(\sigma\) consistent up to 3 mbar (Figure 2), indicating that even a Mars-like thin atmosphere (\(P_{\rm surf}=6.5\) mbar) composed entirely of CO\({}_{2}\) is unambiguously ruled out. To first order, the secondary eclipse depth depends on the _partial pressure_ of CO\({}_{2}\), so the atmosphere may be thicker if the CO\({}_{2}\) abundance (i.e. its mixing ratio) is smaller. 
N\({}_{2}\) or O\({}_{2}\)-dominated atmospheres with \(\geq\)100 ppm of CO\({}_{2}\) are 1-\(\sigma\) consistent at 0.04 bar at most, and 1 bar atmospheres are ruled out by more than 3\(\sigma\). The presence of H\({}_{2}\)O has a non-trivial effect on the eclipse spectrum as it both increases the absorption and changes the thermal structure. For instance, at a surface pressure of 0.1 bar, H\({}_{2}\)O-dominated atmospheres with 1 ppm or 100 ppm CO\({}_{2}\) have deeper eclipse depths than the corresponding O\({}_{2}\) or N\({}_{2}\)-dominated atmospheres, while the one with 1% CO\({}_{2}\) has a shallower depth than atmospheres with the other background gases. Additionally, the lower atmosphere becomes much hotter for the thicker H\({}_{2}\)O-dominated atmospheres due to greenhouse heating being more effective than the cooling of day-night redistribution. H\({}_{2}\)O is also interesting in that it can generate thermal inversions in planets orbiting M stars (Malik et al., 2019). Thermal inversions are interesting in the context of the Greene et al. (2023) secondary eclipse measurement because they have the potential to reverse absorption features into emission, opening a possibility that the high observed 15-\(\mu\)m brightness temperature could be due to a CO\({}_{2}\)_emission_ feature originating from a thick(er) atmosphere. For TRAPPIST-1 b, we indeed find that H\({}_{2}\)O causes thermal inversions (Figure 3), but they occur in the upper atmosphere well above the IR photosphere and thus do not significantly impact the shape of the 15-\(\mu\)m CO\({}_{2}\) feature, which uniformly appears in absorption in all of the models we have produced. We have also experimented with different mixtures of O\({}_{2}\), H\({}_{2}\)O, and CO\({}_{2}\) (not shown), but find that no combination leads to emission features. In fact, in Figure 4, one can see that the brightness temperature at 15 micron is lower than that at 12.8 micron for every model, indicating CO\({}_{2}\) absorption, rather than emission, is being observed. #### 3.1.2 Atmospheres with no CO\({}_{2}\) While less plausible chemically, atmospheres that do not contain any CO\({}_{2}\) at all remain consistent with the secondary eclipse measurement to higher surface pressures. Atmospheres that have CO or CH\({}_{4}\) as the trace gas are 1-\(\sigma\) consistent to 1 bar for all trace abundances, except the 1% CH\({}_{4}\) model which has a shallower depth that is 2-\(\sigma\) consistent. In Figure 3, it can be seen in the right panel that all of these atmospheres except the 1% CH\({}_{4}\) 10\({}^{2}\) bar model remain optically thin in the 15 \(\mu\)m bandpass down to the surface, and the change in eclipse Figure 1: The eclipse spectra of various models run in this study. We show: a suite of atmospheric models that are 1-\(\sigma\) consistent with the observation (_top left_); bare surface models, which are all consistent with the observation (_top right_); 100 % CO\({}_{2}\) atmosphere models at various surface pressures (_bottom left_); and models with surface pressures of 0.1 bar, varying the compositions (_bottom right_). The compositions denote that the first species is the dominant species, with the second species in indicated trace amounts. The binned depths at F1500W and F1280W are shown as markers, as well as each bandpass function weighted by the stellar spectrum. 
depth with surface pressure is due to the cooling effect of redistribution. Atmospheres with trace H\({}_{2}\)O behave similarly, except that the 1% H\({}_{2}\)O atmospheres become optically thick at atmospheric pressures around 0.1 bar, and the eclipse depth is already \(>\) 3-\(\sigma\) inconsistent for a surface pressure of 1 bar.

Figure 1 (continued): We also show, in dashed lines, the eclipse depths resulting from blackbodies at 508 K (blue) and 400 K (red), corresponding to no redistribution (\(f=2/3\)) and full redistribution (\(f=1/4\)), respectively. On the upper right panel, dashed lines indicate grey albedo surface models. The features in the blackbody eclipse spectrum arise due to spectral features in the _stellar_ spectrum.

Figure 2: The binned eclipse depths and their brightness temperature in the F1500W band for all of the atmospheric models run, varying the pressure of the atmosphere at the surface. Model atmospheres that do and do not include CO\({}_{2}\) are shown in the left and the right panel, respectively. The measured eclipse depth from Greene et al. (2023) is shown as the solid black line, and its 1-\(\sigma\) (grey) and 3-\(\sigma\) (red) uncertainties are also shown, as well as the corresponding brightness temperatures. The compositions denote that the first species is the dominant species, with the second species in indicated trace amounts. Atmospheres with \(\geq\)100 ppm CO\({}_{2}\) are consistent with the measurement at 1\(\sigma\) only if the atmospheric pressure is less than 0.1 bar.

Figure 3: The temperature-pressure (T-P) profiles of the model atmospheres in radiative-convective equilibrium. Model atmospheres that do and do not include CO\({}_{2}\) are shown in the left and the right panel, respectively, similarly to Figure 2. The optically thick regions of the T-P profiles below the photosphere (\(\tau=2/3\)) at \(\lambda=14.79~{}\mu\)m are shown with thick lines. The markers indicate the surface pressure of each model atmosphere. The compositions denote that the first species is the dominant species, with the second species in indicated trace amounts. The N\({}_{2}\) and O\({}_{2}\)-dominated atmospheres completely overlap in the left panel. It can be seen that while near-infrared absorbers such as H\({}_{2}\)O can cause thermal inversions, they occur at regions where the atmosphere is optically thin and hence will not result in emission features in the spectra. For most of the models that do not contain CO\({}_{2}\), the atmosphere is optically thin in the F1500W bandpass down to the surface.

Atmospheres with trace SO\({}_{2}\) behave somewhat differently since SO\({}_{2}\) has a broad absorption feature at wavelengths just redward of the 15-\(\mu\)m bandpass. For moderate SO\({}_{2}\) abundances (e.g. the pink line for the 100 ppm 0.1 bar atmosphere in the top left panel of Figure 1), the strong absorption at \(\sim\)18-20 \(\mu\)m pushes more flux into the 15-\(\mu\)m bandpass, leading to _increased_ planetary emission over the wavelength range of the Greene et al. (2023) secondary eclipse observation. The emission from a transparent spectral window is therefore a plausible mechanism for increasing the secondary eclipse depth in a single bandpass, but it comes at the cost of sharply reduced fluxes at other wavelengths; this effect can therefore be diagnosed with additional spectroscopic observations.
For higher SO\({}_{2}\) abundances however, the absorption feature is strong enough to affect the 15-\(\mu\)m bandpass, and it therefore has the opposite effect of reducing the eclipse depth in the F1500W filter (Figure 1, pink line in bottom right panel). This indicates that the nature of the absorber needs to be very finely tuned to match the Greene et al. (2023) measurement. #### 3.1.3 Bare surfaces If TRAPPIST-1 b truly has no atmosphere whatsoever, we find that the F1500W measurement is consistent with a bare rock planet with a basaltic, Fe-oxidized, or metal-rich surface to within 1\(\sigma\), while granitoid and feldspathic surfaces are ruled out at more than 3\(\sigma\) (Figure 1, top right panel). The latter two materials have high albedos around 1 \(\mu\)m where the luminosity of the TRAPPIST-1 host star is greatest (Hu et al., 2012; Mansfield et al., 2019), thus reducing the energy received by the planet and lowering the temperature at which it radiates. The fact that we can rule out some surface compositions demonstrates the utility of secondary eclipse spectroscopy for constraining the surface properties of rocky exoplanets. However, Mansfield et al. (2019) point out that granitoid and feldspathic surfaces (the ones that we rule out here) are also among those that are implausible for hot rocky planets like TRAPPIST-1 b, as they either require liquid water to form or they are unlikely to be able to form on larger planets (Elkins-Tanton, 2012). Among grey surfaces, we find that the best-fit surface albedo is \(0.02\pm 0.11\). ### Prospects for Future Observations Given the various atmospheres and surfaces that remain consistent with the Greene et al. (2023) 15 \(\mu\)m secondary eclipse measurement, we investigate here the possibility that additional observations could help to further constrain the properties of TRAPPIST-1 b. In particular, five secondary eclipses are slated to be observed with MIRI F1280W filter centered on 12.8 \(\mu\)m to provide a second spectroscopic data point for TRAPPIST-1 b's thermal emission. In Figure 4 we show the eclipse depths from our models binned to the F1280W bandpass against the the binned eclipse depth in the F1500W bandpass. The F1280W is intended to observe the eclipse depth out of the CO\({}_{2}\) band such that the difference between the two provides a constraint on the atmospheric pressure and possibly composition, but the very high eclipse depth of F1500W alone already provides a firm constraint on the brightness temperature and hence the atmospheric pressure. Assuming an observation uncertainty comparable to that of F1500W (99 ppm), the F1280W secondary eclipse is unlikely to help further distinguish between, for example, a very thin 10\({}^{-4}\) bar 100% CO\({}_{2}\) atmosphere, a 1 bar O\({}_{2}\)-dominated 1 ppm CO\({}_{2}\) atmosphere, a 1 bar N\({}_{2}\)-dominated atmosphere with 100 ppm CH\({}_{4}\) as they all fall roughly within a span of 100 ppm. Therefore, we conclude that the F1280W observation will be most useful for validating the high brightness temperature of TRAPPIST-1 b as observed by F1500W. Indeed, in Figure 1, most 1-\(\sigma\) consistent spectra follow the \(f=2/3\) blackbody spectrum (blue dashed line) closely down to 10 \(\mu\)m, and only at shorter wavelengths do spectroscopic absorption features appear. 
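The \(f=2/3\) and \(f=1/4\) blackbody references invoked here (508 K and 400 K in Figure 1) follow from the standard irradiation-temperature scaling \(T_{\rm day}=T_{\star}\sqrt{R_{\star}/a}\,[f(1-A_{\rm B})]^{1/4}\). The short sketch below reproduces those two temperatures; the stellar and orbital parameters are approximate literature values for TRAPPIST-1 b that I assume here (they are not quoted in this section), as is the zero Bond albedo.

```python
import numpy as np

# Assumed (approximate) TRAPPIST-1 system parameters -- not taken from the text.
T_STAR = 2566.0               # stellar effective temperature [K]
R_STAR = 0.1192 * 6.957e8     # stellar radius [m]
A_ORBIT = 0.01154 * 1.496e11  # semi-major axis of TRAPPIST-1 b [m]
A_BOND = 0.0                  # Bond albedo assumed to be zero

def dayside_temperature(f):
    """Dayside equilibrium temperature for a heat-redistribution factor f."""
    return T_STAR * np.sqrt(R_STAR / A_ORBIT) * (f * (1.0 - A_BOND)) ** 0.25

for f, label in [(2.0 / 3.0, "no redistribution"), (1.0 / 4.0, "full redistribution")]:
    print(f"{label:>19s} (f = {f:.2f}): T_day ~ {dayside_temperature(f):.0f} K")
# Prints ~508 K and ~398 K, matching the 508 K / 400 K blackbody curves in Figure 1.
```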
However, due to the small eclipse depth at these wavelengths, spectroscopy using MIRI LRS with nominal uncertainty of (say) 30 ppm at a spectral resolution of \(R=10\) will be able to distinguish only between end-member cases at best rather than tightly constraining the composition and the surface pressure. Namely, if the planet has H\({}_{2}\)O, CH\({}_{4}\), or SO\({}_{2}\), absorption features between 5-10 \(\mu\)m, MIRI LRS could be used to distinguish between an airless blackbody and a thin atmosphere. As for distinguishing among bare rock surfaces, the additional F1280W observation is unlikely to be helpful for this purpose as the binned eclipse depths of consistent surfaces are very similar (Figure 1). The surfaces are generally difficult to distinguish across all wavelengths that MIRI can observe in. ## 4 Discussion and Summary We have shown that, based on the Greene et al. (2023) secondary eclipse observation at 15 \(\mu\)m, TRAPPIST-1 b does not appear to host a thick atmosphere. Formally, our models rule out atmospheres with at least 100 ppm CO\({}_{2}\) thicker than 0.3 bars at 3\(\sigma\). For a 100% CO\({}_{2}\) at mosphere (i.e., a Mars or Venus-like composition), the atmosphere must be less than 3 mbar thick at 3\(\sigma\) confidence to be consistent with the measured eclipse depth at 15 \(\mu\)m. We argue that TRAPPIST-1 b is unlikely to host an atmosphere devoid of CO\({}_{2}\), and therefore atmospheres thicker than \(\sim\)0.1 bar are ruled out. Various types of geophysically plausible rocky surfaces are all consistent with the Greene et al. (2023) measurement, and the eclipse observation rules out less plausible granitiod and feldspathic surfaces. The best-fit grey surface albedo is \(0.02\pm 0.11\). The 1-\(\sigma\) consistent atmospheres and surfaces that we identify in this Letter will be difficult to distinguish with upcoming JWST observations except perhaps the very end-member scenarios. The predicted eclipse depths for the F1280W filter are close enough to each other to be within the uncertainty of the observation. MIRI LRS may be able to distinguish between a bare rock and a 0.1 bar H\({}_{2}\)O-dominated atmosphere by measuring the eclipse spectrum from 5-10 \(\mu\)m, but there are many degenerate scenarios in between. Finally, the planned NIRISS SOSS observation of TRAPPIST-1 b via complementary measurements in _transmission_(Lim et al., 2021, Cycle 1 GO 2589) also aims to distinguish between a bare rock and a thin atmosphere. In the case of a clear atmosphere, transmission spectroscopy can generally provide a signal that is easier to interpret than that of thermal emission, since H\({}_{2}\)O and CO\({}_{2}\) features should be detectable. Transmission spectroscopy is also more agnostic to the thermal structure of the atmosphere and could therefore provide a less ambiguous constraint on the composition. On the other hand, transmission spectroscopy of small, rocky planets is challenging as the high mean molecular weight of secondary atmospheres and aerosols (if present) render the transmission spectrum closer to a flat spectrum, which is indistinguishable from a bare rock planet (Miller-Ricci et al., 2009; Barstow and Irwin, 2016; Ducrot et al., 2020). 
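The flattening effect of a high mean molecular weight can be quantified by the transit-depth modulation per scale height, roughly \(2HR_{\rm p}/R_{\star}^{2}\) with \(H=k_{\rm B}T/(\mu m_{\rm u}g)\). The sketch below evaluates it for an N\({}_{2}\)-like envelope on TRAPPIST-1 b; all planetary, stellar, and temperature values are approximate numbers assumed for illustration rather than values from this paper.

```python
# Per-scale-height transmission signal for a high mean-molecular-weight atmosphere.
K_B, M_U, G = 1.380649e-23, 1.660539e-27, 6.674e-11  # SI constants

# Assumed (approximate) parameters for TRAPPIST-1 b -- not taken from the text.
R_P = 1.116 * 6.371e6      # planet radius [m]
M_P = 1.374 * 5.972e24     # planet mass [kg]
R_STAR = 0.1192 * 6.957e8  # stellar radius [m]
T_ATM = 450.0              # representative atmospheric temperature [K]
MU = 28.0                  # mean molecular weight of an N2-dominated atmosphere

g = G * M_P / R_P**2                 # surface gravity [m s^-2]
H = K_B * T_ATM / (MU * M_U * g)     # pressure scale height [m]
signal = 2.0 * H * R_P / R_STAR**2   # transit-depth change per scale height

print(f"g ~ {g:.1f} m/s^2, H ~ {H/1e3:.0f} km, ~{signal*1e6:.0f} ppm per scale height")
# A feature spanning a few scale heights is only a few tens of ppm deep, hence close to flat.
```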
Additionally, host stellar effects also leave an imprint on the transmission spectrum, leading to spectral contamination that can be difficult to disentangle from _bona fide_ atmospheric features (Rackham et al., 2018; Rackham and de Wit, 2023; Moran et al., 2023).

Figure 4: A color-color-like diagram of predicted binned eclipse depths in the F1280W band (horizontal axis) and the binned F1500W eclipse depths for all of the model atmospheres, along with their brightness temperatures (\(T_{\rm b}\)) in each band. Model atmospheres that do and do not include CO\({}_{2}\) are shown in the left and the right panel, respectively. The measured eclipse depth from Greene et al. (2023) is shown as the solid black line, and its 1-\(\sigma\) (grey) and 3-\(\sigma\) (red) uncertainties are also shown. The vertical axis is identical to Figure 2, but is zoomed to focus on models consistent with the F1500W observation, alongside the expected F1280W uncertainty (\(\sim 100\) ppm) shown as an errorbar. The binned eclipse depths for a blackbody over a range of temperatures are shown as a multi-colored line. The temperature of the blackbody can be read off from the \(T_{\rm b}\) on either axis, by definition. The corresponding \(T_{\rm b}\) and Bond albedo (\(A\)) at each confidence interval are also shown. All models that include CO\({}_{2}\) (in the left panel) lie on the right side of the blackbody line, indicating a higher \(T_{\rm b}\) in F1280W than in F1500W due to the CO\({}_{2}\) _absorption_ at 15 \(\mu\)m. The compositions denote that the first species is the dominant species, with the second species in indicated trace amounts. As one follows each composition line, the atmospheric pressure starts at \(10^{-4}\) bar close to the observed F1500W measurement and increases in 1-dex intervals as in Figure 2, with generally decreasing 15-\(\mu\)m eclipse depths. We do not show the bare surface depths in this figure, but they lie close to the blackbody line and deviate by less than 25 ppm in either bandpass.

We have neglected the radiative effects of clouds in our work. The clear atmosphere T-P profiles in Figure 3 do cross condensation curves such that water or sulfur clouds can form (Mbarek and Kempton, 2016; Lincowski et al., 2018). However, clouds of appreciable column density will have higher albedos than rocky surfaces (Mansfield et al., 2019, Fig. 6) and are inconsistent with the observation, given such a low inferred albedo (even with the uncertainties taken into account). Additionally, climate modelling suggests that aerosols are unlikely to form on TRAPPIST-1 b (Lincowski et al., 2018). As such, we find the scenario in which the planet hosts an atmosphere with a reflecting cloud to be inconsistent with the Greene et al. (2023) secondary eclipse measurement. The F1500W observations of TRAPPIST-1 b demonstrate the utility of secondary eclipse observations for determining whether rocky planets possess atmospheres and for constraining their surface composition. Secondary eclipse observations will soon also be applied to other rocky planets around M dwarfs, with observations planned for more targets such as TRAPPIST-1 c (Kreidberg et al., 2021, Cycle 1 GO 2304), Gl 486 b (Mansfield et al., 2021, Cycle 1 GO 1743), GJ 1132 b (Lunine and Bean, 2017, Cycle 1 GTO), and LHS 3844 b (Kreidberg et al., 2021, Cycle 1 GO 1846).
The latter three use MIRI LRS rather than F1500W; an identical analysis to the current work can be performed by binning the entire 8-12 \(\mu\)m LRS spectrum to create a single broad photometric bandpass (see e.g. SS3 of Koll et al., 2019), and the additional _spectral_ information can be used to further constrain the composition of the atmosphere or the surface (Whittaker et al., 2022). A larger sample of rocky planet targets observed in secondary eclipse will also help to answer population-level questions of whether rocky planets around M dwarfs can really host atmospheres and identify the ideal parameter space for establishing regimes in which they can. We thank Tom Greene for useful discussion and for providing additional context on his JWST observations. JI and EMRK acknowledge funding from the Alfred P. Sloan Foundation under grant G202114194. We thank the anonymous referee for useful feedback that helped to improve this manuscript.
2306.09415
The most luminous AGN do not produce the majority of the detected stellar-mass black hole binary mergers in the local Universe
Despite the increasing number of Gravitational Wave (GW) detections, the astrophysical origin of Binary Black Hole (BBH) mergers remains elusive. A promising formation channel for BBHs is inside accretion discs around supermassive black holes, that power Active Galactic Nuclei (AGN). In this paper, we test for the first time the spatial correlation between observed GW events and AGN. To this end, we assemble all sky catalogues with 1,412 (242) AGN with a bolometric luminosity greater than $10^{45.5} {\rm erg\ s}^{-1}$ ($10^{46}\,{\rm erg\,s}^{-1}$) with spectroscopic redshift of $z\leq0.3$ from the Milliquas catalogue, version 7.7b. These AGN are cross-matched with localisation volumes of BBH mergers observed in the same redshift range by the LIGO and Virgo interferometers during their first three observing runs. We find that the fraction of the detected mergers originated in AGN brighter than $10^{45.5}\,{\rm erg\,s}^{-1}$ ($10^{46}\,{\rm erg\,s}^{-1}$) cannot be higher than $0.49$ ($0.17$) at a 95 per cent credibility level. Our upper limits imply a limited BBH merger production efficiency of the brightest AGN, while most or all GW events may still come from lower luminosity ones. Alternatively, the AGN formation path for merging stellar-mass BBHs may be actually overall subdominant in the local Universe. To our knowledge, ours are the first observational constraints on the fractional contribution of the AGN channel to the observed BBH mergers.
NiccolΓ² Veronesi, Elena Maria Rossi, Sjoert van Velzen
2023-06-15T18:00:32Z
http://arxiv.org/abs/2306.09415v2
The most luminous AGN do not produce the majority of the detected stellar-mass black hole binary mergers in the local Universe ###### Abstract Despite the increasing number of Gravitational Wave (GW) detections, the astrophysical origin of Binary Black Hole (BBH) mergers remains elusive. A promising formation channel for BBHs is inside accretion discs around supermassive black holes, that power Active Galactic Nuclei (AGN). In this paper, we test for the first time the spatial correlation between observed GW events and AGN. To this end, we assemble all sky catalogues with 1,412 (242) AGN with a bolometric luminosity greater than \(10^{45.5}\)erg s\({}^{-1}\) (\(10^{46}\)erg s\({}^{-1}\)) with spectroscopic redshift of \(z\leq 0.3\) from the Milliquas catalogue, version 7.7b. These AGN are cross-matched with localisation volumes of BBH mergers observed in the same redshift range by the LIGO and Virgo interferometers during their third observing run. We find that the fraction of the detected mergers originated in AGN brighter than \(10^{45.5}\)erg s\({}^{-1}\) (\(10^{46}\)erg s\({}^{-1}\)) cannot be higher than 0.74 (0.33) at a 95 per cent credibility level. Our upper limits imply a limited BBH merger production efficiency of the brightest AGN, while most or all GW events may still come from lower luminosity ones. Alternatively, the AGN formation path for merging stellar-mass BBHs may be actually overall subdominant in the local Universe. To our knowledge, ours are the first observational constraints on the fractional contribution of the AGN channel to the observed BBH mergers. keywords: Gravitational Waves - Active Galactic Nuclei - localisation ## 1 Introduction The astrophysical mass spectrum of stellar-mass Black Holes (sMBHs) inferred from the results of the first three observing runs of Advanced LIGO (LIGO Scientific Collaboration et al., 2015) and Advanced Virgo (Acernese et al., 2015) extends also to masses between 50 M\({}_{\odot}\) and 120 M\({}_{\odot}\)(The LIGO Scientific Collaboration et al., 2021). This evidence challenges our current understanding of stellar evolution, since no remnant with a mass in that range is expected to be the final stage of the life of a single star (Heger and Woosley, 2002; Belczynski et al., 2016). Pair Instability Supernovae are expected to happen in that mass range, and are expected to leave no compact remnant, thus opening a gap in the black hole mass spectrum (Woosley, 2019; Mapelli, 2021). The detection of mergers of sMBHs within this mass gap can be interpreted as an evidence of binary formation channels beyond the "isolated stellar binary" channel (however, see also de Mink and Mandel, 2016; Costa et al., 2021; Tanikawa et al., 2021). Other channels for Black Hole Binary (BBH) formation and merger involve dense dynamical environments, such as Globular Clusters (Rodriguez et al., 2016; Rodriguez and Loeb, 2018; Rodriguez et al., 2021), Nuclear Star Clusters (Antonini et al., 2019; Krits et al., 2022), and accretion discs around Supermassive Black Holes (SMBHs) in Active Galactic Nuclei (AGN) (Stone et al., 2017; Fabj et al., 2020; Ford and McKernan, 2022; McKernan et al., 2022; Li and Lai, 2022, 2022; Rowan et al., 2022). The formation of binaries with massive components in all these dense environments is facilitated by dynamical interactions such as exchanges in the case of three-body encounters. 
In the interaction between a binary system and a third object, the least massive of the three objects is expected to be scattered away from the binary system, that is tightened by this process (Hills and Fullerton, 1980; Ziosi et al., 2014). In case the gravitational potential of the host environment is deep enough to retain the remnant of a BBH merger despite the post-merger recoil kick, this can take part in a subsequent merger (Gerosa and Berti, 2019). Binaries that merge in this so-called hierarchical scenario (Yang et al., 2019; Barrera and Bartos, 2022) are expected to show specific signatures in the mass and spin distributions of their components. Examples of these features are a low mass ratio, and isotropically oriented spins (Gerosa and Berti, 2017; Gerosa and Fishbach, 2021; Tagawa et al., 2021; Wang et al., 2021; Fishbach et al., 2022; Li et al., 2022; Mahapatra et al., 2022). What differentiates AGN from other dynamically dense potential hosts of BBH mergers, is the presence of a gaseous disc. Accretion discs around SMBHs are expected to contain compact objects (McKernan et al., 2012; Tagawa et al., 2020). The dynamical evolution of these objects is heavily influenced by the interaction with the gas of the disc. This interaction is expected to make the sMBHs migrate towards the innermost region of the AGN disk on timescales inversely proportional to their mass (McKernan et al., 2011; DeLaurentis et al., 2022). This migration should end when the net torque exerted by the gas on the migrating compact object is null. This is expected to happen at specific distances from the central SMBH, the so-called "migration traps" (Bellovary et al., 2016; Peng and Chen, 2021). Due to the large localisation volumes associated to GW detections, the fractional contribution to the total merger rate of each individual binary formation channel is still unknown. The direct detection of an ElectroMagnetic (EM) counterpart of a BBH merger would be optimal to identify its host galaxy. The identification of candidate EM counterparts of mergers from AGN discs have been claimed (Graham et al., 2020, 2023, however, see also Ashton et al., 2021), and several works have investigated what should be the features of such counterparts (Palenzuela et al., 2010; Loeb, 2016; Bartos, 2016; McKernan et al., 2019; Petrov et al., 2022). However, the current observational evidence based on EM counterparts is still not sufficient to constrain what fraction of the detected BBH mergers come from a specific channel. Besides the search for EM counterparts, another method to investigate the contribution of a formation channel to the total detected merger rate is to infer what should be for that specific formation path, the distribution of the parameters of the merging binary, and then compare these prediction to the data obtained by the LIGO and Virgo interferometers. This approach has been utilised in several previous works focused on the eccentricity of the binary (Romero-Shaw et al., 2021, 2022; Samsing et al., 2022), the components' mass distribution (Gayathri et al., 2021, 2023; Belczynski et al., 2022; Stevenson and Clarke, 2022), its redshift dependence (Karathanasis et al., 2022), and its relation with the distribution of the magnitude and the orientation of the spins (McKerran et al., 2020; Qin et al., 2022; Wang et al., 2022; Zevin and Bavera, 2022). These works agree in saying that BBHs that merge in a dynamical environment tend to have higher masses involved, and more isotropically orientated spins. 
However, there is still no general agreement on the relative contributions to the total merger rate of all the possible formation channels. Finally, a promising possibility to directly infer the fraction of the observed GW events that happened in a specific host environment is trough the investigation of the spatial correlation between GW sky maps and the positions of such potential hosts. The statistical power of this approach has been investigated using simulated data, finding that it is possible to put constraints on the fraction of observed GW events that happened in an AGN, (\(f_{\rm AGN}\)), especially when rare (i.e. very luminous) potential sources are taken into account (Bartos et al., 2017; Corley et al., 2019; Veronesi et al., 2022). These previous works used as main inputs the size of the 90 per cent Credibility Level localisation volume (further referred to as V90) of the each GW observation and the number of AGN within it. In this work we put for the first time upper limits on \(f_{\rm AGN}\), based on the observed GW-AGN spatial correlation in the case of high-luminosity AGN. These upper limits are obtained through the application of a statistical method that uses for the first time as input the exact position of every AGN. The likelihood function \(\mathcal{L}\) (\(f_{\rm AGN}\)) described in Section 3.1 takes also into account the incompleteness that characterizes the catalogue of potential hosts. We implement a likelihood maximization algorithm and check its performance on 3D Gaussian probability distributions as emulators of GW sky maps, and a mock catalogue of AGN. We then apply this method to check the spatial correlation between the objects of three all-sky catalogues of observed AGN and the 13 closest BBH mergers detected during third observing run of the LIGO and Virgo interferometers (O3). Every AGN catalogue is characterized by a different lower cut in bolometric luminosity. This paper is organized as follows: in Section 2 we describe the properties of the observed all-sky AGN catalogues and of the detected GW events our statistical method is applied on. In the same section, we report how we generate the AGN mock catalogue and the Gaussian probability distributions necessary to test the likelihood performance. In Section 3 we describe in detail the analytical form of the likelihood function, how we test it on the mock AGN catalogue, and how we apply it to real data. In Section 4 we present the results of this application and the constraints on \(f_{\rm AGN}\) it produces. Finally, in Section 5 we draw conclusions from these results and discuss how they can be improved and generalised in the near future. We adopt the cosmological parameters of the Cosmic Microwave Background observations by Planck (Planck Collaboration et al., 2016): \(H_{0}=(67.8\pm 0.9)\) km s\({}^{-1}\)Mpc\({}^{-1}\), \(\Omega_{\rm m}=0.308\pm 0.012\), \(n_{\rm s}=0.968\pm 0.006\). ## 2 Datasets In this section we first describe the selection criteria that we adopt to build the three all-sky catalogues of observed AGN, and we present the 13 detected GW events used when applying our statistical method to real data. We then describe the creation of the AGN mock catalogue and of the 3D Gaussian probability distributions used to validate our statistical method. 
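The analysis below repeatedly converts spectroscopic redshifts into comoving distances and volumes using the Planck parameters just quoted. A minimal Astropy setup of that cosmology is sketched here; assuming a flat \(\Lambda\)CDM model is my own choice, since the text only quotes \(H_{0}\), \(\Omega_{\rm m}\), and \(n_{\rm s}\).

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Planck parameters quoted in the text; flatness is an extra assumption of this sketch.
cosmo = FlatLambdaCDM(H0=67.8 * u.km / u.s / u.Mpc, Om0=0.308)

z_max = 0.3  # redshift cut of the AGN catalogues
print(f"comoving distance to z = {z_max}: {cosmo.comoving_distance(z_max):.0f}")
print(f"comoving volume   to z = {z_max}: {cosmo.comoving_volume(z_max):.3e}")
```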
### AGN catalogues In order to construct our AGN catalogues, we start from the unWISE catalogue (Schlafly et al., 2019), which is based on the images from the WISE survey (Wright et al., 2010), and cross-match it with version 7.7b of the Milliquas catalogue (Flesch, 2021). This Milliquas catalogue puts together all quasars from publications until October 2022, and contains a total of 2,970,254 objects. The cross-match is performed to associate a spectroscopic redshift measurement to as many unWISE objects as possible. We then select the objects with redshift estimates of \(z\leq 0.3\). The reason in favour of restricting our analysis to \(z\leq 0.3\) is that the constraining power of our approach scales linearly with the completeness of the AGN catalogue that is used, and this redshift cut allows us to have an AGN completeness \(\gtrsim 0.5\). We then use the flux in the W1 band of the WISE survey to calculate the bolometric luminosity of every object and select only the ones brighter than the luminosity threshold that characterizes each of the three catalogues we create. These thresholds are \(10^{45}\,\rm erg\,s^{-1}\), \(10^{45.5}\,\rm erg\,s^{-1}\), and \(10^{46}\,\rm erg\,s^{-1}\). Finally, we perform a color selection. We select objects with \(\rm mag(W1)-mag(W2)\geq 0.8\), where \(\rm mag(W1)\) is the magnitude in the W1 band and \(\rm mag(W2)\) is the magnitude in the W2 band. This is done to select only objects that have a high chance of being AGN, based on their features related to thermal emission from hot dust, filtering out any contribution from the host galaxy to the AGN luminosity (Assef et al., 2013). In the lowest luminosity threshold catalogue, this removes \(\approx 62\) per cent of all AGN, while this percentage drops to \(\approx 5\) per cent and \(\approx 2\) per cent for the \(10^{45.5}\,\rm erg\,s^{-1}\) and \(10^{46}\,\rm erg\,s^{-1}\) threshold catalogues, respectively. We are left with three catalogues containing 5,791, 1,412, and 242 AGN for the bolometric luminosity thresholds of \(10^{45}\,\rm erg\,s^{-1}\), \(10^{45.5}\,\rm erg\,s^{-1}\), and \(10^{46}\,\rm erg\,s^{-1}\), respectively. These three catalogues will be further referred to as CAT450, CAT455, and CAT460. Even if the AGN in the catalogues are not uniformly distributed in the sky (see Figure 1), they show no significant redshift-dependent incompleteness (see Figure 2). A simple three-regions partition of the catalogues is used to identify areas with similar 2D sky-projected number density of AGN. For CAT455 we have that: * 809 objects are within the footprint of the seventeenth data release of the Sloan Digital Sky Survey (SDSS) (York et al., 2000; Blanton et al., 2017; Abdurrouf et al., 2022) (which corresponds approximately to 35.28 per cent of the sky). This is the most crowded region of the three, with a 2D number density of \(\approx 0.0556\) objects per square degree; \(\bullet\) 41 objects are characterized by a galactic latitude \(b\) with an absolute value smaller than \(10^{\circ}\) (approximately 17.36 per cent of the sky). In this region the Galactic plane of the Milky Way prevents observations from detecting most of the extra-galactic content, and is therefore the least crowded region of our catalogue, with 2D number density of \(\approx 0.0057\) objects per square degree; \(\bullet\) The remaining 562 objects populate the remaining 47.36 per cent of the sky. The average 2D number density in this region is \(\approx 0.0288\) objects per square degree. 
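A compact sketch of the selection chain just described — spectroscopic-redshift cut, W1-based bolometric-luminosity cut, and W1\(-\)W2 colour cut — is given below. The W1 Vega zero point and effective frequency are standard WISE values assumed here (they are not quoted in the paper), the bolometric correction of \(\approx\)10 is the one mentioned with Table 1, and no K-correction is applied; with these assumptions the Table 1 entry PHL 2525 (W1 = 11.04, \(z=0.200\)) returns \(L_{W1}\approx 1.3\cdot 10^{45}\) erg s\({}^{-1}\), close to the listed value.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.8, Om0=0.308)

# Assumed standard WISE W1 quantities (not quoted in the paper).
F0_W1_JY = 309.54     # Vega zero-point flux density of W1 [Jy]
NU_W1 = 3e8 / 3.4e-6  # effective frequency of the W1 band [Hz]
BOL_CORR = 10.0       # bolometric correction for W1 (Hopkins et al. 2007, as in Table 1)

def bolometric_luminosity(w1_mag, z):
    """Approximate bolometric luminosity [erg/s] from the W1 magnitude and redshift."""
    f_nu = F0_W1_JY * 10 ** (-0.4 * w1_mag) * 1e-26   # flux density [W m^-2 Hz^-1]
    d_l = cosmo.luminosity_distance(z).to(u.m).value  # luminosity distance [m]
    l_w1 = 4.0 * np.pi * d_l**2 * NU_W1 * f_nu * 1e7  # nu*L_nu in the W1 band [erg/s]
    return BOL_CORR * l_w1                            # no K-correction in this sketch

def selected(z, w1_mag, w2_mag, l_threshold=10**45.5):
    """CAT455-like selection: redshift, luminosity, and colour cuts."""
    return (z <= 0.3
            and (w1_mag - w2_mag) >= 0.8
            and bolometric_luminosity(w1_mag, z) >= l_threshold)

# Check against PHL 2525 from Table 1 (W1 = 11.04, z = 0.200, L_W1 ~ 1.29e45 erg/s).
print(f"L_W1 ~ {bolometric_luminosity(11.04, 0.200) / BOL_CORR:.2e} erg/s")
```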
Because the AGN we consider and their host galaxies are relatively bright, many of them fall within the flux limit of the SDSS spectroscopic galaxy sample (Strauss et al., 2002), which has a completeness close to 100 per cent. In addition, the SDSS spectroscopic target selection (Richards et al., 2002) is tuned to target AGN or quasars below this flux limit. For this reason, the completeness of our catalogues in the SDSS footprint can be assumed to be close to 100 per cent. We calculate the incompleteness of the other two regions from the ratio of the projected 2D densities. Small deviations from unity for the completeness in the SDSS footprint are not expected to significantly change our final results. The same partition of the sky has been used to estimate the completeness of CAT450 and CAT460. The estimated completenesses, weighted over the area occupied by each region, are \(\approx 48\) per cent, \(\approx 61\) per cent, and \(\approx 87\) per cent for CAT450, CAT455, and CAT460, respectively. We calculate the number densities of the AGN catalogues we create, correcting for their completeness. We obtain a completeness-corrected number density of \(1.53\cdot 10^{-6}\)Mpc\({}^{-3}\), \(2.93\cdot 10^{-7}\)Mpc\({}^{-3}\), and \(3.54\cdot 10^{-8}\)Mpc\({}^{-3}\) for CAT450, CAT455, and CAT460, respectively. To illustrate the content of our catalogues, we show in Table 1 as an example the first ten entries of CAT450.

Figure 1: Positions of the AGN in CAT450 (blue dots), CAT455 (red dots), and CAT460 (green dots) described in Section 2.1, and 90 per cent CL localisation surfaces of the 13 BBH mergers detected during O3 with an expectation value of the luminosity distance corresponding to \(z\leq 0.2\) (coloured regions). Regions with different colours correspond to different events. The sky map is visualized in equatorial coordinates.

Figure 2: Number of AGN in our catalogues as a function of comoving distance. The black, blue, and red histograms refer to CAT450, CAT455, and CAT460, respectively. The black solid line, the blue dashed one, and the red dotted one show the best-fit functions we obtain when fitting the number of objects per bin using the following form: \(N_{AGN}\propto D_{\rm{com}}^{2}\). These fits show no evidence of a significant redshift-dependent incompleteness of the catalogues.

### Detected Gravitational Wave events

When applying our statistical method to real data, we exploit the localisation volumes of the 13 events detected during O3 with estimated luminosity distances corresponding to \(z\leq 0.2\) that have a false alarm rate below 1 per year, and that have therefore been used in The LIGO Scientific Collaboration et al. (2021) to infer the parameters of the sMBH astrophysical population. These sky maps have been downloaded from the Gravitational Wave Open Science Center (Abbott et al., 2021). Table 2 lists these events. Among the parameters we report for each event, three are intrinsic properties of the binary. These are the masses of the two components of the binary, and the effective inspiral spin parameter. The latter is a weighted average of the projections of the two components' spins on the direction of the angular momentum of the binary (for a more detailed description of this parameter, see Ajith et al., 2011; The LIGO Scientific Collaboration et al., 2021, 2021). The other parameters reported for each detected GW event in Table 2 are the redshift, the SNR, V90, and the number of AGN from our all-sky observed catalogues that are inside V90.
The 90 per cent CL sky regions of the same BBH mergers that are listed in Table 2 are displayed in Figure 1. ### AGN mock catalogue We test our statistical method explained below on an AGN mock catalogue characterized by a non-uniform incompleteness. In order to create it, we first have to construct a _complete_ parent mock catalogue, where we assume that all AGN are accounted for. These are uniformly distributed in comoving volume between \(z=0.0\) and \(z=0.4\) with a number density of \(n_{\rm AGN}=10^{-7}\rm Mpc^{-3}\). The non-uniform _incomplete_ catalogue is a sub-sample of this complete one. Non-uniform incompleteness is a feature present also in the observed AGN catalogues exploited in this paper (see section 2.1). The incomplete mock catalogue is created by dividing the complete one in three different regions, and sub-sampling each of them in a different way as follows: * The first region has galactic coordinate \(b\) bigger than \(30^{\circ}\). This corresponds to 25 per cent of the sky. In this first region no sub-sampling has been performed, hence its completeness is 100 per cent; * The second region has \(b\) between \(-30^{\circ}\) and \(30^{\circ}\). This corresponds to 50 per cent of the sky. In this second region, we remove 30 per cent of the objects from the parent complete catalogue, hence the completeness in this region is 70 per cent. * The third region has Galactic coordinate \(b\) smaller than \(-30^{\circ}\). This corresponds to the remaining 25 per cent of the sky. Here we removed the 70 per cent of the objects from the complete catalogue, so the completeness of this region is 30 per cent. The incomplete mock catalogue has a total of 1,160 objects, and a weighted average completeness of 67.5 per cent. ### Simulated Gravitational Wave sky maps The sky maps of our simulated GW events are described for simplicity as 3D Gaussian probability distributions. These distributions are created such that the _size_ of their 90 per cent Credibility Level volume is the same as the size of an actual V90 simulated with the same source parameters, assuming the O3 configuration of the LIGO and Virgo detectors. For these simulated events we assume a Black Hole mass distribution that follows the Power Law + Peak model described in The LIGO Scientific Collaboration et al. (2021). The spins of the components of the binaries are assumed to be aligned with the binary angular momentum, with a magnitude uniformly distributed between 0 and 1. The inclination \(\iota\) of the binaries is sampled from a uniform distribution in \(\arccos\iota\). Once we have sampled the distributions of all the parameters of the merging BBH (masses and spins of the components, position of the merger and inclination of the binary), we model its GW signal with an IMRPhenomD waveform type (Husa et al., 2016; Khan et al., 2016). We then simulate the detection of this signal with a network composed of three interferometers: LIGO Hanford, LIGO Livingston, and Virgo. The sensitivity curves use we use for these three detectors are the ones correspondent to the following IDs: ALIGOMidLowSensitivityP1200087 for the LIGO interferometers, and AdVmidLowSensitivityP1200087 for Virgo. A duty cycle of 0.78 is used for all three detectors. We keep a Signal to Noise Ratio (SNR) detection threshold of 8 for the network, and require \(\rm SNR\geq 4\) for at least two of the three detectors. We finally measure the size of V90 for every simulated detection using the Bayesstar algorithm (Singer & Price, 2016). 
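The next paragraphs replace each simulated localisation volume with an isotropic 3D Gaussian whose 90 per cent credible region is a sphere of radius R90. For an isotropic Gaussian the squared distance from the centre, in units of the per-axis standard deviation, follows a \(\chi^{2}\) distribution with three degrees of freedom, which gives the conversion sketched below; the example V90 value is a placeholder of the right order of magnitude.

```python
import numpy as np
from scipy.stats import chi2

def sigma_from_v90(v90_mpc3, credibility=0.9):
    """Per-axis standard deviation of an isotropic 3D Gaussian whose `credibility`
    contour is a sphere with the same volume as V90."""
    r90 = (3.0 * v90_mpc3 / (4.0 * np.pi)) ** (1.0 / 3.0)  # radius of the equivalent sphere
    return r90 / np.sqrt(chi2.ppf(credibility, df=3))      # r/sigma follows a chi(3) law

v90 = 4e6  # placeholder V90 [Mpc^3], order of magnitude of the best-localised events
print(f"R90 ~ {(3*v90/(4*np.pi))**(1/3):.0f} Mpc, sigma ~ {sigma_from_v90(v90):.0f} Mpc")
```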
To each simulated detection we therefore associate a value of V90. We call R90 the radius of a sphere of volume V90. The 3D spherically symmetric Gaussian distributions we use as mock GW sky maps are combinations of three 1D Gaussian distributions with equal standard deviation. For every value of R90, we calculate the standard deviation each of the 1D distributions must have in order for the 90 per cent credibility contour of the 3D Gaussian distribution to be a spherical surface of radius R90. Knowing the exact position of each GW event we simulate, we can then sample the coordinates of the centre of the correspondent mock sky map from a Gaussian distribution centered on it. The \begin{table} \begin{tabular}{l l l l l l l l l} Name & Citation for Name & unWISE ID & R.A. & Dec. & \(z\) & Citation for \(z\) & W1 mag & \(Lw_{1}\) \\ & & [deg] & [deg] & & & & & [erg s\({}^{-1}\)] \\ \hline UVOSJ0000001.5-200427.7 & Monroe et al. (2016) & 0000197.0005716 & 0.00065 & \(-20.07433\) & 0.291 & Monroe et al. (2016) & 13.65 & \(2.72\cdot 10^{44}\) \\ SDSS J000005.49+310527.6 & Ahumada et al. (2020) & 000003180001234 & 0.02290 & 31.019102 & 0.286 & Ahumada et al. (2020) & 14.20 & \(1.58\cdot 10^{44}\) \\ PHL 2525 & Lanatomag et al. (2000) & 000001220001902 & 0.10172 & \(-12.76238\) & 0.200 & Lamongong et al. (2000) & 11.04 & \(1.29\cdot 10^{45}\) \\ 2MASX 0000042008-0541012 & Masci et al. (2010) & 000002016015237 & 0.16774 & \(-5.68361\) & 0.094 & Masci et al. (2010) & 11.33 & \(1.90\cdot 10^{44}\) \\ RXS J00009+1723 & Wei et al. (1999) & 0000016660024250 & 0.23319 & 17.39413 & 0.215 & Wei et al. (1999) & 12.93 & \(2.64\cdot 10^{44}\) \\ SDSS J000128-102326.9 & Lyka et al. (2020) & 00001070041745 & 0.25911 & \(-10.30978\) & 0.294 & Lyka et al. (2020) & 14.75 & \(1.01\cdot 10^{44}\) \\ RX J00013-0728 & Tesch \& Engels (2000) & 00000750003033 & 0.23544 & \(-7.47432\) & 0.270 & Tesch \& Engels (2008) & 14.06 & \(1.57\cdot 10^{44}\) \\ PGC 929358 & Paturel et al. (2000) & 0000137.37000468 & 0.33219 & \(-14.073010\) & 0.087 & Mauch \& Sadler (2007) & 11.65 & \(1.21\cdot 10^{44}\) \\ PGC 1698547 & Paturel et al. (2003) & 000002420009501 & 0.38474 & \(24.04179\) & 0.104 & Ahumada et al. (2020) & 11.72 & \(1.65\cdot 10^{44}\) \\ RX J00015+0529 & Tesch \& Engels (2000) & 0000060003070 & 0.38896 & \(5.48926\) & 0.250 & Ahumada et al. (2020) & 12.67 & \(4.71\cdot 10^{44}\) \\ \end{tabular} \end{table} Table 1: First ten objects from our publicly available catalogue of AGN with a bolometric luminosity higher than \(10^{45}\,\rm erg\,s^{-1}\), in ascending order of Right Ascension. For every object we indicate the original ID from the literature, the paper that first presented it, its unWISE ID, Right Ascension, Declination, redshift, the paper that first presented it, its unWISE ID, Right Ascension, Declination, redshift, the paper that first presented that redshift estimate, the magnitude in the W1 band, and the luminosity in the same band, \(Lw_{1}\). We calculate the bolometric luminosity multiplying \(Lw_{1}\) by a bolometric correction factor, approximated to 10 for this band and in the luminosity range we consider (Hopkins et al., 2007). Out of the 5,791 objects in the catalogue, a total of 3,561 have a redshift measurement obtained from SDSS. In particular, 1,582 of these measurements are taken from Lyke et al. (2020), 1,025 from Ahumada et al. (2020), and 954 from Liu et al. (2019). 
The full catalogue will be made available on the journal website and at [https://github.com/niccolovernesi/AGNallskycat_Veronesi23.git](https://github.com/niccolovernesi/AGNallskycat_Veronesi23.git). standard deviation of such Gaussian is calculated from the value of R90 associated to the simulated BBH merger. The sample of mock sky maps for the testing of our statistical method is therefore represented by 3D Gaussian distributions characterized by the positions of their centres and the radii of their 90 per cent credibility level regions (R90). ## 3 Method ### Likelihood function The statistical framework we present in this work aims to compare two scenarios. The first scenario is the one in which AGN are physically associated to BBH mergers. In this case one of the AGN present within the localisation volume of a GW event is expected to be causally connected to the event itself. In the second scenario, AGN and GW events are not causally related. In this case every object of our catalogues that lies within the localisation volume of a BBH merger represents a background source, and its presence inside V90 is due to chance. The analytical form of the likelihood function used in this work is based on the one described in Braun et al. (2008). The general expression for this function can be written as follows: \[\mathcal{L}\left(f_{\rm AGN}\right)=\prod_{i=1}^{N_{\rm GW}}\left[c \cdot 0.9\cdot f_{\rm AGN}\cdot S_{i}+\left(1-c\cdot 0.90\cdot f_{\rm AGN} \right)\mathcal{B}_{i}\right]\ \, \tag{1}\] where \(f_{\rm AGN}\) is the fraction of GW events that originate from an AGN, \(N_{\rm GW}\) is the total number of GW events, \(c\) is the average 1 completeness of the AGN catalogue, and \(\mathcal{S}_{i}\) (\(\mathcal{B}_{i}\)) is the signal (background) probability density function. The 0.9 pre-factor in front of \(f_{\rm AGN}\) is used to take into account that we expect that only the true origin of 90 per cent of the GW events will be within the correspondent V90. Footnote 1: As we mention later on, we tested that working with an average incompleteness over the whole catalogue gives indistinguishable (correct) results with respect to accounting for a position-dependent incompleteness. To draw conclusions about the detectability of a relation between AGN and the BBH mergers detected by LIGO and Virgo, previous works have used a similar Likelihood function to investigate the spatial correlation between the localisation volumes of simulated sMBH mergers and the positions of AGN in mock catalogues (Bartos et al., 2017; Corley et al., 2019; Veronesi et al., 2022). These previous studies used as main input the size of each GW event's V90 and the number of AGN within it (\(N_{\rm V90}\)). In this work, we additionally exploit the information embedded in the exact position of every AGN within the localisation volume: i.e. the value of the 3D GW localisation probability density function at the AGN position. We therefore write the signal probability density function for the \(i\)-th GW as: \[\mathcal{S}_{i}=\frac{\sum_{j=1}^{N_{\rm V90_{i}}}p_{j}}{n_{\rm AGN}\mathrm{ V90}_{i}}\ \, \tag{2}\] where \(n_{\rm AGN}\) is the average number density of AGN in the catalogue, and \(p_{j}\) is the probability density associated to the position of the \(j\)-th AGN. 
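Equation 2 translates directly into code once the sky-map probability densities \(p_{j}\) at the AGN positions have been extracted. The sketch below is that transcription; the numerical inputs are placeholders (a CAT455-like number density and arbitrary \(p_{j}\) values), not quantities taken from the paper.

```python
import numpy as np

def signal_term(p_agn, n_agn, v90):
    """Equation 2: S_i = sum_j p_j / (n_AGN * V90_i).

    p_agn : 3D sky-map probability densities at the AGN positions [Mpc^-3]
    n_agn : mean number density of the AGN catalogue [Mpc^-3]
    v90   : 90 per cent credibility localisation volume of the event [Mpc^3]
    """
    return np.sum(p_agn) / (n_agn * v90)

# Placeholder example: three AGN in a 1e7 Mpc^3 volume, CAT455-like number density.
p_j = np.array([2e-7, 6e-8, 1e-8])
print(f"S_i = {signal_term(p_j, n_agn=2.93e-7, v90=1e7):.2e} Mpc^-3")
```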
On the other hand, the probability density function associated with the scenario where AGN are background sources, accidentally present in GW localisation volumes, can be expressed with a flat probability for an AGN to be found anywhere in V90: \[\mathcal{B}_{i}=\frac{0.9}{\mathrm{V90}_{i}}\ , \tag{3}\] where the 0.9 term at the numerator guarantees that \(\mathcal{S}_{i}\) and \(\mathcal{B}_{i}\) are equally normalized. In our statistical analysis the prior on \(f_{\rm AGN}\) is assumed to be uniform between 0 and 1.

Table 2: For each of the 13 BBH mergers considered in this work, we list the component masses \(m_{1}\) and \(m_{2}\) (in M\({}_{\odot}\)), the effective inspiral spin \(\chi_{\rm eff}\), the redshift \(z\), the SNR, the 90 per cent credibility localisation volume V90 (in Mpc\({}^{3}\)), and the number of AGN from CAT450, CAT455, and CAT460 that fall inside V90.

### Test on mock data

To test the performance of the likelihood we use data coming from the cross-match between the incomplete AGN mock catalogue described in Section 2.3 and the mock GW detections described in Section 2.4. We start by fixing the total number of detected BBH mergers to \(N_{\rm GW}=50\). We then choose a value for \(f_{\rm AGN}\) and create a realization of a universe in which that fraction of the detected GW events comes from an AGN. For this reason, this chosen value of \(f_{\rm AGN}\) will be further referred to as \(f_{\rm AGN,true}\). Therefore, out of the 50 simulated detections, \(f_{\rm AGN,true}\cdot 50\) come from objects of the complete AGN mock catalogue below \(z=0.2\), while the remaining \(\left(1-f_{\rm AGN,true}\right)\cdot 50\) come from a random position in the same redshift range. The redshift cut on the potential sources of both the signal and the background events is performed to be sure that the entirety of V90 is within \(z=0.4\) for every simulated event. This is necessary in order to avoid any boundary-related underestimation of \(\mathcal{S}_{i}\) during the cross-match of these localisation volumes with the incomplete AGN mock catalogue. We cross-match the 3D Gaussian distributions representing the sky maps of the 50 GW events with the incomplete AGN mock catalogue and calculate the value of the likelihood as a function of \(f_{\rm AGN}\) using Equations 1, 2, and 3. We call \(f_{\rm AGN,estimated}\) the value of \(f_{\rm AGN}\) that maximizes the likelihood. We do this for 500 realizations and calculate the median of the distribution of \(f_{\rm AGN,estimated}\). We repeat the same process for 11 different values of \(f_{\rm AGN,true}\) between 0 and 1. Figure 3 shows our results, where the median values of \(f_{\rm AGN,estimated}\) as a function of \(f_{\rm AGN,true}\) are marked with blue circles. We fit the trend of the median values of the estimates' distributions of \(f_{\rm AGN}\) as a function of \(f_{\rm AGN,true}\) with a linear function: \[f_{\rm AGN,estimated}=a+b\cdot f_{\rm AGN,true}\enspace. \tag{4}\] The best-fit values are \(a=0.001\pm 0.007\) and \(b=0.998\pm 0.011\). These values are consistent with the intercept and the slope of the bisector of the first quadrant. This means that maximizing the likelihood described in Equations 1, 2, and 3 leads to an accurate estimate of \(f_{\rm AGN}\). Finally, we test that our results do not change if we use in Equation 1 the actual value of the catalogue completeness (\(c\)) in each localisation volume. More specifically, this individual completeness is calculated as a weighted average of the completeness of the AGN catalogue in the 3D region occupied by each V90. Our test yields indistinguishable results; therefore, for simplicity, we only present the ones computed using the average catalogue completeness.

### Application to real data

Once we have tested the accuracy of the statistical method, we apply it to real data. We cross-match the sky maps of the 13 detected BBH mergers presented in Section 2.2, and listed in Table 2, with the all-sky AGN catalogues described in Section 2.1. We then calculate \(\mathcal{L}\left(f_{\rm AGN}\right)\) using Equations 1, 2, and 3. In the case of CAT455 and CAT460 the combination of the data coming from the cross-match with the 13 GW events leads to a monotonically decreasing likelihood as a function of \(f_{\rm AGN}\). We therefore decide to evaluate upper limits on this parameter by integrating the likelihood between \(f_{\rm AGN}=0\) and \(f_{\rm AGN}=1\). Since the prior is assumed to be uniform, through this integration (and after normalizing the resulting values) we obtain a cumulative posterior distribution on \(f_{\rm AGN}\). The same process has been followed also for CAT450, even if in this case the likelihood turns out to be rather insensitive to \(f_{\rm AGN}\). Specifically, in this last case, the posterior is prior-dominated: the data do not allow us to put much tighter constraints on \(f_{\rm AGN}\) than the ones imposed by the flat prior only.
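Putting Equations 1-3 together, the sketch below evaluates the likelihood on a grid of \(f_{\rm AGN}\) values, converts it into a cumulative posterior under the flat prior, and reads off a 95 per cent upper limit, mirroring the procedure just described for CAT455 and CAT460. All per-event signal and background terms are randomly generated placeholders, so the printed number is illustrative only.

```python
import numpy as np

def log_likelihood(f_agn, S, B, completeness):
    """Equation 1: product over events of [c*0.9*f*S_i + (1 - c*0.9*f)*B_i], in log form."""
    term = completeness * 0.9 * f_agn
    return np.sum(np.log(term * S + (1.0 - term) * B))

def f_agn_upper_limit(S, B, completeness, credibility=0.95, ngrid=2001):
    """Cumulative posterior of f_AGN under a flat prior on [0, 1] and its upper limit."""
    f_grid = np.linspace(0.0, 1.0, ngrid)
    log_l = np.array([log_likelihood(f, S, B, completeness) for f in f_grid])
    posterior = np.exp(log_l - log_l.max())  # unnormalised posterior (flat prior)
    cdf = np.cumsum(posterior)
    cdf /= cdf[-1]
    return np.interp(credibility, cdf, f_grid)

# Placeholder background and signal terms for 13 events (both in Mpc^-3).
rng = np.random.default_rng(1)
B = 0.9 / rng.uniform(4e6, 1e9, size=13)  # Eq. 3, with V90 between ~4e6 and 1e9 Mpc^3
S = B * rng.uniform(0.1, 1.5, size=13)    # signal terms of comparable magnitude
print(f"95 per cent upper limit on f_AGN: {f_agn_upper_limit(S, B, completeness=0.87):.2f}")
```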
This is caused by the high number of objects contained in the AGN catalogue (Veronesi et al., 2022), combined with the non-negligible level of incompleteness that characterizes the same catalogue. We therefore decide not to repeat the analysis with an AGN catalogue characterized by a lower luminosity threshold. Such a catalogue would likely also show redshift-dependent completeness, which will have to be taken into account in future works aimed to explore the relation between BBH mergers and lower-luminosities AGN. A meaningful exploitation of AGN catalogues denser than the ones used in this work will be possible only when we will have data from more and/or better localized BBH mergers. ## 4 Results The cumulative posterior distributions over \(f_{\rm AGN}\) we obtain through the application of our statistical method to observed data are shown in Figure 4. The black solid line shows the posterior distribution in the case of the cross-math of the observed GW events with CAT460, while the dashed (dotted) line shows it in the case of a CAT455 (CAT450). On the vertical axis there is the probability for the true value of \(f_{\rm AGN}\) being smaller than the correspondent value on the horizontal axis. As an example, the solid blue line shows that the upper limit of the 95 per cent credibility interval is \(f_{\rm AGN}=0.33\) in the case of the cross-match with CAT460. Figure 5 shows a region of the two-dimensional parameter space that has been investigated in this work. On the \(y\)-axis one can read the thresholds in bolometric luminosities of AGN on the left-hand side, and the correspondent values of number densities on the right-hand side. The three number densities correspondent to the three luminosity thresholds we use to create the CAT450, CAT455, and CAT460 have been calculated taking into account their estimated completeness. For each of these completeness-corrected number densities we calculate their ratio with respect to the number density obtained integrating in the same luminosity range the best-fit AGN luminosity function at \(z=0.1\) presented in Hopkins et al. (2007). The mean of this ratios, together with the number density estimated from Hopkins et al. (2007) for a bolometric luminosity threshold of \(10^{44.5}\) erg s\({}^{-1}\), has been used to calculate the completeness-corrected number density for such a luminosity cut. All the possible values of \(f_{\rm AGN}\) are on the \(x\)-axis. The maroon (blue) region is the part of the parameter space that we reject with a 90 (95) per cent credibility level. In the LIGO Scientific Collaboration et al. (2021) the total BBH merger rate per comoving volume has been parametrized as a power law as a function of redshift: \(\mathcal{R}(z)\propto(1+z)^{\kappa}\). The value of the spectral index has been estimated to be \(\kappa=2.7^{+1.8}_{-1.9}\), and the best measurement Figure 3: Estimated value of the fraction of GW events coming from AGN (\(f_{\rm AGN,estimated}\)) as a function of the true value used to create mock realizations (\(f_{\rm AGN,true}\)). The blue circular markers correspond to the median of \(f_{\rm AGN,estimated}\) calculated over 500 realisations for a given \(f_{\rm AGN,true}\). of the merger rate \(\mathcal{R}\) occurs at \(z\approx 0.2\): \(\mathcal{R}(z=0.2)\leq 41\)\(\mathrm{Gpc^{-3}yr^{-1}}\) at 90 per cent credibility. 
Combining this result with the upper limit of \(f_{\mathrm{AGN}}\leq 0.74\) (\(f_{\mathrm{AGN}}\leq 0.33\)) obtained in this work, we find that the 95 per cent credibility upper limit on the rate of BBH merging in AGN brighter than \(10^{45.5}\) erg s\({}^{-1}\) (\(10^{46}\) erg s\({}^{-1}\)) is \(\mathcal{R}_{\mathrm{AGN}}(z=0.2)=31\)\(\mathrm{Gpc^{-3}yr^{-1}}\) (\(\mathcal{R}_{\mathrm{AGN}}(z=0.2)=14\)\(\mathrm{Gpc^{-3}yr^{-1}}\)) at \(z\approx 0.2\). It is important to remember that these results have been obtained assuming 100 per cent completeness in the SDSS footprint in our catalogues of luminous, redshift selected AGN. However, small variations over this assumption are not expected to produce qualitatively different results with respect to the ones presented in this section, since they scale linearly with the AGN catalogue completeness (see Equation 1). ## 5 Discussion and Conclusion We present a likelihood-based method to constrain the fractional contribution of the AGN channel to the observed merger rate of BBHs. In particular we compare the scenario in which AGN are physically associated to BBH mergers to the one in which the presence of AGN in localisation volumes of GW events is only due by chance. We use as input data the size of each GW localisation volume and the exact position of all the AGN that are in it. By maximizing the likelihood we obtain an estimate over the fraction of GW events that originated in an AGN. We first test this method on a mock AGN catalogue characterized by a non-uniform completeness. Figure 3 shows the results of this test. We fit the trend of the estimated value for \(f_{\mathrm{AGN}}\) as a function of the true one with a linear function. The best fit values of the intercept and the angular coefficient for such a fit are compatible with 0 and 1, respectively. This compatibility with the bisector of the first quadrant indicate that the likelihood we present in this paper is able to produce correct results when applied to mock data. We then apply the same statistical analysis to observed data. We use the sky maps of the 13 BBH mergers detected in O3 that are associated to an estimate of the luminosity distance correspondent to \(z\leq 0.2\). We cross-match these sky maps with three all-sky catalogues of AGN we create starting from cross-matching the unWISE catalogue (Schlafly et al., 2019) with the Milliquas one (Flesch, 2021). We select only the objects with a spectroscopic measurement of redshift correspondent to \(z\leq 0.3\) and with a bolometric luminosity higher than \(10^{45}\) erg s\({}^{-1}\), \(10^{45.5}\) erg s\({}^{-1}\), and \(10^{46}\) erg s\({}^{-1}\). We calculate the posterior cumulative distribution on \(f_{\mathrm{AGN}}\) and conclude that in the case of the two highest luminosity thresholds we can put upper limits on this parameter that are tighter with respect to the ones one can obtain from the sole assumption of a uniform prior between 0 and 1. In the case of the cross-match with the AGN catalogue characterized by the highest (intermediate) luminosity threshold we find that \(f_{\mathrm{AGN}}=0.33\) (\(f_{\mathrm{AGN}}=0.74\)) is the upper limit of the 95 per cent credibility interval. Figure 4 shows the entire cumulative posterior distributions, while Figure 5 shows more explicitly which parts of the two-dimensional AGN luminosity-\(f_{\mathrm{AGN}}\) parameter space are rejected with a 90 and a 95 per cent credibility. 
Previous works used only simulated GW data and mock AGN catalogues to draw conclusions about the possibility of exploring the spatial correlation between the two. Instead, we present the first constraints on \(f_{\mathrm{AGN}}\) based only on observational data. Moreover, in the previous analyses the number of potential hosts within the V90 of every GW event was used as the main source of information, together with the size of V90. As mentioned above, the likelihood function we present in this work also takes into account for the first time the exact position of every AGN within V90 and the overall completeness of the AGN catalogue. One way for generalizing the results presented in this paper is the creation of a more complete all-sky AGN catalogue. The introduction of objects with only a photometric measurement of the redshift is a possible method of doing that. This would increase the number density of the catalogue, but will also increase the probability of considering objects that have been erroneously identified as AGN. This confidence on the classification of each object will have to be taken into account in the expression of the likelihood function. The results concerning the posterior distributions shown in Figure 4 are relative to the fraction of BBH mergers that have happened in an AGN with a bolometric luminosity higher than the three thresholds we have considered. We perform this luminosity cuts in order to be sure to have a good level of completeness in our observed AGN catalogues. Future works will investigate the correlation between GW events and AGN in a broader range of luminosities. Such an investigation will have to take into consideration the fact that low values of completeness and its dependence on redshift lower the statistical power of the method, increasing the uncertainty on the predictions. The analysis described in this paper is restricted to BBH mergers whose host environment is expected to be at \(z\leq 0.2\). This selection has been done because a higher level of completeness for catalogues of observed AGN can be reached if we restrict our analysis to the local Universe. Future works might explore the GW-AGN correlation on a wider redshift range. The effectiveness of their results will be Figure 4: Black solid line: Cumulative posterior distribution for the fraction of detected GWs originated in an AGN (\(f_{\mathrm{AGN}}\)) with a bolometric luminosity higher than \(10^{46}\) erg s\({}^{-1}\). Every value on the vertical axis corresponds to the probability associated to the true value of \(f_{\mathrm{AGN}}\) being smaller than the correspondent value on the horizontal axis. The dashed (dotted) line shows the posterior distribution obtained using a luminosity threshold of \(10^{45.5}\) erg s\({}^{-1}\) (\(10^{45}\) erg s\({}^{-1}\)). The muon lines indicate that the upper limit of the 90 per cent credibility interval corresponds to \(f_{\mathrm{AGN}}=0.27\) for the \(10^{46}\) erg s\({}^{-1}\) luminosity cut, to \(f_{\mathrm{AGN}}=0.62\) for the \(10^{45.5}\) erg s\({}^{-1}\) luminosity cut, and to \(f_{\mathrm{AGN}}=0.89\) for the \(10^{45.5}\) erg s\({}^{-1}\) luminosity cut. 
The effectiveness of their results will be increased by the possible exploitation of more detected BBH mergers, but might also be dampened by low levels of completeness of the considered AGN catalogues. The predictive power of the method presented in this work depends mainly on three elements: the completeness of the AGN catalogue, the number of GW detections, and the size of their localisation volumes. Observational limitations (e.g. the presence of the Milky Way plane, which prevents the detection of light coming from objects behind it) prevent us from having an AGN catalogue with a completeness level close to unity. On the other hand, \(79^{+89}_{-44}\) BBH mergers are expected to be observed via GWs during the fourth observing run (O4) of the LIGO-Virgo-KAGRA collaboration (Abbott et al., 2020), and at least the same number of detections can be predicted for the fifth observing run (O5). This would triple the number of detected events available for statistical analyses of the BBH population. This increase in the number of detections, together with the improvement in localisation power expected for O4 and O5 with respect to O3, will noticeably increase the predictive power of likelihood-based methods like the one presented in this paper.

## Acknowledgements

EMR acknowledges support from ERC Grant "VEGA P.", number 101002511. This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gwosc.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. KAGRA is supported by Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan Society for the Promotion of Science (JSPS) in Japan; National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea; Academia Sinica (AS) and National Science and Technology Council (NSTC) in Taiwan.

_Software_: Numpy (Harris et al., 2020); Matplotlib (Hunter, 2007); SciPy (Virtanen et al., 2020); Astropy (Astropy Collaboration et al., 2013, 2018); BAYESTAR (Singer and Price, 2016).

## Data Availability

The data underlying this article are available in niccoloveronesi/AGNallskycat_Veronesi23, at [https://github.com/niccoloveronesi/AGNallskycat_Veronesi23.git](https://github.com/niccoloveronesi/AGNallskycat_Veronesi23.git).
2304.07241
Nakajima's creation operators and the Kirwan map
We consider the Hilbert scheme of points in the affine complex plane. We find explicit formulas for the Nakajima's creation operators and their K-theoretic counterparts in terms of the Kirwan map. We obtain a description of the action of Nakajima's creation operators on the Chern classes of the tautological bundle.
Jakub Koncki, Magdalena Zielenkiewicz
2023-04-14T16:43:38Z
http://arxiv.org/abs/2304.07241v2
# Nakajima's creation operators and the Kirwan map

###### Abstract.

We consider the Hilbert scheme of points in the affine complex plane. We find explicit formulas for the Nakajima's creation operators and their K-theoretic counterparts in terms of the Kirwan map. We obtain a description of the action of Nakajima's creation operators on the Chern classes of the tautological bundle.

## 1. Introduction

The Hilbert scheme of \(n\) points in the complex plane \(\operatorname{Hilb}_{n}:=\operatorname{Hilb}_{n}(\mathbb{A}_{\mathbb{C}}^{2})\) is a quasi-projective variety parametrizing zero-dimensional subschemes of degree \(n\) in the complex affine plane \(\mathbb{A}_{\mathbb{C}}^{2}\). It is a well-studied object, and a lot is known about its geometry. Many results on the Hilbert scheme of points in the plane have non-obvious applications elsewhere, from geometry [10, 21] to algebra and representation theory [11, 12, 13], mathematical physics [14, 15] and combinatorics [16, 17]. While it has been intensively studied ever since the 1960s, it is still an active research topic [1, 20, 21] and there remain open questions to be addressed. Many connections between the Hilbert scheme of points in the plane and other branches of mathematics have been unearthed by the Nakajima-Grojnowski [22, 23] milestone result, which endowed the sum of all the rational homology groups with the structure of a representation of the Heisenberg algebra. Such a structure can be obtained by constructing certain operators on homology, which satisfy the commutation relations of the Heisenberg algebra. In Nakajima's construction such operators, called creation and annihilation operators, are given by correspondences defined by subvarieties in the nested Hilbert scheme. Originally defined as operators in homology, they can be extended to equivariant cohomology using a similar construction, see [10]. The situation is more complicated for K-theory, as pointed out already by Nakajima (see [22, Question 8.35]). There are, however, generalizations of the Nakajima-Grojnowski result to K-theory. In [10], the actions of the shuffle and the Ding-Iohara algebras are constructed on the equivariant K-theory. In turn, in [23], a subalgebra of the convolution algebra in the equivariant K-theory of \(\operatorname{Hilb}_{n}\) is studied and is identified with the elliptic Hall algebra. Most of this paper is devoted to K-theory of the Hilbert scheme of points in the plane, and the final chapters present implications of the K-theoretic results for cohomology. While equivariant cohomology of the Hilbert scheme of points has been extensively studied (see e.g. [1, 2, 10, 11, 12]), there remain questions which do not have a satisfactory answer. One such question, stated in [22, Question 9.6], is how the creation operators act on the Chern classes of the tautological bundle on the Hilbert scheme. We present an answer to this question. A similar problem is considered in [20], where Chern classes of the tautological bundle are expressed in Nakajima's basis. Results presented here and in [20] are connected, yet not directly comparable; see Remark 9.9 for a detailed discussion.
Our result is an expression for the action of Nakajima's operators \(\mathfrak{q}_{i}\) on the characteristic classes of the tautological bundle \(\mathcal{V}_{n}\) in terms of the Kirwan map \[\kappa:\mathbb{Z}[x_{0},\dots,x_{n-1}]^{\Sigma_{n}}\to\mathrm{H}^{*}(\mathrm{Hilb}_{n})\,.\] Here \(\Sigma_{n}\) denotes the symmetric group and the map \(\kappa\) sends the variables \(x_{i}\) to the Chern roots of the tautological bundle \(\mathcal{V}_{n}\). Chern classes of \(\mathcal{V}_{n}\) are images of the elementary symmetric polynomials. We prove a formula for their images under \(\mathfrak{q}_{1}\), which is a polynomial expression in the Chern classes of the tautological bundle and the power-sum polynomials \(P_{k}^{n}\) of the Chern roots of the tautological bundle \[\mathfrak{q}_{1}(c_{k}(\mathcal{V}_{n}))=\sum_{m=0}^{k}(-1)^{m}(m+1)\cdot c_{k-m}(\mathcal{V}_{n+1})\cdot P_{m}^{n+1}\,.\] It turns out that the power sum basis of the symmetric polynomial ring is better suited for our formulas. Our main result is presented below. **Theorem** (9.12).: _Let \(\lambda=(\lambda_{1},\dots,\lambda_{l})\) be a sequence of nonnegative integers and \(m\) a positive integer. For a subset \(A\subset\{1,\dots,l\}\), let \(\lambda_{A}\) be the sequence obtained from \(\lambda\) by removing indices corresponding to elements of \(A\) and \(l(A):=\sum_{i\in A}\lambda_{i}\). Then_ \[\mathfrak{q}_{m}(P_{\lambda}^{n})=(-1)^{m+1}\cdot\sum_{A\subseteq\{1,\dots,l\}}(-m)^{|A|}\cdot(l(A)+m)\cdot P_{\lambda_{A}}^{n+m}\cdot P_{l(A)+m-1}^{n+m}\,.\] _In particular for a nonnegative integer \(k\geq 0\) we have_ \[\mathfrak{q}_{m}(P_{k}^{n})=(-1)^{m+1}\cdot m\cdot\big(P_{k}^{n+m}\cdot P_{m-1}^{n+m}-(m+k)\cdot P_{k+m-1}^{n+m}\big)\,.\] To calculate images of the Chern classes one needs to change basis in the symmetric polynomial ring. For the operator \(\mathfrak{q}_{1}\) we obtain an equivariant version (with respect to the one-dimensional coordinate torus) of the above formulas. Moreover, we prove their K-theoretic analogues. Let \(\mathfrak{q}_{1,m}^{\mathrm{K}}\) be the creation operators in the Ding-Iohara algebra, or the elliptic Hall algebra (see Section 6). **Proposition** (9.7).: _The following holds in nonequivariant K-theory_ \[\mathfrak{q}_{1,m}^{\mathrm{K}}(\mathcal{P}_{k}^{n})=\mathcal{P}_{k}^{n+1}\cdot\big(m\mathcal{P}_{m}^{n+1}-(m-1)\mathcal{P}_{m-1}^{n+1}\big)-(k+m)\mathcal{P}_{k+m}^{n+1}+(k+m-1)\mathcal{P}_{k+m-1}^{n+1}\,.\] We first prove Theorem 9.12 for the operator \(\mathfrak{q}_{1}\) (Theorem 9.4). For higher \(\mathfrak{q}_{i}\), [1, Theorem 34] gives a recursive formula using the auxiliary operator \(\rho\) (see also [1, Proposition 3.12]). We prove an explicit formula describing the action of \(\rho\) on the power sum basis in Proposition 9.10. This allows us to extend our result on \(\mathfrak{q}_{1}\) to all the operators \(\mathfrak{q}_{m}\). The tautological bundle \(\mathcal{Q}_{n}\) on the nested Hilbert scheme \(\mathrm{Hilb}_{n,n+1}\) plays a crucial role in our proofs. The operator \(\mathfrak{q}_{1}\) is fully determined by pushforwards of certain classes of this bundle. We study these classes in the torus equivariant K-theory. We use the equivariant Grothendieck-Riemann-Roch theorem [1] to transport our results to the torus equivariant cohomology. As a corollary we obtain formulas in the nonequivariant theories. **Theorem** (7.9 and 8.1).: _Let \(\pi:\mathrm{Hilb}_{n,n+1}(\mathbb{A}_{\mathbb{C}}^{2})\to\mathrm{Hilb}_{n+1}(\mathbb{A}_{\mathbb{C}}^{2})\) be the projection.
For an integer \(m\geq 1\) we have_ \[\pi_{*}[\mathcal{Q}_{n}^{m}]=\mathcal{P}_{m}\sum_{i=0}^{m-1}t^{-i}-\mathcal{P }_{m-1}\sum_{i=1}^{m-1}t^{-i}\in\mathrm{K}_{\mathbb{T}_{y}}(\mathrm{Hilb}_{n+1} (\mathbb{A}_{\mathbb{C}}^{2}))\,.\] _Moreover_ \[\pi_{*}\left(c_{1}(\mathcal{Q}_{n})^{m}\right)=\sum_{k=0}^{m}a_{k,m}\cdot t^{m-k} \cdot P_{k}^{n+1}\in\mathrm{H}_{\mathbb{T}_{y}}^{2m}(\mathrm{Hilb}_{n+1}( \mathbb{A}_{\mathbb{C}}^{2}))\,,\] _where the \(a_{k,m}\) are the coefficients of the polynomial_ \[\sum_{k=0}^{m}a_{k,m}\cdot x^{k}=x^{m}(x+1)-(x-1)^{m}x\,.\] **Corollary**.: _For an integer \(m\geq 1\) we have_ \[\pi_{*}[\mathcal{Q}_{n}^{m}] =m\cdot\mathcal{P}_{m}-(m-1)\cdot\mathcal{P}_{m-1}\in\mathrm{K}( \mathrm{Hilb}_{n+1}(\mathbb{A}_{\mathbb{C}}^{2}))\,,\] \[\pi_{*}\left(c_{1}(\mathcal{Q}_{n})^{m}\right) =(m+1)\cdot P_{m}^{n+1}\in\mathrm{H}^{2m}(\mathrm{Hilb}_{n+1}( \mathbb{A}_{\mathbb{C}}^{2}))\,.\] The standard action of the two-dimensional torus \(\mathbb{T}=(\mathbb{C}^{*})^{2}\) on \(\mathbb{C}^{2}\) induces an action on the Hilbert scheme \(\mathrm{Hilb}_{n}\). The fixed point set of this action is finite, which allows for the use of localization techniques in the study of invariants such as equivariant cohomology or K-theory. Moreover, since the fixed points of the action are indexed by partitions, such computations may be used to approach conjectures in combinatorics, as in [10], or give new descriptions of known symmetric functions such as Jack polynomials as in [11]. The torus action is not very well-behaved from a certain point of view, for example it does not yield a GKM-variety structure, but it is often powerful enough to allow for explicit computations of algebraic invariants such as rational cohomology or K-theory classes, which is the approach we use in this paper. This approach has been applied by many authors; see e.g. [1, 2] for cohomological computations, [12, 13] for K-theory and [14] for elliptic cohomology. The structure of this paper is as follows. Sections 2-6 provide background. Section 2 introduces notation for partitions, Young diagrams and symmetric functions, and defines some combinatorial constructions which we use later. In Section 3 we recall the basics of equivariant K-theory for torus actions. Section 4 provides some introductory information on the Hilbert scheme of points in the plane, including the basics of its equivariant cohomology and K-theory. In Section 5 we give a combinatorial description of the tangent space to the nested Hilbert scheme of points in the plane, and explain how to compute the push-forward in equivariant K-theory using localization. Section 6 introduces the creation operators and their K-theoretic analogues. These analogues are defined using a line bundle \(\mathcal{Q}\) on the nested Hilbert scheme, and one of the key technical results is a computation of its push-forward along the projection to \(\mathrm{Hilb}_{n+1}(\mathbb{A}_{\mathbb{C}}^{2})\). This is done in Theorem 7.9 and Corollaries 7.11, 7.12. The core of this paper are Sections 7, 8 and 9. In Section 7 we compute the equivariant and nonequivariant K-theoretic push-forward of \(\mathcal{Q}^{m}\). Section 8 presents the cohomological equivalents of the results obtained in Section 7. Section 9 puts together the results of the two preceding sections to provide a description of the action of the creation operators on the Chern classes of the tautological bundle on \(\mathrm{Hilb}_{n}(\mathbb{A}_{\mathbb{C}}^{2})\). 
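The coefficients \(a_{k,m}\) above are straightforward to tabulate. The following short sketch (Python with SymPy; an illustration, not part of the paper) extracts them from the generating polynomial \(x^{m}(x+1)-(x-1)^{m}x\) and checks that the top coefficient satisfies \(a_{m,m}=m+1\), which is exactly what the nonequivariant formula \(\pi_{*}\left(c_{1}(\mathcal{Q}_{n})^{m}\right)=(m+1)\cdot P_{m}^{n+1}\) requires once the equivariant parameter \(t\) is set to zero.

```python
# Sketch: coefficients a_{k,m} of x^m (x+1) - (x-1)^m x, and the check that
# a_{m,m} = m + 1, matching the nonequivariant pushforward of c_1(Q_n)^m.
import sympy as sp

x = sp.symbols("x")

def a_coeffs(m):
    """Return [a_{0,m}, ..., a_{m,m}]."""
    poly = sp.expand(x**m * (x + 1) - (x - 1)**m * x)
    return [poly.coeff(x, k) for k in range(m + 1)]

for m in range(1, 6):
    coeffs = a_coeffs(m)
    assert coeffs[-1] == m + 1   # top coefficient equals m + 1
    print(m, coeffs)
```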
**Acknowledgements:** JK is supported by NCN grant 2016/23/G/ST1/04282 (Beethoven 2). MZ is supported by NCN grant SONATA 2020/39/D/ST1/00132. Both authors are grateful to Joachim Jelisiejew and Andrzej Weber for helpful comments.

## 2. Partitions, Young diagrams and symmetric functions

Let \(n\geq 1\) be an integer. A _partition of \(n\) of length \(l\)_ is a non-increasing sequence of positive integers \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{l})\), such that \(\sum_{i=1}^{l}\lambda_{i}=n\). We write \(\lambda\vdash n\) to denote that \(\lambda\) is a partition of \(n\). We represent each partition by a Young diagram with \(n\) boxes and \(l\) columns of lengths \(\lambda_{1},\lambda_{2},\ldots,\lambda_{l}\). We think of the boxes in the Young diagram as pairs of natural numbers, and draw the Young diagrams accordingly; e.g. \(\lambda=(3,1)\) corresponds to the diagram with boxes \((0,0),(0,1),(0,2)\) and \((1,0)\), i.e. a column of three boxes with one box attached to the right of its bottom box. For a box \(\bullet\) of the diagram of \(\lambda\) we write \(a_{\lambda}(\bullet)\) and \(b_{\lambda}(\bullet)\) for the numbers of boxes of \(\lambda\) lying strictly to the right of \(\bullet\) in its row and strictly above \(\bullet\) in its column, respectively. We denote by \(C(\lambda)\) the set of corners of \(\lambda\), i.e. the boxes whose removal again yields a Young diagram, and by \(\lambda[1]\) the set of partitions whose diagram is obtained from that of \(\lambda\) by adding one box.

**Definition 2.2**.: For a nonempty partition \(\lambda\) we denote by \(\tilde{\lambda}\) the partition obtained from \(\lambda\) by removing its first column, and by \(\hat{\lambda}\) the partition obtained from \(\lambda\) by removing its first rectangular block, i.e. all the columns of maximal length.

For \(i\geq 0\) let \(e_{i}(x_{0},\dots,x_{n})\) denote the \(i\)'th elementary symmetric polynomial and \(p_{i}(x_{0},\dots,x_{n})=\sum_{j=0}^{n}x_{j}^{i}\) the \(i\)'th power-sum polynomial. In particular for \(i=0\) we have \[p_{0}(x_{0},\dots,x_{n})=\sum_{j=0}^{n}x_{j}^{0}=\sum_{j=0}^{n}1=n+1\,.\] For a partition \(\lambda=(\lambda_{1},\dots,\lambda_{l})\), we set
\[e_{\lambda}:=\prod_{i=1}^{l}e_{\lambda_{i}}\,,\qquad p_{\lambda}:=\prod_{i=1}^{l}p_{\lambda_{i}}\,.\] Finally, we assume that _monomials_ have coefficient \(1\), so that \(x^{2}y^{3}\) is a monomial but \(3x^{2}y^{3}\) is not.

## 3. Equivariant K-theory

Let \(\mathbb{T}\simeq(\mathbb{C}^{*})^{\operatorname{rank}\mathbb{T}}\) be an algebraic torus. We call elements of the free abelian group \(\operatorname{Hom}(\mathbb{T},\mathbb{C}^{*})\) characters. Let \(\operatorname{Rep}(\mathbb{T})\) denote the representation ring of the torus \(\mathbb{T}\). Every finite-dimensional \(\mathbb{T}\)-representation decomposes as a direct sum of one-dimensional representations, therefore \[\operatorname{Rep}(\mathbb{T})\simeq\mathbb{Z}[\operatorname{Hom}(\mathbb{T},\mathbb{C}^{*})]\,. \tag{1}\] Let \(X\) be a quasiprojective complex \(\mathbb{T}\)-variety. The algebraic K-theory of \(X\), denoted \(\operatorname{K}(X)\), is the Grothendieck group of isomorphism classes of algebraic vector bundles on \(X\). The equivariant algebraic K-theory of \(X\), denoted \(\operatorname{K}_{\mathbb{T}}(X)\), is the Grothendieck group of isomorphism classes of \(\mathbb{T}\)-equivariant algebraic vector bundles on \(X\). We do not distinguish in the notation between a bundle and its K-theory class when it is clear from the context. It follows from (1) that the equivariant K-theory of a point is isomorphic to the ring of Laurent polynomials \[\operatorname{K}_{\mathbb{T}}(pt)\simeq\operatorname{Rep}(\mathbb{T})\simeq\mathbb{Z}[\operatorname{Hom}(\mathbb{T},\mathbb{C}^{*})]\simeq\mathbb{Z}[t_{1}^{\pm},\dots,t_{\operatorname{rank}\mathbb{T}}^{\pm}]\,,\] where \(t_{1},\dots,t_{\operatorname{rank}\mathbb{T}}\) are coordinate characters. The equivariant K-theory of an arbitrary \(\mathbb{T}\)-variety is a \(\operatorname{K}_{\mathbb{T}}(pt)\)-module. Let \(S\subset\operatorname{K}_{\mathbb{T}}(pt)\) be the multiplicative system consisting of all nonzero elements. We denote the localized K-theory ring of \(X\) by \(S^{-1}\operatorname{K}_{\mathbb{T}}(X)\). The localization theorem [13, Theorem 2.1] implies that the restriction map induces an isomorphism \[S^{-1}\operatorname{K}_{\mathbb{T}}(X)\simeq S^{-1}\operatorname{K}_{\mathbb{T}}(X^{\mathbb{T}})\,.\] Let \(X\) be a smooth projective \(\mathbb{T}\)-variety such that the fixed point set \(X^{\mathbb{T}}\) is finite. Then the equivariant K-theory of \(X\) is a free \(\operatorname{K}_{\mathbb{T}}(pt)\)-module. In this case we have an inclusion of rings \[\operatorname{K}_{\mathbb{T}}(X)\subset S^{-1}\operatorname{K}_{\mathbb{T}}(X)\simeq S^{-1}\operatorname{K}_{\mathbb{T}}(X^{\mathbb{T}})\,.\] Therefore, restriction to the fixed point set induces the inclusion \[\operatorname{K}_{\mathbb{T}}(X)\hookrightarrow\operatorname{K}_{\mathbb{T}}(X^{\mathbb{T}})\simeq\bigoplus_{x\in X^{\mathbb{T}}}\mathbb{Z}[t_{1}^{\pm},\dots,t_{\operatorname{rank}\mathbb{T}}^{\pm}]\,.\] _Remark 3.1_.: In the above reasoning one may weaken the assumption that \(X\) is projective. It is enough to assume that there exists a one parameter subgroup of \(\mathbb{T}\) such that the Bialynicki-Birula cells (see [1]) cover \(X\). The proof is an induction on Bialynicki-Birula skeleta, similar to [10, section 9].
For an equivariant complex vector bundle \(\mathcal{E}\) over a \(\mathbb{T}\)-variety \(X\) we define its K-theoretic Euler class as \[\operatorname{eu}(\mathcal{E}):=\sum_{k=0}^{\operatorname{rk}(\mathcal{E})}(-1)^{k}[\Lambda^{k}\mathcal{E}^{*}]\in\operatorname{K}_{\mathbb{T}}(X)\,.\] It is a multiplicative class such that for \(\mathbb{T}\)-equivariant line bundles \(\mathcal{L}\) we have \[\operatorname{eu}(\mathcal{L})=1-\mathcal{L}^{*}\,.\] The Lefschetz-Riemann-Roch formula (LRR for short) allows one to compute push-forwards in equivariant K-theory using restriction to the fixed point set. Let us recall it here. **Theorem 3.2** ([10, Theorem 3.5] and [11, Theorem 5.11.7]).: _Let \(X\) and \(Y\) be smooth \(\mathbb{T}\)-varieties. Suppose that the fixed point sets \(X^{\mathbb{T}}\) and \(Y^{\mathbb{T}}\) are finite. Let \(p:X\to Y\) be an equivariant proper morphism. For a fixed point \(y\in Y^{\mathbb{T}}\) and an arbitrary class \(\alpha\in S^{-1}K_{\mathbb{T}}(X)\) we have_ \[\frac{(p_{*}\alpha)_{|y}}{\operatorname{eu}(T_{y}Y)}=\sum_{x\in p^{-1}(y)\cap X^{\mathbb{T}}}\frac{\alpha_{|x}}{\operatorname{eu}(T_{x}X)}\in S^{-1}K_{\mathbb{T}}(pt)\,.\]

## 4. Hilbert scheme of points in a complex plane

In this section we recall some standard facts about the Hilbert scheme of points in the plane. For reference, see [14] or [15].

### Definition and torus action

Let \(\operatorname{Hilb}_{n}:=\operatorname{Hilb}_{n}(\mathbb{A}_{\mathbb{C}}^{2})\) be the Hilbert scheme of \(n\) points in the complex plane. It is a quasi-projective variety parametrising zero-dimensional subschemes of \(\mathbb{A}_{\mathbb{C}}^{2}\) of length \(n\). Set-theoretically, \(\operatorname{Hilb}_{n}\) can be identified with the set of ideals \(I\lhd\mathbb{C}[x,y]\) such that the \(\mathbb{C}\)-vector space \(\mathbb{C}[x,y]/I\) has dimension \(n\), i.e., \[\operatorname{Hilb}_{n}:=\{I\lhd\mathbb{C}[x,y]\ |\ \dim\mathbb{C}[x,y]/I=n\}\,.\] The Hilbert scheme comes with a universal flat family \[F_{n}\subset\operatorname{Hilb}_{n}\times\mathbb{C}^{2}\,,\] whose fibre over a subscheme \(Z\in\operatorname{Hilb}_{n}\) is \(Z\), treated as a subscheme in \(\mathbb{C}^{2}\). The push-forward of the sheaf of regular functions on \(F_{n}\) along the projection \(p:F_{n}\to\operatorname{Hilb}_{n}\) defines (a sheaf of sections of) a rank \(n\) vector bundle on \(\operatorname{Hilb}_{n}\) \[\mathcal{V}_{n}=p_{*}\mathcal{O}_{F_{n}}\in\operatorname{Vect}(\operatorname{Hilb}_{n})\,.\] We call it the _universal rank \(n\) vector bundle on \(\operatorname{Hilb}_{n}\)_. The fibre of the universal bundle \(\mathcal{V}_{n}\) over \(I\) is the \(n\)-dimensional vector space \(\mathbb{C}[x,y]/I\). Let \(\mathbb{T}=(\mathbb{C}^{*})^{2}\) be the algebraic torus. The standard \(\mathbb{T}\)-action on \(\mathbb{C}^{2}\) given by \[(t_{1},t_{2})\cdot(x,y):=(t_{1}x,t_{2}y)\] induces an action on polynomials \(f\in\mathbb{C}[x,y]\), via \[(t_{1},t_{2})\cdot f(x,y):=f(t_{1}^{-1}x,t_{2}^{-1}y)\,.\] It induces a well-defined action on ideals \(I\lhd\mathbb{C}[x,y]\). The fixed points of this action are the monomial ideals, i.e. the ideals in \(\operatorname{Hilb}_{n}\) that are generated by monomials. To each such ideal one can assign the Young diagram of a certain partition \(\lambda\), in the following way. Let \(I\) be a corank \(n\) monomial ideal in \(\mathbb{C}[x,y]\). Then there are exactly \(n\) monomials that lie outside of \(I\) (recall that we assume all monomials to be monic).
They constitute a \(\mathbb{C}\)-basis of the vector space \(\mathbb{C}[x,y]/I\). The Young diagram corresponding to \(I\) consists of boxes \((i,j)\) for which \(x^{i}y^{j}\notin I\), e.g. \[I=\langle y^{3},xy,x^{2}\rangle\ \ \text{ corresponds to}\ \ \ \begin{array}{c}\framebox{$y^{2}$}\\ \framebox{$y$}\\ \framebox{$1$}\ \ x\end{array}.\] _Remark 4.1_.: It is commonplace not to distinguish in the notation between the partition \(\lambda\) and the associated Young diagram. We extend this convention also to the monomial ideals, so that the monomial ideal corresponding to the partition \(\lambda\) is also denoted by \(\lambda\) when this does not lead to confusion. When we want to distinguish between the three objects, \(\Delta_{\lambda}\) stands for the Young diagram associated to \(\lambda\) and \(I_{\lambda}\) denotes the monomial ideal associated to \(\lambda\). The Hilbert scheme \(\operatorname{Hilb}_{n}\) is a smooth irreducible quasi-projective variety of dimension \(2n\), [11]. The tangent space at the point \(I\in\operatorname{Hilb}_{n}\) can be described as the vector space of \(\mathbb{C}[x,y]\)-module homomorphisms from \(I\) to the quotient \(\mathbb{C}[x,y]/I\): \[T_{I}\operatorname{Hilb}_{n}=\operatorname{Hom}_{\mathbb{C}[x,y]}(I,\mathbb{C }[x,y]/I).\] At a fixed point \(I_{\lambda}\), the torus action induced on the tangent space \[T_{\lambda}\operatorname{Hilb}_{n}:=T_{I_{\lambda}}\operatorname{Hilb}_{n}\] equips this \(2n\)-dimensional vector space with a structure of a representation of \(\mathbb{T}\). Its weights can be read from the corresponding Young diagram. Denote by \(q,t\) the coordinate characters of the torus \(\mathbb{T}\). Then \[\operatorname{Rep}(\mathbb{T})=\mathbb{Z}[q^{\pm 1},t^{\pm 1}]\,.\] Each pair of integers \((k,l)\) determines a representation of \(\mathbb{T}\), given by the character \[(t_{1},t_{2})\mapsto t_{1}^{k}t_{2}^{l}\,,\] and we call \((k,l)\) the _weight_ of this one-dimensional representation. Each box in the diagram represents two weights \((a+1,-b)\) and \((-a,b+1)\), where \(a=a_{\lambda}(\bullet),b=b_{\lambda}(\bullet)\) for a box \(\bullet\), see [10, Proposition 5.8]. _Remark 4.2_.: Note that with the above convention each box \((i,j)\) in the Young diagram is naturally assigned a weight which is equal to \((i,j)\), since we have an identification between boxes in the diagram and monomials. ### Equivariant cohomology and K-theory Consider a one-parameter subgroup \(\sigma:\mathbb{C}^{*}\to\mathbb{T}\) given by \[\sigma(t)=(t,t^{N})\,,\] for a sufficiently big integer \(N\). The fixed point sets \(\operatorname{Hilb}_{n}^{\mathbb{T}}\) and \(\operatorname{Hilb}_{n}^{\sigma}\) coincide. Moreover, the Bialynicki-Birula cells cover the Hilbert scheme, see [10] or [12, Lemma 1.2.4]. Therefore, for nonequivariant cohomology and K-theory we have the following isomorphisms of \(\mathbb{Z}\)-modules \[\operatorname{K}(\operatorname{Hilb}_{n})\simeq\operatorname{H^{*}}( \operatorname{Hilb}_{n};\mathbb{Z})\simeq\bigoplus_{\lambda\vdash n}\mathbb{Z}\,.\] In the equivariant case one has \(\operatorname{K}_{\mathbb{T}}(\operatorname{Hilb}_{n})\simeq\bigoplus_{ \lambda\vdash n}\mathbb{Z}[q^{\pm},t^{\pm}]\) as \(\operatorname{K}_{\mathbb{T}}(pt)\)-modules and \(\operatorname{H}_{\mathbb{T}}^{*}(\operatorname{Hilb}_{n};\mathbb{Z})\simeq \bigoplus_{\lambda\vdash n}\mathbb{Z}[q,t]\) as \(\operatorname{H}_{\mathbb{T}}^{*}(pt)\)-modules. 
In particular \(\operatorname{K}_{\mathbb{T}}(\operatorname{Hilb}_{n})\) is a free \(\operatorname{K}_{\mathbb{T}}(pt)\)-module and \(\operatorname{H}_{\mathbb{T}}^{*}(\operatorname{Hilb}_{n})\) is a free \(\operatorname{H}_{\mathbb{T}}^{*}(pt)\)-module. Let \(i_{\lambda}:\{I_{\lambda}\}\to\operatorname{Hilb}_{n}\) be the inclusion of the fixed point \(I_{\lambda}\) into the Hilbert scheme. Given a class \(\mathcal{E}\in\operatorname{K}_{\mathbb{T}}(\operatorname{Hilb}_{n})\), its _restriction to the fixed point_\(I_{\lambda}\) is its pullback along this inclusion, i.e. \[i_{\lambda}^{*}\mathcal{E}\in\mathbb{Z}[q^{\pm 1},t^{\pm 1}]\,.\] We denote it by \(\mathcal{E}(\lambda)\) or \(\mathcal{E}_{|\lambda}\). The localization theorems (see Section 3) and the Bialynicki-Birula theorem (see Remark 3.1) imply the following. **Proposition 4.3**.: _Restriction to the fixed point variety induces an inclusion of rings_ \[\operatorname{K}_{\mathbb{T}}(\operatorname{Hilb}_{n}) \hookrightarrow\operatorname{K}_{\mathbb{T}}(\operatorname{Hilb}_{n}^{ \mathbb{T}}) \simeq\bigoplus_{\lambda\vdash n}\mathbb{Z}[q^{\pm},t^{\pm}]\,,\] \[\operatorname{H}_{\mathbb{T}}^{*}(\operatorname{Hilb}_{n}) \hookrightarrow\operatorname{H}_{\mathbb{T}}^{*}(\operatorname{Hilb}_{n}^{ \mathbb{T}}) \simeq\bigoplus_{\lambda\vdash n}\mathbb{Z}[q,t]\,.\] As a direct consequence, in order to prove equality of two classes in \(\operatorname{K}_{\mathbb{T}}(\operatorname{Hilb}_{n})\) or \(\operatorname{H}_{\mathbb{T}}^{*}(\operatorname{Hilb}_{n})\) it is enough to check that their restrictions to each fixed point \(I_{\lambda}\) are equal. ### Restriction to a one-dimensional subtorus Consider two one-dimensional subtori of \(\mathbb{T}\): \[\mathbb{T}_{x}=\left\{(t_{1},1)\in\mathbb{T}\right\},\qquad\mathbb{T}_{y}= \left\{(1,t_{2})\in\mathbb{T}\right\}.\] In this section we study the fixed points of these subtori and the \(\mathbb{T}_{y}\)- and \(\mathbb{T}_{x}\)-equivariant K-theory of \(\operatorname{Hilb}_{n}\). We state all propositions for the torus \(\mathbb{T}_{y}\). The results for \(\mathbb{T}_{x}\) are analogous. Section 1.2 of [1] implies the following description of the fixed point set \(\operatorname{Hilb}_{n}^{\mathbb{T}_{y}}\). **Proposition 4.4**.: _The fixed point set \(\operatorname{Hilb}_{n}^{\mathbb{T}_{y}}\) is a disjoint union of affine spaces. Each affine space contains exactly one fixed point of the big torus \(\mathbb{T}\)._ **Corollary 4.5**.: _The restriction map_ \[\operatorname{K}_{\mathbb{T}_{y}}(\operatorname{Hilb}_{n}^{\mathbb{T}_{y}}) \stackrel{{\simeq}}{{\longrightarrow}}\operatorname{K}_{\mathbb{ T}_{y}}(\operatorname{Hilb}_{n}^{\mathbb{T}})\] _is an isomorphism._ The Bialynicki-Birula decomposition and the localization theorem imply the following result. **Proposition 4.6**.: _The equivariant \(\operatorname{K}\)-theory of the Hilbert scheme \(\operatorname{K}_{\mathbb{T}_{y}}(\operatorname{Hilb}_{n})\) is a free \(\operatorname{K}_{\mathbb{T}_{y}}(pt)\)-module. 
Restriction to the fixed point set induces an inclusion of rings_ \[\operatorname{K}_{\mathbb{T}_{y}}(\operatorname{Hilb}_{n})\hookrightarrow \operatorname{K}_{\mathbb{T}_{y}}(\operatorname{Hilb}_{n}^{\mathbb{T}_{y}})\,.\] **Corollary 4.7**.: _The composition_ \[\operatorname{K}_{\mathbb{T}_{y}}(\operatorname{Hilb}_{n})\hookrightarrow \operatorname{K}_{\mathbb{T}_{y}}(\operatorname{Hilb}_{n}^{\mathbb{T}_{y}}) \stackrel{{\simeq}}{{\longrightarrow}}\operatorname{K}_{\mathbb{ T}_{y}}(\operatorname{Hilb}_{n}^{\mathbb{T}})\] _is a monomorphism._ As a direct consequence, in order to prove equality of two classes in \(\operatorname{K}_{\mathbb{T}_{y}}(\operatorname{Hilb}_{n})\) it is enough to check that their restrictions to every fixed point of \(\mathbb{T}\) are equal. ### Kirwan map The Kirwan map in the equivariant cohomology is a morphism of \(\operatorname{H}_{\mathbb{T}}^{*}(pt)\) modules \[\kappa^{\operatorname{H}}:\mathbb{Z}[q,t][x_{0},\dots,x_{n-1}]^{\Sigma_{n}} \rightarrow\operatorname{H}_{\mathbb{T}}^{*}(\operatorname{Hilb}_{n}) \tag{2}\] sending the variables \(x_{i}\) to the Chern roots of the tautological bundle \(\mathcal{V}_{n}\). Analogously, the Kirwan map in the equivariant \(\operatorname{K}\)-theory is a morphism of \(\operatorname{K}_{\mathbb{T}}(pt)\)-modules \[\kappa^{\operatorname{K}}:\mathbb{Z}[q^{\pm 1},t^{\pm 1}][x_{0}^{\pm 1},\dots,x_{n- 1}^{\pm 1}]^{\Sigma_{n}}\rightarrow\operatorname{K}_{\mathbb{T}}(\operatorname{Hilb} _{n})\,, \tag{3}\] sending the variables \(x_{i}\) to the K-theoretic Chern roots of the tautological bundle \(\mathcal{V}_{n}\). _Example 4.8_.: We have \[\kappa^{\mathrm{K}}(x_{0}+x_{1}+\cdots+x_{n-1})=\mathcal{V}_{n}\in\mathrm{K}_{ \mathbb{T}}(\mathrm{Hilb}_{n})\,.\] More generally, the \(k\)'th elementary symmetric polynomial is mapped to the class of the \(k\)'th exterior power of \(\mathcal{V}_{n}\), i.e. \[\kappa^{\mathrm{K}}\left(e_{k}(x_{0},\dots,x_{n-1})\right)=\Lambda^{k} \mathcal{V}_{n}\in\mathrm{K}_{\mathbb{T}}(\mathrm{Hilb}_{n})\,.\] **Definition 4.9**.: Let \(k>0\) be a positive integer. * Let \(\mathcal{P}_{k}^{n}\in\mathrm{K}_{\mathbb{T}}(\mathrm{Hilb}_{n})\) be the image under \(\kappa^{\mathrm{K}}\) of the \(k\)'th power-sum polynomial \(p_{k}(x_{0},\dots,x_{n-1})\), i.e., \[\mathcal{P}_{k}^{n}=\kappa^{\mathrm{K}}(x_{0}^{k}+x_{1}^{k}+\cdots+x_{n-1}^{k })\in\mathrm{K}_{\mathbb{T}}(\mathrm{Hilb}_{n})\,.\] We set \(\mathcal{P}_{0}^{n}=n\in\mathrm{K}_{\mathbb{T}}(\mathrm{Hilb}_{n})\). * Let \(P_{k}^{n}\in\mathrm{H}_{\mathbb{T}}^{2k}(\mathrm{Hilb}_{n})\) be the image under \(\kappa^{\mathrm{H}}\) of the \(k\)'th power-sum polynomial. We set \(P_{0}^{n}=n\in\mathrm{H}_{\mathbb{T}}^{*}(\mathrm{Hilb}_{n})\). * For a sequence of non-negative integers \(\lambda=(\lambda_{1},\dots,\lambda_{l})\) we set \[P_{\lambda}^{n}=\prod_{k=1}^{l}P_{\lambda_{k}}^{n}\in\mathrm{H}_{\mathbb{T}}^ {*}(\mathrm{Hilb}_{n})\,.\] For the empty sequence \(\lambda=\varnothing\) let \(P_{\varnothing}^{n}=1\). * We use the same notation \(\mathcal{P}_{k}^{n},P_{k}^{n}\) for the corresponding elements of \(\mathrm{K}(\mathrm{Hilb}_{n})\) and \(\mathrm{H}^{*}(\mathrm{Hilb}_{n})\). The nonequivariant Kirwan map is surjective, cf. [1]. It is a folklore knowledge that analogous result holds in the equivariant setting, i.e. the maps (2) and (3) are surjective, see e.g. [15, Paragraph 3.2]. As \(\mathrm{Hilb}_{n}\) is a Nakajima quiver variety, in particular a symplectic reduction, this may be deduced from a much more general fact about Nakajima quiver varieties [14]. 
In this paper we do not use the surjectivity of mentioned maps. The bundle \(\mathcal{V}\) has fibre \(\mathbb{C}[x,y]/I_{\lambda}\) at the point \(I_{\lambda}\). Therefore, the restriction of \([\mathcal{V}]\) to \(I_{\lambda}\) is equal to the sum of monomials corresponding to the boxes of \(\lambda\). More generally, consider a polynomial \[W\in\mathbb{Z}[q^{\pm 1},t^{\pm 1}][x_{0}^{\pm 1},\dots,x_{n-1}^{\pm 1}]\,.\] To compute the restriction \(\kappa^{\mathrm{K}}(W)_{|\lambda}\) one substitutes the monomials corresponding to boxes in the Young diagram of \(\lambda\) as the variables \(x_{i}\). _Example 4.10_.: For \(\lambda=(2,1)\), i.e. \[\Delta_{\lambda}=\begin{bmatrix}\framebox{$y$}\\ \framebox{$1$}\framebox{$x$}\end{bmatrix}\] we have \[\mathcal{V}(\lambda)=1+x+y\,,\qquad\quad\mathcal{P}_{k}(\lambda)=1+x^{k}+y^{k }\,,\qquad\quad\Lambda^{2}\mathcal{V}(\lambda)=xy+x+y\,.\] For the empty partition \(\lambda=\varnothing\) we set \(\mathcal{V}(\varnothing)=\Lambda^{k}\mathcal{V}(\varnothing)=\mathcal{P}_{k}( \varnothing)=0\). ## 5. Nested Hilbert Scheme ### Definition and torus action _The nested Hilbert scheme_\(\operatorname{Hilb}_{n,n+i}\) is a scheme parametrizing pairs of subschemes of \(\mathbb{A}_{\mathbb{C}}^{2}\) contained in one another, i.e., \[\operatorname{Hilb}_{n,n+i}:=\{(I,J):I\supseteq J\}\subset\operatorname{Hilb}_{ n}\times\operatorname{Hilb}_{n+i}\;.\] It is not smooth unless \(i=1\), and for \(i=1\), the nested Hilbert scheme \(\operatorname{Hilb}_{n,n+1}\) is a smooth irreducible variety of dimension \(2n+2\), see [1]. The diagonal action of \(\mathbb{T}\) on the product \(\operatorname{Hilb}_{n}\times\operatorname{Hilb}_{n+1}\) restricts to an action on \(\operatorname{Hilb}_{n,n+1}\). The fixed points of this action are indexed by pairs of Young diagrams contained in one another, which means that the diagram \(\Delta_{J}\) is obtained from \(\Delta_{I}\) by adding one box. We represent such pairs of diagrams by shading a box in the bigger diagram, representing the added box. \[\begin{array}{c}\includegraphics[width=14.226378pt]{images/.eps}\end{array}\] represents the pair \((I,J)=(\langle y^{2},x\rangle,\langle y^{2},xy,x^{2}\rangle)\,.\) Therefore, a fixed point in \(\operatorname{Hilb}_{n,n+1}^{\mathbb{T}}\) is uniquely determined by a partition \(\lambda\) of \(n+1\) and a choice of a corner box \(c\in C(\lambda)\). ### Tangent space Let \((I,J)\) be a point in \(\operatorname{Hilb}_{n,n+1}\). A vector in the tangent space at \((I,J)\) is given by a pair of \(\mathbb{C}[x,y]\)-module homomorphisms \[\varphi_{1}\in T_{I}\operatorname{Hilb}_{n}\simeq\operatorname{Hom}_{\mathbb{C }[x,y]}(I,\mathbb{C}[x,y]/I)\,,\qquad\varphi_{2}\in T_{J}\operatorname{Hilb}_{ n+1}\simeq\operatorname{Hom}_{\mathbb{C}[x,y]}(J,\mathbb{C}[x,y]/J)\,,\] compatible with the inclusion \(J\subseteq I\), so that the diagram (4) is commutative. The weights of the torus action on the tangent space at a fixed point are described in [1, Propositions 15 and 16] (see also [1, Lemma 3.1]). Here we present an alternative description, better suited to our computations. Let \((I_{\lambda},J_{\mu})\) be a \(\mathbb{T}\)-fixed point in \(\operatorname{Hilb}_{n,n+1}\), in particular \(\mu\in\lambda[1]\). Recall that every box \(\bullet\) in \(\Delta_{\mu}\) determines two of the \(2(n+1)\) tangent weights at \(J_{\mu}\) in \(\operatorname{Hilb}_{n+1}\), i.e., \((a+1,-b)\) and \((-a,b+1)\), where \(a=a_{\mu}(\bullet),b=b_{\mu}(\bullet)\). 
One can identify these weights as minus the slopes of two arrows: one arrow starts at the box just above the last box in the column in which the box \(\bullet\) is located and ends at the right-most box in the row of \(\bullet\), while the other starts at the box one to the right of the right-most box in the row of \(\bullet\) and ends at the top box in the column of \(\bullet\). In particular, every box contributes one weight corresponding to an arrow going south-east (or south) and one arrow going north-west (or west). The tangent weights at the point \((I_{\lambda},J_{\mu})\in\operatorname{Hilb}_{n,n+1}^{\mathbb{T}}\) are computed as follows. The diagram \(\Delta_{\mu}\) is obtained from \(\Delta_{\lambda}\) by adding one box, denote it by \(\blacksquare\). Let \(i_{0},j_{0}\) be the indices of the row and the column in which the added box is located. Take the set of tangent weights to \(\operatorname{Hilb}_{n+1}\) at \(J_{\mu}\). Recall that each of the weights corresponds to some box in \(\Delta_{\mu}\). For every box below \(\blacksquare\) replace the weight \((-a,b+1)\) by \((-a,b)\), i.e. shorten the south-east arrow vertically by \(1\), and for every box to the left of \(\blacksquare\) replace the weight \((a+1,-b)\) by \((a,-b)\), i.e. shorten the north-west arrow horizontally by \(1\). Keep all the remaining weights intact. For example, if the added box is shaded grey, the south-east arrow determined by the box \((0,0)\) is replaced by the same arrow shortened vertically by one box. The reasoning behind this procedure is simple. Each weight represented by an arrow is in fact a \(\mathbb{C}[x,y]\)-module homomorphism which sends the generator of \(J_{\mu}\) at the tail of the arrow to the generator of \(\mathbb{C}[x,y]/J_{\mu}\) at the head of the arrow (see [1] for details). Unless the box defining the weight is in row \(i_{0}\) or column \(j_{0}\), this is also a well-defined homomorphism from \(I_{\lambda}\) to \(\mathbb{C}[x,y]/I_{\lambda}\), making diagram (4) commutative. For boxes in row \(i_{0}\) or column \(j_{0}\), one of the associated arrows still defines a good tangent vector at \(I_{\lambda}\), but the other one does not, because it does not determine where the monomial corresponding to the shaded box should be sent; this is the arrow that we modify. See [1, Section 2.6] for an algebraic computation of the tangent weights which justifies the above description. _Example 5.1_.: The following table compares the tangent weights at two chosen fixed points in \(\operatorname{Hilb}_{4}\) and \(\operatorname{Hilb}_{3,4}\); diagrams with a marked box are in \(\operatorname{Hilb}_{3,4}\). Each row lists the two weights associated to a chosen box.

### Tautological line bundle

Let \[p:\operatorname{Hilb}_{n,n+1}\to\operatorname{Hilb}_{n}\,,\qquad\pi:\operatorname{Hilb}_{n,n+1}\to\operatorname{Hilb}_{n+1}\] be the restrictions of the projections to the first and the second factor in the product \(\operatorname{Hilb}_{n}\times\operatorname{Hilb}_{n+1}\). Let \(\mathcal{V}_{n},\mathcal{V}_{n+1}\) be the universal bundles on the respective Hilbert schemes. The pullback \(\pi^{*}\mathcal{V}_{n+1}\) admits a canonical epimorphism onto the bundle \(p^{*}\mathcal{V}_{n}\). The kernel of this map is a line bundle on \(\operatorname{Hilb}_{n,n+1}\). We denote it by \[\mathcal{Q}_{n}:=\ker(\pi^{*}\mathcal{V}_{n+1}\twoheadrightarrow p^{*}\mathcal{V}_{n})\,.\] This bundle plays a central role in this paper. Intuitively, it corresponds to "the added point"; its fibre over the point \((I,J)\in\operatorname{Hilb}_{n,n+1}\) is equal to \(I/J\).
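To make the combinatorics of boxes and tangent weights used in the localization arguments below easier to follow, here is a small illustrative script (not part of the paper) that lists the \(\mathbb{T}\)-weights of the tangent space to \(\operatorname{Hilb}_{n}\) at a monomial ideal, using the rule of Section 4 that every box \(\bullet\) contributes the weights \((a+1,-b)\) and \((-a,b+1)\); partitions are encoded by their column lengths, and the helper names are our own.

```python
# Sketch: T-weights of the tangent space to Hilb_n at a monomial ideal.
# A partition is given by its column lengths; the box (i, j) corresponds to
# the monomial x^i y^j lying outside the ideal.  Each box contributes the
# weights (a+1, -b) and (-a, b+1), where a counts boxes strictly to the
# right in the same row and b counts boxes strictly above in the same column.

def boxes(cols):
    return [(i, j) for i, c in enumerate(cols) for j in range(c)]

def tangent_weights(cols):
    weights = []
    for (i, j) in boxes(cols):
        a = sum(1 for c in cols[i + 1:] if c > j)  # arm of the box
        b = cols[i] - 1 - j                        # leg of the box
        weights += [(a + 1, -b), (-a, b + 1)]
    return weights

# For the hook partition (2, 1) in Hilb_3 this prints the six weights
# (2, -1), (-1, 2), (1, 0), (0, 1), (1, 0), (0, 1).
print(tangent_weights([2, 1]))
```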
Suppose that \(\lambda\) is a nonempty partition and \(c=(k,l)\) is one of its corners. The restriction of \(\mathcal{Q}_{n}\) to the fixed point \((\lambda,c)\in\operatorname{Hilb}_{n,n+1}^{\mathbb{T}}\) is equal to \[(\mathcal{Q}_{n})_{|(\lambda,c)}=q^{k}t^{l}\in\mathbb{Z}[q,t]\,.\] Graphically, the setup for the rest of the paper can be summarized by the correspondence \[\operatorname{Hilb}_{n}\xleftarrow{\ p\ }\operatorname{Hilb}_{n,n+1}\xrightarrow{\ \pi\ }\operatorname{Hilb}_{n+1}\,,\] with the bundles \(\mathcal{V}_{n}\), \(\mathcal{Q}_{n}\) and \(\mathcal{V}_{n+1}\) living on the respective spaces.

### Pushforward in the equivariant K-theory

The map \(\pi:\operatorname{Hilb}_{n,n+1}\to\operatorname{Hilb}_{n+1}\) is proper. We will be interested in the pushforwards \(\pi_{*}(\mathcal{Q}_{n}^{m})\in\operatorname{K}_{\mathbb{T}}(\operatorname{Hilb}_{n+1})\). We may apply the LRR formula (Theorem 3.2). Let us recall that a fixed point in \(\operatorname{Hilb}_{n,n+1}^{\mathbb{T}}\) is uniquely determined by a partition \(\lambda\) of \(n+1\) and a choice of a corner box \(c\in C(\lambda)\). **Proposition 5.2**.: _Let \(\mathcal{E}\in\operatorname{K}_{\mathbb{T}}(\operatorname{Hilb}_{n,n+1})\) be an arbitrary class and \(\lambda\) be a partition of \(n+1\). The push-forward \(\pi_{*}\mathcal{E}\) can be computed by summing up the local contributions at the fixed points, i.e._ \[(\pi_{*}\mathcal{E})_{|\lambda}=\sum_{c\in C(\lambda)}\mathcal{E}_{|(\lambda,c)}\cdot\frac{\operatorname{eu}(T_{\lambda}\operatorname{Hilb}_{n+1})}{\operatorname{eu}(T_{(\lambda,c)}\operatorname{Hilb}_{n,n+1})}\,. \tag{5}\] The quotient of the Euler classes in Equation (5) is a rational function in the variables \(q,t\). For a corner \(c\) in a partition \(\lambda\) we use the notation \[r_{\lambda,c}:=\frac{\operatorname{eu}(T_{\lambda}\operatorname{Hilb}_{n+1})}{\operatorname{eu}(T_{(\lambda,c)}\operatorname{Hilb}_{n,n+1})}\in\mathbb{Z}(q,t)\,.\] Both the numerator and the denominator consist of products of factors of the form \((1-q^{-i}t^{-j})\), where \((i,j)\) are the weights of the action of \(\mathbb{T}\) on tangent spaces to the respective Hilbert schemes. For technical reasons it is convenient to consider a rescaled variant of the function \(r_{\lambda,c}\). Let \(\lambda\) be a nonempty partition and \(c=(i,j)\) be one of its corners. We consider \[\tilde{r}_{\lambda,c}:=(\mathcal{Q}_{n})_{|(\lambda,c)}\cdot\frac{\operatorname{eu}(T_{\lambda}\operatorname{Hilb}_{n+1})}{\operatorname{eu}(T_{(\lambda,c)}\operatorname{Hilb}_{n,n+1})}=q^{i}t^{j}\cdot r_{\lambda,c}\in\mathbb{Z}(q,t)\,.\] A lot of combinatorics of the tangent weights is hidden in the properties of the functions \(r_{\lambda,c}\) and \(\tilde{r}_{\lambda,c}\). At the end of the paper we include an appendix with all the necessary proofs of the technical properties of these functions. **Definition 5.3**.: For an integer \(m\) and a nonempty partition \(\lambda\), we use the notation \[f_{m}(\lambda)=\big(\pi_{*}[\mathcal{Q}_{n}^{m}]\big)_{|\lambda}\in\mathbb{Z}[q^{\pm},t^{\pm}]\,.\] For the empty partition \(\lambda=\varnothing\) we set \(f_{m}(\varnothing)=0\). In the case \(\mathcal{E}=\mathcal{Q}_{n}^{m}\), Proposition 5.2 may be restated as follows. **Corollary 5.4**.: _Let \(\lambda\) be a nonempty partition. We have_ \[f_{m}(\lambda)=\sum_{c\in C(\lambda)}\left(q^{k(c)}t^{l(c)}\right)^{m}\cdot r_{\lambda,c}=\sum_{c\in C(\lambda)}\left(q^{k(c)}t^{l(c)}\right)^{m-1}\cdot\tilde{r}_{\lambda,c}\,,\] _where \(c=(k(c),l(c))\)._

## 6. Nakajima operators

Let \[\mathbb{H}:=\bigoplus_{n\geq 0}\mathrm{H}^{*}(\mathrm{Hilb}_{n};\mathbb{Q})\] be the sum of the cohomology groups of the Hilbert schemes \(\mathrm{Hilb}_{n}\) for all \(n\), with rational coefficients.
The Nakajima-Grojnowski creation and annihilation operators equip the vector space \(\mathbb{H}\) with an action of the Heisenberg algebra, see [10, Chapter 8]. These operators are defined by intersecting with fundamental classes of certain subvarieties in the product of two Hilbert schemes. Let \[Q_{i}^{n}:=\{(I,J):I\supseteq J,\mathrm{supp}(I/J)=\{x\}\text{ for some }x\in \mathbb{A}_{\mathbb{C}}^{2}\}\subset\mathrm{Hilb}_{n}\times\mathrm{Hilb}_{n+i},\] i.e \(Q_{i}^{n}\) consists of pairs of nested subschemes that differ by a subscheme of length \(i\) concentrated in one point. Note that for \(i=1\) we have \(Q_{1}^{n}=\mathrm{Hilb}_{n,n+1}\). Let \([Q_{i}^{n}]\) denote the fundamental class of \(Q_{i}^{n}\) in the Borel-Moore homology of \(\mathrm{Hilb}_{n}\times\mathrm{Hilb}_{n+i}\) (we need to take the Borel-Moore homology groups because the topological space \(\mathrm{Hilb}_{n}\) is not compact). Denote the projections in the product \(\mathrm{Hilb}_{n}\times\mathrm{Hilb}_{n+i}\) by \(p,q\) respectively, so that \(p\) is the projection to \(\mathrm{Hilb}_{n}\) and \(q\) is the projection to \(\mathrm{Hilb}_{n+i}\). The restriction of the map \(q\) to \(Q_{i}\) is proper. The creation operator \[\mathfrak{q}_{i}:\mathrm{H}^{*}(\mathrm{Hilb}_{n};\mathbb{Q})\to\mathrm{H}^{*+ 2(i-1)}(\mathrm{Hilb}_{n+i};\mathbb{Q})\] is defined by \[\mathfrak{q}_{i}(\alpha):=PD^{-1}\big{(}q_{*}(p^{*}\alpha\cap[Q_{i}^{n}])\big{)}\,,\] where \(PD:\mathrm{H}_{*}^{BM}(\mathrm{Hilb}_{n+i})\to\mathrm{H}^{2n+2i-*}(\mathrm{ Hilb}_{n+i})\) is Poincare duality. The annihilation operators \(\mathfrak{q}_{-i}\) are defined by the same subvariety \(Q_{i}^{n}\) by replacing the roles of \(p\) and \(q\) (see [10, Section 8.2] for the discussion of technical issues arising from the fact that \(p\) is not proper). For a partition \(\lambda=(\lambda_{1},\dots,\lambda_{l})\) let \[\mathfrak{q}_{\lambda}:=\mathfrak{q}_{\lambda_{l}}\circ\dots\circ\mathfrak{q}_ {\lambda_{1}}\,.\] Let \(\mathbb{1}\) be a generator of \(\mathrm{H}^{0}(\mathrm{Hilb}_{0})\simeq\mathbb{Q}\). Applying operators \(\mathfrak{q}_{\lambda}\) for all partitions \(\lambda\) of \(n\) to \(\mathbb{1}\) one obtains an additive basis of \(\mathrm{H}^{*}(\mathrm{Hilb}_{n})\). The same construction works for the equivariant cohomology with respect to the \(\mathbb{T}\)-action, or for the \(\mathbb{T}\)-equivariant Chow ring (see [1] for a construction with all the technical details). _Remark 6.1_.: In [10], the subvarieties defining creation operators are constructed in the same way for any smooth projective surface \(X\). They lie in \(\mathrm{Hilb}_{n}(X)\times\mathrm{Hilb}_{n+i}(X)\times X\). Each class \(\alpha\) in the cohomology of \(X\) gives rise to another operator, by pushing down the product of the pullback of \(\alpha\) with the class of the correspondence (see [10, Section 8.3]). For the affine plane the cohomology class \(\alpha\) is irrelevant, so we omit it in the construction. In particular, our operator \(\mathfrak{q}_{i}\) is the same as Nakajima's \(P_{\mathbb{1}}[i]\). Similarly as for cohomology, one can define certain operators on K-theory using correspondences (for details of this construction see [1, Section 5.2.20], and for technical issues resulting from the non-projectiveness of \(\operatorname{Hilb}_{n}\) see [13, Section 3.2]). 
In particular, an element \(\mathcal{E}\in\operatorname{K}_{\mathbb{T}}(\operatorname{Hilb}_{n}\times\operatorname{Hilb}_{n+1})\) with support proper over \(\operatorname{Hilb}_{n+1}\) defines a map \(\operatorname{K}_{\mathbb{T}}(\operatorname{Hilb}_{n})\to\operatorname{K}_{\mathbb{T}}(\operatorname{Hilb}_{n+1})\) given by \[\mathcal{E}_{n}\mapsto q_{*}(p^{*}\mathcal{E}_{n}\otimes\mathcal{E}).\] This construction allows us to define the analogue of the cohomological creation operator. For an integer \(m\) let \(\mathfrak{q}^{\operatorname{K}}_{1,m}\) be the operator corresponding to \(\mathcal{E}=i_{*}\mathcal{Q}_{n}^{m}\), where \(i\) denotes the inclusion of \(\operatorname{Hilb}_{n,n+1}\) into the product, so that \(\mathfrak{q}^{\operatorname{K}}_{1,m}(\mathcal{E}_{n})=q_{*}(p^{*}\mathcal{E}_{n}\otimes i_{*}\mathcal{Q}_{n}^{m})\). _Remark 6.2_.: The operators \(\mathfrak{q}^{\operatorname{K}}_{1,m}\) are the creation operators in the Ding-Iohara algebra action of [11], or the elliptic Hall algebra considered in [13]. They correspond to the operators \(e_{m}\) in the [11] notation, and \(\mathbf{f}_{1,m}\) in the [13] notation. _Remark 6.3_.: Let \(p:\operatorname{Hilb}_{n,n+1}\to\operatorname{Hilb}_{n}\) and \(\pi:\operatorname{Hilb}_{n,n+1}\to\operatorname{Hilb}_{n+1}\) be the standard maps. The projection formula implies that \[\mathfrak{q}_{1}(\alpha)=\pi_{*}p^{*}(\alpha)\in\operatorname{H}_{\mathbb{T}}^{*}(\operatorname{Hilb}_{n+1})\,,\qquad\mathfrak{q}^{\operatorname{K}}_{1,m}(\mathcal{E})=\pi_{*}\big(p^{*}\mathcal{E}\otimes\mathcal{Q}_{n}^{m}\big)\in\operatorname{K}_{\mathbb{T}}(\operatorname{Hilb}_{n+1})\,.\]

## 7. Pushforward of the tautological bundle

In this chapter we compute the push-forward \(\pi_{*}[\mathcal{Q}_{n}^{m}]\) of the K-theory class of the \(m\)'th power of the universal bundle on \(\operatorname{Hilb}_{n,n+1}\). Let us recall that (cf. Definition 5.3) \[f_{m}(\lambda)=\big(\pi_{*}[\mathcal{Q}_{n}^{m}]\big)_{|\lambda}\in\mathbb{Z}[q^{\pm},t^{\pm}]\,.\]

### The push-forward of \(\mathcal{Q}\)

We begin with the case \(m=1\). The following holds. **Proposition 7.1**.: _For all \(n\in\mathbb{N}\) we have_ \[\pi_{*}[\mathcal{Q}_{n}]=[\mathcal{V}_{n+1}]\in\operatorname{K}_{\mathbb{T}}(\operatorname{Hilb}_{n+1})\,.\] Let us start by illustrating this proposition with an example. _Example 7.2_.: Let \(n=2\), so that \(\pi:\operatorname{Hilb}_{2,3}\to\operatorname{Hilb}_{3}\). We want to compute \(\pi_{*}[\mathcal{Q}_{2}]\). By the Localization Theorem 4.3, it is enough to understand what happens at each fixed point in \(\operatorname{Hilb}_{3}\). Let \(\lambda=(2,1)\). There are two fixed points in the preimage of \(\lambda\) under \(\pi\): the diagram of \(\lambda\) with the corner box \((0,1)\) marked and the diagram with the corner box \((1,0)\) marked; denote them by \(\lambda_{1}\) and \(\lambda_{2}\), respectively. The restriction of \(\pi_{*}[\mathcal{Q}_{2}]\) to \(\lambda\) is equal to the following sum of local contributions \[\pi_{*}[\mathcal{Q}_{2}]_{|\lambda}=[\mathcal{Q}_{2}]_{|\lambda_{1}}\cdot\frac{\operatorname{eu}(T_{\lambda}\operatorname{Hilb}_{3})}{\operatorname{eu}(T_{\lambda_{1}}\operatorname{Hilb}_{2,3})}+[\mathcal{Q}_{2}]_{|\lambda_{2}}\cdot\frac{\operatorname{eu}(T_{\lambda}\operatorname{Hilb}_{3})}{\operatorname{eu}(T_{\lambda_{2}}\operatorname{Hilb}_{2,3})}.\] The class of \(\mathcal{Q}\) restricted to \(\lambda_{i}\) is equal to \(q^{i}t^{j}\), where \((i,j)\) is the marked box.
Using the descriptions of weights given in Section 4 and cancelling the factors which repeat in the numerator and the denominator, this sum reduces to \[\pi_{*}[\mathcal{Q}_{2}]_{|\lambda}=t\cdot\frac{1-\frac{q}{t^{2}}}{1-\frac{q}{t}}+q\cdot\frac{1-\frac{t}{q^{2}}}{1-\frac{t}{q}}=1+q+t,\] which is the sum of the monomials corresponding to the weights of the boxes of \(\lambda\). Therefore it is equal to the class of \([\mathcal{V}_{3}]\) restricted to the fixed point \(\lambda\). Let us go back to the general case. Since the only two bundles involved are \(\mathcal{Q}_{n}\) and \(\mathcal{V}_{n+1}\) we omit subscripts indicating the number of points. We split the proof into two lemmas. **Lemma 7.3**.: _Let \(\lambda=(\lambda_{1},...,\lambda_{l})\) be a nonempty partition. Then_ \[\lim_{q\to 0}f_{1}(\lambda)=\sum_{i=0}^{\lambda_{1}-1}t^{i}\,.\] Proof.: Denote the corners of \(\lambda\) by \[c_{1}=(k_{1},l_{1}),c_{2}=(k_{2},l_{2}),\ldots,c_{N}=(k_{N},l_{N})\,,\] ordering them from upper-left to lower-right, so that \(l_{1},l_{2},\ldots,l_{N}\) form a descending sequence. Set \(l_{N+1}=-1\). Corollaries 5.4 and A.8 imply that \[\lim_{q\to 0}f_{1}(\lambda)=\sum_{i=1}^{N}\lim_{q\to 0}\tilde{r}_{\lambda,c_{i}}=\sum_{i=1}^{N}\sum_{j=l_{i+1}+1}^{l_{i}}t^{j}=\sum_{j=0}^{l_{1}}t^{j}\,.\qed\] **Lemma 7.4**.: _Let \(\lambda\) be a nonempty partition and \(\tilde{\lambda}\) be the partition \(\lambda\) without the first column (cf. Definition 2.2). Then the limit_ \[\lim_{q\to\infty}\big(f_{1}(\lambda)-qf_{1}(\tilde{\lambda})\big)\] _exists, i.e., the difference \(f_{1}(\lambda)-qf_{1}(\tilde{\lambda})\) does not contain positive powers of \(q\)._ Proof.: Denote the corners of \(\lambda\) by \[c_{1}=(k_{1},l_{1}),\ldots,c_{N}=(k_{N},l_{N})\,,\] ordering them from upper-left to lower-right, so that \(l_{1},l_{2},\ldots,l_{N}\) form a descending sequence. Suppose that \(k_{1}\neq 0\). Then the corners of \(\tilde{\lambda}\) are of the form \[c_{1}^{\prime}=(k_{1}-1,l_{1}),\ldots,c_{N}^{\prime}=(k_{N}-1,l_{N})\,.\] By Corollary 5.4 we have \[\lim_{q\to\infty}\big(f_{1}(\lambda)-qf_{1}(\tilde{\lambda})\big)=\sum_{i=1}^{N}\lim_{q\to\infty}\left(\tilde{r}_{\lambda,c_{i}}-q\tilde{r}_{\tilde{\lambda},c_{i}^{\prime}}\right)\,.\] This limit exists due to Corollary A.10. Suppose now that \(k_{1}=0\). Then the corners of \(\tilde{\lambda}\) are of the form \[c_{2}^{\prime}=(k_{2}-1,l_{2}),\ldots,c_{N}^{\prime}=(k_{N}-1,l_{N})\,.\] By Corollary 5.4 we have \[\lim_{q\to\infty}\big(f_{1}(\lambda)-qf_{1}(\tilde{\lambda})\big)=\lim_{q\to\infty}\tilde{r}_{\lambda,c_{1}}+\sum_{i=2}^{N}\lim_{q\to\infty}\left(\tilde{r}_{\lambda,c_{i}}-q\tilde{r}_{\tilde{\lambda},c_{i}^{\prime}}\right)\,.\] This limit exists due to Corollaries A.10 and A.12. Proof of Proposition 7.1.: Thanks to the Localization Theorem (Proposition 4.3) we only need to prove that for every fixed point \(\lambda\) in \(\mathrm{Hilb}_{n+1}^{\mathbb{T}}\) we have \(f_{1}(\lambda)=\mathcal{V}(\lambda)\). This is equivalent to the following equality \[f_{1}(\lambda)=\sum_{(i,j)\in\Delta_{\lambda}}q^{i}t^{j}\in\mathbb{Z}[q^{\pm},t^{\pm}]\,.\] The element \(f_{1}(\lambda)\) is a Laurent polynomial, therefore it can be written as \[f_{1}(\lambda)=\sum_{i,j\in\mathbb{Z}}a_{i,j}^{\lambda}\cdot q^{i}t^{j}\in\mathbb{Z}[q^{\pm},t^{\pm}]\] for some \(a_{i,j}^{\lambda}\in\mathbb{Z}\). We need to prove that \[a_{i,j}^{\lambda}=\begin{cases}1\text{ if }(i,j)\in\Delta_{\lambda}\,,\\ 0\text{ otherwise }\,.\end{cases} \tag{6}\] We use induction on the sum of \(\lambda\).
For the empty partition the claim is obvious. Let us focus on the inductive step. Let \(\tilde{\lambda}\) be the partition \(\lambda\) without the first column, cf. Definition 2.2. By the inductive assumption the claim holds for \(\tilde{\lambda}\). Lemma 7.3 implies that Equation (6) holds for \(i\leq 0\). For \(i\geq 1\), Lemma 7.4 implies that \[a_{i,j}^{\lambda}=a_{i-1,j}^{\tilde{\lambda}}\,.\] Therefore, for \(i\geq 1\) Equation (6) follows from the inductive assumption. **Corollary 7.5**.: _For all \(n\in\mathbb{N}\) we have_ \[\pi_{*}[\mathcal{O}_{\mathrm{Hilb}_{n,n+1}}]=[\mathcal{V}_{n+1}^{*}]\in\mathrm{K}_{\mathbb{T}}(\mathrm{Hilb}_{n+1})\,.\] Proof.: Thanks to the Localization Theorem 4.3 we only need to prove that for every fixed point \(\lambda\) in \(\mathrm{Hilb}_{n+1}^{\mathbb{T}}\) we have \(f_{0}(\lambda)=\mathcal{V}^{*}(\lambda)\). Let \[\tau^{*}:\mathbb{Z}[q^{\pm},t^{\pm}]\to\mathbb{Z}[q^{\pm},t^{\pm}]\] be the homomorphism of rings given by \(\tau^{*}(q)=q^{-1}\) and \(\tau^{*}(t)=t^{-1}\). Corollary A.6 implies that \[f_{0}(\lambda)=\tau^{*}(f_{1}(\lambda))\,.\] We also have \(\mathcal{V}^{*}(\lambda)=\tau^{*}(\mathcal{V}(\lambda))\,.\) The result follows from Proposition 7.1.

### Inductive formula

We will prove an inductive formula for the Laurent polynomials \(f_{m}(\lambda)\). **Proposition 7.6**.: _Let \(\lambda\) be a nonempty partition, \((k,l)\in C(\lambda)\) be its uppermost corner and \(\hat{\lambda}\) be the partition \(\lambda\) without the first rectangular block, cf. Definition 2.2. For an arbitrary integer \(m\) we have_ \[f_{m}(\lambda)-q^{k}t^{l}f_{m-1}(\lambda)=q^{m(k+1)}f_{m}(\hat{\lambda})-q^{k}t^{l}\cdot q^{(m-2)(k+1)}f_{m-1}(\hat{\lambda})\,.\] _Remark 7.7_.: The above formula allows for a computation of \(f_{m}\) for \(m>1\) using double induction (on \(m\) and on the length of the partition \(\lambda\)). **Lemma 7.8**.: _Let \(\lambda\) be a partition consisting of a single rectangular block, i.e. the partition \(\hat{\lambda}\) is empty. Let \((k,l)\) be its only corner. Then for an arbitrary integer \(m\) we have_ \[f_{m}(\lambda)=(q^{k}t^{l})^{m-1}\mathcal{V}(\lambda)\,.\] Proof.: Corollary 5.4 implies that \[f_{m}(\lambda)=(q^{k}t^{l})^{m-1}f_{1}(\lambda)\,.\] The lemma follows from Proposition 7.1. Proof of Proposition 7.6.: We split the proof into two cases. If the partition \(\lambda\) has only one corner then the statement reduces to Lemma 7.8. Suppose now that \(\lambda\) has more than one corner, i.e. the partition \(\hat{\lambda}\) is nonempty. Denote the corners of \(\lambda\) by \[c_{1}=(k_{1},l_{1}),\ldots,c_{N}=(k_{N},l_{N})\,,\] ordering them from upper-left to lower-right, so that \(l_{1},l_{2},\ldots,l_{N}\) form a descending sequence. Then \((k,l)=(k_{1},l_{1})\). The set of corners \(C(\hat{\lambda})\) consists of \[c_{2}^{\prime}=(k_{2}-k_{1}-1,l_{2}),\ldots,c_{N}^{\prime}=(k_{N}-k_{1}-1,l_{N})\,.\] We have \[\begin{aligned} f_{m}(\lambda)-q^{k}t^{l}f_{m-1}(\lambda)&=\sum_{i=2}^{N}(q^{k_{i}}t^{l_{i}})^{m-2}\cdot(q^{k_{i}}t^{l_{i}}-q^{k}t^{l})\cdot\tilde{r}_{\lambda,c_{i}}\\ &=\sum_{i=2}^{N}(q^{k_{i}}t^{l_{i}})^{m-2}\cdot(q^{k_{i}+k+1}t^{l_{i}}-q^{k}t^{l})\cdot\tilde{r}_{\hat{\lambda},c_{i}^{\prime}}\\ &=q^{m(k+1)}\cdot f_{m}(\hat{\lambda})-q^{k}t^{l}\cdot q^{(m-2)(k+1)}\cdot f_{m-1}(\hat{\lambda})\,.\end{aligned}\] The first and third equalities follow from Corollary 5.4. The second is a consequence of Corollary A.14.
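The rational-function manipulation in Example 7.2 can also be checked mechanically. The following SymPy snippet (an illustration, not part of the argument) verifies that the two local contributions at the corners of \(\lambda=(2,1)\) indeed sum to \(1+q+t\), i.e. to the restriction of \([\mathcal{V}_{3}]\) to that fixed point, as predicted by Proposition 7.1.

```python
# Sketch: symbolic check of Example 7.2.
import sympy as sp

q, t = sp.symbols("q t")

contrib_upper = t * (1 - q / t**2) / (1 - q / t)   # corner box (0, 1)
contrib_right = q * (1 - t / q**2) / (1 - t / q)   # corner box (1, 0)

assert sp.simplify(contrib_upper + contrib_right - (1 + q + t)) == 0
print(sp.simplify(contrib_upper + contrib_right))  # q + t + 1
```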
### Solution for a one-dimensional subtorus Let \(\mathbb{T}_{y}\subset\mathbb{T}\) be a one-dimensional subtorus acting only on the second variable, cf. Section 4.3. We solve the induction from Proposition 7.6 after restriction to \(\operatorname{K}_{\mathbb{T}_{y}}(\operatorname{Hilb}_{n+1})\). In the \(\mathbb{T}_{y}\)-equivariant K-theory, we obtain the following pushforward formula. **Theorem 7.9**.: _For an integer \(m\geq 1\) we have_ \[\pi_{*}[\mathcal{Q}^{m}]=\mathcal{P}_{m}\sum_{i=0}^{m-1}t^{-i}-\mathcal{P}_{m -1}\sum_{i=1}^{m-1}t^{-i}\in\operatorname{K}_{\mathbb{T}_{y}}(\operatorname{ Hilb}_{n+1})\,,\] _where \(\mathcal{P}_{i}\) denote the images of the power-sum polynomials under the Kirwan map, see Definition 4.9._ _Remark 7.10_.: For \(m=1\) we use the convention \(\sum_{i=1}^{0}t^{-i}=0\). Before proving the above theorem let us note several corollaries. **Corollary 7.11**.: _For an integer \(m\leq 0\) we have_ \[\pi_{*}[\mathcal{Q}^{m}]=\mathcal{P}_{m-1}\sum_{i=0}^{-m}t^{i}-\mathcal{P}_{m} \sum_{i=1}^{-m}t^{i}\in\operatorname{K}_{\mathbb{T}_{y}}(\operatorname{Hilb}_ {n+1})\,.\] Proof.: Consider the group homomorphism \(\tau:\mathbb{T}_{y}\to\mathbb{T}_{y}\) given by taking inversion. Let \(\tau^{*}\) be the induced map on \(\mathbb{T}_{y}\)-equivariant K-theory. Corollary A.6 and Localization Theorem 4.3 imply that \[\tau^{*}(\pi_{*}[\mathcal{Q}^{m}])=\pi_{*}[\mathcal{Q}^{1-m}]\,.\] Moreover \[\tau^{*}(t)=t^{-1}\,,\qquad\tau^{*}(\mathcal{P}_{m})=\mathcal{P}_{-m}\,.\] Therefore, the corollary follows from Theorem 7.9 As a consequence, we get the following pushforward formula in the non-equivariant K-theory. **Corollary 7.12**.: _For an arbitrary integer \(m\) we have_ \[\pi_{*}[\mathcal{Q}^{m}]=m\mathcal{P}_{m}-(m-1)\mathcal{P}_{m-1}\in \operatorname{K}(\operatorname{Hilb}_{n+1})\,.\] Proof.: Let \(\rho:\operatorname{K}_{\mathbb{T}_{y}}(\operatorname{Hilb}_{n+1})\to \operatorname{K}(\operatorname{Hilb}_{n+1})\) be the map forgetting the torus action. It commutes with the Kirwan map, therefore \(\rho(\mathcal{P}_{m})=\mathcal{P}_{m}\). Moreover \(\rho(t)=1\). For \(m\geq 1\) the corollary follows from Theorem 7.9, for \(m\leq 0\) from Corollary 7.11 In the equivariant K-theory with respect to the two-dimensional torus, the following holds. **Corollary 7.13**.: _For an integer \(m\geq 1\) we have_ \[\pi_{*}[\mathcal{Q}^{m}]=\mathcal{P}_{m}\sum_{i=0}^{m-1}q^{-i}t^{-i}-\mathcal{P} _{m-1}\sum_{i=1}^{m-1}q^{-i}t^{-i}+(1-q)(1-t)S_{n,m}\in\mathrm{K}_{\mathbb{T}}( \mathrm{Hilb}_{n+1})\,,\] _for a certain class \(S_{n,m}\in\mathrm{K}_{\mathbb{T}}(\mathrm{Hilb}_{n+1})\)._ Proof.: We need to prove that the class \[\tilde{S}:=\pi_{*}[\mathcal{Q}^{m}]-\left(\mathcal{P}_{m}\sum_{i=0}^{m-1}q^{-i }t^{-i}-\mathcal{P}_{m-1}\sum_{i=1}^{m-1}q^{-i}t^{-i}\right)\in\mathrm{K}_{ \mathbb{T}}(\mathrm{Hilb}_{n+1})\] is divisible by \((q-1)(t-1)\). It is enough to check divisibility after restriction to the fixed point set, i.e. to prove that for every partition \(\lambda\) we have \[(q-1)(t-1)\ |\ \tilde{S}_{|\lambda}\,.\] Theorem 7.9 implies that the class \(\tilde{S}_{|\lambda}\) is equal to zero after restriction to \(\mathrm{K}_{\mathbb{T}_{y}}(pt)\). Therefore, it is divisible by \((q-1)\). An analogous argument for the subtorus of \(\mathbb{T}_{x}\subset\mathbb{T}\) acting only on the second variable proves that \((t-1)\) also divides this polynomial. _Remark 7.14_.: Proposition 7.1 states that for every \(n\) we have \(S_{n,1}=0\). 
Using the inductive formula 7.6 one may prove that \[S_{n,2}=\frac{\Lambda^{2}\mathcal{V}}{qt}\,.\] The rest of this section is devoted to the proof of Theorem 7.9. **Definition 7.15**.: Let \(\lambda\) be a nonempty partition and \(m\) an integer. We consider the pushforward in \(\mathbb{T}_{y}\)-equivariant K-theory \[\tilde{f}_{m}(\lambda)=\left(\pi_{*}^{\mathbb{T}_{y}}[\mathcal{Q}^{m}]\right)_{|\lambda}\in\mathbb{Z}[t^{\pm}]\,.\] For the empty partition \(\lambda=\varnothing\) we set \(\tilde{f}_{m}(\varnothing)=0\). **Lemma 7.16**.: _Let \(\lambda\) be a nonempty partition, \((k,l)\) be its uppermost corner and \(\hat{\lambda}\) be the partition \(\lambda\) without the first rectangular block, cf. Definition 2.2. For an arbitrary integer \(m\) we have_ \[\tilde{f}_{m}(\lambda)-\tilde{f}_{m}(\hat{\lambda})=t^{l}\big{(}\tilde{f}_{m-1}(\lambda)-\tilde{f}_{m-1}(\hat{\lambda})\big{)}\,.\] Proof.: By definition, the polynomial \(\tilde{f}_{m}(\lambda)\) can be obtained from \(f_{m}(\lambda)\) using substitution \(q=1\), i.e. \[\tilde{f}_{m}(\lambda)=f_{m}(\lambda)_{q:=1}\,.\] The lemma is a consequence of Proposition 7.6. **Lemma 7.17**.: _Let \(\lambda\) be a nonempty partition, \((k,l)\) its uppermost corner and \(\hat{\lambda}\) the partition without the first rectangular block, cf. Definition 2.2. For an arbitrary integer \(m\geq 1\), we have_ \[\tilde{f}_{m}(\lambda)-\tilde{f}_{m}(\hat{\lambda})=t^{(m-1)l}\cdot(k+1)\cdot(1+t+\cdots+t^{l})\,. \tag{7}\] Proof.: We have \[\tilde{f}_{m}(\lambda)-\tilde{f}_{m}(\hat{\lambda})=t^{(m-1)l}\cdot\big{(}\tilde{f}_{1}(\lambda)-\tilde{f}_{1}(\hat{\lambda})\big{)}\] \[=t^{(m-1)l}\cdot\big{(}\mathcal{V}(\lambda)-\mathcal{V}(\hat{\lambda})\big{)}_{q:=1}\,,\] where the first equality follows from Lemma 7.16 and the second from Proposition 7.1. The Young diagram of \(\hat{\lambda}\) is contained in the Young diagram of \(\lambda\). Their difference has exactly \(k+1\) boxes in each of the first \(l+1\) rows. Therefore \[(\mathcal{V}(\lambda)-\mathcal{V}(\hat{\lambda}))_{q:=1}=(k+1)\cdot(1+t+\cdots+t^{l})\,.\qed\] Proof of Theorem 7.9.: For \(m=1\), the theorem follows from Proposition 7.1. Suppose that \(m\geq 2\). By the Localization Theorem (Corollary 4.7) it is enough to check that the equality holds after restriction to the fixed point set of the two-dimensional torus, i.e. that \[\tilde{f}_{m}(\lambda)=\mathcal{P}_{m}(\lambda)\left(1+t^{-1}+\cdots+t^{-(m-1)}\right)-\frac{\mathcal{P}_{m-1}(\lambda)}{t}\big{(}1+t^{-1}+\cdots+t^{-(m-2)}\big{)}\,. \tag{8}\] We prove the above formula by induction on the number of corners in \(\lambda\). If there is exactly one corner the above formula follows from Lemma 7.8. Suppose that the partition \(\lambda\) has more than one corner. In the inductive step it is enough to check that polynomial (8) satisfies the inductive formula (7). Let \((k,l)\) be the uppermost corner of \(\lambda\). Denote by \(H\) the sum of the weights from the first rectangular block of \(\lambda\), i.e. \[H:=(\mathcal{V}(\lambda)-\mathcal{V}(\hat{\lambda}))_{q:=1}=(k+1)(1+t+\cdots+t^{l})\,.\] We use the notation \(\mathcal{P}_{N}(H)\) for the sum of the \(N\)-th powers of the weights appearing in the first rectangular block. After substitution of the polynomial (8) into the left hand side of formula (7) we obtain \[\mathcal{P}_{m}(H)\big{(}1+t^{-1}+\cdots+t^{-(m-1)}\big{)}-\frac{\mathcal{P}_{m-1}(H)}{t}\big{(}1+t^{-1}+\cdots+t^{-(m-2)}\big{)}\,.
\tag{9}\] For every positive integer \(N\) we have \[\mathcal{P}_{N}(H)\big{(}1+t^{-1}+...+t^{-(N-1)}\big{)}=(k+1)\sum_{i=1-N}^{lN}t^{i}\,.\] After this substitution, Equation (9) becomes the right-hand side of (7). ## 8. Transition to cohomology In this section we transfer our results to cohomology. Our aim is to prove the following theorem. **Theorem 8.1**.: _In the \(\mathbb{T}_{y}\)-equivariant cohomology of the Hilbert scheme \(\mathrm{H}^{2m}_{\mathbb{T}_{y}}(\mathrm{Hilb}_{n+1})\), we have_ \[\pi_{*}\,(c_{1}(\mathcal{Q}_{n})^{m})=\sum_{k=0}^{m}a_{k,m}\cdot t^{m-k}\cdot P_{k}^{n+1}\,,\] _where the \(a_{k,m}\) are the coefficients of the polynomial_ \[\sum_{k=0}^{m}a_{k,m}\cdot x^{k}=x^{m}(x+1)-(x-1)^{m}x\,.\] **Corollary 8.2**.: _In non-equivariant cohomology, we have_ \[\pi_{*}\,(c_{1}(\mathcal{Q}_{n})^{m})=(m+1)\cdot P_{m}^{n+1}\in\mathrm{H}^{2m}(\mathrm{Hilb}_{n+1})\,.\] To prove the above theorem we use the equivariant Grothendieck-Riemann-Roch theorem [1]. Let \(\operatorname{ch}^{\mathbb{T}_{y}}\) be the \(\mathbb{T}_{y}\)-equivariant Chern character and \(\operatorname{td}^{\mathbb{T}_{y}}\) the \(\mathbb{T}_{y}\)-equivariant Todd class. The Todd class is a multiplicative characteristic class corresponding to the power series \[\frac{x}{1-e^{-x}}=1+\frac{x}{2}+\cdots\,.\] The Grothendieck-Riemann-Roch theorem implies that for an element \(\mathcal{E}\in\operatorname{K}_{\mathbb{T}_{y}}(\operatorname{Hilb}_{n,n+1})\), we have \[\operatorname{ch}^{\mathbb{T}_{y}}(\pi_{*}\mathcal{E})=\pi_{*}(\operatorname{ch}^{\mathbb{T}_{y}}(\mathcal{E})\cdot\operatorname{td}^{\mathbb{T}_{y}}(T_{\pi}))\in\operatorname{H}^{*}_{\mathbb{T}_{y}}(\operatorname{Hilb}_{n+1})\,, \tag{10}\] where \(T_{\pi}\) is the relative tangent bundle to the projection \(\pi\), i.e. \[\operatorname{td}^{\mathbb{T}_{y}}(T_{\pi})=\frac{\operatorname{td}^{\mathbb{T}_{y}}(T\operatorname{Hilb}_{n,n+1})}{\operatorname{td}^{\mathbb{T}_{y}}(\pi^{*}T\operatorname{Hilb}_{n+1})}\,.\] The Todd class corresponds to a power series starting with \(1\), therefore \[\operatorname{td}^{\mathbb{T}_{y}}(T_{\pi})=1+\operatorname{H}^{>0}_{\mathbb{T}_{y}}(\operatorname{Hilb}_{n,n+1})\,. \tag{11}\] We split the proof of Theorem 8.1 into several lemmas. **Lemma 8.3**.: _Let \(\mathcal{L}\) be a \(\mathbb{T}_{y}\)-equivariant line bundle on \(\operatorname{Hilb}_{n,n+1}\). Consider the class_ \[\operatorname{ch}^{\mathbb{T}_{y}}\big{(}\pi_{*}\big{(}(\mathcal{L}-1)^{m}\big{)}\big{)}\in\operatorname{H}^{*}_{\mathbb{T}_{y}}(\operatorname{Hilb}_{n+1})\,.\] _Its homogeneous part of degree \(2m\) is equal to \(\pi_{*}\left(c_{1}(\mathcal{L})^{m}\right)\)._ Proof.: Formula (10) implies that \[\operatorname{ch}^{\mathbb{T}_{y}}\big{(}\pi_{*}\big{(}(\mathcal{L}-1)^{m}\big{)}\big{)}=\pi_{*}\big{(}\operatorname{ch}^{\mathbb{T}_{y}}\big{(}(\mathcal{L}-1)^{m}\big{)}\cdot\operatorname{td}^{\mathbb{T}_{y}}(T_{\pi})\big{)}\,.\] The Chern character is multiplicative and additive, therefore \[\operatorname{ch}^{\mathbb{T}_{y}}\big{(}(\mathcal{L}-1)^{m}\big{)}=\big{(}e^{c_{1}(\mathcal{L})}-1\big{)}^{m}=\begin{cases}0\text{ in degrees }0,1,\dots,2m-1\\ c_{1}(\mathcal{L})^{m}\text{ in degree }2m.\end{cases}\] Equation (11) implies that the same formula is true after multiplication with the Todd class, i.e.
\[\operatorname{ch}^{\mathbb{T}_{y}}\big{(}(\mathcal{L}-1)^{m}\big{)}\cdot \operatorname{td}^{\mathbb{T}_{y}}(T_{\pi})=\begin{cases}0\text{ in degrees }0,1,\dots,2m-1\\ c_{1}(\mathcal{L})^{m}\text{ in degree }2m.\end{cases}\] The lemma follows from the fact that the pushforward \(\pi_{*}\) preserves the grading. **Lemma 8.4**.: _Let \(m\) be a non-negative integer. There exist rational numbers \(a_{0,m},\dots,a_{m,m}\) such that for an arbitrary \(n\in\mathbb{N}\) we have_ \[\pi_{*}\big{(}c_{1}(\mathcal{Q}_{n})^{m}\big{)}=\sum_{k=0}^{m}a_{k,m}\cdot t^{ m-k}\cdot P_{k}^{n+1}\in\operatorname{H}^{2m}_{\mathbb{T}_{y}}(\operatorname{Hilb}_{n +1})\,.\] Proof.: Theorem 7.9 and Corollary 7.5 imply that there exist polynomials \(A_{s,m}\in\mathbb{Z}[x]\) such that, for an arbitrary \(n\in\mathbb{N}\), we have \[\pi_{*}\big{(}(\mathcal{Q}_{n}-1)^{m}\big{)}=\sum_{s=-1}^{m}A_{s,m}(t^{-1}) \cdot\mathcal{P}_{s}^{n+1}\in\operatorname{K}_{\mathbb{T}_{y}}(\operatorname{ Hilb}_{n+1})\,.\] The Chern character satisfies \[\operatorname{ch}^{\mathbb{T}_{y}}(A_{s,m}(t^{-1}))=A_{s,m}(e^{-t})\,,\qquad \operatorname{ch}^{\mathbb{T}_{y}}(\mathcal{P}_{s}^{n+1})=e^{sP_{1}^{n+1}}=\sum_ {k=0}^{\infty}\frac{s^{k}\cdot P_{k}^{n+1}}{k!}\,.\] Therefore, we have the following equality in \(\operatorname{H}^{*}_{\mathbb{T}_{y}}(\operatorname{Hilb}_{n+1})\). \[\operatorname{ch}^{\mathbb{T}_{y}}\big{(}\pi_{*}\big{(}(\mathcal{Q }_{n}-1)^{m}\big{)}\big{)} =\sum_{s=-1}^{m}\operatorname{ch}^{\mathbb{T}_{y}}(A_{s,m}(t^{-1}) )\cdot\operatorname{ch}^{\mathbb{T}_{y}}(\mathcal{P}_{s}^{n+1})\] \[=\sum_{s=-1}^{m}\left(A_{s,m}(e^{-t})\cdot\sum_{k=0}^{\infty} \frac{s^{k}\cdot P_{k}^{n+1}}{k!}\right)\] \[=\sum_{k=0}^{\infty}\left(P_{k}^{n+1}\cdot\sum_{s=-1}^{m}\frac{s^ {k}\cdot A_{s,m}(e^{-t})}{k!}\right)\,.\] The expression \[B_{k,m}=\sum_{s=-1}^{m}\frac{s^{k}\cdot A_{s,m}(e^{-x})}{k!}\in\mathbb{Q}[[x]]\] is a power series. Thanks to Lemma 8.3 the number \(a_{k,m}\) is the coefficient of \(B_{k,m}\) corresponding to \(x^{m-k}\). _Remark 8.5_.: The polynomials \(A_{s,m}\) from the proof of Lemma 8.4 can be computed explicitly. For \(s\geq 1\) we have \[A_{s,m}(x)=\binom{m}{s}\cdot(-1)^{s}\cdot(1+x+\cdots+x^{s})-\binom{m}{s+1} \cdot(-1)^{s+1}\cdot(x+\cdots+x^{s})\,.\] For \(s\in\{-1,0\}\) we have \(A_{0,m}(x)=0\) and \(A_{-1,m}(x)=(-1)^{m}\). **Lemma 8.6**.: _Let \(\lambda=(n+1)\) be the partition of \(n+1\) consisting of a single summand. In the \(\mathbb{T}\)-equivariant cohomology we have_ \[\pi_{*}\big{(}c_{1}(\mathcal{Q}_{n})^{m}\big{)}_{|\lambda}=n^{m}(n+1)\cdot t^ {m}\in\mathbb{Z}[t,q]\,.\] _The same formula is true in the \(\mathbb{T}_{y}\)-equivariant cohomology._ Proof.: The Young diagram of the partition \(\lambda\) has only one corner \(c\). The Lefschetz-Riemann-Roch formula in cohomology [1, 2] implies that \[\pi_{*}\left(c_{1}(\mathcal{Q}_{n})^{m}\right)_{|\lambda}=\frac{\operatorname {eu}^{\operatorname{H}}(T_{\lambda}\operatorname{Hilb}_{n+1})}{\operatorname {eu}^{\operatorname{H}}(T_{\lambda,c}\operatorname{Hilb}_{n+1})}\cdot c_{1}( \mathcal{Q}_{n})_{|\lambda}^{m}=(n+1)\cdot(nt)^{m}=n^{m}(n+1)\cdot t^{m}\,.\] Here \(\operatorname{eu}^{\operatorname{H}}\) denotes the equivariant cohomological Euler class. The result in \(\mathbb{T}_{y}\)-equivariant case follows from the formula in \(\mathbb{T}\)-equivariant cohomology. Proof of Theorem 8.1.: Fix a positive integer \(m\). Consider a polynomial \[W_{m}(x)=\sum_{k=0}^{m}a_{k,m}x^{k}\in\mathbb{Q}[x]\] where \(a_{k,m}\) are the rational numbers from Lemma 8.4. 
We need to prove that \[W_{m}(x)=x^{m}(x+1)-(x-1)^{m}x\,. \tag{12}\] Let \(\lambda=(n+1)\) be the partition of \(n+1\) consisting of a single summand. It corresponds to the vertical Young diagram. We have \[\Big{(}\sum_{k=0}^{m}a_{k,m}\cdot t^{m-k}\cdot P_{k}^{n+1}\Big{)} _{|\lambda} =\sum_{k=0}^{m}a_{k,m}\cdot t^{m-k}\cdot p_{k}(0,t,2t,\ldots,nt)\] \[=(W_{m}(0)+W_{m}(1)+\cdots+W_{m}(n))\cdot t^{m}\,.\] Lemmas 8.4 and 8.6 provide an alternative way to compute this element. It follows that for all \(n\in\mathbb{N}\) we have \[W_{m}(0)+W_{m}(1)+\cdots+W_{m}(n)=n^{m}(n+1)\,.\] Therefore for all \(n\in\mathbb{N}\) we have \[W_{m}(n)=n^{m}(n+1)-(n-1)^{m}n\,,\] which implies formula (12). ## 9. Formula for Nakajima's creation operators The main goal of this paper is to answer the question of Nakajima [20, Question 9.6], i.e. to compute classes \[\mathfrak{q}_{i}(c_{k}(\mathcal{V}_{n}))\in\mathrm{H}^{k+i-1}(\mathrm{Hilb}_{n +i})\] in terms of the Kirwan map. We also consider a variant of this question in equivariant cohomology. ### Operator \(\mathfrak{q}_{1}\) Let us recall that we consider the projection maps \[p:\mathrm{Hilb}_{n,n+1}\to\mathrm{Hilb}_{n}\,,\qquad\pi:\mathrm{Hilb}_{n,n+1} \to\mathrm{Hilb}_{n+1}\.\] **Proposition 9.1**.: _We have the following equality in equivariant K-theory \(\mathrm{K}_{\mathbb{T}}(\mathrm{Hilb}_{n,n+1})\)._ \[[p^{*}\mathcal{V}_{n}]=[\pi^{*}\mathcal{V}_{n+1}]-[\mathcal{Q}_{n}]\,.\] _We have the following equalities in equivariant cohomology \(\mathrm{H}^{*}_{\mathbb{T}}(\mathrm{Hilb}_{n,n+1})\)._ \[p^{*}c_{\bullet}(\mathcal{V}_{n}) =\pi^{*}\big{(}c_{\bullet}(\mathcal{V}_{n+1})\big{)}\cdot(1+c_{1} (\mathcal{Q}_{n}))^{-1}\,,\] \[p^{*}P_{k}^{n} =\pi^{*}P_{k}^{n+1}-c_{1}(\mathcal{Q}_{n})^{k}\,,\] _where \(c_{\bullet}(-)\) denotes the full Chern class._ Proof.: All equations follow from the short exact sequence \[0\to\mathcal{Q}_{n}\to\pi^{*}\mathcal{V}_{n+1}\to p^{*}\mathcal{V}_{n}\to 0\q _Remark 9.5_.: An analogous formula may be obtained in the equivariant cohomology \(\mathrm{H}^{k}_{\mathbb{T}_{y}}(\mathrm{Hilb}_{n+1})\) by using Theorem 8.1 instead of Corollary 8.2. Chern classes are images of elementary symmetric polynomials under the Kirwan map. It turns out that the power sum basis is better suited for our computations. For a sequence \(\lambda=(\lambda_{1},\dots,\lambda_{l})\) and a subset \(A\subset\{1,\dots,l\}\), let \(\lambda_{A}\) be the sequence obtained by removing indices corresponding to elements of \(A\). We use the notation \[l(A)=\sum_{i\in A}\lambda_{i}\,.\] For the empty subset, we let \(l(\varnothing)=0\). Let us recall that \(P_{0}^{n}=n\) and \(P_{\varnothing}^{n}=1\). **Theorem 9.6**.: _Let \(k\geq 0\). The following holds in the nonequivariant cohomology \(\mathrm{H}^{k}(\mathrm{Hilb}_{n+1})\)._ \[\mathfrak{q}_{1}(P_{k}^{n})=(n-k)\cdot P_{k}^{n+1}\,.\] _More generally, let \(\lambda=(\lambda_{1},\dots,\lambda_{l})\) be a sequence of non-negative integers. Then_ \[\mathfrak{q}_{1}(P_{\lambda}^{n})=\sum_{A\subseteq\{1,\dots,l\}}(-1)^{|A|}(l (A)+1)\cdot P_{\lambda_{A}}^{n+1}\cdot P_{l(A)}^{n+1}\,.\] Proof.: The first part follows from Corollaries 9.2 and 8.2. The second part uses the fact that \[p^{*}P_{\lambda}^{n}=\sum_{A\subseteq\{1,\dots,l\}}(-1)^{|A|}\cdot\pi^{*}P_{ \lambda_{A}}^{n+1}\cdot c_{1}(\mathcal{Q}_{n})^{l(A)}\in\mathrm{H}^{*}( \mathrm{Hilb}_{n,n+1})\,,\] which is a consequence of Proposition 9.1. Analogously, one may describe the action of \(\mathfrak{q}_{1,m}^{K}\) operator. 
**Proposition 9.7**.: _The following holds in the nonequivariant K-theory \(\mathrm{K}(\mathrm{Hilb}_{n+1})\)._ \[\mathfrak{q}_{1,0}^{K}(\mathcal{P}_{k}^{n})=\mathcal{P}_{k}^{n+1}\cdot \mathcal{P}_{-1}^{n+1}-k\mathcal{P}_{k}^{n+1}+(k-1)\mathcal{P}_{k-1}^{n+1}\,,\] _More generally_ \[\mathfrak{q}_{1,m}^{K}(\mathcal{P}_{k}^{n})=\mathcal{P}_{k}^{n+1}\cdot\big{(} m\mathcal{P}_{m}^{n+1}-(m-1)\mathcal{P}_{m-1}^{n+1}\big{)}-(k+m)\mathcal{P}_{k+m} ^{n+1}+(k+m-1)\mathcal{P}_{k+m-1}^{n+1}\,.\] Proof.: We have \[\mathfrak{q}_{1,m}^{K}(\mathcal{P}_{k}^{n}) =\pi_{*}\big{(}(\pi^{*}\mathcal{P}_{k}^{n+1}-\mathcal{Q}_{n}^{k}) \cdot\mathcal{Q}_{n}^{m}\big{)}\] \[=\mathcal{P}_{k}^{n+1}\pi_{*}(\mathcal{Q}_{n}^{m})-\pi_{*}( \mathcal{Q}_{n}^{k+m})\,.\] The result follows from Corollary 7.12. _Remark 9.8_.: An analogous formula in the equivariant K-theory \(\mathrm{K}_{\mathbb{T}_{y}}(\mathrm{Hilb}_{n+1})\) can be stated using Theorem 7.9 instead of Corollary 7.12. _Remark 9.9_.: Relation between characteristic classes of the tautological bundle and Nakajima's operators is studied in [11]. There an arbitrary smooth irreducible surface \(X\) is considered. The author considers operators on \(\mathrm{H}^{*}(\mathrm{Hilb}_{n}(X))\) associated with vector bundles on \(X\) - every bundle on \(X\) canonically determines a vector bundle on \(\mathrm{Hilb}_{n}(X)\), which acts on the cohomology of \(\mathrm{Hilb}_{n}(X)\) by multiplication with its total Chern class. The author describes the action of these operators on the Nakajima-Grojnowski basis of \(\mathrm{H}^{*}(\mathrm{Hilb}_{n}(X))\). For the affine plane the action of the tautological bundle is described by [11, Theorems 4.6 and 4.10], see also [16, Remark 54]. In these formulas generating function for characteristic classes of the tautological bundle in terms of Nakajima's basis is computed. For a power sum polynomial [15, Theorem 4.10] implies that \[P_{k}^{n}=\left(\frac{(-1)^{k}}{n!\cdot(k+1)}\cdot\prod_{i=n-k+1}^{n}i\right) \cdot\mathfrak{q}_{k+1}\circ\mathfrak{q}_{1}^{n-k}(\mathbb{1})\,.\] Our formula (13) may be deduced as a corollary. The remaining formulas are, up to our knowledge, new results. For Chern classes \(c_{n}(\mathcal{V})\) or power sum polynomials \(P_{\lambda}^{n}\) coefficients in the Nakajima basis are too complicated to use the straightforward approach which worked for \(P_{k}^{n}\). Formulas in the equivariant setting are independent of the Lehn results as [15, Theorem 4.10] does not hold in the equivariant cohomology. ### Higher operators Formulas for higher Nakajima's operators in the nonequivariant cohomology or \(\mathbb{T}_{y}\)-equivariant cohomology may be deduced from formulas for \(\mathfrak{q}_{1}\). The auxiliary operator \[\rho:\mathrm{H}_{\mathbb{T}}^{*}(\mathrm{Hilb}_{n})\to\mathrm{H}_{\mathbb{T}}^ {*+1}(\mathrm{Hilb}_{n+1})\] is defined in [14, definition 33]. Thanks to [14, corollary 30 and theorem 34] it satisfies \[\rho(\mathcal{E})=(-1)\cdot\pi_{*}(c_{1}(\mathcal{Q}_{n})\cdot p^{*} \mathcal{E})\,.\] Therefore, a reasoning analogous to the one in the proof of Theorem 9.6 implies the following proposition. **Proposition 9.10**.: _In the nonequivariant cohomology \(\mathrm{H}^{k}(\mathrm{Hilb}_{n+1})\) the following holds._ \[\rho(P_{k}^{n})=(k+2)\cdot P_{k+1}^{n+1}-2\cdot P_{k}^{n+1}\cdot P_{1}^{n+1}\,.\] _More generally, let \(\lambda=(\lambda_{1},\dots,\lambda_{l})\) be a sequence of non-negative integers. 
Then_ \[\rho(P_{\lambda}^{n})=\sum_{A\subseteq\{1,\dots,l\}}(-1)^{|A|+1}(l(A)+2)\cdot P _{\lambda_{A}}^{n+1}\cdot P_{l(A)+1}^{n+1}\,.\] To compute the higher operators we use the following inductive result. **Theorem 9.11** ([14, Theorem 34]).: _For \(i\geq 2\), we have_ \[\mathfrak{q}_{i}=\frac{\rho\circ\mathfrak{q}_{i-1}-\mathfrak{q}_{i-1}\circ \rho}{i-1}\,.\] _This result is valid also in the equivariant setting._ Our formulas (Theorem 9.6 and Proposition 9.10) allow for an inductive computation of the Nakajima's operators in terms of the Kirwan map. They yield a formula for the image of an arbitrary symmetric polynomial written in the power-sum basis. **Theorem 9.12**.: _Let \(\lambda=(\lambda_{1},\dots,\lambda_{l})\) be a sequence of nonnegative integers and \(m\) a positive integer. Then_ \[\mathfrak{q}_{m}(P_{\lambda}^{n})=(-1)^{m+1}\cdot\sum_{A\subseteq\{1,\dots,l \}}(-1)^{|A|}m^{|A|}\cdot(l(A)+m)\cdot P_{\lambda_{A}}^{n+m}\cdot P_{l(A)+m-1 }^{n+m}\,.\] **Corollary 9.13**.: _let \(k\geq 0\) be a nonegative integer and \(m>0\) a positive integer. Then_ \[\mathfrak{q}_{m}(P_{k}^{n})=(-1)^{m+1}\cdot\big{(}m\cdot P_{k}^{n+m}\cdot P_{ m-1}^{n+m}-m(m+k)\cdot P_{k+m-1}^{n+m}\big{)}\,.\] To make the proof more readable we omit superscript in the notation of power sum elements \(P\). We use the notation \([l]\) for the set \(\{1,\dots,l\}\). We write \(\lambda_{A}\), \(k\) for a sequence \(\lambda_{A}\) with one added element \(k\), i.e. \[P_{\lambda_{A},\,k}:=P_{\lambda_{A}}\cdot P_{k}\,.\] We need the following lemma. **Lemma 9.14**.: _Let \(\lambda=(\lambda_{1},\dots,\lambda_{l})\) be a sequence of non-negative integers and \(C\subseteq[l]\) a subset. For an arbitrary positive number \(m\) we have_ \[\sum_{A\subseteq C}\big{(}m^{|A|}\cdot l(A)\big{)}=\sum_{A\subseteq C}\big{(} m^{|A|+1}\cdot l(C\setminus A)\big{)}=l(C)\cdot m(1+m)^{|C|-1}\,.\] Proof.: we have \[\sum_{A\subseteq C}m^{|A|}\cdot l(A)=\sum_{A\subseteq C}\Big{(} m^{|A|}\cdot\sum_{i\in A}\lambda_{i}\Big{)}=\sum_{i\in C}\Big{(}\lambda_{i} \cdot\sum_{i\in A\subseteq C}m^{|A|}\Big{)}=\\ =\Big{(}\sum_{i\in C}\lambda_{i}\Big{)}\cdot\Big{(}\sum_{A^{ \prime}\subseteq C\setminus\{*\}}m^{|A^{\prime}|+1}\Big{)}=l(C)\cdot m\cdot(1+ m)^{|C|-1}\,.\] On the other hand \[\sum_{A\subseteq C}m^{|A|+1}\cdot l(C\setminus A)=\sum_{A\subseteq C }\Big{(}m^{|A|}\cdot\sum_{i\notin A}\lambda_{i}\Big{)}=\sum_{i\in C}\Big{(} \lambda_{i}\cdot\sum_{i\notin A\subseteq C}m^{|A|+1}\Big{)}=\\ =\Big{(}\sum_{i\in C}\lambda_{i}\Big{)}\cdot\Big{(}\sum_{A^{ \prime}\subseteq C\setminus\{*\}}m^{|A^{\prime}|+1}\Big{)}=l(C)\cdot m\cdot(1+ m)^{|C|-1}\,.\qed\] Proof of Theorem 9.12.: We proceed by induction on \(m\). For \(m=1\) theorem simplifies to Theorem 9.4. Suppose that the theorem holds for \(m\). We will prove that it holds also form \(m+1\). We want to use Theorem 9.11. First, let us consider the summand \(\rho\circ\mathfrak{q}_{m}(P_{\lambda})\). The inductive assumption implies that it is equal to \[(-1)^{m+1}\cdot\sum_{A\subseteq[l]}(-1)^{|A|}m^{|A|}(l(A)+m)\cdot\rho\big{(}P _{\lambda_{A},\,l(A)+m-1}\big{)}\,. \tag{14}\] We use Proposition 9.10 to compute summands \(\rho(P_{\lambda_{A},\,l(A)+m-1})\). It yields a sum indexed by subsets of the set \[([l]\setminus A)\cup\{\infty\}\,,\] where \(\{\infty\}\) corresponds to factor \(P_{l(A)+m-1}\). Every such subset \(\tilde{B}\) is uniquely determined by a subset \(B\subseteq[l]\) such that \(A\cap B=\varnothing\) and information whether the additional point \(\{\infty\}\) belongs to \(\tilde{B}\). 
Therefore, the sum computing \(\rho(P_{\lambda_{A},l(A)+m-1})\) may be split into two sums indexed by the set \[\{B\subseteq[l]|\,A\cap B=\varnothing\}\,.\] We substitute Proposition 9.10 into equation (14) and perform the mentioned splitting. The sum corresponding to \(\infty\notin\tilde{B}\) is of the form \[(-1)^{m+1}\cdot\sum_{\begin{subarray}{c}A,B\subseteq[l],\\ A\cap B=\varnothing\end{subarray}}(-1)^{|A|+|B|+1}m^{|A|}(l(A)+m)(l(B)+2)\cdot P_{\lambda_{A\cup B},\,l(B)+1,\,l(A)+m-1}\,. \tag{15}\] The second sum, corresponding to \(\infty\in\tilde{B}\) is equal to \[(-1)^{m+1}\cdot\sum_{\begin{subarray}{c}A,B\subseteq[l],\\ A\cap B=\varnothing\end{subarray}}(-1)^{|A|+|B|+2}m^{|A|}(l(A)+m)(l(A)+l(B)+m+1)\cdot P_{\lambda_{A\cup B},\,l(A)+l(B)+m}\,. \tag{16}\] We have \[\rho\circ\mathfrak{q}_{m}(P_{\lambda})=(15)+(16)\,.\] We apply an analogous procedure to the expression \(\mathfrak{q}_{m}\circ\rho(P_{\lambda})\). By Proposition 9.10 it is equal to \[\sum_{B\subseteq[l]}(-1)^{|B|+1}(l(B)+2)\cdot\mathfrak{q}_{m}\big{(}P_{\lambda_{B},\,l(B)+1}\big{)}\,.\] We use the inductive assumption and split the obtained sum into two parts. The one corresponding to subsets not containing the additional point is equal to \[(-1)^{m+1}\sum_{\begin{subarray}{c}A,B\subseteq[l],\\ A\cap B=\varnothing\end{subarray}}(-1)^{|A|+|B|+1}m^{|A|}(l(A)+m)(l(B)+2)\cdot P_{\lambda_{A\cup B},\,l(B)+1,\,l(A)+m-1}\,. \tag{17}\] The other one is equal to \[(-1)^{m+1}\cdot\sum_{\begin{subarray}{c}A,B\subseteq[l],\\ A\cap B=\varnothing\end{subarray}}(-1)^{|A|+|B|+2}m^{|A|+1}(l(B)+2)(l(A)+l(B)+m+1)\cdot P_{\lambda_{A\cup B},\,l(A)+l(B)+m}\,. \tag{18}\] We have \[\mathfrak{q}_{m}\circ\rho(P_{\lambda})=(17)+(18)\,.\] The summands (17) and (15) are identical. By Theorem 9.11 we need to compute \[m\cdot\mathfrak{q}_{m+1}(P_{\lambda})=\rho\circ\mathfrak{q}_{m}(P_{\lambda})-\mathfrak{q}_{m}\circ\rho(P_{\lambda})=(16)-(18)\,. \tag{19}\] Grouping terms with the same \(A\cup B\) we obtain that it is equal to \[(-1)^{m+1}\sum_{C\subseteq[l]}\Big{(}(-1)^{|C|}(l(C)+m+1)P_{\lambda_{C},\,l(C)+m}\sum_{A\subseteq C}\big{(}m^{|A|}(l(A)+m)-m^{|A|+1}(l(B)+2)\big{)}\Big{)}\] where \(B=C\setminus A\). Lemma 9.14 implies that \[\sum_{\begin{subarray}{c}A\subseteq C,\\ B=C\setminus A\end{subarray}}\big{(}m^{|A|}l(A)-m^{|A|+1}l(B)\big{)}=0\,.\] On the other hand \[\sum_{A\subseteq C}\big{(}m^{|A|+1}-2m^{|A|+1}\big{)}=(-m)\cdot\sum_{A\subseteq C}m^{|A|}=(-m)(1+m)^{|C|}\,.\] The theorem follows from formula (19). ## Appendix A Combinatorics of rational functions Let \(\lambda\) be a partition of \(n\) and let \((k,l)\) be its corner. We introduced the notation (cf. Section 5.4) \[r_{\lambda,(k,l)}:=\frac{\operatorname{eu}(T_{\lambda}\operatorname{Hilb}_{n+1})}{\operatorname{eu}(T_{\lambda,(k,l)}\operatorname{Hilb}_{n,n+1})}\in\mathbb{Z}(q,t)\,.\] Both Euler classes in the expression above are easy to compute - they are products of the form \(\prod_{(i,j)}(1-q^{-i}t^{-j})\), where \((i,j)\) are tangent weights. Recall from Section 4 that most of the tangent weights at the fixed point \(\lambda\) are identical in \(\operatorname{Hilb}_{n+1}\) and in \(\operatorname{Hilb}_{n,n+1}\), so factors corresponding to identical weights cancel out. What is left in the numerator are the tangent weights at \(\operatorname{Hilb}_{n+1}\) which are not tangent weights in the nested Hilbert scheme - each of them corresponds to a box below or to the left of the corner \((k,l)\).
For boxes below the corner, the numerator weights are of the form \((-a,b+1)\) and the denominator weights of the form \((-a,b)\), while for the boxes to the left the numerator weights are \((a+1,-b)\) and the denominator weights are \((a,-b)\). Hence \(r_{\lambda,(k,l)}\) decomposes into the following product: \[r_{\lambda,(k,l)}=\prod_{d=0}^{l-1}\frac{1-q^{a_{d}}t^{-b_{d}-1}}{1-q^{a_{d}}t ^{-b_{d}}}\prod_{s=0}^{k-1}\frac{1-q^{-a_{s}-1}t^{b_{s}}}{1-q^{-a_{s}}t^{b_{s}} }=\prod_{d=0}^{l-1}\frac{1}{t}\cdot\frac{q^{a_{d}}-t^{b_{d}+1}}{q^{a_{d}}-t^{b _{d}}}\prod_{s=0}^{k-1}\frac{1}{q}\cdot\frac{q^{a_{s}+1}-t^{b_{s}}}{q^{a_{s}}-t ^{b_{s}}}\,.\] where \(a_{d}=a_{\lambda,(k,d)}\), \(b_{d}=b_{\lambda,(k,d)}\), \(a_{s}=a_{\lambda,(s,l)}\), \(b_{s}=b_{\lambda,(s,l)}\). Factoring out the \(\frac{1}{q},\frac{1}{t}\) terms one gets \[r_{\lambda,(k,l)}=\frac{1}{q^{k}t^{l}}\prod_{d=0}^{l-1}\frac{q^{a_{d}}-t^{b_{d }+1}}{q^{a_{d}}-t^{b_{d}}}\prod_{s=0}^{k-1}\frac{q^{a_{s}+1}-t^{b_{s}}}{q^{a_{ s}}-t^{b_{s}}}\,.\] _Remark A.1_.: In the case \(k=0\) or \(l=0\) we use convention \(\prod_{s=0}^{-1}(\dots)=1\). Let us introduce two functions, corresponding to the two types of products which appear in the expression for \(r_{\lambda,(k,l)}\). **Definition A.2**.: For a pair of non-negative integers \((a,b)\), we define rational functions \[W_{a,b}(q,t):=\frac{q^{a+1}-t^{b}}{q^{a}-t^{b}}\,, U_{a,b}(q,t):=\frac{q^{a}-t^{b+1}}{q^{a}-t^{b}}\,.\] Given a nonempty partition \(\lambda\) and a box \((k,l)\) in the Young diagram of \(\lambda\), we let \[W_{\lambda,(k,l)}(q,t):= W_{a_{\lambda,(k,l)},b_{\lambda,(k,l)}}(q,t)\,, U_{\lambda,(k,l)}(q,t):= U_{a_{\lambda,(k,l)},b_{\lambda,(k,l)}}(q,t)\,.\] Let us note some basic properties of these functions, all of which are checked by an easy computation. **Proposition A.3**.: 1. _We have_ \[W_{a,b}(q,t)=U_{b,a}(t,q)\,.\] 2. _For_ \(a\neq 0\)_, we have_ \[\lim_{q\to 0}W_{a,b}(q,t)=1\,, \lim_{q\to 0}U_{a,b}(q,t)=t\,, \lim_{q\to\infty}U_{a,b}(q,t)=1\,.\] 3. _We have_ \[W_{a,b}(q,t)=qW_{a,b}(q^{-1},t^{-1})\,, U_{a,b}(q,t)=tU_{a,b}(q^{-1},t^{-1})\,.\] **Definition A.4**.: Let \(\lambda\) be a nonempty partition and let \((k,l)\) be a corner in the Young diagram of \(\lambda\). Let \[R_{\lambda,(k,l)}(q,t):=\prod_{d=0}^{l-1}U_{\lambda,(k,d)}(q,t)\cdot\prod_{s=0}^ {k-1}W_{\lambda,(s,l)}(q,t)\,.\] **Proposition A.5**.: _Let \(\lambda\) be a nonempty partition and \((k,l)\) be its corner. We have_ \[R_{\lambda,(k,l)}(q,t)=q^{k}t^{l}\cdot R_{\lambda,(k,l)}(q^{-1},t^{-1})\,.\] Proof.: It follows directly from Proposition A.3 (3). Note that the function \(R_{\lambda,(k,l)}\) is the same as \(\tilde{r}_{\lambda,(k,l)}\) from Section 5.4, which in turn is equal to \(r_{\lambda,(k,l)}\) rescaled by the factor \(q^{k}t^{l}\). We have \[\tilde{r}_{\lambda,(k,l)} =R_{\lambda,(k,l)}(q,t)\,,\] \[r_{\lambda,(k,l)} =\frac{1}{q^{k}t^{l}}R_{\lambda,(k,l)}(q,t)=R_{\lambda,(k,l)}(q^ {-1},t^{-1})\,.\] The remaining part of the appendix is devoted to proving some technical results about the function \(R_{\lambda,(k,l)}\), which are used throughout the paper. **Corollary A.6**.: _Consider the situation as in Proposition A.5. Let \(\tau:\mathbb{T}\to\mathbb{T}\) be a group homomorphism given by taking inverse and \(\tau^{*}\) be the induced map on the \(\mathbb{T}\)-equivariant \(\mathrm{K}\)-theory. 
For an arbitrary integer \(m\) we have_ \[\tau^{*}\big{(}(q^{k}t^{l})^{m}\cdot\tilde{r}_{\lambda,(k,l)}\big{)}=(q^{k}t^{l})^{1-m}\cdot\tilde{r}_{\lambda,(k,l)}\,.\] **Proposition A.7**.: _Let \(\lambda\) be a nonempty partition and let \((k_{1},l_{1}),\ldots,(k_{N},l_{N})\) be its corners sorted from the uppermost one to the lowest one. Set \(l_{N+1}=-1\). Then_ \[\lim_{q\to 0}R_{\lambda,(k_{i},l_{i})}(q,t)=\sum_{j=l_{i+1}+1}^{l_{i}}t^{j}\,.\] Proof.: By definition \[\lim_{q\to 0}R_{\lambda,(k_{i},l_{i})}(q,t)=\lim_{q\to 0}\prod_{s=0}^{k_{i}-1}W_{\lambda,(s,l_{i})}(q,t)\cdot\lim_{q\to 0}\prod_{s=0}^{l_{i+1}}U_{\lambda,(k_{i},s)}(q,t)\cdot\lim_{q\to 0}\prod_{s=l_{i+1}+1}^{l_{i}-1}U_{\lambda,(k_{i},s)}(q,t)\] Proposition A.3 (2) implies that the first limit is equal to \(1\) and the second to \(t^{l_{i+1}+1}\). The third rational function does not depend on the variable \(q\). Thus \[\lim_{q\to 0}R_{\lambda,(k_{i},l_{i})}(q,t)=t^{l_{i+1}+1}\cdot\prod_{s=1}^{l_{i}-l_{i+1}-1}U_{0,s}(q,t)=t^{l_{i+1}+1}\cdot\frac{1-t^{l_{i}-l_{i+1}}}{1-t}=\sum_{j=l_{i+1}+1}^{l_{i}}t^{j}\,.\qed\] **Corollary A.8**.: _Consider the situation as in Proposition A.7. Then_ \[\lim_{q\to 0}\tilde{r}_{\lambda,(k_{i},l_{i})}=\sum_{j=l_{i+1}+1}^{l_{i}}t^{j}\,.\] **Proposition A.9**.: _Let \(\lambda\) be a nonempty partition and \(\tilde{\lambda}\) the partition \(\lambda\) without the first column. Suppose that \((k,l)\) is a corner of \(\lambda\) such that \(k\neq 0\). Then the limit_ \[\lim_{q\to\infty}\big{(}R_{\lambda,(k,l)}(q,t)-qR_{\tilde{\lambda},(k-1,l)}(q,t)\big{)}\] _exists._ Proof.: By definition \[R_{\lambda,(k,l)}(q,t)=R_{\tilde{\lambda},(k-1,l)}(q,t)\cdot W_{k,b_{\lambda,(0,l)}}(q,t)\,.\] Let \(b:=b_{\lambda,(0,l)}\). It follows that \[R_{\lambda,(k,l)}(q,t)-qR_{\tilde{\lambda},(k-1,l)}(q,t)=R_{\tilde{\lambda},(k-1,l)}(q,t)\cdot(W_{k,b}-q)\] \[=R_{\tilde{\lambda},(k-1,l)}(q,t)\cdot\frac{t^{b}\cdot(q-1)}{q^{k}-t^{b}}\] \[=R_{\tilde{\lambda},(k-1,l)}(q^{-1},t^{-1})\cdot\frac{q^{k-1}t^{b+l}\cdot(q-1)}{q^{k}-t^{b}}\,,\] where the last equality follows from Proposition A.5. Proposition A.7 implies that the limit \[\lim_{q\to\infty}R_{\tilde{\lambda},(k-1,l)}(q^{-1},t^{-1})\] exists. The limit of the second factor also exists (it is equal to \(t^{b+l}\)). **Corollary A.10**.: _Consider the situation as in Proposition A.9. Then the limit_ \[\lim_{q\to\infty}\left(\tilde{r}_{\lambda,(k,l)}-q\tilde{r}_{\tilde{\lambda},(k-1,l)}\right)\] _exists._ **Proposition A.11**.: _Let \(\lambda\) be a nonempty partition and \((0,l)\) be its corner. Then the limit_ \[\lim_{q\to\infty}R_{\lambda,(0,l)}(q,t)\] _exists._ Proof.: By Proposition A.5 we have \[R_{\lambda,(0,l)}(q,t)=t^{l}R_{\lambda,(0,l)}(q^{-1},t^{-1})\,.\] The limit of the right hand side exists thanks to Proposition A.7. **Corollary A.12**.: _Consider the situation as in Proposition A.11. Then the limit_ \[\lim_{q\to\infty}\tilde{r}_{\lambda,(0,l)}(q,t)\] _exists._ **Proposition A.13**.: _Let \(\lambda\) be a nonempty partition, \((k,l)\) be its uppermost corner and \(\hat{\lambda}\) be the partition \(\lambda\) without the first rectangular block. Suppose that the partition \(\hat{\lambda}\) is nonempty. Let \((k_{i},l_{i})\) be a corner of \(\lambda\) different than \((k,l)\).
Then_ \[R_{\lambda,(k_{i},l_{i})}=R_{\hat{\lambda},(k_{i}-k-1,l_{i})}\cdot\frac{q^{k_{i}+1}-t^{l-l_{i}}}{q^{k_{i}-k}-t^{l-l_{i}}}\,.\] Proof.: By definition \[R_{\lambda,(k_{i},l_{i})}(q,t)=R_{\hat{\lambda},(k_{i}-k-1,l_{i})}(q,t)\cdot\prod_{s=0}^{k}W_{\lambda,(s,l_{i})}(q,t)\] \[=R_{\hat{\lambda},(k_{i}-k-1,l_{i})}(q,t)\cdot\prod_{s=0}^{k}W_{s+k_{i}-k,\,l-l_{i}}(q,t)\] \[=R_{\hat{\lambda},(k_{i}-k-1,l_{i})}(q,t)\cdot\frac{q^{k_{i}+1}-t^{l-l_{i}}}{q^{k_{i}-k}-t^{l-l_{i}}}\,.\qed\] **Corollary A.14**.: _Consider the situation as in Proposition A.13. We have_ \[(q^{k_{i}}t^{l_{i}}-q^{k}t^{l})\cdot\tilde{r}_{\lambda,(k_{i},l_{i})}=(q^{k_{i}+k+1}t^{l_{i}}-q^{k}t^{l})\cdot\tilde{r}_{\hat{\lambda},(k_{i}-k-1,l_{i})}\,.\]
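Corollary A.14 follows from Proposition A.13 by multiplying both sides by \(q^{k_{i}}t^{l_{i}}-q^{k}t^{l}\) and using the factorisation \[q^{k_{i}}t^{l_{i}}-q^{k}t^{l}=q^{k}t^{l_{i}}\big{(}q^{k_{i}-k}-t^{l-l_{i}}\big{)}\,;\] the denominator then cancels and the right-hand side becomes \(q^{k}t^{l_{i}}\big{(}q^{k_{i}+1}-t^{l-l_{i}}\big{)}\cdot\tilde{r}_{\hat{\lambda},(k_{i}-k-1,l_{i})}=(q^{k_{i}+k+1}t^{l_{i}}-q^{k}t^{l})\cdot\tilde{r}_{\hat{\lambda},(k_{i}-k-1,l_{i})}\), as claimed.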
2303.03080
Defining and comparing SICR-events for classifying impaired loans under IFRS 9
The IFRS 9 accounting standard requires the prediction of credit deterioration in financial instruments, i.e., significant increases in credit risk (SICR). However, the definition of such a SICR-event is inherently ambiguous, given its current reliance on evaluating the change in the estimated probability of default (PD) against some arbitrary threshold. We examine the shortcomings of this PD-comparison approach and propose an alternative framework for generating SICR-definitions based on three parameters: delinquency, stickiness, and the outcome period. Having varied these framework parameters, we obtain 27 unique SICR-definitions and fit logistic regression models accordingly using rich South African mortgage and macroeconomic data. For each definition and corresponding model, the resulting SICR-rates are analysed at the portfolio-level on their stability over time and their responsiveness to economic downturns. At the account-level, we compare both the accuracy and dynamicity of the SICR-predictions, and discover several interesting trends and trade-offs. These results can help any bank with appropriately setting the three framework parameters in defining SICR-events for prediction purposes. We demonstrate this process by comparing the best-performing SICR-model to the PD-comparison approach, and show the latter's inferiority as an early-warning system. Our work can therefore guide the formulation, modelling, and testing of any SICR-definition, thereby promoting the timeous recognition of credit losses; the main imperative of IFRS 9.
Arno Botha, Esmerelda Oberholzer, Janette Larney, Riaan de Jongh
2023-03-06T12:41:21Z
http://arxiv.org/abs/2303.03080v3
# Defining and comparing SICR-events for classifying impaired loans under IFRS 9 ###### Abstract The IFRS 9 accounting standard requires the prediction of credit deterioration in financial instruments, i.e., significant increases in credit risk (SICR). However, the definition of such a SICR-event is inherently ambiguous, given its reliance on comparing two subsequent estimates of default risk against some arbitrary threshold. We examine the shortcomings of this approach and propose an alternative framework for generating SICR-definitions, based on three parameters: delinquency, stickiness, and the outcome period. Having varied these parameters, we obtain 27 unique SICR-definitions and fit logistic regression models accordingly using rich South African mortgage data; itself containing various macroeconomic and obligor-specific input variables. This new SICR-modelling approach is demonstrated by analysing the resulting portfolio-level SICR-rates (of each SICR-definition) on their stability over time and their responsiveness to economic downturns. At the account-level, we compare both the accuracy and flexibility of the SICR-predictions across all SICR-definitions, and discover several interesting trends during this process. These trends form a rudimentary expert system for selecting the three parameters optimally, as demonstrated in our recommendations for defining SICR-events. In summary, our work can guide the formulation, testing, and modelling of any SICR-definition, thereby promoting the timeous recognition of credit losses; the main imperative of IFRS 9. keywords: IFRS 9; Credit risk modelling; Classification systems; SICR definitions *C31, C44, G21. Word count (excluding front matter and appendix): 9077 Figure count: 10 ## 1 Introduction It is no easy task to define a so-called SICR-event, or a _significant increase in credit risk_ (SICR), which is essentially a binary event. One common approach relies on estimating a loan's default risk, also known as its _probability of default_ (PD) where 'default' is another type of binary event. Let this PD be denoted by \(p_{D}(x,t)\) given risk information \(x\) observed at time \(t\) for a specific loan account. A SICR-event can then be defined by comparing \(p_{D}(x,t_{r})\) with \(p_{D}(x,t_{0})\) between reporting time \(t_{r}\) and account origination time \(t_{0}\), which reflects SS5.5.9 in the global accounting standard IFRS 9 (2014). Should the magnitude \(p_{D}(x,t_{r})-p_{D}(x,t_{0})\) exceed some arbitrarily chosen threshold, then a SICR-event is said to have occurred. This approach immediately highlights at least two challenges in establishing whether credit quality has deteriorated significantly. Selecting an appropriate threshold for the magnitude is non-trivial and highly subjective, which is exacerbated by IFRS 9 being principled instead of overly prescriptive. Secondly, any reliance on the point estimate \(p_{D}(x,t)\) tacitly requires a certain degree of accuracy, lest the subsequent comparison become meaningless. However, attaining sufficient accuracy can itself become challenging given the stochastic nature of default risk, especially when considering an ever-changing macroeconomic environment. We explore an alternative way of identifying SICR-events using predictive modelling instead of a PD-comparison, without diverging from the principles of IFRS 9 (discussed later). In particular, a predictive model (or supervised classifier) can incorporate both forward-looking and past-due information in predicting a SICR-event. 
Training any supervised classifier, however, first requires defining the target event, which can itself be challenging. In this regard, we formulate a concise SICR-framework from which various SICR-definitions can be generated, before training classifiers. By varying the framework's three parameters, we obtain a small list of viable SICR-definitions. Each resulting definition is then used as the target definition in training a specific classifier from the same input data. Accordingly, SICR-definitions can be implicitly evaluated by comparing the performance of these classifiers against one another. By doing so, we demonstrate the inherent trade-offs amongst the various SICR-definitions themselves. These trade-offs and broad relationships can be encoded into an informal expert system, thereby helping banks select a suitable SICR-definition given their unique contexts. In closing, our approach relies fundamentally on finding a suitable SICR-definition, followed by building a bespoke SICR-model and classifying loans accordingly. The present study is closest in design to the work of Harris (2013a), Harris (2013b), Botha et al. (2021), and Botha et al. (2022). In particular, Harris (2013a) proposed an algorithm (using random forests with data from Barbados) that yields the 'best' default definition based on maximising prediction accuracy. When measured in days past due (DPD), these definitions included: 30, 60, and 90 days. This work was later extended in Harris (2013b) using Support Vector Machines (SVMs) and included 120 and 150 DPD as additional definitions. In both studies, the author demonstrated that the overall prediction accuracy is significantly affected by the chosen definition of default. In Botha et al. (2021) and Botha et al. (2022), a procedure was devised wherein a delinquency threshold is found at which loan recovery (including legal action) is loss-optimal, thereby informing the default definition. Our work differs contextually in that we explore various SICR-definitions (and its underlying parameters) instead of default definitions. The notion of SICR-events under IFRS 9 is critically reviewed in section 2, which includes an in-depth examination of the PD-comparison approach and its aforementioned challenges in defining SICR-events. Literature on alternative approaches is then surveyed, followed by examining the support in IFRS 9 for such alternatives. In section 3, we present a simple three-parameter SICR-framework for generating SICR-definitions by sensibly varying its parameters, as illustrated with a few examples. These SICR-definitions are then used in building various supervised classifiers using logistic regression; itself reviewed in the appendix. The subsequent modelling results are discussed and compared across SICR-definitions in section4, having used residential mortgage data from a large South African bank. We demonstrate various relationships amongst SICR-definitions across a variety of aspects; all of which forms a reusable analytical framework in guiding the selection of a SICR-definition. Finally, we conclude the study in section5 with recommendations and outline avenues of future research. The source code accompanying this study is published in Botha and Oberholzer (2023). ## 2 Towards identifying SICR-events: A critical re-evaluation under IFRS 9 The recent introduction of IFRS 9 prompted a paradigm shift in the modelling of credit risk. 
Generally, the value of a financial asset should be comprehensively adjusted over time in line with a bank's (evolving) expectation of credit risk. The principle is to forfeit a portion of income today into a loss provision that ideally offsets amounts that may be written-off tomorrow. Doing so helps to smooth overall earnings volatility, which is itself a central tenet of risk management, as explained in Van Gestel and Baesens (2009, pp. 38-44). IFRS 9 requires that this loss provision be regularly updated based on a statistical model, i.e., the asset's _Expected Credit Loss_ (ECL). Given a new ECL-value, a bank adjusts its loss provision either by raising more from earnings or releasing a portion thereof back into the income statement. This ECL-model represents the probability-weighted sum of cash shortfalls that a bank expects to lose over a certain horizon; see IFRS 9 (2014, SS5.5.17-18, SSB5.5.28-35, SSB5.5.44-48), as well as Xu (2016). Regarding the ECL's calculation, IFRS 9 adopts a staged approach in SS5.5.3 and SS5.5.5 that is based on the _extent_ of the perceived deterioration in the underlying risk. In principle, each of the three stages requires a progressively more severe ECL-estimate, as illustrated in Fig.1. Stage 1 typically includes most loan assets, provided they either have low credit risk or have not experienced an SICR-event since origination. Stage 2 includes those assets that have deteriorated quite significantly in their credit quality (regardless of measure or SICR-definition), but do not yet qualify as fully 'credit-impaired' (i.e., default); a middle ground of sorts. Lastly, Stage 3 includes those assets with objective evidence of credit impairment, i.e., their future cash flows are likely compromised, e.g., defaulted accounts. These stages can be differentiated from one another by the time horizon of the eventual ECL-estimate: 12 months for Stage 1 and lifetime for Stages 2-3. In particular, a first-stage loss is the portion of lifetime ECL that may occur over the next 12 months, whereas all possible loss-inducing events over the entirety of the asset's remaining life are considered for a second-stage (or third-stage) loss. Together, these stages ought to reflect a more general pattern of deterioration (or improvement) in credit quality over time, which allows for recognising credit losses more timeously; see SSB5.5.2 of IFRS 9, EY (2014), and PWC (2014). Migration between Stage 1 and 2 requires a SICR-component, as conceptualised in Fig.1. A loan's loss estimate will generally attract a greater provision charge (or coverage rate) when in Stage 2 than in Stage 1. IFRS 9 primarily defines a SICR-event (thus Stage 2) by comparing \(p_{D}(x,t_{r})\) against \(p_{D}(x,t_{0})\) at two different points in time \(t_{r}>t_{0}\), whereupon the difference \(p_{D}(x,t_{r})-p_{D}(x,t_{0})=m(x,t_{r})\) is evaluated against a chosen threshold \(u>0\). If \(m(x,t_{r})>u\), then the loan is migrated to Stage 2, otherwise it remains in Stage 1. The converse is presumably true as well: a Stage 2 loan is migrated back to Stage 1 once its risk has improved, i.e., if \(m(x,t^{\prime}_{r})\leq u\) at some future time \(t^{\prime}_{r}>t_{r}\). Doing so would be cost-efficient, particularly since overzealous Stage 2 classification can become prohibitively costly, even if risk-prudent. However, this so-called PD-comparison approach suffers from at least two challenges in identifying SICR-events. 
Firstly, the approach presumes that the estimation of PD is indeed accurate; a presumption challenged by Crook and Bellotti (2010) and Chawla et al. (2016). In particular, severe model risk is introduced when selecting an inappropriate modelling technique or when failing to capture the time-dynamic nature of lifetime PD. Moreover, the era of big data and associated high-dimensional input spaces are exceptionally challenging when selecting predictive variables; see Hastie et al. (2009, §2.5). Furthermore, issues concerning data quality (and data preparation) still persist in practice, which means the accuracy of estimation remains questionable. Notwithstanding quality, the paucity of data is another problem when calibrating any technique, perhaps even more so for low default portfolios, as discussed in Baesens et al. (2016, §8). These issues clearly demonstrate the challenges of producing a single PD-estimate, let alone two. Secondly, the choice of an appropriate threshold \(u\) against which \(m(x,t)\) should be evaluated is ambiguous and contentious. Neither IFRS 9 nor most regulators offer any firm guidance on the choice of \(u\). Conversely, the European Banking Authority (2018) defines \(u=200\%\), yet provides no explanation for this seemingly arbitrary value. In fact, the PRA (2019) observed multiple threshold-values that were in use across UK banks and even across different portfolios; all of which attests to further arbitrariness. Indeed, a single loan portfolio can theoretically even use multiple \(u\)-values in rendering overall SICR-classification within that portfolio as more risk-sensitive. The UK-regulator is admittedly unsurprised by these differences, presumably due to the underlying differences across banks in their risk appetites, strategies, and portfolio compositions. It is feasible that one bank's SICR-approach can react differently to the same macroeconomic reality, compared to a competing bank's SICR-approach. Given this complexity, one cannot fault the UK-regulator for expecting greater consistency in the design of SICR-approaches over the longer term, without necessarily ignoring the idiosyncrasies amongst banks or their portfolios. However, IFRS 9 was always intended to be _"principles-based and less complex"_ (see §IN2 in IFRS 9); the lack of firm guidance on the choice of \(u\) is therefore unsurprising. Instead of detailed prescriptions, the emphasis is on the purpose behind the rule, which in turn will likely encourage better substantive compliance; see Black et al. (2007).

Figure 1: Illustrating the one-period evolution of credit risk within the IFRS 9 staged impairment framework. Each subsequent stage implies a greater ECL-estimate to reflect deeper credit deterioration. Arrows indicate possible migrations, subject to meeting certain qualitative criteria. The exception is the probabilistic SICR-component (shaded in dark green), which can include various factors that may predict a SICR-event. From Botha (2021).

Accordingly, the lack of prescription regarding SICR-classification seems particularly appropriate, given that it promotes careful evaluation of relevant factors that may influence the individual bank's SICR-classification. Furthermore, the literature is relatively scant regarding the choice of \(u\) and is largely limited to corporate lending. In particular, Chawla et al.
(2016) introduced three metrics that translate a portfolio's PD term-structure into a measure of spread, which is then used in measuring credit deterioration since origination for SICR-classification. However, not only do these measures depend on observable market prices, but their application still requires appropriate thresholds, with little guidance offered by the authors. In contrast, Ewanchuk and Frei (2019) proposed that a threshold be found based on the trade-off between income volatility and early default recognition, formulated within a Merton-type framework. That said, the method's success still relies on subjective parameter choices and the availability of market prices. Lastly, Brunel (2016) suggested an approach for verifying Stage 2 classification by using an underlying PD-model and its accuracy ratio. This approach is centred on maximising the so-called Stage 2 "hit rate", or proportion of SICR-flagged accounts that eventually defaulted. However, the premise hereof is perhaps a bit myopic and even cynical: that all SICR-flagged loans are destined to default. In contrast, a loan's risk profile may very well improve after Stage 2 classification, whereupon it should rightfully cure back to Stage 1. The dynamicity of credit risk and its evolution over time is therefore completely ignored when simply maximising the Stage 2 "hit rate". Notwithstanding the previous challenges, SSB.5.5.12 in IFRS 9 provides a reprieve. It is not strictly necessary to compare explicit PD-estimates at two points, provided that the evolution of default risk over time is incorporated in some other way. In principle, a SICR-event should reasonably preempt a default event in most cases, which suggests using loan delinquency (and its evolution) directly in defining a SICR-event itself. Finding a statistical relationship between future SICR-events and present inputs becomes the basis of so-called 'SICR-modelling'. Such a binary classification task can assist greatly in predicting SICR-events quite accurately, perhaps using a rich set of macroeconomic and obligor-specific inputs. In fact, IFRS 9 already requires the use of _"all reasonable and supportable information"_ to identify a SICR-event (cf. SS5.5.4, SS5.5.9, SS5.5.11, SS5.5.17), which further supports statistical modelling. Moreover, the PD-comparison approach requires both an accurate PD-model and a suitable threshold \(u\), all of which is a relatively 'indirect' way of trying to identify a SICR-event. Instead, SICR-modelling is arguably a more direct approach of classifying future impaired loans into Stage 2, given _"all reasonable and supportable information"_ that is observed today and subsequently used as predictive inputs. A resulting SICR-model is likely to be more parsimonious than a PD-model since the inputs of the latter predominantly relate to default risk and not necessarily to the _increase_ in credit risk. Regarding macroeconomic factors, SSB5.5.4 in IFRS 9 already mandates their use in identifying SICR-events, which is also implicitly required in SSB5.5.14. In addition, many authors have found that macroeconomic information can significantly improve PD-prediction; see Simons and Rolwes (2009), Bellotti and Crook (2009), Bonfim (2009), and Crook and Bellotti (2010). The work of Leow and Crook (2016) explored default survival models that were trained before and after the 2008 Global Financial Crisis (GFC), which yielded markedly different parametrisations. 
In turn, the authors explicitly show the dynamic effect (and value) of using macroeconomic information explicitly within PD-prediction. A study by Gaffney and McCann (2019) further showed that SICR-classification is highly pro-cyclical and sensitive to economic downturns, at least within the Irish market. The authors followed the PD-comparison approach with \(u=200\%\) for SICR-classification, having used Irish residential mortgage data from 2008 to 2015. These previous studies, together with the IFRS 9 prescription, should bode well for building bespoke SICR-models wherein macroeconomic covariates are explicitly used. A concise three-parameter SICR-framework for generating SICR-definitions A useful starting point for defining a SICR-event is that of a _delinquency measure_, which should quantify the gradual erosion of trust between bank and borrower in honouring the credit agreement. The \(g_{0}\)-measure (or the unweighted number of payments in arrears) is selected from Botha et al. (2021) for its intuitive appeal and industry-wide ubiquity. Now consider an account's \(g_{0}\)-measured delinquency over its lifetime \(T\), as measured in monthly cohorts \(t=1,\ldots,T\). In defining a SICR-event, one can compare \(g_{0}(t)\) at time \(t\) against a specifiable threshold \(d\geq 0\), i.e., \(g_{0}(t)\geq d\). In fact, delinquency can be tested over multiple consecutive months, thereby ensuring that a 'true' SICR-event is identified at \(t\). More formally, a SICR-event is said to have occurred at time \(t\) if \(g_{0}(v)\geq d\) holds true across a fixed time span \(v\in[t-(s-1),t]\). The specifiable parameter \(s\geq 1\) is the number of consecutive months for which delinquency is tested; put differently, \(s\) is the so-called'stickiness' of the aforementioned delinquency test. These ideas are formalised within the Boolean-valued decision function \(\mathcal{G}(d,s,t)\) that yields a binary SICR-status at an end-point \(t\), defined as \[\mathcal{G}(d,s,t)=\left[\left(\sum_{v=t-(s-1)}^{t}[g_{0}(v)\geq d]\right)=s \right]\text{ for }t\geq s\,, \tag{1}\] where \([a]\) are Iverson brackets that outputs \(1\) if the enclosed statement \(a\) is true and \(0\) otherwise. We illustrate Eq. 1 for \(s=1\) and \(s=2\) in Table 1 using a hypothetical loan with monthly delinquency observations. For \(s=1\), the SICR-status relies on testing \(g_{0}(t)\geq d\) at a single period \(t\), which is akin to having no \(s\)-parameter. For \(s=2\), \(g_{0}(t)\geq d\) is tested twice at two consecutive periods \(t-1\) and \(t\). If both delinquency tests are true, then the resulting sum of the two Iverson statements will equal \(s\), thereby signalling a SICR-event at time \(t\). The \(s\)-parameter smooths away rapid 0/1-fluctuations in the SICR-status over time, i.e., the SICR-status becomes'stickier' as \(s\) increases. Eq. 1 relies on two specifiable parameters \(d\) and \(s\) in classifying a loan's accrued delinquency over time \(t\). The loan's resulting binary-valued SICR-statuses, i.e., its \(\mathcal{G}(d,s,t)\)-values, can now be used within a typical cross-sectional modelling setup for predicting future SICR-events. In preparing the modelling dataset, we observe all predictive information of loan \(i\) at a particular time \(t\). Then, the loan's future SICR-status at time \(t+k\) is duly merged, thereby taking a'snapshot' at two points in time, or a cross-section. However, the chosen value for this third parameter \(k\geq 0\) (or outcome period) can significantly affect modelling results. 
In particular, both Kennedy \begin{table} \begin{tabular}{l l l l l l l} \hline \hline Time & Delinquency & SICR-status & SICR-Outcome & SICR-status & SICR-Outcome & Default \\ \(t\) & \(g_{0}(t)\) & \(\mathcal{G}(1,1,t)\) & \(\mathcal{Z}_{t}(1,1,3)\) & \(\mathcal{G}(1,2,t)\) & \(\mathcal{Z}_{t}(1,2,3)\) & \(g_{0}(t)\geq 3\) \\ \hline 3 & 0 & 0 & 0 & & 0 & 0 \\ 4 & 0 & 0 & 1 & 0 & 0 & 0 \\ 5 & 1 & 1 & 1 & 0 & 1 & 0 \\ 6 & 0 & 0 & 1 & 0 & 1 & 0 \\ 7 & 1 & 1 & & 0 & & 0 \\ 8 & 2 & 1 & & 1 & & 0 \\ 9 & 3 & 1 & & 1 & & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: Illustrating two formulations of the SICR-decision function \(\mathcal{G}\) from Eq. 1 for (\(d=1,s=1\)) and (\(d=1,s=2\)). Accordingly, SICR-statuses are created using \(\mathcal{G}\) for a hypothetical loan and its \(g_{0}\)-measured delinquency over time \(t\). The \(\mathcal{Z}_{t}(d,s,k)\)-process from Eq. 2 then lags each SICR-status back \(k\) periods in creating SICR-outcomes, e.g., \(\mathcal{Z}_{t}(1,1,3)\) at \(t=4\) will equate to \(\mathcal{G}(1,1,4+3)=1\) three months later in the β€˜future’. et al. (2013) and Mushava and Murray (2018) examined the outcome period's effect in predicting default risk, using Irish and South African credit data respectively. Too short a horizon yielded overly volatile results, largely due to risk immaturity and/or seasonal effects. Too long a window led to increasingly inaccurate models, in addition to greater asynchronism with market conditions or even the portfolio's risk composition. Since a SICR-event should ideally preempt a default event in reality, our (cross-sectional) study also contends with various parameter choices for \(k\). More formally, a process \(\mathcal{Z}_{t}(d,s,k)\) prepares a given loan's monthly performance history by evaluating Eq. 1 at 'future' time \(t+k\), though assigns the result to time \(t\); see Table 1 as an example of using \(k=3\). Accordingly, the binary-valued SICR-outcome variable \(Y_{t}\) at time \(t=1,\ldots,T-k\) is created as \[\mathcal{Z}_{t}(d,s,k)\ :\quad Y_{t}=\mathcal{G}(d,s,t+k). \tag{2}\] Various SICR-definitions are generated using the \(\mathcal{Z}_{t}(d,s,k)\)-process from Eq. 2 (or 'SICR-framework'), simply by systematically varying its parameters \((d,s,k)\). For this study, the parameter space includes: 1) the threshold \(d\in\{1,2\}\) of \(g_{0}\)-measured delinquency beyond which SICR is triggered; 2) the level of stickiness \(s\in\{1,2,3\}\) within the delinquency test; and 3) the choice of outcome period \(k\in\{3,6,9,12\}\) when modelling SICR-events. While the parameter spaces of \(d\) and \(s\) are appreciatively small, the same luxury does not hold for the outcome period \(k\), which can indeed assume many values. Its enumeration is ultimately guided by experimentation and expert judgement in balancing rigour against practicality. That said, more extreme periods of \(k>12\) are investigated later in subsection 4.2, though having restricted \(d\) and \(s\). Regardless, the combined parameter space yields 24 different combinations of the triple \((d,s,k)\), as enumerated in Table 2. Each combination serves as a particular target definition in building a corresponding SICR-model. In this regard, the chosen modelling technique is binary logistic regression, given its ubiquity in credit risk modelling, as reviewed in the appendix. The resulting logit-models yield probability scores \(h\left(\mathbf{x}_{it}\right)\in[0,1]\) for realised inputs \(\mathbf{x}_{it}\) of a particular account \(i\) at time \(t\). 
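To make the framework concrete, the decision function \(\mathcal{G}(d,s,t)\) of Eq. 1 and the \(\mathcal{Z}_{t}(d,s,k)\)-process of Eq. 2 can be sketched in a few lines of Python. The snippet below is purely illustrative (it is not the implementation used in this study) and reproduces the hypothetical loan of Table 1; the function names are ours.

```python
import numpy as np

def sicr_status(g0, d, s, t):
    """Decision function G(d, s, t) from Eq. 1: returns 1 if g0(v) >= d holds at
    every month v in [t-(s-1), t], and 0 otherwise (t is a 1-indexed month, t >= s)."""
    window = g0[t - s:t]                      # months t-(s-1), ..., t
    return int(np.sum(window >= d) == s)

def sicr_outcome(g0, d, s, k, t):
    """Z_t(d, s, k) from Eq. 2: the SICR-status evaluated k months ahead and
    assigned back to time t, i.e. Y_t = G(d, s, t + k)."""
    return sicr_status(g0, d, s, t + k)

# Hypothetical loan from Table 1: g0-measured delinquency at months 3, ..., 9,
# padded with zeros for the (unobserved) months 1-2 purely to keep indexing simple.
g0 = np.array([0, 0] + [0, 0, 1, 0, 1, 2, 3])

for t in (4, 5, 6):
    print(t,
          sicr_status(g0, d=1, s=1, t=t),          # G(1,1,t)
          sicr_outcome(g0, d=1, s=1, k=3, t=t))    # Z_t(1,1,3) = G(1,1,t+3)
# Expected output, matching Table 1: "4 0 1", "5 1 1", "6 0 1"
```

Running the loop reproduces the SICR-statuses and SICR-outcomes of Table 1 for \(t\in\{4,5,6\}\) under \((d=1,s=1,k=3)\).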
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline **\#** & **Definition** & **Delinquency** & **Stickiness** & **Outcome** & **\#** & **Definition** & **Delinquency** & **Stickiness** & **Outcome** \\ & & **threshold** & & **period** & & & **threshold** & & **period** \\ \hline 1 & 1a(i) & \(d\geq 1\) & \(s=1\) & \(k=3\) & 13 & 2a(i) & \(d\geq 2\) & \(s=1\) & \(k=3\) \\ 2 & 1a(ii) & \(d\geq 1\) & \(s=1\) & \(k=6\) & 14 & 2a(ii) & \(d\geq 2\) & \(s=1\) & \(k=6\) \\ 3 & 1a(iii) & \(d\geq 1\) & \(s=1\) & \(k=9\) & 15 & 2a(iii) & \(d\geq 2\) & \(s=1\) & \(k=9\) \\ 4 & 1a(iv) & \(d\geq 1\) & \(s=1\) & \(k=12\) & 16 & 2a(iv) & \(d\geq 2\) & \(s=1\) & \(k=12\) \\ 5 & 1b(i) & \(d\geq 1\) & \(s=2\) & \(k=3\) & 17 & 2b(i) & \(d\geq 2\) & \(s=2\) & \(k=3\) \\ 6 & 1b(ii) & \(d\geq 1\) & \(s=2\) & \(k=6\) & 18 & 2b(ii) & \(d\geq 2\) & \(s=2\) & \(k=6\) \\ 7 & 1b(iii) & \(d\geq 1\) & \(s=2\) & \(k=9\) & 19 & 2b(iii) & \(d\geq 2\) & \(s=2\) & \(k=9\) \\ 8 & 1b(iv) & \(d\geq 1\) & \(s=2\) & \(k=12\) & 20 & 2b(iv) & \(d\geq 2\) & \(s=2\) & \(k=12\) \\ 9 & 1c(i) & \(d\geq 1\) & \(s=3\) & \(k=3\) & 21 & 2c(i) & \(d\geq 2\) & \(s=3\) & \(k=3\) \\ 10 & 1c(ii) & \(d\geq 1\) & \(s=3\) & \(k=6\) & 22 & 2c(ii) & \(d\geq 2\) & \(s=3\) & \(k=6\) \\ 11 & 1c(iii) & \(d\geq 1\) & \(s=3\) & \(k=9\) & 23 & 2c(iii) & \(d\geq 2\) & \(s=3\) & \(k=9\) \\ 12 & 1c(iv) & \(d\geq 1\) & \(s=3\) & \(k=12\) & 24 & 2c(iv) & \(d\geq 2\) & \(s=3\) & \(k=12\) \\ \hline \hline \end{tabular} \end{table} Table 2: Numbered SICR-definitions, generated by varying the parameters within the \(\mathcal{Z}_{t}(d,s,k)\)-process. Definitions are grouped into six classes and shaded accordingly. ## 4 Comparing SICR-definitions using South African mortgage data The SICR-modelling results are structured as follows. First, the resampling scheme (and underlying data) is explained and verified in subsection 4.1, followed by broadly describing the selection process of input variables and the dichotomisation of probability scores into IFRS 9 staging decisions. In subsection 4.2, we examine the effect of the outcome period \(k\) within the 1a-definition class in Table 2 (light blue), having included additional outcome periods beyond the 12-month boundary. Thereafter, the stickiness parameter \(s\) is investigated in subsection 4.3 for \(d=1\) across all \(k\) and \(s\), i.e., classes 1a-c in Table 2 (lighter shades). Lastly, we demonstrate in subsection 4.4 the futility of using \(d=2\) across all \(k\) and \(s\), having analysed the remaining classes 2a-c in Table 2 (darker shades). ### Data calibration: resampling scheme, feature selection, and dichotomisation SICR-models are trained and validated using a data-rich portfolio of mortgages that was provided by a large South African bank. After applying the \(\mathcal{Z}_{t}(d,s,k)\)-process from Eq. 2, the resulting credit dataset is structured as \(\mathcal{D}=\{i,t,Y_{it},\mathbf{X}\}\) for each SICR-definition in Table 2 using a portfolio of \(N\) loans, indexed by \(i=1,\ldots,N\). In this respect, \(Y_{it}\in\{0,1\}\) indicates at time \(t\) whether account \(i\) experienced a SICR-outcome \(k\) periods later, given a particular SICR-definition. 
In predicting \(Y_{it}\), consider \(\mathbf{X}=\{\mathbf{X}_{i},\mathbf{X}_{t},\mathbf{X}_{it}\}\) as a random vector of input variables that are thematically grouped as follows: 1) account-level information \(\mathbf{X}_{i}\) for loan \(i\), e.g., repayment type (debit order, cash); 2) macroeconomic information \(\mathbf{X}_{t}\) at time \(t\), e.g., the prevailing inflation rate; 3) time-dependent behavioural information \(\mathbf{X}_{it}\), e.g., time in performing spell. In circumventing computing constraints and for confidentiality purposes, this mortgage portfolio is sub-sampled using two-way stratified sampling across a wide sampling window of January-2007 up to November-2019. For every SICR-definition, the resulting \(\mathcal{D}\) is grouped by the binary-valued SICR-outcomes \(Y_{it}\) within each monthly cohort \(t\), thereby resulting in about 310 strata. Observations are then sampled randomly within each stratum in creating the sub-sampled dataset \(\mathcal{D}_{S}\). The sampling proportion is dynamically set for each SICR-definition such that \(\mathcal{D}_{S}\) will be of fixed size, i.e., about 250,000 monthly observations in total. Finally, a simple cross-validation resampling scheme is used (with a 70%-30% ratio) to partition the data \(\mathcal{D}_{S}\) into two non-overlapping sets: a training set \(\mathcal{D}_{\mathcal{T}}\) and a validation set \(\mathcal{D}_{V}\); see Hastie et al. (2009, pp. 249-254). In verifying sampling representativeness, one can compare each sample's event rate over time, i.e., the so-called SICR-rate per context. More formally, the SICR-rate (or incidence rate) at \(t\) is the conditional probability \(\mathbb{P}\left(Y_{t+k}=1\,|\,Y_{t}=0\right)\) across all Stage 1 accounts \(i\) at \(t\) that became SICR-flagged at \(t+k\). This SICR-rate is estimated by \(\sum_{i}\left[Y_{it}=1\right]/n_{t}\), where \([\cdot]\) are Iverson brackets and \(n_{t}\) denotes the number of at-risk Stage 1 accounts at \(t\). Evidently, the line graphs in Fig. 2 for the 1a(i)-definition from Table 2 are reasonably close to one another across all samples. Further, the Mean Absolute Error (MAE) of the SICR-rates between \(\mathcal{D}\) and each respective sample is calculated for the same 1a(i)-definition as \(\mathcal{D}_{T}\): 0.28% and \(\mathcal{D}_{V}\): 0.43%; both of which are deemed as reasonably low. Similar results hold for all other SICR-definitions, which suggests that the resampling scheme is indeed representative of the population at large. This bodes well for deriving SICR-models later that can generalise beyond training data. Feature selection is mainly conducted using repeated logistic regressions on a bigger sub-sampled dataset \(\mathcal{D}_{S}\) of 1 million observations (instead of the previous 250,000), before resampling into \(\mathcal{D}_{\mathcal{T}}\) and \(\mathcal{D}_{V}\) as before. The selection process is itself interactive and guided by expert judgement, model parsimony, statistical significance, macroeconomic theory, and predictive performance on \(\mathcal{D}_{V}\). In particular, predictive performance is extensively evaluated using classical ROC-analysis and the resulting AUC-measure; see Fawcett (2006). Highlights of this interactive selection process are given in the appendix, along with the input space of each SICR-definition. Earlier modelling attempts experimented with both a best subset approach (stepwise regression) and the LASSO shrinkage method in selecting inputs, as discussed in James et al. (2013, SS6). 
However, the necessary computation times proved to be excessive (especially for the stepwise method) and even unstable, whilst yielding negligible predictive performance and overly small models. Moreover, the declaration of Henderson and Velleman (1981) - "the data analyst knows more than the computer" - seems apt, cautioning against potential data dredging when trying to automate feature selection, devoid of human expertise. In this work, we are examining the effect of a SICR-definition within a broader multi-definition setup. Therefore, and as a last step, selected features are 'standardised' within each definition class in Table 2 such that all SICR-models have the same input space per \((d,s)\)-tuple across all \(k\)-values. By standardising the input space, one can therefore ascribe observable patterns in model performance only to variations in the SICR-definition itself, without contending too much with changes in the input space. Furthermore, large sample sizes are known to affect \(p\)-values when testing the statistical significance of regression coefficients, as demonstrated in Lin et al. (2013). The \(p\)-values can easily approach zero as the sample size increases, notwithstanding the greater statistical power availed by larger sizes.
Figure 2: Comparing observed conditional SICR-rates (given Stage 1) over monthly periods across different samples, using the 1a(i)-definition. The Mean Absolute Error (MAE) between each sample and the full set \(\mathcal{D}\) is overlaid in summarising the line graph discrepancies over time.
This phenomenon overlaps with the so-called _Hughes principle_ from Hughes (1968): a model's predictive power will generally increase for every additional input, but decrease again after reaching some point, provided that the sample size stays constant. Accordingly, the input space in our study is retested for statistical significance after deliberately decreasing the sample size of \(\mathcal{D}_{S}\) from 1 million to 250,000 observations. The vast majority of the inputs remain statistically significant across all \(k\)-values within each definition class, which further reassures us that our standardisation process is robust. The logit-models will need to be dichotomised, i.e., duly transformed from probabilistic into discrete classifiers, to render 0/1 staging decisions under IFRS 9; see the appendix for details. In particular, an appropriate cut-off \(c_{dsk}\)-value is required for the probability scores \(h\left(\mathbf{x}_{it}\right)\) resulting from each SICR-model, given an underlying \(\left(d,s,k\right)\)-based SICR-definition. However, SICR-events are relatively rare outcomes and the consequences of misclassifying positives vs. negatives are reasonably unequal. Under IFRS 9, false negatives \(F^{-}\) should be costlier than false positives \(F^{+}\) in that the former implies the bank has failed to increase its loss provision for those accounts with _increasing_ credit risk, i.e., actual SICR-events. Accordingly, misclassification costs are assigned as \(c_{F^{-}}=6\) for false negatives and \(c_{F^{+}}=1\) for false positives. These costs are deduced using expert judgement and experimentation, though can certainly be refined or better estimated in future work. At present, these costs imply a ratio of \(a=6/1\), which is deemed intuitive and risk-prudent. Given \(a\), each \(c_{dsk}\)-value is then found using the Generalised Youden Index \(J_{a}\) (see appendix), as implemented within the R-function optimal.cutpoints() from Lopez-Raton and Rodriguez-Alvarez (2021). 
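For illustration, the cut-off search itself can be sketched directly from Eqs. 5-6 in the appendix. The Python snippet below is a minimal, assumption-laden stand-in for the optimal.cutpoints() routine referenced above: it simply grid-searches candidate cut-offs for the value maximising the Generalised Youden Index \(J_{a}\), using simulated scores and outcomes rather than our data.

```python
import numpy as np

def generalised_youden_cutoff(y_true, scores, a=6.0, grid=None):
    """Grid-search the cut-off c* maximising J_a(c) = q(c) + (1-phi)/(a*phi)*p(c) - 1,
    where q = sensitivity, p = specificity, phi = prevalence of the positive class,
    and a = cost of a false negative relative to a false positive (Eqs. 5-6)."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    phi = y_true.mean()                                  # estimated prevalence P(C1)
    if grid is None:
        grid = np.unique(scores)                         # candidate cut-offs
    best_c, best_j = None, -np.inf
    for c in grid:
        pred = scores > c
        q = (pred & (y_true == 1)).sum() / max((y_true == 1).sum(), 1)   # sensitivity
        p = (~pred & (y_true == 0)).sum() / max((y_true == 0).sum(), 1)  # specificity
        j = q + (1 - phi) / (a * phi) * p - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

# Toy example with simulated outcomes and scores (illustration only)
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.06, size=5000)                     # roughly 6% SICR-prevalence
scores = np.clip(0.05 + 0.25 * y + rng.normal(0, 0.08, size=5000), 0, 1)
c_star, j_star = generalised_youden_cutoff(y, scores, a=6.0)
print(round(c_star, 3), round(j_star, 3))
```

In practice, the grid would simply be the unique probability scores of a fitted SICR-model on \(\mathcal{D}_{V}\), with \(a=6\) as motivated above.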
### The effect of the outcome period \(k\) when defining and predicting SICR-events In general, SICR-classification should react dynamically to changes in credit risk and its evolution over time. Shorter outcome periods \(k\) are therefore more sensible than longer periods in achieving this dynamicity. However, the 'optimal' choice of outcome period is yet unclear, as is the very idea of 'optimality' within this SICR-modelling context. To help fill this gap, we deliberately vary \(k\) within this particular subsection from 3 months up to an extreme of 36 months when training SICR-models. The other parameters are kept constant at \(d=1\) and \(s=1\), i.e., definition class 1a within Table 2. These two values are relatively benign for the following two reasons. Firstly, the underlying SICR-test \(g_{0}(t)\geq d\) from Eq. 1 suggests that \(d=2\) will yield a subset of SICR-cases that are already selected by \(d=1\); the latter choice therefore leads to a 'broader' SICR-definition. Secondly, \(s=1\) implies zero'stickiness' and simplifies the resulting SICR-definition. Both choices of \(d\) and \(s\) should therefore have minimal interference when studying the effect of \(k\), as intended in this subsection. The observed SICR-rates are shown in Fig. 3 over time and across all chosen \(k\)-values. Evidently, each time graph has a different but increasing mean-level as \(k\) increases, especially when examined after the anomalous 2008 Global Financial Crisis (GFC). However, a longer outcome period invariably allows greater opportunity for a Stage 1 account to develop sufficient delinquency; enough to enter - and remain in - default. Since \(g_{0}(t+k)\geq 3>d\) from Eq. 1 will hold for both default and SICR-events respectively, larger \(k\)-values will therefore increasingly capture a greater proportion of defaulting accounts. This phenomenon explains the increasing mean SICR-rates that are correlated with increasing \(k\)-values; a relationship that becomes almost linear when removing the 2008-GFC period during mean-calculation. In fact, Fig. 3 shows that overall mean-levels seemingly reach a plateau near 4% for \(k\geq 12\), having rapidly increased for smaller \(k\)-values. At the very least, this may suggest that choosing \(k\geq 18\) has a negligible contribution to the overall SICR-mean, which may already have stabilised at \(k=12\). At worst, choosing \(k\geq 18\) will increasingly select default-instances into the sample, thereby 'contaminating' the SICR-mean. Doing so can detract from the very idea of SICR-staging, which should ideally act as a pro-cyclical "early warning system" for impending credit risk; see SSB.5.5.21 in IFRS 9 (2014) and Gaffney and McCann (2019). Apart from differing SICR-means, Fig. 3 shows that each \(k\)-value results in a time graph with a different volatility pattern. Evidently, the SICR-rates of extreme \(k\)-values are more stable over time relative to other \(k\)-values. In particular, the series resulting from \(k\leq 3\) and \(k>24\) exhibit lower standard deviation than their peers. However, stable SICR-rates may not necessarily be a useful pursuit, especially not during an unfolding macroeconomic crisis and its subsequent effect on default rates. In this respect, the SICR-rates associated with extreme \(k\)-values are Figure 3: Comparing actual SICR-rates over time and across outcome periods \(k\in\{3,6,9,12,18,24,36\}\) within \(\mathcal{D}_{S}\) for SICR-definition class 1a from Table 2. 
The mean and standard deviation of each resulting time series are summarised within the inset graphs. Encircled points denote the maximum of each series over time. relatively stable, though also failed to track increasing default rates (not shown) during the volatile 2008-GFC period. Accordingly, when defining a SICR-event, the resulting SICR-rates should reasonably exceed default rates (or some lagged variant thereof) since SICR-staging should ideally preempt default. This principle avails a useful heuristic in disqualifying both \(k\leq 3\) and \(k>24\), given 12-month default rates of about 6% that prevailed at the height of the 2008-GFC. For each of the remaining \(k\)-values, the earliest SICR-rate \(a(k)\) (at January-2007) can be compared to the maximum SICR-rate \(b(k)\), which typically occurs during the 2008-GFC. Effectively, this comparison constitutes the degree to which a SICR-definition can respond to unfolding calamities. The so-called early-warning degree \(b(k)-a(k)\) is graphed in Fig. 4 in pink; larger values are deemed as better. Clearly, \(k\in[6,12]\) yield SICR-rates that reassuringly increased by about 2%-3% points from January-2007 leading into the crisis, thereby demonstrating considerable responsiveness to externalities like the 2008-GFC. Furthermore, the maxima \(b(k)\) can be compared to the post-crisis SICR-means \(c(k)\) in measuring the magnitude by which the resulting SICR-rate can normalise. The so-called recovery degree \(b(k)-c(k)\) is shown in Fig. 4 in green; larger values are again deemed as better. The SICR-rates resulting from \(k\in[6,12]\) recovered substantially by about 5%-7% points back to their post-GFC SICR-means, which suggests appropriate resiliency within the underlying definitions. While \(k\in[18,24]\) had similar recovery degrees, the resulting SICR-rates are already high even before the 2008-GFC. Accordingly, longer outcome periods cannot timeously signal impending distress and would likely result in SICR-models producing overly punitive and 'paranoid' predictions. In the extreme case, very long outcome periods, e.g., \(k\geq 36\), can miss the entirety of a crisis period, as evidenced by low-values for both the early-warning and post-GFC recovery degrees. In contrast, SICR-definitions with shorter outcome periods \(k\leq 12\) can react more flexibly during market failures Figure 4: Various summary statistics of the actual SICR-rates from Fig. 3 across chosen outcome periods \(k\) for SICR-definition class 1a from Table 2. Summaries include the earliest, maximum, and mean after Dec-2009 (β€˜post-GFC’), as well as differences amongst these summaries, i.e., the early-warning degree and the post-GFC recovery degree. Desirable \(k\)-values are encircled and discussed. without becoming punitive ahead of time; a desirable quality. In assessing the resulting SICR-model built from each SICR-definition in class 1a, various performance measures are calculated, shown in Table 3. Considering the expected probabilities from each logit-model, the resulting AUC-values suggest that smaller outcome periods yield more accurate SICR-models than longer periods; see Fig. 5a. This trend is also reflected in the widening confidence intervals of these AUC-values as \(k\) increases. Reassuringly, this finding corroborates the work of Kennedy et al. (2013) and Mushava and Murray (2018) wherein the outcome period was similarly varied in PD-modelling - an older 'cousin' of SICR-modelling - which resulted in a similar AUC-trend across \(k\)-values. 
Moreover, the nonlinear reduction in successive AUC-values seems to subside after \(k\geq 24\), which suggests yet again that examining smaller \(k\leq 18\) values is more worthwhile when defining SICR-outcomes. When dichotomising the logit-models into discrete classifiers using fixed \(c_{dsk}\)-values, the resulting AUC-values follow a similar (though even more pronounced) downwards trend as \(k\) increases; see Fig. 5b. Longer outcome periods also result in fewer observed SICR-events, as evidenced by a decreasing prevalence rate \(\phi_{dsk}\). A rarer SICR-event partly explains why the cut-off \(c_{dsk}\) also decreases in tandem with greater \(k\)-values, given the presence of \(\phi_{dsk}\) in Eq. 5 when calculating the Generalised Youden Index \(J_{a}\); see appendix. Fewer SICR-events can generally exacerbate the task of finding a statistical relationship using logistic regression, hence the lower AUC-values of logit-models as \(k\) increases. Pursuing greater AUC-values may at first seem worthwhile when selecting a SICR-definition; however, there are other considerations. In particular, one can measure the degree to which a SICR-model's predictions vary over the lifetime of an average account, i.e., the so-called prediction flexibility \(\omega_{dsk}\) in Table 3. Evidently, SICR-models built from shorter outcome periods yield probability scores \(h\left(\mathbf{x}_{it}\right)\) that vary more over time \(t\), relative to longer outcome periods (or larger \(k\)-values). Like AUC-values, \(\omega_{dsk}\) appears to be a monotonically decreasing function of \(k\), which invariably couples greater accuracy with greater prediction flexibility (or variance). From Fig. 3, it is clear that the underlying SICR-process is itself dynamic and stochastic upon its aggregation, i.e., the portfolio-level SICR-rate. It is therefore unsurprising that more accurate SICR-models (lower \(k\)-values) also exhibit greater variance in their predictions as a necessary consequence. Barring the extremes (\(k\leq 3\)), this result is corroborated by Fig. 4 wherein shorter outcome periods also yield more responsive SICR-rates leading up to the 2008-GFC, despite \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Definition** & **Outcome** & **Prevalence** & **AUC-** & & **Flexibility** & **Instability** & **Cut-off** & **AUC-** \\ & **period**\(k\) & \(\phi_{dsk}\) & **Probabilistic** & \(\omega_{dsk}\) & \(\sigma_{dsk}\) & \(c_{dsk}\) & **Discrete** \\ \hline 1a(i) & \(k=3\) & 6.16\% & 91.3\% \(\pm\) 0.47\% & 4.2\% & 1.00\% & 12.1\% & 82.4\% \(\pm\) 0.65\% \\ 1a(ii) & \(k=6\) & 6.13\% & 88.5\% \(\pm\) 0.53\% & 3.6\% & 1.43\% & 10.9\% & 78.1\% \(\pm\) 0.69\% \\ 1a(iii) & \(k=9\) & 6.07\% & 86.5\% \(\pm\) 0.55\% & 3.4\% & 1.64\% & 11.5\% & 75.0\% \(\pm\) 0.70\% \\ 1a(iv) & \(k=12\) & 5.99\% & 84.8\% \(\pm\) 0.60\% & 3.3\% & 1.81\% & 11.6\% & 73.4\% \(\pm\) 0.72\% \\ 1a(v) & \(k=18\) & 5.73\% & 82.2\% \(\pm\) 0.66\% & 2.7\% & 1.59\% & 11.5\% & 70.3\% \(\pm\) 0.76\% \\ 1a(vi) & \(k=24\) & 5.46\% & 80.6\% \(\pm\) 0.71\% & 2.4\% & 1.27\% & 11.5\% & 68.6\% \(\pm\) 0.75\% \\ 1a(vii) & \(k=36\) & 5.19\% & 79.0\% \(\pm\) 0.75\% & 1.9\% & 0.65\% & 9.4\% & 68.2\% \(\pm\) 0.76\% \\ \hline \hline \end{tabular} \end{table} Table 3: Various performance measures of SICR-models across different \(k\)-values for definition class 1a (\(d=1,s=1\)) from Table 2. The SICR-prevalence \(\phi_{dsk}\) estimates \(\mathbb{P}\left(\mathcal{C}_{1}\right)\) per definition. 
Prediction flexibility \(\omega_{dsk}\) is measured by calculating the sample mean across all \(\sigma_{i}\) within \(\mathcal{D}_{\mathcal{V}}\), where \(\sigma_{i}\) is the standard deviation of the predicted SICR-probabilities \(h\left(\mathbf{x}_{it}\right)\) for account \(i\) over time \(t\). SICR-rate instability \(\sigma_{dsk}\) is measured by taking the standard deviation of actual SICR-rates in Fig. 3. In dichotomising \(h\left(\mathbf{x}_{it}\right)\), cut-offs \(c_{dsk}\) are found using the Generalised Youden Index \(J_{a}\) with a misclassification cost ratio of \(a=6/1\). AUC-values are given with bootstrapped 95% confidence intervals using \(\mathcal{D}_{\mathcal{V}}\) for both probabilistic and discrete SICR-classifiers. the 'unstable' account-level SICR-predictions. Furthermore, this relationship between accuracy and variance is reminiscent of a well-known phenomenon within all statistical learning, called the bias-variance trade-off; see Hastie et al. (2009, SS2.9, SS7) and James et al. (2013, SS2). On the other hand, the supposed 'flexibility' in account-level SICR-predictions, as measured by \(\omega_{dsk}\) when varying \(k\), may not necessarily produce the same dynamicity at the portfolio-level. In particular, defining SICR-outcomes using extremely short outcome periods \(k\leq 3\) may demonstrably lead to highly flexible (and accurate) SICR-predictions at the account-level. However, the same volatile SICR-predictions'stabilise' somewhat upon their aggregation into the portfolio-level expected SICR-rate, as shown in Fig. 6a in orange. It is therefore useful to compare the account-level \(\omega_{dsk}\) against the standard deviation of a SICR-rate's time graph, i.e., the so-called instability \(\sigma_{dsk}\) in Table 3; itself graphed in Fig. 3. Evidently, extremely flexible SICR-predictions, as produced with \(k\leq 3\), can lead to rapid oscillations in moving an account between Stages 1 and 2 over time. This oscillatory effect dampens the overall transition into Stage 2 when aggregating across accounts, hence the less responsive SICR-rate in Fig. 4 (in pink) and the lower \(\sigma_{dsk}\)-values when selecting \(k\leq 3\). It is therefore questionable to adopt such extremely short outcome periods when, despite their greater prediction accuracy, the associated volatility of the resulting SICR-predictions do not meaningfully translate into more dynamic SICR-rates, as expected at the portfolio-level. Having disqualified \(k\leq 3\), we can similarly disregard \(k\geq 18\) given the worsening accuracy (AUC) of SICR-predictions. Larger \(k\)-values also result in less flexible SICR-predictions over time, as measured by decreasing \(\omega_{dsk}\)-values. Simultaneously, the resulting SICR-rates from these larger \(k\)-values are less responsive to externalities such as the 2008-GFC, as supported by decreases in both the early-warning degree and in \(\sigma_{dsk}\). Regarding the remaining \(k\)-choices, we note that the rate of change in \(\omega_{dsk}\) from Table 3 slows down significantly for \(k\in[6,12]\) Figure 5: ROC-graphs using logistic regression as classifier with SICR-definition class 1a (\(d=1,s=1\)), having varied the outcome period \(k\). In **(a)**, the probability scores are evaluated for each resulting logit model. In **(b)**, the probability scores are discretised by imposing a cost-sensitive cut-off score (Generalised Youden Index), whereupon new ROC-graphs are obtained. 
In both **(a)** and **(b)**, the AUC-values are reported for each SICR-definition using \(\mathcal{D}_{\mathcal{V}}\), together with bootstrapped 95% confidence intervals and corresponding Gini-values. This plateauing effect suggests that selecting slightly larger \(k\)-values within this range will not overly erode the flexibility of account-level SICR-predictions. At the same time, the increasingly higher \(\sigma_{dsk}\)-values, as associated respectively with \(k\in\{6,9,12\}\), imply greater dynamicity in the overall SICR-rate of the portfolio, especially when subjected to an unfolding macroeconomic crisis. The midpoint hereof is \(k=9\), which seems 'optimal' when considering the trade-offs amongst the AUC-values, \(\omega_{dsk}\), \(\sigma_{dsk}\), and the responsiveness of the resulting SICR-rate to externalities. The time graphs of actual versus expected SICR-rates are shown in Fig. 6 for each \(k\in\{3,6,9,12\}\), having aggregated actual SICR-statuses and associated SICR-predictions respectively into the series \(A_{t}\) and \(B_{t}\) over time \(t\). Ideally, both time graphs (green and orange) should closely overlap each other, which would indicate aggregated predictions agreeing with reality; hence an excellent SICR-model. Accordingly, we measure the discrepancy between \(A_{t}\) and \(B_{t}\) using the MAE across all \(k\)-values (including \(k\geq 18\), even if not shown), denoted by \(m_{1}\) and given in Fig. 6.
Figure 6: Comparing actual versus expected SICR-rates over time within \(\mathcal{D}_{S}\) for SICR-definition class \(1\)a (\(d=1,s=1\)) across shorter outcome periods \(k\in[3,6,9,12]\). Expected SICR-rates are the mean probability scores obtained from the corresponding SICR-model. The discretised variety results from imposing the corresponding cut-off \(c_{dsk}\) on these probability scores, followed by taking the mean SICR-rate. The MAE between the actual and expected SICR-rates is overlaid in summarising the line graph discrepancies over time.
Barring the extreme case of \(k\geq 36\), all \(m_{1}\)-values are fairly similar with a mean error of \(0.44\%\) across \(k\), which is reassuringly low and corroborates the relatively high AUC-values in Table 3. Evidently, the underlying SICR-models can accurately predict SICR-events regardless of \(k\), despite the aggregated SICR-rates becoming less responsive to externalities as \(k\) increases. Another type of SICR-rate emerges in Fig. 6 when first dichotomising the SICR-models' probability scores \(h\left(\mathbf{x}_{it}\right)\) using \(c_{dsk}\). The resulting 'discrete' expected SICR-rate \(C_{t}\) (purple) is similarly compared to \(A_{t}\) by calculating the associated MAE; itself denoted as \(m_{2}\) and printed in Fig. 6. Clearly, there are some large discrepancies between \(A_{t}\) and \(C_{t}\) during some periods, particularly during the 2008-GFC, with a mean \(m_{2}\)-value of \(1.12\%\) across \(k\in\{3,6,9,12,18,24,36\}\); almost three times larger than the mean \(m_{1}\)-value. Nonetheless, the smallest \(m_{2}\)-value occurred at \(k=9\), which further supports its choice as the 'optimal' outcome period. Experimentation showed however that \(C_{t}\) is highly sensitive to the choice of the \(c_{dsk}\)-value, which can certainly be tweaked in future work. In this regard, when estimating the associated Generalised Youden Index \(J_{a}\), the misclassification cost ratio \(a\) can be further increased in minimising false negatives. 
Doing so should at least increase the prevalence of \(C_{t}\geq A_{t}\) over \(t\), i.e., rather the bank be over-provided than under-provided in its loss provision; a risk-prudent outcome under IFRS 9. Moreover, the \(a\)-value can itself be attenuated to a chosen SICR-definition and the resulting SICR-model, instead of fixing \(a\) to a single value across all SICR-definitions, as in this study.
### Varying the level of stickiness \(s\) within SICR-definitions
Recall that the \(s\)-parameter in Eq. 1 controls the number of account-level delinquency tests that are conducted over time in sequence. The premise of larger \(s\)-values is to filter out fickle \(\mathcal{G}(d,s,t)\)-values (or SICR-statuses) that fluctuate between \(0\) and \(1\) over time \(t\) for a given account. As \(s\) increases, the resulting \(\mathcal{Z}_{t}(d,s,k)\)-values (or SICR-outcomes) will increasingly equal \(1\) only for more persistent bouts of delinquency. Put differently, overall SICR-classification becomes more strenuous and "less paranoid" for larger \(s\)-values, which results in SICR-events becoming rarer. In turn, scarcer SICR-events at the account-level imply lower SICR-rates on the portfolio-level, as shown in Fig. 7. Consider any actual SICR-rate \(A_{t}(k,s)\) from either Fig. 3 or Fig. 7 resulting from a particular \((k,s)\)-tuple, and compare it with another series \(A_{t}(k,s+1)\). Evidently, the latter SICR-rate is generally smaller than the former over time, i.e., \(A_{t}(k,s)>A_{t}(k,s+1)\) is true at most \(t\) for \(s\in\{1,2\}\). Moreover, both the mean values and standard deviation of the respective SICR-rates decrease as \(s\) increases when keeping \(k\) constant at any value, as shown in the inset graphs of Fig. 3 and Fig. 7. These results attest to the stabilising yet strenuous effect of larger \(s\)-values on overall SICR-classification, thereby availing \(s\) as a useful lever in defining SICR-events. Notwithstanding, Fig. 7 shows that larger \(k\)-values still yield increasingly more unstable SICR-rates across \(s\geq 2\) with progressively greater SICR-means, which is similar to subsection 4.2 for \(s=1\). Accordingly, the \(s\)-parameter does not seem to override the 'base' destabilising effects of the \(k\)-parameter, despite the former's generally stabilising yet debilitating effect on SICR-rates and its prevalence. Furthermore, if SICR-staging should ideally preempt default, then the resulting SICR-rates should reasonably exceed default rates; a principle already used in partially disqualifying certain \(k\)-values in subsection 4.2. As such, Fig. 7 shows that both \(A_{t}(k=3,s=2)\) and \(A_{t}(k\leq 6,s=3)\) do not exceed the prevailing \(6\%\) default rates during the 2008-GFC, which suggests discarding the associated SICR-definitions. As in subsection 4.2, we calculate the early-warning degree \(b(k,s)-a(k,s)\) for \(s\geq 2\), graphed in Fig. 8 in pink. Clearly, \(k\in\{6,9\}\) produce SICR-rates that react strongly to the 2008-GFC by increasing quite reassuringly by \(2\%\)-\(3\%\) points, which are similar to the SICR-rates in Fig. 4 for \(s=1\). However, the SICR-rates yielded by \(k=12\) and \(s\geq 2\) had a lacklustre response to unfolding crises with \(b(k,s)-a(k,s)\leq 1\%\) points, at least relative to their sibling series in Fig. 4 for \((k=12,s=1)\). That said, the affected SICR-rate series in Fig. 7 is already high even before the 2008-GFC, which explains the lacklustre response. This result is reminiscent of the SICR-rates in Fig. 
4 for longer outcome periods \(k\geq 18\) and \(s=1\), where the unresponsiveness partially disqualified the associated SICR-definitions. Nevertheless, we also calculate the recovery degree \(b(k,s)-c(k,s)\) for \(s\geq 2\), shown in Fig. 8 in green. Larger \(k\)-values imply SICR-rates that recover slightly faster (by 5%-6% points) back to their post-GFC SICR-means, thereby affirming appropriate resiliency within the underlying SICR-definitions; similar to Fig. 4 for \(k\leq 12\) and \(s=1\). On the other hand, the \(s\)-parameter does seem to dampen the resiliency-level itself as \(s\) increases, when keeping \(k\) constant at any value. In particular, \(b(k,s+1)-c(k,s+1)\) is universally less than Figure 7: Comparing actual SICR-rates over time and across outcome periods \(k\in\{3,6,9,12\}\) within \(\mathcal{D}_{S}\) for SICR-definition classes 1b and 1c from Table 2. Graph design follows that of Fig. 3. \(b(k,s)-c(k,s)\) for \(s\leq 2\) at every investigated \(k\)-value. This result again suggests that, although increasingly muted by the \(s\)-parameter, the 'base' effects of the \(k\)-parameter remain within the resulting SICR-rates. The same performance measures from Table 3 are repeated in Table 4 for evaluating the SICR-models developed using SICR-definition classes 1b and 1c. Evidently, larger \(k\)-values lead to increasingly scarcer SICR-events (progressively lower prevalence \(\phi_{dsk}\)-values) across all \(s\)-values. This scarcity is further exacerbated by larger \(s\)-values, which further scuppers SICR-prevalence \(\phi_{dsk}\) when fixing \(k\) to any value. Furthermore, the same trend in AUC-estimates (and associated 95% confidence intervals) remains intact such that the discriminatory power wanes as the outcome period lengthens. However, stickier SICR-definitions (or larger \(s\)-values) seemingly produce increasingly more accurate SICR-models, when keeping \(k\) constant at any value. This result suggests that the \(s\)-parameter succeeds in filtering out fickle SICR-statuses and retains only the more persistent cases of delinquency, as described earlier. Amongst other effects, larger \(s\)-values therefore equate to "purifying the intrinsic SICR-signal" in finding a more accurate statistical relationship amongst input variables, thereby explaining larger AUC-values. Larger \(s\)-values also produce SICR-predictions \(h\) (\(\mathbf{x}_{it}\)) that are progressively less flexible over \(t\) for an average account \(i\). This decreased flexibility is evident in the smaller \(\omega_{dsk}\)-values when fixing \(k\) to any value in either Table 3 or Table 4. Furthermore, and similar to AUC-values, \(\omega_{dsk}\) remains a monotonically decreasing function of \(k\), as shown previously in subsection 4.2, regardless of the \(s\)-parameter. Consequentially, SICR-models with shorter outcome periods and stickier SICR-definitions will yield more accurate predictions \(h\) (\(\mathbf{x}_{it}\)), albeit with greater variance as \(k\) decreases and lower variance as \(s\) increases. However, upon aggregating the highly flexible/accurate SICR-predictions from models built with \(k=3\), we again observe less dynamic SICR-rates in Fig. 7 across all Figure 8: Various summary statistics of the actual SICR-rates from Fig. 7 across chosen outcome periods \(k\) for SICR-definition classes 1b and 1c from Table 2. Graph design follows that of Fig. 4. \(s\)-values; itself further corroborated by lower instability \(\sigma_{dsk}\)-values for \(k=3\). 
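For reference, the two dispersion measures used throughout Tables 3-4 can be sketched as follows. The Python snippet is illustrative only: the column names (LoanID, Date, Score, Y) are assumed rather than taken from our data, and in the study \(\omega_{dsk}\) is computed on the validation set while \(\sigma_{dsk}\) is taken from the observed SICR-rates.

```python
import pandas as pd

def flexibility_and_instability(scored):
    """Given a scored panel with one row per account-month, compute:
    - omega: prediction flexibility, i.e. the mean over accounts of the standard
      deviation of each account's predicted SICR-probabilities over time;
    - sigma: SICR-rate instability, i.e. the standard deviation over time of the
      portfolio-level actual SICR-rate (mean of the observed 0/1 outcomes per month)."""
    omega = scored.groupby("LoanID")["Score"].std(ddof=1).mean()
    sicr_rate = scored.groupby("Date")["Y"].mean()       # actual SICR-rate per month
    sigma = sicr_rate.std(ddof=1)
    return omega, sigma

# Illustrative usage on a toy panel of two loans over three months
scored = pd.DataFrame({
    "LoanID": [1, 1, 1, 2, 2, 2],
    "Date":   pd.to_datetime(["2008-01-31", "2008-02-29", "2008-03-31"] * 2),
    "Score":  [0.04, 0.06, 0.11, 0.02, 0.02, 0.03],
    "Y":      [0, 0, 1, 0, 0, 0],
})
print(flexibility_and_instability(scored))
```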
As in subsection 4.2, using extremely short outcome periods \(k=3\) are seemingly futile since the resulting SICR-rates are surprisingly stagnant and insensitive to externalities, despite the greater volatility of account-level SICR-predictions. Having disqualified \(k=3\) for all \(s\)-values, we similarly discard \(k=12\) for \(s\geq 2\), given: 1) the lacklustre response of the associated SICR-rates to the 2008-GFC in Fig. 8; and 2) the underlying SICR-predictions that are relatively inaccurate and inflexible, given their AUC-values and \(\omega_{dsk}\)-values. Notwithstanding the remaining \(k\)-values, the average change in \(\omega_{dsk}\)-values across \(s\) is also the smallest for \(k\in\{6,9\}\), respectively -0.4% and -0.45%. This plateauing effect suggests that the loss in prediction flexibility of slightly larger \(k\)-values is not too onerous. As in Fig. 6, we compare the time graphs of actual (\(A_{t}\)) versus expected (\(B_{t}\)) SICR-rates across both \(k\in\{3,6,9,12\}\) and \(s\in\{1,2,3\}\); see Figs. 9-10. Having calculated the MAE \(m_{1}(k,s)\) between \(A_{t}(k,s)\) and \(B_{t}(k,s)\) for a given \((k,s)\)-tuple, the discrepancy is mostly similar across \(k\)-values when keeping \(s\) constant. By implication, the aggregated predictions from SICR-models agree closely with observed reality for any \(k\), notwithstanding that both \(A_{t}\) and \(B_{t}\) become less dynamic as \(k\) increases. A decreasing trend appears in the mean errors across \(k\) as \(s\) increases, i.e., \(\{0.44\%,0.37\%,0.32\%\}\) respectively for \(s\in\{1,2,3\}\), which corroborates the associated increases in AUC-values within Tables 3-4. Lastly, the discrete expected SICR-rates \(C_{t}\) (purple) can be similarly analysed by computing the MAE \(m_{2}(k,s)\) between \(A_{t}(k,s)\) and \(C_{t}(k,s)\) for any given \((k,s)\)-tuple. The mean errors \(\{1.03\%,1.07\%,1.22\%\}\) across \(k\) remain 2-4 times greater than the mean \(m_{1}\)-values, respective to each \(s\)-value. However, every \(C_{t}\) series remains highly sensitive to the chosen cut-off \(c_{dsk}\)-value across both \(k\) and \(s\), which impedes meaningful analysis. Larger \(s\)-values can however achieve a greater prevalence of \(C_{t}\geq A_{t}\) without changing the misclassification cost ratio \(a\), which is reassuringly risk-prudent under IFRS 9. These results suggest the following 'optima' in defining SICR-events for \(d=1\), given the trade-offs amongst AUC-values, flexibility \(\omega_{dsk}\), instability \(\sigma_{dsk}\), and the responsiveness/resiliency of resulting SICR-rates amidst macroeconomic malaise. For no stickiness \(s=1\), choose \(k\in[6,12]\); for some stickiness \(s=2\), choose \(k\in[6,9]\); for lots of stickiness \(s=3\), choose \(k=9\). Smaller \(k\)-values within these ranges will yield more accurate and flexible SICR-predictions, whereupon the resulting SICR-rates become less dynamic, have lower means, and are less responsive to externalities. Larger \(s\)-values will also produce more accurate but less flexible SICR-predictions, though the resulting SICR-rates have markedly lower means and are increasingly insensitive to externalities. These trade-offs are intuitively balanced at \(k=9\) across \(s\) as well as at \(s=2\) across \(k\in\{6,9\}\), though future studies can certainly investigate further. 
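As a companion to Figs. 4 and 8, the summary statistics behind the early-warning and recovery degrees are simple to compute from any SICR-rate series. The Python sketch below is illustrative only: the monthly series is simulated, and the 'post-GFC' boundary of December-2009 follows the convention used in the text.

```python
import pandas as pd

def sicr_rate_summaries(rate, post_gfc_start="2010-01-01"):
    """Summarise a monthly SICR-rate series (indexed by dates):
    a = earliest rate, b = maximum rate, c = mean rate after Dec-2009 ('post-GFC').
    Early-warning degree = b - a; post-GFC recovery degree = b - c."""
    a = rate.iloc[0]
    b = rate.max()
    c = rate[rate.index >= post_gfc_start].mean()
    return {"earliest": a, "maximum": b, "post_gfc_mean": c,
            "early_warning_degree": b - a, "recovery_degree": b - c}

# Toy SICR-rate series rising into the 2008-GFC and normalising thereafter
idx = pd.date_range("2007-01-01", periods=72, freq="MS")      # Jan-2007 to Dec-2012
rate = pd.Series(0.04, index=idx)
rate.loc["2008-09-01":"2009-06-01"] = 0.07                     # crisis peak (toy values)
print(sicr_rate_summaries(rate))
```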
\begin{table} \begin{tabular}{l l l l l l l l} \hline \hline **Definition** & **Outcome** & **Prevalence** & **AUC-** & **Flexibility** & **Instability** & **Cut-off** & **AUC-** \\ & **period** \(k\) & \(\phi_{dsk}\) & **Probabilistic** & \(\omega_{dsk}\) & \(\sigma_{dsk}\) & \(c_{dsk}\) & **Discrete** \\ \hline
1b(i) & \(k=3\) & 4.74\% & 93.8\% \(\pm\) 0.41\% & 3.7\% & 1.07\% & 10.8\% & 85.4\% \(\pm\) 0.70\% \\
1b(ii) & \(k=6\) & 4.72\% & 89.4\% \(\pm\) 0.60\% & 3.0\% & 1.38\% & 11.3\% & 78.4\% \(\pm\) 0.80\% \\
1b(iii) & \(k=9\) & 4.68\% & 88.0\% \(\pm\) 0.62\% & 2.9\% & 1.54\% & 8.6\% & 76.8\% \(\pm\) 0.83\% \\
1b(iv) & \(k=12\) & 4.61\% & 86.5\% \(\pm\) 0.60\% & 2.6\% & 1.71\% & 10.3\% & 73.8\% \(\pm\) 0.79\% \\
1c(i) & \(k=3\) & 3.82\% & 95.7\% \(\pm\) 0.37\% & 3.1\% & 0.97\% & 6.8\% & 90.3\% \(\pm\) 0.69\% \\
1c(ii) & \(k=6\) & 3.81\% & 91.5\% \(\pm\) 0.57\% & 2.8\% & 1.28\% & 9.2\% & 81.8\% \(\pm\) 0.86\% \\
1c(iii) & \(k=9\) & 3.78\% & 88.9\% \(\pm\) 0.64\% & 2.5\% & 1.44\% & 10.8\% & 76.0\% \(\pm\) 0.91\% \\
1c(iv) & \(k=12\) & 3.73\% & 86.7\% \(\pm\) 0.68\% & 2.2\% & 1.57\% & 8.6\% & 74.0\% \(\pm\) 0.89\% \\ \hline \hline \end{tabular} \end{table} Table 4: Various performance measures of SICR-models across different \(k\)-values for SICR-definition classes 1b (\(d=1,s=2\)) and 1c (\(d=1,s=3\)) from Table 2. Table design follows that of Table 3.
### The negative impact of greater delinquency \(d=2\) within SICR-definitions
In the interest of prudence, a rebuttable presumption is articulated in §5.5.11 and §B5.5.20 within IFRS 9 (2014), such that a SICR-event occurs once the arrears have reached 30 days past due, i.e., \(g_{0}(t)\geq d\) where \(d=1\). However, this presumption (or 'backstop') can be rebutted if there is evidence against the supposed deterioration of credit quality, despite the delinquency accruing to \(g_{0}(t)=2\). Examples hereof include simple administrative errors that wrongfully advanced delinquency; or, perhaps more fundamentally, when the increased delinquency proved to be temporary. We examine the latter case by adjusting \(d=2\) within our SICR-framework, thereby resulting in the remaining twelve SICR-definitions and associated SICR-models, i.e., classes 2a-2c in Table 2 (darker shades). Reassuringly, Table 5 shows that the high-level trends in most performance measures remain largely intact (albeit greatly muted) across \(k\) and \(s\) for \(d=2\), compared to the results of \(d=1\) from Sections 4.2-4.3. The exception is the AUC-measure, whose values are similar to those AUC-values from \(d=1\), particularly for \(s\leq 2\). By implication, the underlying SICR-models can still produce reasonably accurate SICR-predictions when setting \(d=2\). However, the confidence intervals of these AUC-estimates expand quite noticeably as either \(k\) or \(s\) increases. This loss of confidence is likely due to the growing scarcity of SICR-events when fixing \(d=2\), as reflected in the extremely low prevalence rates \(\phi_{2sk}\).
Figure 9: Comparing actual versus expected SICR-rates over time within \(\mathcal{D}_{\mathcal{S}}\) for SICR-definition class 1b (\(d=1,s=2\)) across outcome periods \(k\in[3,6,9,12]\). Graph design follows that of Fig. 6.
In isolation, the extremely low \(\phi_{2sk}\)-rates in Table 5 imply that the resulting Stage 2 provisions under IFRS 9 would be similarly small, which is certainly unintuitive given the greater underlying delinquency of using \(d=2\). 
In fact, the mean \(\phi_{2sk}\)-rates over \(k\) respective to \(s=\{1,2,3\}\) are about \(\{9.5,23.4,52.4\}\) times smaller than the corresponding mean \(\phi_{1sk}\)-rates for \(d=1\). Since the vast majority of delinquent cases \(g_{0}(t)\geq d\) clearly coincide with \(d=1\), it is therefore questionable to build bespoke SICR-models with \(d=2\), particularly since the former already includes the latter cases by definition. Furthermore, the resulting actual SICR-rates implied by \(d=2\) are significantly lower than those of \(d=1\), i.e., \(A_{t}(d=2,s,k)<A_{t}(d=1,s,k)\) over time \(t\) and across all \((s,k)\)-combinations. Accordingly, the default rates of the 2008-GFC exceed these \(A_{t}(2,s,k)\)-rates considerably, which contradicts IFRS 9 in providing timeously for credit losses. Moreover, these \(A_{t}(2,s,k)\)-rates are not as dynamic as their \(A_{t}(1,s,k)\)-counterparts in responding to macroeconomic crises, as partially measured by \(\sigma_{dsk}\) Figure 10: Comparing actual versus expected SICR-rates over time within \(\mathcal{D}_{S}\) for SICR-definition class 1c (\(d=1,s=3\)) across outcome periods \(k\in[3,6,9,12]\). Graph design follows that of Fig. 6. in Table 5. Given these results, we recommend against using \(d=2\) and therefore support using the backstop, as implicitly included within our SICR-framework when using \(d=1\). ## 5 Conclusion The meaning of a SICR-event has become needlessly nebulous when modelling loan impairments under IFRS 9. The resulting complexity is arguably a consequence of using an approach based on drawing arbitrary PD-comparisons; an approach with at least two prominent challenges. Firstly, it requires PD-estimates that are reasonably accurate at any two time points, which is itself challenging. Secondly, the approach requires evaluating the difference between any two PD-estimates against a subjectively-chosen threshold (or'magnitude'), whose selection can be ambiguous and contentious. Intuitively, too small a threshold can trigger the mass migration of loans into Stage 2, which can become prohibitively cost-inefficient and even 'paranoid'. On the other hand, too large a threshold may never be materially breached, thereby keeping loans 'naively' in Stage 1 and leaving a bank grossly under-provided. At the moment, choosing any threshold is non-trivial given the lack of an overarching optimisation framework. Practitioners and regulators alike have little choice but to rely on subjective discretion and/or regulatory prescription; both of which can be sub-optimal. More generally, these two challenges of the PD-comparison approach can counteract the main imperative of IFRS 9, i.e., recognising credit losses timeously. As an alternative, we contribute a concise and simple SICR-framework from which SICR-definitions may be generated and tested. Any such (target) definition can then be used in building a statistical SICR-model (or supervised binary classifier); itself another contribution. This SICR-model can probabilistically classify a loan into either Stage 1 or 2, using a rich and dynamic set of macroeconomic and obligor-specific input variables. As supported by SSB.5.5.12 in IFRS 9, our SICR-modelling approach does not rely on PD-comparisons and therefore requires neither underlying PD-models nor selecting any magnitude threshold. A SICR-modelling approach is relatively more parsimonious since the inputs can relate more directly to the _change_ in credit risk, instead of just default risk alone. 
Moreover, a SICR-modelling approach prevents any pre-existing issues within a PD-model from bleeding into staged impairment classification under IFRS 9, which can be a practical benefit. \begin{table} \end{table} Table 5: Selected performance measures of SICR-models across all SICR-definitions from Table 2. Table design follows that of Table 3 and Table 4. In generating SICR-definitions, our framework avails three useful parameters: 1) the delinquency threshold \(d\) in testing accrued delinquency at any point; 2) the level of stickiness \(s\) when testing delinquency over consecutive periods; and 3) the outcome period \(k\) over which to predict SICR-statuses. In varying these parameters, we effectively produced 27 different SICR-definitions as unique combinations of the triple \((d,s,k)\). Each SICR-definition is applied on the same South African mortgage data from 2007-2019, whereupon an account-level SICR-model is estimated using binary logistic regression per definition. We demonstrate that shorter outcome periods can yield SICR-predictions that are increasingly more accurate and flexible over loan life, at least for \(k\geq 6\) months. However, upon aggregating these account-level predictions to the portfolio-level, the resulting SICR-rate appears less dynamic over time for smaller \(k\)-values, have progressively lower means, and are increasingly insensitive to unfolding economic crises like the 2008-GFC. Some of these relationships are not necessarily linear: overly long outcome periods (\(k\geq 18\)) yield SICR-rates that are similarly unresponsive to market failures, in addition to the degrading prediction accuracy. The \(s\)-parameter has a stabilising yet strenuous effect on SICR-classification, wherein SICR-events become scarcer as \(s\) increases. Greater stickiness yield account-level SICR-predictions that are more accurate but also less flexible over loan life. From these stickier SICR-definitions, the resulting portfolio-level SICR-rates become less dynamic over time, have lower means, and are increasingly insensitive to the 2008-GFC. Furthermore, both \(k\) and \(s\) parameters interact with each other in that SICR-predictions become more accurate as \(k\) decreases and \(s\) increases. However, the SICR-predictions' variance over time (or flexibility) decreases for larger \(s\) but increases again for smaller \(k\). Lastly, choosing \(d=2\) yields extremely scarce SICR-events across all values of \(s\) and \(k\) that would compromise the resulting Stage 2 provision-levels; all of which supports the 'backstop' (\(d=1\)) of IFRS 9. In conclusion, our work forms a reusable analytical framework in which any SICR-definition can be examined on the accuracy and flexibility of the resulting SICR-predictions, the instability of implied SICR-rates, and its responsiveness to economic downturns. A reasonable trade-off exists amongst these factors when choosing \(k=9\) across any \(s\)-value, as well as when selecting \(s=2\) across \(k\in\{6,9\}\). These parameter choices should yield SICR-models whose predictions are both highly accurate and reasonably flexible over time, while the resulting SICR-rates remain dynamic and reassuringly sensitive to externalities. Future research can examine SICR-modelling using data from other loan portfolios and across other credit markets; both of which may affect the resulting choices of \((k,s)\). In this regard, future studies can explore an even finer-grained list of \(k\)-values, particularly for \(k\in[4,12]\). 
Doing so can refine the relationships that we have found, which can help in devising a more mature expert system for selecting parameters optimally. Furthermore, the misclassification cost ratio can be tweaked towards improving the discrete output of a specific SICR-model, instead of keeping the ratio constant across all SICR-definitions, as we have done. In minimising false negatives, the misclassification cost can itself be embedded when training a SICR-model, instead of imposing such costs exogenously afterwards. As for modelling techniques, future researchers can certainly expand our study by experimenting with more advanced binary classifiers than logistic regression, e.g., Support Vector Machines as in Harris (2013). Lastly, a future study can focus on stress-testing relevant input variables (e.g., macroeconomic covariates) within a SICR-model, perhaps towards forecasting overall SICR-rates given a particular macroeconomic scenario. ### Acknowledgements This work is not financially supported by any institution or grant, with no known conflicts of interest that may have influenced the outcome of this work. The authors would like to thank Prof. Dirk Tasche for reviewing our work, as well as all anonymous referees and editors for their valuable contributions. ## Appendix A Appendix In subsection A.1, we discuss the fundamentals of a statistical technique called logistic regression, its use in quantitative finance, as well as the Generalised Youden Index \(J_{a}\) in dichotomising a logit-model. Thereafter, the input spaces are summarised in subsection A.2 across all SICR-models, whereupon we discuss some interesting patterns found during feature selection. ### Logistic regression as supervised classifier technique Like many supervised techniques, logistic regression aims to find a statistical relationship between a set of random input variables \(\mathbf{X}\) and the binary-valued outcome (or Bernoulli random variable) \(Y\in\{\mathcal{C}_{0}:0\;;\mathcal{C}_{1}:1\}\). This \(Y\) records either a non-event (\(\mathcal{C}_{0}\)) or a SICR-event (\(\mathcal{C}_{1}\)), both of which are also called a 'negative' or 'positive' respectively. From Hosmer and Lemeshow (2000, pp. 1-10), Bishop (2006, SS4), Hastie et al. (2009, SS4), and James et al. (2013, SS4.3), the focus of logistic regression is on estimating the conditional mean \(\mathbb{E}\left[Y|\mathbf{X}\right]\) as the probability of class \(Y\) given \(\mathbf{X}\). Let \(\pi(\mathbf{X})\in[0,1]\) represent the posterior class probability of \(Y=\mathcal{C}_{1}\) given a \(p\)-dimensional random vector \(\mathbf{X}\). The conditional mean \(\mathbb{E}\left[Y|\mathbf{X}\right]\) for either class \(\mathcal{C}_{0}\) or \(\mathcal{C}_{1}\) is commonly estimated using the standard logistic function \(\sigma:\mathbb{R}\rightarrow[0,1]\), itself defined as \(\sigma(w)=e^{w}/(1+e^{w})=(1+e^{-w})^{-1}\). Then, \(w\) is replaced with a linear combination \(\mathbf{\beta}^{T}\mathbf{x}\) of the realised input variables \(\mathbf{x}=\left\{x_{1},\ldots,x_{p}\right\}\) and its associated coefficient vector \(\mathbf{\beta}=\left\{\beta_{1},\ldots,\beta_{p}\right\}\) with intercept \(\beta_{0}\). 
Accordingly, \(\mathbb{E}\left[Y|\mathbf{X}\right]\) for either class \(\mathcal{C}_{0}\) or \(\mathcal{C}_{1}\) is expressed respectively as \[\mathbb{P}\left[Y=\mathcal{C}_{1}|\mathbf{X}=\mathbf{x}\right] = \pi(\mathbf{x}) = \frac{\exp\left(\mathbf{\beta}^{T}\mathbf{x}+\beta_{0}\right)}{1+\exp \left(\mathbf{\beta}^{T}\mathbf{x}+\beta_{0}\right)} = \left(1+e^{-\mathbf{w}}\right)^{-1}\quad\text{and}\] \[\mathbb{P}\left[Y=\mathcal{C}_{0}|\mathbf{X}=\mathbf{x}\right] = 1-\pi(\mathbf{x}) = \frac{1}{1+\exp\left(\mathbf{\beta}^{T}\mathbf{x}+\beta_{0}\right)} = \left(1+e^{\mathbf{w}}\right)^{-1}. \tag{3}\] Dividing \(\pi(\mathbf{x})\) in Eq. 3 by \(1-\pi(\mathbf{x})\) yields the _odds_ in favour of class \(\mathcal{C}_{1}\) occurring. Taking the natural log hereof then transforms the odds into a (more appealing) symmetric quantity. E.g., the natural log of an odds (in favour) of 4:1 versus its opposite of 1:4 is 0.602 and -0.602 respectively. More formally, the function \(g(\mathbf{x})=\ln\left[\frac{\pi(\mathbf{x})}{1-\pi(\mathbf{x})}\right]\) effectively maps the linear combination \(\mathbf{\beta}^{T}\mathbf{x}+\beta_{0}\) to the so-called _log-odds_ or _logit_. This logit \(g(\mathbf{x})\) avails many of the desirable properties of a linear regression model, e.g., the output of \(g(\mathbf{x})\) may be continuous, can range from \(-\infty\) to \(\infty\), and is linear in its parameters. Lastly, the unknown regression coefficients \(\mathbf{\beta}\) in Eq. 3 are found using the classical _maximum likelihood estimation_ procedure from statistical literature. That is, \(\mathbf{\beta}\)-values are found such that the predicted probability \(h(\mathbf{x}_{i})=\hat{\pi}(\mathbf{x}_{i})\) of each observation \(i\) approximates the observed 0/1-encoded \(y_{i}\)-value as closely as possible. For each trained logit-model, the probability scores \(h(\mathbf{x})\) that estimate \(\mathbb{P}\left[\mathcal{C}_{1}|\mathbf{x}\right]\) will need to be dichotomised (or discretised) in yielding binary 0/1-decisions. This dichotomisation implies choosing a cut-off \(c\in[0,1]\) such that the discretised classifier \(h^{\prime}(\mathbf{x})=1\) if \(h(\mathbf{x})>c\) and \(h^{\prime}(\mathbf{x})=0\) if otherwise. For every possible \(c\)-value, the probability of a true positive (or SICR-event correctly predicted as such) is \(q(c)=\mathbb{P}\left(h\left(\mathbf{x}\right)>c\mid\mathcal{C}_{1}\right)\), also known as sensitivity. Likewise, the probability of a true negative (or non-event correctly predicted as such) is \(p(c)=\mathbb{P}\left(h\left(\mathbf{x}\right)\leq c\mid\mathcal{C}_{0}\right)\), also called specificity. In finding the optimal cut-off \(c^{*}\) that incorporates both sensitivity and specificity, consider the Youden Index \(J\) that is widely used in the biostatisticsical literature; see Youden (1950), Greiner et al. (2000), and Schisterman et al. (2008). This index \(J\) is defined as the maximisation problem \[J=\max_{c}\left\{q(c)+p(c)-1\right\}\,. \tag{4}\] Clearly, the classical \(J\) assigns equal weight to both sensitivity and specificity, which inappropriately equates the misclassification cost of a false negative to that of a false positive. However, the Generalised Youden Index \(J_{a}\) improves upon \(J\) by rendering \(c\) sensitive to both types of misclassification costs; see Geisser (1998), Kaivanto (2008), and Schisterman et al. (2008). In particular, let \(a>0\) be a cost multiple (or ratio) of a false negative relative to a false positive. 
If \(\phi\) is the estimated prevalence of the \(\mathcal{C}_{1}\)-event, i.e., the prior \(\mathbb{P}\left(\mathcal{C}_{1}\right)\), then \(J_{a}\) is expressed for a given \(c\) as \[J_{a}(c)=q(c)+\frac{1-\phi}{a\phi}\cdot p(c)-1\,, \tag{5}\] whereupon \(c^{*}\) is given by \[c^{*}=\arg\max_{c}J_{a}(c). \tag{6}\] The use of logistic regression in binary classification is ubiquitous, particularly in the field of application credit scoring, as was first demonstrated in Wiginton (1980). In this regard, logistic regression is considered by many authors to be the most successful regression technique in quantitative finance; see Hand and Henley (1997), Thomas et al. (2002, SS4.5, SS10-11), Siddiqi (2005), Thomas (2009, SS1), and Bolton (2010). Beyond application credit scoring, this technique is also typically used in pre-screening loan offers, detecting fraud cases, scoring collection success, informing direct marketing offers, and in risk-based pricing. As such, the ubiquity of the logistic regression technique suggests its use in the present study. At the very least, this technique and its results can serve as a benchmark when using more advanced classification techniques in future. ### Feature selection: constructing the input space of each SICR-model Several input variables from multiple themes are considered in constructing the'standardised' input space for each SICR-definition class, as explained in subsection 4.1. In Table 6, we summarise and briefly describe the final set of input variables per associated definition class. Given their widespread prevalence, macroeconomic variables (and their lagged variants) have a notable impact on SICR-events irrespective of definition, which supports the forward-looking information requirement of IFRS 9 (2014). This impact is generally greater for SICR-definition class 1 (\(d=1\)) than for class 2 (\(d=2\)), while the exact set of macroeconomic variables (MVs) depends on the choice of both \(d\) and \(s\). By implication, some MVs are more pertinent than others in predicting greater arrears-levels (\(d=2\)), and _vice versa_ for predicting smaller arrears-levels (\(d=1\)). Given the floating interest rates within a mortgage portfolio, it is intuitive that a change in the central bank rate (Repo_Rate_0mo) immediately affects a borrower's affordability and associated SICR-risk; particularly for \(d=1\). Furthermore, we purposefully included both repo rate and inflation growth despite the former's role in controlling the latter, which implies high collinearity. In extreme market conditions, the supposed structural relationship between these two variables can change due to government intervention, such as during the COVID-19 pandemic. Nonetheless, both variables remain statistically significant across most SICR-models, particularly for \(d=1\). 
| **Variable** | **Description** | **Definitions** | **Theme** |
| --- | --- | --- | --- |
| ArrearsTrend_3mo | The 3-month arrears trend, obtained qualitatively by comparing the current arrears-level to that of 3 months ago | 1a, 1b, 1c, 2a, 2b, 2c | Delinquency |
| BalanceLog | Log-transformed outstanding balance at month-end | 2a, 2b, 2c | Account-level |
| BalanceToTerm | Outstanding balance divided by the contractual term of the loan | 1b, 2b | Account-level |
| DTI_Level_6mo | Debt-to-Income: Average household debt expressed as a percentage of household income per quarter, interpolated monthly | 1a, 1b, 1c, 2a, 2b | Macroeconomic |
| DTI_Level_12mo | Debt-to-Income: 12-month lagged version of DTI_Level_6mo | 1a, 1b, 1c, 2a, 2b | Macroeconomic |
| Employment_Growth_6mo | Year-on-year growth rate in the 4-quarter moving average of employment per quarter, interpolated monthly | 2b, 2c | Macroeconomic |
| Employment_Growth_12mo | 12-month lagged version of Employment_Growth_6mo | 2b, 2c | Macroeconomic |
| g0_Delinq | Delinquency measure: number of payments in arrears; see Botha et al. (2021) | 1a, 1b, 1c, 2a, 2b, 2c | Delinquency |
| Inflation_Growth_6mo | Year-on-year growth rate in inflation index (CPI) per month | 1a, 1b, 1c, 2a, 2b, 2c | Macroeconomic |
| InterestRate_Margin | Margin between an account's nominal interest rate and the current prime lending rate; proxy for embedding risk-based pricing principles | 1a, 1b, 1c, 2a, 2b, 2c | Account-level |
| Num_ArrearsEver_24mo | Duration (in months) of account delinquency within the last 24 months (excluding current point) | 1a, 1b, 1c, 2a, 2b, 2c | Delinquency |
| PayMethod | Binned instalment payment methods, e.g., cash, debit order, payroll | 1a, 1b, 1c, 2a, 2b, 2c | Behavioural |
| PerfSpell_Num | Current performing spell number in tracking previous default spells | 1a, 1b, 1c, 2a | Delinquency |
| Prelim_Perc | Undrawn/prepaid proportion of available credit limit | 1a, 1b, 1c, 2a, 2b, 2c | Behavioural |
| RealGDP_Growth_6mo | Year-on-year growth rate in the 4-quarter moving average of real GDP per quarter, interpolated monthly | 1a | Macroeconomic |
| RealIncome_Growth_6mo | Year-on-year growth rate in the 4-quarter moving average of real income per quarter, interpolated monthly | 1b, 1c, 2c | Macroeconomic |
| RealIncome_Growth_12mo | 12-month lagged version of RealIncome_Growth_6mo | 1b, 1c, 2c | Macroeconomic |
| Receipt_InfLog | Log-transformed inferred customer receipts (or cash inflows) at month-end | 2b | Account-level |
| Repo_Rate_6mo | Prevailing repurchase rate set by the South African Reserve Bank | 1a, 1b, 1c, 2a, 2b, 2c | Macroeconomic |
| Term | Contractual term of the loan | 1a, 1b, 1c | Account-level |
| TimeInPerfSpell | Duration (in months) of current performing spell before default or competing risk | 1a, 1b, 1c, 2a, 2b | Delinquency |

Table 6: Describing the selected features across the different SICR-models within the various SICR-definition classes from Table 2.

In measuring the relative importance of input variables, we use a technique from coalitional game theory called Shapley-values; see Molnar (2022, SS9.5). 
Given an instance \(\mathbf{x}_{it}=\left\{x_{it1},\ldots,x_{itp}\right\}\) observed from a \(p\)-dimensional input space for account \(i\) at time \(t\), the contribution (or Shapley-value) of the \(j^{\text{th}}\) input variable to the SICR-prediction \(h(\mathbf{x}_{it})\) is calculated as \[S_{itj}=\beta_{j}x_{itj}-\bar{\mu}_{j}\,, \tag{7}\] where \(\bar{\mu}_{j}=\mathbb{E}_{it}\left(\beta_{j}\mathbf{X}_{j}\right)\) is the average predictor value of input \(j\), estimated across all \((i,t)\)-cases. However, this process quickly becomes time-consuming as dimensionality increases, which is why Strumbelj and Kononenko (2014) proposed an efficient estimator \(\psi\) for Eq. 7 using Monte Carlo sampling, i.e., \(\psi_{itj}\approx S_{itj}\). Using the fastshap R-package from Greenwell (2021), the average absolute Shapley-value of the \(j^{\text{th}}\) input \(\bar{\psi}_{j}=\mathbb{E}_{it}\left(\left\lvert\psi_{itj}\right\rvert\right)\) is fully estimated across all \((i,t)\)-cases for SICR-definitions 1a(i)-(iv). Our results show that the most important input variable across the investigated SICR-models is Prelim_Perc (undrawn/prepaid proportion), where large values indicate an intuitive buffer from which borrowers can draw during distressed times. Since the delinquency-level (g0_Delinq) helps define SICR-events, its high \(\bar{\psi}\)-value reassuringly suggests great importance; likewise for another delinquency-themed input, Num_ArrearsEver_24mo. In distressed times, borrowers tend to switch their payment method to more flexible forms (e.g., from debit order to cash), which explains the \(\bar{\psi}\)-value of PayMethod being the fourth largest. Regarding macroeconomic variables, DTI_Level_0mo contributes at first meaningfully to SICR-prediction for lower \(k\)-values, though its lagged sibling DTI_Level_12mo overtakes for \(k\geq 9\); thereby underscoring the importance of testing lags in SICR-modelling. Lastly, the \(\bar{\psi}\)-value of InterestRate_Margin remained consistently within the top 7 across \(k\), which corroborates the use of risk-based pricing information within SICR-modelling.
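To tie the two parts of this appendix together, the following Python sketch (our own toy illustration; the data-generating parameters, the cost ratio `a = 5`, and the use of scikit-learn rather than the study's R/`fastshap` pipeline are all assumptions for demonstration only) fits a logit-model by maximum likelihood, selects the cut-off \(c^{*}\) via the Generalised Youden Index of Eqs. 5-6, and computes the linear Shapley contributions of Eq. 7:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# toy panel standing in for the (i, t)-cases: p = 3 inputs, a rare positive class
n_obs, p = 20_000, 3
X = rng.normal(size=(n_obs, p))
true_beta = np.array([1.2, -0.8, 0.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_beta - 2.5))))

# fit the logit-model of Eq. 3 by maximum likelihood (large C ~ no regularisation)
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)
h = model.predict_proba(X)[:, 1]                      # probability scores h(x)

# Generalised Youden Index (Eqs. 5-6): optimal cut-off c* for a given cost ratio a
a, phi = 5.0, y.mean()                                # a: cost of a FN relative to a FP
cuts = np.linspace(0.01, 0.99, 99)
sens = np.array([np.mean(h[y == 1] > c) for c in cuts])    # q(c), sensitivity
spec = np.array([np.mean(h[y == 0] <= c) for c in cuts])   # p(c), specificity
J_a = sens + (1.0 - phi) / (a * phi) * spec - 1.0
c_star = cuts[np.argmax(J_a)]
print(f"prevalence phi = {phi:.3f}, optimal cut-off c* = {c_star:.2f}")

# linear Shapley decomposition of Eq. 7: S_itj = beta_j * x_itj - mean(beta_j * X_j)
beta = model.coef_.ravel()
contrib = beta * X                                    # beta_j * x_itj per observation
shapley = contrib - contrib.mean(axis=0)              # subtract the average predictor value
print("mean |Shapley| per input:", np.round(np.abs(shapley).mean(axis=0), 3))
```

Increasing the cost ratio `a` places less weight on specificity in \(J_{a}\) and therefore tends to lower \(c^{*}\), i.e., more accounts are flagged as SICR, which is the intended effect when false negatives are deemed costlier.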
2305.07168
Local Life: Stay Informed Around You, A Scalable Geoparsing and Geotagging Approach to Serve Local News Worldwide
Local news has become increasingly important in the news industry due to its various benefits. It offers local audiences information that helps them participate in their communities and interests. It also serves as a reliable source of factual reporting that can prevent misinformation. Moreover, it can influence national audiences as some local stories may have wider implications for politics, environment or crime. Hence, detecting the exact geolocation and impact scope of local news is crucial for news recommendation systems. There are two fundamental things required in this process, (1) classify whether an article belongs to local news, and (2) identify the geolocation of the article and its scope of influence to recommend it to appropriate users. In this paper, we focus on the second step and propose (1) an efficient approach to determine the location and radius of local news articles, (2) a method to reconcile the user's location with the article's location, and (3) a metric to evaluate the quality of the local news feed. We demonstrate that our technique is scalable and effective in serving hyperlocal news to users worldwide.
Deven Santosh Shah, Gosuddin Kamaruddin Siddiqi, Shiying He, Radhika Bansal
2023-05-11T22:47:38Z
http://arxiv.org/abs/2305.07168v1
Local Life: Stay Informed Around You, A Scalable Geoparsing and Geotagging Approach to Serve Local News Worldwide

###### Abstract

Local news has become increasingly important in the news industry due to its various benefits. It offers local audiences information that helps them participate in their communities and interests. It also serves as a reliable source of factual reporting that can prevent misinformation. Moreover, it can influence national audiences as some local stories may have wider implications for politics, environment or crime. Hence, detecting the exact geolocation and impact scope of local news is crucial for news recommendation systems. There are two fundamental things required in this process, (1) classify whether an article belongs to local news, and (2) identify the geolocation of the article and its scope of influence to recommend it to appropriate users. In this paper, we focus on the second step and propose (1) an efficient approach to determine the location and radius of local news articles, (2) a method to reconcile the user's location with the article's location, and (3) a metric to evaluate the quality of the local news feed. We demonstrate that our technique is scalable and effective in serving hyperlocal news to users worldwide.

## 1 Introduction

Local news is a vital source of information for users who want to stay updated on their surroundings or learn about other places. We follow the definition of local news published by Shah et al. (2023), which states: _We define local news articles that impact a specific set of users at the city/county/state level._ Shah et al. (2023) suggested there are two fundamental things that are required to keep the users informed about their surroundings, _(1) identifying whether an article is local news, and (2) detecting and recognizing its geolocation and the impact radius,_ so that we could serve the right news articles to the right audience. In this paper, we focus on the second step, assuming the news articles we get are local. Detecting the article's right location and showcasing it to the right audience not only improves user engagement (Robindro et al., 2017) but also helps keep the community bound together, preserving its culture. It keeps the users informed about their neighborhood, be it crime, events, new restaurants opening up, real estate, schools, sports, etc. Showcasing the right local news to cold start users is beneficial to start engaging them. Since geolocation is the only information we get for cold start users, personalizing the feed based on the user's geolocation helps them engage better with the articles and converts them into warm users. For warm users, who have more preferences and behaviors, we can increase their retention rate by serving them high-quality local news. We also review the existing literature on local news detection and geolocation extraction, and identify the main challenges and gaps in this research area. The papers (discussed in Section 2) focus on extracting the geolocation information from the articles; however, they fail to mention a way to reconcile the user's location with the article's location to serve the right local articles. The main challenges that we found in this research area are:
1. **Acronyms/Teams/Organizations/Highways as locations:** We came across multiple local news articles in which the geographical location name wasn't mentioned explicitly but appeared in the form of an acronym, which might be a local place or a local organization, or a highway like I-5, or even a local sports team name. The locations behind these acronyms are difficult to detect. For instance: "UW students voice concerns about recent University District crime"1, with UW standing for the University of Washington, or "Seahawks have 2 of PFF's top 30 graded safeties for 2022"2, with the Seahawks representing Seattle or in fact the entire state of Washington.

Footnote 1: [https://www.msn.com/en-us/sports/nfl/seahawks-have-2-of-pffs-top-30-graded-safeties-for-2022/ss-AA17cqCH](https://www.msn.com/en-us/sports/nfl/seahawks-have-2-of-pffs-top-30-graded-safeties-for-2022/ss-AA17cqCH)

2. **Local news of broader interest:** We came across certain local news articles mentioning a specific city that could also be relevant to neighboring cities or regions. For instance: "Washington State Fair announces 2X Platinum artist with tickets on sale Wednesday"3. Even though the Washington State Fair in Tacoma is mentioned in the article, people living within a radius of 50-80 miles of Tacoma would still love to see this article. Another example is where local publishers capture semi-local news articles, such as "New WSU study says grocery stores can trick customers into spending more"4.

Footnote 3: [https://www.msn.com/en-us/sports/ncaabk/no-4-arizona-strives-for-best-execution-against-offensively-challenged-cal-ar/AA17fxR7](https://www.msn.com/en-us/sports/ncaabk/no-4-arizona-strives-for-best-execution-against-offensively-challenged-cal-ar/AA17fxR7)

3. **Reconcile user's location with the article's location:** These techniques (Bell et al., 2015; Teitler et al., 2008; Middleton et al., 2018) do not mention how they would extract the geographical location of the user. There are certain techniques that backtrace the IP address of the user to a location (Alt et al., 2010); however, reconciling these locations with the article's location to serve the right local news would be a separate task.

4. **Multiple locations present in the article:** We also came across multiple articles where there were multiple locations present. These approaches fail to handle this scenario. For instance: "No. 4 Arizona strives for best execution against offensively challenged Cal"5, an article that mentions two states, Arizona and California. People living in these two states would largely be interested in seeing this article.

Footnote 5: [https://www.msn.com/en-us/sports/nfl/seahawks-have-2-of-pffs-top-30-graded-safeties-for-2022/ss-AA17cqCH](https://www.msn.com/en-us/sports/nfl/seahawks-have-2-of-pffs-top-30-graded-safeties-for-2022/ss-AA17cqCH)

We showcase that our technique not only helps us understand the local publishers in an area but also scales worldwide and is effective in serving relevant hyperlocal news to users globally. 
Our technique also helps in converting cold start users to warm users, and helps retain warm users. Our major contributions include: _(1) a novel efficient ensemble approach to detect and recognize the geographical location of the article and its impact radius, with high precision and high recall, (2) a technique to reconcile the user's location with the geographical location of the article to serve the right local news, and (3) an evaluation metric to measure the quality of the local news feed._

## 2 Related Work

In the news domain, we realized that the majority of the existing literature on geolocation extraction falls into one or more of the four issue buckets mentioned in Section 1. Sankaranarayanan et al. (2009) focuses on using user-generated content (UGC) platforms, such as Twitter, to gather breaking news in the area. They proposed a system that clusters tweets based on the location names they mention and the user's geolocation, and assigns the geolocation foci of each cluster to all the tweets falling in it. However, this approach has several limitations: (1) it relies on the user's geolocation to extract the geolocation of the tweet, which may not always match the tweet's geolocation, (2) it ignores the news impact radius, which depends on topic and audience, and (3) it requires manual identification of the users who post the news as tweets, which is hard to scale worldwide. Tahmasebzadeh et al. (2021) proposed a technique to use the geolocation extracted from an image to showcase local news pertaining to that area. They proposed a technique that uses ResNets to predict the spatial geolocation and the entity type of an image, and then matches the entity name with the Wikidata corpus and the Event Registry to retrieve the relevant news articles. They restrict the entity types to 12 categories, such as landmarks, monuments, etc. However, this approach also has several limitations: (1) it misses other important local news categories, such as crime, sports, real estate, science, etc., (2) it depends on the quality and completeness of the Wikidata and the Event Registry, which may not be updated and accurate, and (3) it ignores the user's location and preferences, which may differ from the image's location and entity type. Bell et al. (2015) uses automatic speech recognition (ASR) to summarise the news provided by the broadcasting channels. They proposed an ASR technique to generate the speech transcript and NER to extract and disambiguate the geolocation with OpenStreetMap. However, they do not mention the serving technique for the generated local news summaries. Teitler et al. (2008) and Middleton et al. (2018) use a three-step process to get text geolocation: (1) geocoding, which maps the location names to latitude and longitude coordinates, (2) geoparsing, which identifies the location names in text, and (3) geotagging, which assigns a geolocation to the text. They claim this approach can handle different text types and sources, and can be easily applied to structured content like the news articles present with news aggregators. 
However, this approach also has several limitations: (1) it depends on the size and scope of the gazetteer, which may not cover all location names or variations, (2) it ignores the impact radius of the local news articles, which may vary depending on the topic and the audience, and (3) they do not mention their approach to reconcile the article's and the user's location to serve the right local news, which may differ in distance, relevance, and preference. Adar et al. (2017), inspired by personalizing the feed of users, proposed a tool that personalizes the content of the article based on two properties of the user: geolocation and demographics inference. They capture the location of the user by backtracing their IP address, and the location of the article is chosen by the journalist from a list of cities. They claim that this technique can provide more relevant and engaging news content to the user. However, this approach also has several limitations: (1) it restricts the location of the article to a list of cities, which may not cover all the possible local news sources or topics, and (2) it serves the local news content to the user only if they are within 50 miles of a nearby big city, which makes the technique difficult to scale worldwide. Goncalves et al. (2021) and Vaataja et al. (2012) focus on using crowd-sourcing techniques to generate local news content from the citizens of an area. They proposed frameworks to encourage local journalism in Portugal and Finland, respectively, by allowing the citizens to share photos, videos, and posts about newsworthy events in their region, and local journalists to pick up the story if it seemed important. They claim that this technique can enhance the quality and diversity of local news content, and foster community interaction and engagement. However, this approach also has several limitations: (1) it does not specify how to extract the location from the citizens' posts, which may not always contain explicit or accurate location names, (2) it uses the user's IP-backtraced location as a proxy for the location of the post, which may not correctly represent the location of the event, and (3) it does not reconcile the user's location and the post's location to serve the right local news to the users, who may have different preferences and interests. In contrast to these existing approaches, our proposed approach addresses all the issues and limitations mentioned above, and provides a more efficient and effective way to geoparse and geotag the location and radius of local news articles, and to match them with the user's location and preferences. We also propose an evaluation metric to measure the quality and relevance of the local news feed, and demonstrate that our approach is scalable and effective in serving hyperlocal news to users worldwide.

## 3 Methodology

We decomposed the local news serving problem into three subtasks, namely: (1) geoparsing and geotagging, i.e., detecting and recognizing the geolocation of the article and determining its impact radius, (2) reconciling the article's and the user's geolocation and preferences, and (3) delivering the personalized local news feed to the user.

### Geolocation as a form of geohash

To match the user's geolocation with the article's geolocation, we use geohashes to represent both the user and the article. A geohash is a string of letters and numbers that encodes a rectangular area on the Earth's surface. The longer the geohash, the smaller the area it covers. 
We use a single geohash to represent the user's geolocation, and a set of geohashes to represent the article's impacted geolocation. Table 1 shows the width and height of the rectangular areas corresponding to different geohash lengths6.

Footnote 6: [https://www.movable-type.co.uk/scripts/geohash.html](https://www.movable-type.co.uk/scripts/geohash.html)

### Article Location detection

Geoparsing and geotagging the geolocation and determining the impact radius of the article is a key task in local news serving. We use an ensemble of geoparsing and geotagging techniques for high precision and recall. The techniques we use are: (1) Location Table (LT) lookup, which detects and maps the location names to geohashes, (2) Bing Maps API (BMA), which detects and maps the location names from the article and provides the bounding box coordinates to convert them into geohashes, and (3) Publisher-to-Location affinity mapping, which assigns geohashes to the article based on its publisher. For each article, we stamp geohashes of the article's impacted geolocation by applying the following rules:

1. if the publisher of the article is in the publisher-to-location mapping, stamp the mapped geohashes of length four,
2. if the LT geohash and the publisher-to-location geohash have the same prefix of length two, stamp all LT geohashes of length four,
3. if the BMA geohash and the publisher-to-location geohash have the same prefix of length two, stamp all BMA geohashes of length four,
4. if the publisher-to-location mapping isn't available and the LT geohash and BMA geohash have the same prefix of length two, stamp all LT and BMA geohashes of length four,
5. if the publisher-to-location mapping and LT lookup are not available for the article, use the BMA geohashes that are predicted with high confidence, and
6. if none of the rules apply, stamp no geohash to keep high precision over recall.

We describe each of the techniques below:

#### 3.2.1 Location Table Lookup

We compiled a list of locations in the United States, Canada, United Kingdom, and Australia. We checked if the article contained any location names or aliases, and used the Bing Maps API bounding box coordinates to obtain geohashes of the location's city-county/district-state-country geochain. We crowdsourced the evaluation of this technique using the UHRS hit apps7, and obtained an accuracy of 0.86 and a recall of 0.38. However, we found that this technique is not scalable worldwide.

Footnote 7: [https://prod.uhrs.playmsn.com/UHRS/](https://prod.uhrs.playmsn.com/UHRS/)

#### 3.2.2 Bing Maps API Location

There are several APIs that provide location information from a query/text, for instance, the Google geocoding API8, OpenStreetMap Nominatim9 and the Bing Maps API10. We use the Bing Maps API. Bing Maps has an API to geoparse and geotag location information from text with confidence levels and a location type. The confidence levels are High, Medium and Low; we use only those locations predicted with High and Medium confidence. The Bing Maps API has different entity types, from Hospital, Building, Neighborhood to City, States, Monument and National Parks. We also use the coordinate information, such as the point coordinate of the location, which equates to the center point of the location, and the bounding box coordinates that identify the four corners of a box-like shape containing the location. We then extract all the geohashes within these coordinates. 
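As an illustrative sketch of this step (our own simplification, not the production system: the encoder below is the standard public geohash algorithm, `bbox_to_geohashes` is a naive grid sweep, and the bounding box and coordinates are made-up examples), converting a bounding box into length-4 geohash cells and matching them against a user's cell might look as follows:

```python
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, length=4):
    """Standard geohash encoding: interleave longitude/latitude bisection bits."""
    lat_lo, lat_hi, lon_lo, lon_hi = -90.0, 90.0, -180.0, 180.0
    bit_vals = (16, 8, 4, 2, 1)
    bit, ch, even, code = 0, 0, True, []
    while len(code) < length:
        if even:                                   # longitude bit
            mid = (lon_lo + lon_hi) / 2
            if lon > mid:
                ch |= bit_vals[bit]
                lon_lo = mid
            else:
                lon_hi = mid
        else:                                      # latitude bit
            mid = (lat_lo + lat_hi) / 2
            if lat > mid:
                ch |= bit_vals[bit]
                lat_lo = mid
            else:
                lat_hi = mid
        even = not even
        if bit < 4:
            bit += 1
        else:
            code.append(_BASE32[ch])
            bit, ch = 0, 0
    return "".join(code)

def bbox_to_geohashes(south, west, north, east, length=4, step=0.1):
    """Naive grid sweep collecting the geohash cells that cover a bounding box."""
    cells = set()
    lat = south
    while lat <= north + step:
        lon = west
        while lon <= east + step:
            cells.add(geohash_encode(min(lat, north), min(lon, east), length))
            lon += step
        lat += step
    return cells

# article side: geohash-4 cells covering its (hypothetical) bounding box
article_cells = bbox_to_geohashes(46.9, -122.6, 47.8, -121.8)
# user side: a single geohash-4 cell from the user's latitude/longitude
user_cell = geohash_encode(47.61, -122.33, length=4)
print(user_cell, user_cell in article_cells)       # serve the article if the cells match
```

The same length-4 cells are what the local news serving step described later matches against the user's geohash.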
Footnote 8: [https://developers.google.com/maps/documentation/geocoding](https://developers.google.com/maps/documentation/geocoding)

Footnote 9: [http://wiki.openstreetmap.org/wiki/Nominatim](http://wiki.openstreetmap.org/wiki/Nominatim)

Footnote 10: [https://www.microsoft.com/en-us/maps/choose-your-bing-maps-api](https://www.microsoft.com/en-us/maps/choose-your-bing-maps-api)

| Geohash length | Cell width | Cell height |
| --- | --- | --- |
| 1 | ≤ 5000 km | ≤ 5000 km |
| 2 | ≤ 1250 km | ≤ 625 km |
| 3 | ≤ 156 km | ≤ 156 km |
| 4 | ≤ 39.1 km | ≤ 19.5 km |
| 5 | ≤ 4.89 km | ≤ 4.89 km |
| 6 | ≤ 1.22 km | ≤ 0.61 km |
| 7 | ≤ 153 m | ≤ 153 m |

Table 1: Rectangular geohash cell height and width coverage for geohashes of different lengths.

To judge the performance of the Bing Maps API on new articles, we generated recall metrics. As a ground truth, we assumed each news article is stamped with a location. To gain maximum recall, we considered different parts of the articles, such as the title, snippet, URL and body, and various combinations (Table 2). We found that the Title + Snippet + Body query gave the best precision and recall for the BMA location.

| Market | Title | Snippet | URL | Body | Title + Snippet | Title + Snippet + URL | Title + Snippet + Body |
| --- | --- | --- | --- | --- | --- | --- | --- |
| en-au | 0.75 | 0.45 | 0.75 | 0.73 | 0.88 | 0.85 | **0.90** |
| en-ca | 0.73 | 0.54 | 0.72 | 0.73 | 0.88 | 0.83 | **0.89** |
| en-gb | 0.59 | 0.31 | 0.58 | 0.55 | 0.69 | 0.69 | **0.74** |
| en-us | 0.72 | 0.66 | 0.69 | 0.75 | 0.91 | 0.86 | **0.94** |

Table 2: Comparing BMA Location Recall Metrics for news article attributes; Snippet extracted by an in-house trained text-rank model; Title, snippet and body trimmed to the starting and ending 10 words each for QPS reduction.

We also noticed that the URL in the query could bias the BMA towards the location name in the provider's name, which might not be the article's location. For example, KOMO-TV Seattle reported news from the Sammamish area, for instance: "Person shot during home invasion in Sammamish"11, but the BMA might pick Seattle as the article's location, which would prevent the article from reaching the Sammamish users, and also annoy the Seattle users who might not care about Sammamish news. Therefore, we decided to exclude the URL from the query for BMA, thus avoiding label bias (Shah et al., 2019).

Footnote 11: [https://www.msn.com/en-us/news/crime/person-shot-during-home-invasion-in-sammamish/ar-AA17qJG0](https://www.msn.com/en-us/news/crime/person-shot-during-home-invasion-in-sammamish/ar-AA17qJG0)

We measured the precision of the recalled locations with a crowd-sourced evaluation using the UHRS hit apps. For each article, we shared the title, snippet and body, and made the URL available for the user to read the article. We then asked the users if the detected location is correct for the given article. These numbers, aggregated by different locales, are listed in Table 3. 
#### 3.2.3 Publisher-Location affinity

We observed that some local news articles did not explicitly mention the location name in them, such as "Boba exhibit opening at the Chinese American Museum"12, or "Eastbound 520 Bridge Closure Planned Saturday Night"13, or "Warriors win against OKC reveals silver lining to Steph Curry's absence"14, which were published by local news providers like "CBS Los Angeles", "Patch", and "Mercury News" respectively. We also noticed that users in nearby cities, such as Seattle and Bellevue, will be interested in each other's local news, and local news providers often covered news from neighboring cities as well. To handle these cases, we identified the strongly local providers using the method proposed by Shah et al. (2023). We then computed the publisher-to-location affinity for these providers using all the LT and BMA locations recognized on their articles in a month. We used the gap ratio of Shah et al. (2023) to filter out the remote locations that the local providers rarely covered, and retained the locations that they frequently covered. We assumed that the articles that did not have the location name in them were from these frequent locations, which could be at the city/county/district level. For example, we mapped the publisher "KOMO-TV Seattle" to the location "King County, Washington, United States", suggesting that KOMO-TV Seattle mainly covers news articles from King County. We created a mapping of publisher-to-location affinity using the following steps, which are also illustrated in Figure 1:

Footnote 12: [https://www.msn.com/en-us/sports/nba/warriors-win-against-okc-reveals-silver-lining-to-steph-curry-s-absence/ar-AA17qJGb8](https://www.msn.com/en-us/sports/nba/warriors-win-against-okc-reveals-silver-lining-to-steph-curry-s-absence/ar-AA17qJGb8)

* Collect all the articles from a specific provider in a given time range.
* Apply Bing Maps API and Location Table lookup to extract the locations from these articles.
* Use Bing Maps API to get the geohashes from the bounding box of the extracted locations.
* Use the gap ratio on geohashes of length three to filter out the outlier geohashes, and map the remaining geohashes of length four to their corresponding locations.
* Use the gap ratio to filter out outlier counties and states.
* The remaining locations are the high-affinity locations that the provider covers.

The mapped location is converted to a set of geohashes using the Bing Maps API bounding box coordinates. The publisher-to-location affinity mapping helped address the **"Acronyms as locations"** and **"Local news of broader interest"** issues discussed in the Introduction section.

Figure 1: _Publisher-to-Location Affinity: Illustrating the steps to obtain the local publisher-to-location affinity._

| Market | Baseline Recall | BMA Recall | BMA Precision |
| --- | --- | --- | --- |
| en-au | 0.35 | 0.80 | 0.94 |
| en-gb | 0.18 | 0.84 | 0.97 |
| en-ca | 0.25 | 0.85 | 0.93 |
| en-us | 0.66 | 0.84 | 0.91 |
| de-de | 0.41 | 0.73 | 0.95 |
| es-es | NA | 0.69 | 0.96 |
| fr-fr | NA | 0.63 | 0.94 |
| it-it | NA | 0.59 | 0.85 |
| ja-jp | NA | 0.83 | 0.84 |
| zh-cn | NA | 0.86 | 0.92 |

Table 3: Precision-Recall Metrics comparison between baseline and BMA's extraction model.

### Local news serving

We obtain the user location from various sources, such as their IP address or their input on the weather card. 
We convert this location, specifically the latitude and longitude, to a geohash of length four, which is used to retrieve the articles to show to the users. If there are not enough articles in that geohash, we backfill it with the nearest popular city, assuming that some users might commute to big cities for work and want to stay informed about them. The retrieved articles are then ranked by an in-house trained ranker to display to the users. Converting geolocations to geohashes helped reconcile the user's and the article's location for correct serving.

## 4 Evaluation Metrics

We evaluated our end-to-end Local News serving technique by performing an online A/B experiment. We measured our technique specifically by the distance on the Earth's surface between the user's location and the identified document location. In addition to this, we accounted for the fact that if a user's location, for example the city of Seattle, Washington, matches the document's location, i.e., the same city, we assume the distance to be 0 km. This reflects our ability to serve hyper-local news at the most granular level, such as a city. Similarly, we account for other geographical divisions such as county and state.

## 5 Results

For our baseline, we used the publisher-to-DMA mapping. We manually mapped strongly local publishers to a DMA, and stamped all the articles from the local publisher with all the geohashes corresponding to the mapped DMA. DMA stands for Designated Market Area, which is a geographical region comprising a group of counties and zip codes defined by Nielsen15. There are 210 DMAs in the United States, mostly used by the television and radio channels to broadcast to a set of users. We used lexicon-based knowledge constraints (Shah et al., 2021) to tag DMA-based geohashes on the article. These lexicons were manually created. We compared our ensemble technique of detecting the article's location to the DMA tagging, to measure the localness of the delivered local news articles in terms of distance, and to ensure the relevance of the local news shown to the users. We conducted an online experiment to compare the distance between users and documents. The treatment group was exposed to the end-to-end Local News serving technique, and we observed improvements in the 50th and 75th percentile distances. The 50th percentile distance improved from 15 km on average to 8 km on average (Fig. 2), and the 75th percentile distance improved from 120 km on average to 80 km on average (Fig. 3). We also observed an improvement in content interactions per user session for cold as well as warm start users.

Footnote 15: [https://markets.nielsen.com/us/en/contact-us/intl-campaigns/dma-maps/](https://markets.nielsen.com/us/en/contact-us/intl-campaigns/dma-maps/)

## 6 Conclusion

In this paper, we proposed (1) an ensemble approach to determine the geographical location of the article and its impact radius, (2) a technique to reconcile the user's location with the geographically impacted location of the article to serve the right local news, and (3) offline and online evaluation metrics to measure the quality of the local news feed. We also showcased that our technique resolves the major issues with local news by showing the right local articles to the right audience. We eventually showcased that our technique is scalable worldwide, and helps convert cold start users to warm start users. 
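Returning to the evaluation metric of Section 4, a minimal sketch (our own toy numbers, using the standard haversine great-circle distance rather than the production measurement pipeline) of the P50/P75 distance computation reported above could be:

```python
import math
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(h))

# toy impressions: (user lat/lon, article lat/lon, same-city flag)
impressions = [
    ((47.61, -122.33), (47.61, -122.33), True),    # same city, so distance counted as 0
    ((47.61, -122.33), (47.25, -122.44), False),   # e.g. Seattle user, Tacoma article
    ((47.67, -122.12), (47.61, -122.33), False),   # e.g. Redmond user, Seattle article
]
dists = [0.0 if same else haversine_km(*u, *a) for u, a, same in impressions]
print("P50 = %.1f km, P75 = %.1f km" % (np.percentile(dists, 50), np.percentile(dists, 75)))
```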
Figure 2: _P50 Distance Metrics_: Comparing P50 metrics between the treatment with the Local News serving technique and Control.

Figure 3: _P75 Distance Metrics_: Comparing P75 metrics between the treatment with the Local News serving technique and Control.

## Limitations

Currently we have a rule-based ensemble approach to stamp the geolocation on an article. There is good potential to train an ML model to make the selection from the models that provide the geolocation. The Bing Maps API is a query-based API, and hence one limitation that we currently have in our system is that we use only the starting and ending 10 words to save on the QPS. Having an in-house or off-the-shelf NER model to detect the location mentions in the article and pass them as an input to the Bing Maps API would help increase the precision of the BMA Location extraction.
2304.08172
Pointwise convergence of Fourier series and deep neural network for the indicator function of d-dimensional ball
In this paper, we clarify the crucial difference between a deep neural network and the Fourier series. For the multiple Fourier series of periodization of some radial functions on $\mathbb{R}^d$, Kuratsubo (2010) investigated the behavior of the spherical partial sum and discovered the third phenomenon other than the well-known Gibbs-Wilbraham and Pinsky phenomena. In particular, the third one exhibits prevention of pointwise convergence. In contrast to it, we give a specific deep neural network and prove pointwise convergence.
Ryota Kawasumi, Tsuyoshi Yoneda
2023-04-17T11:38:22Z
http://arxiv.org/abs/2304.08172v5
# Pointwise convergence theorem of generalized mini-batch gradient descent in deep neural network ###### Abstract. The theoretical structure of deep neural network (DNN) has been clarified gradually. Imaizumi-Fukumizu (2019) and Suzuki (2019) clarified that the learning ability of DNN is superior to the previous theories when the target function is non-smooth functions. However, as far as the author is aware, none of the numerous works to date attempted to mathematically investigate what kind of DNN architectures really induce pointwise convergence of gradient descent (without any statistical argument), and this attempt seems to be closer to the practical DNNs. In this paper we restrict target functions to non-smooth indicator functions, and construct a deep neural network inducing pointwise convergence provided by mini-batch gradient descent process in ReLU-DNN. Key words and phrases:deep neural network, ReLU function, gradient descent, pointwise convergence 2020 Mathematics Subject Classification: Primary 68T27; Secondary 68T07, Tertiary 41A29 ## 1. Introduction Recently, deep leaning has been the successful tool for various tasks of data analysis (see [10, 9, 18, 21] for example). Also, the theoretical structure of deep neural network (DNN) has been clarified gradually. In particular, Amari [1] gave a simple observation showing that any target function is in a sufficiently small neighborhood of any randomly connected DNN, with sufficiently large number of neurons in a layer (see also Kawaguchi-Huang-Kaelbling [14] and references therein). Keeping these celebrated results in mind, our next work would be clarifying the precise convergence structure of DNN even if initial data are already close to the target function. Imaizumi-Fukumizu [11] examined learning of non-smooth functions, which was not covered by the previous theory, and clarified that, compare the DNN with the previous theories (such as the kernel methods), the convergence rates are almost optimal for non-smooth functions, while some of the popular models do not attain this optimal rate. Suzuki [23] (also references therein) clarified that the learning ability of ReLU-DNN is superior to the linear method when the target function is in the supercritical Besov spaces \(B_{p,q}^{s}\) with \(p<2\) and \(s<d/2\) (\(d\) is the dimension, note that the case \(s=d/2\) is called "critical"), which indicates the spatial inhomogeneity of the shape of the target function including the non-smooth functions. Thus, with the aid of these results, we can conclude that ReLU-DNN is suitable for recognizing jump discontinuity of the non-smooth functions. We now briefly explain the key idea of [23]. To show the approximation error theorems, he first applied the wavelet expansion to the target functions and then approximated each wavelet bases (composed of spline functions) by ReLU-DNN (see [28]). More specifically, let \(g:[0,1]\to[0,1]\) be a tent function such that \[g(x)=\begin{cases}2x\quad(x<1/2),\\ 2(1-x)\quad(x\geq 1/2)\end{cases}\] and let \(g_{s}\) be a \(s\) times composite function, and \(f_{m}\) be a function approximating to the second order polynomial such that \[g_{s}(x)=\underbrace{g\circ g\circ\cdots\circ g}_{s}(x)\quad\text{and}\quad f _{m}(x)=x-\sum_{s=1}^{m}\frac{g_{s}(x)}{2^{2s}}.\] Note that \(f_{m}(x)\to x^{2}\) (\(m\to\infty\)) uniformly. 
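As a quick numerical sanity check of this uniform convergence, the following small Python sketch (our own illustration, not part of the original argument) composes the tent map and evaluates \(\max_{x\in[0,1]}|f_{m}(x)-x^{2}|\) on a fine grid:

```python
import numpy as np

def g(x):
    # tent map on [0, 1]: 2x for x < 1/2, and 2(1 - x) otherwise
    return np.where(x < 0.5, 2.0 * x, 2.0 * (1.0 - x))

def f_m(x, m):
    # f_m(x) = x - sum_{s=1}^m g_s(x) / 2^(2s), with g_s the s-fold composition of g
    out = x.copy()
    gs = x.copy()
    for s in range(1, m + 1):
        gs = g(gs)
        out -= gs / 4.0 ** s
    return out

x = np.linspace(0.0, 1.0, 10_001)
for m in (1, 2, 3, 4, 5, 6):
    err = np.max(np.abs(f_m(x, m) - x ** 2))
    print(f"m = {m}: max |f_m(x) - x^2| = {err:.3e}")
```

The printed maximum error roughly quarters with each additional composition, consistent with the uniform convergence \(f_{m}\to x^{2}\).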
For deriving the multi-dimensional polynomials, it suffices to apply the following formula: \[xy=\frac{1}{2}((x+y)^{2}-x^{2}-y^{2}),\] and then we can easily approximate multi-dimensional spline functions by ReLU-DNN. The other idea (for variance) is applying the statistical argument in [22] combined with a covering number evaluation. However, as far as the author is aware, none of the numerous works to date attempted to mathematically investigate what kind of DNN architectures really induce pointwise convergence of gradient descent (without any statistical argument) even if the initial data are already close to the target function, and this attempt seems to be closer to the practical DNNs. In what follows, we investigate this problem. Before going any further, we point out that employing supercritical function spaces may not be enough to capture the discontinuity structure of the target functions. This means that we may need to directly analyze each DNNs, if the target function is bounded and discontinuous (i.e. in a critical function space). The flavor of this insight seems similar to the recent mathematical studies on the incompressible inviscid flows. See [2, 3, 6, 7, 19] for example. More precisely, these have been directly looking into the behavior of inviscid fluids in the critical function spaces (to show ill-posedness), and the argument seems quite different from the previous studies focusing on well-posedness in subcritical type of function spaces. See [4, 12, 13, 20, 27] for example. To show the well-posedness, the structure of function spaces, more precisely, commutator estimates are crucially indispensable. This paper is organized as follows: In the next section, we construct target functions and the corresponding estimators. In Section 3, we investigate the pointwise convergence of gradient descent in terms of ReLU-DNN if initial data are already close to the target function. In the last section, we give the key lemma and its proof. ## 2. Target functions and the corresponding estimators In this section, we define a set of target functions and the corresponding estimators, which is one of the typical function set assuring pointwise convergence. 
For \((y_{j},\tau_{j})\in[-1,1)^{d}\times\mathbb{S}^{d-1}\) (\(j=1,2,\cdots\)), let us define half spaces \(H^{\circ}\) and \(H^{\circ}_{\epsilon}\) as follows: \[H^{\circ}(y_{j},\tau_{j}) :=\{x\in[-1,1)^{d}:x\cdot\tau_{j}-y_{j}\cdot\tau_{j}<0\},\] \[H^{\circ}_{\epsilon}(y_{j},\tau_{j}) :=\{x\in[-1,1)^{d}:x\cdot\tau_{j}-y_{j}\cdot\tau_{j}<-\epsilon\}.\] We employ a set of indicator functions \(\{\chi_{\Omega},\Omega\in\mathcal{M}\}\) as the set of target functions, where \(\mathcal{M}\) is a set of convex smooth manifolds (with internal filling) as follows: \[\mathcal{M}:= \bigg{\{}\Omega\subset[-1,1)^{d}:\partial\Omega\text{ is smooth, and the following three conditions hold:}\] \[\text{There exists }\{(y_{j},\tau_{j})\}_{j=1}^{\infty}\subset \partial\Omega\times\mathbb{S}^{d-1}\text{ such that }\bigcap_{j=1}^{\infty}H^{\circ}(y_{j},\tau_{j})=\Omega.\] \[\text{For each }j\text{ and any }N\in\mathbb{N},\] \[\max_{j^{\prime}\neq j,\ 1\leq j^{\prime}\leq N}\text{dist }\left(y_{j}-N^{-\frac{2}{d-1}}\tau_{j},\partial H^{\circ}_{N^{-\frac{2}{d-1} }}(y_{j^{\prime}},\tau_{j^{\prime}})\right)\lesssim N^{-\frac{1}{d-1}}.\] \[\text{For each }j,\text{ there is a set of points}\] \[\{c_{ji}\}_{i=1}^{d}\subset(\Omega\cap\partial H^{\circ}_{N^{-\frac{2}{d- 1}}}(y_{j},\tau_{j}))\text{ which are linearly independent.}\bigg{\}}\] The first condition is nothing more than expressing the convexity. The second one is needed for the estimate of the difference between the target function and the corresponding estimator (see (2)). Note that, we choose \(\bigcap_{j^{\prime}=1}^{N}\partial H^{\circ}_{e}(y_{j^{\prime}},\tau_{j^{ \prime}})\) as a regular polytope, and by the dimensional analysis, the power \(-\frac{1}{d-1}\) naturally appears. The points \(\{c_{ji}\}_{i=1}^{d}\) in the third one is needed for the construction of training samples for mini-batch (see the next section). **Remark 1**.: An interesting question naturally arises: for \(\Omega\in\mathcal{M}\), whether or not \(\partial\Omega\) is a manifold isometric to the sphere. We leave it as an open question (c.f. Tsukamoto [25, 26]). **Definition 1**.: (Definition of estimator.) For a target function \(f^{\circ}=\chi_{\Omega}\ (\Omega\in\mathcal{M})\), we define the corresponding estimator \(f^{\circ}_{N}\) as follows: \[f^{\circ}_{N}:=\chi_{\Omega^{\circ}_{N}},\quad\Omega^{\circ}_{N}:=\bigcup_{j=1 }^{N}H^{\circ}_{N^{-\frac{2}{d-1}}}(y_{j},\tau_{j}).\] **Lemma 1**.: _We have_ \[f^{\circ}_{N}(x)\to f^{\circ}(x)\quad(N\to\infty)\quad\text{for any}\quad x\in[- 1,1)^{d}. \tag{1}\] _Moreover we have the following convergence rate:_ \[\|f^{\circ}_{N}-f^{\circ}\|_{L^{r}}^{r}\lesssim_{d}N^{-\frac{2}{d-1}}\quad \text{for}\quad 1\leq r<\infty. \tag{2}\] Proof.: By applying a diagonal argument, we immediately have (1). To show (2), let us choose a set of \(\{\tau_{ji}^{\perp}\}_{i=1}^{d-1}\subset\mathbb{S}^{d-1}\) satisfying \(\tau_{ji}^{\perp}\cdot\tau_{j}=0\ (i=1,2,\cdots,d-1)\) and \(\tau_{ji}^{\perp}\cdot\tau_{ji^{\prime}}^{\perp}\ (i\neq i^{\prime})\). 
Then by using a standard local coordinate system, we have \[\left\{y_{j}-N^{-\frac{2}{d-1}}\tau_{j}+\sum_{i=1}^{d-1}s_{i}\tau_{ji}^{\perp}: s_{i}\in\mathbb{R},\ |s|\lesssim N^{-\frac{1}{d-1}}\right\}\subset\partial H^{\circ}_{N^{-\frac{2}{d- 1}}}(y_{j},\tau_{j})\] and \[\left\{y_{j}+\sum_{i=1}^{d-1}s_{i}\tau_{ji}^{\perp}+g(s)\tau_{j}:s_{i}\in \mathbb{R},\ |s|\lesssim N^{-\frac{1}{d-1}}\right\}\subset\partial\Omega^{\circ},\] where \(g(s)=c_{1}s_{1}^{2}+\cdots+c_{d-1}s_{d-1}^{2}+O(|s|^{3})\) for some positive constants \(c_{i}>0\) (independent of \(N\)). Thus we have \[|\Omega_{N}^{\circ}\setminus\Omega^{\circ}|\lesssim(N^{-\frac{2}{d-1}}+c_{1}s_ {1}^{2}+\cdots c_{d-1}s_{d-1}^{2})|\partial\Omega^{\circ}|\lesssim_{d}N^{- \frac{2}{d-1}}.\] Therefore \[\|f_{N}^{\circ}-f^{\circ}\|_{L^{r}}^{r}\lesssim_{d}N^{-\frac{2}{d-1}}\quad \text{for}\quad 1\leq r<\infty.\] ## 3. Pointwise convergence of gradient descent In what follows we mathematically investigate the pointwise convergence of gradient descent in terms of ReLU-DNN, which seems to be closer to the practical ReLU-DNN. In order to do that, first we formulate the mini-batch gradient descent in pure mathematics. Let \(f^{\circ}\) be a target function and \(\{f_{N}(W^{t})\}_{t=0}^{\infty}\) (\(t\in\mathbb{Z}_{\geq 0}\)) be a sequence of functions generated by the following gradient descent: \[E(W^{t}):=\frac{1}{2}\int_{\mathcal{D}}|f_{N}(W^{t},x)-f^{\circ}(x)|^{2}dx,\] \[W^{t+1}=W^{t}-\epsilon\frac{1}{|\mathcal{D}|}\nabla_{W^{t}}E(W^{t}),\] where \(f_{N}\) is a prescribed neural network with \(N\)-nodes, \(\{W^{t}\}_{t}\) is a set of weight and bias, \(\epsilon\in\mathbb{R}_{>0}\) is a leaning coefficient and \(\mathcal{D}\subset[-1,1)^{d}\) is a set of training samples for mini-batch. Note that, since the gradient is normalized by \(1/|\mathcal{D}|\), \(\mathcal{D}\) can be replaced by a non-zero measure set or a set of lines. Let \(\{f_{N}^{\circ}\}_{N}\) be a sequence of estimators such that \[f_{N}^{\circ}(x):=\lim_{t\to\infty}f_{N}(W^{t},x).\] Our specific purpose is to find neural networks \(f_{N}\), suitable \(\mathcal{D}\) and \(\epsilon\) assuring pointwise convergence to the corresponding estimator \(f_{N}^{\circ}\), which is already given in the last section. **Remark 2**.: This problem setting clarifies the crucial difference between the shallow and deep neural networks, as follows: Since \(\sin\) and \(\cos\) functions are continuous, we can recover them from linear combination of activation functions (see [5] for example). Thus mathematical analysis of shallow neural network can be replaced by a linear combination of \(\sin\) and \(\cos\) functions, which is nothing more than the Fourier series. 
For \(x\in[-1,1)^{d}\), we set the target function \(f^{\circ}\) as the indicator function of the \(d\) dimensional ball such that \[f^{\circ}(x)=\begin{cases}1,&|x|\leq 1/2,\\ 0,&|x|>1/2,\end{cases}\] and let \(f^{N}\) be a Fourier series with spherical partial sum: \[f_{N}(W^{t},x):=\sum_{|k|<N}c_{k}^{t}e^{ik\cdot x}\in\mathbb{R},\quad W^{t}: =\{c_{k}^{t}\}_{k\in\mathbb{Z}^{d}}\subset\mathbb{C},\ c_{-k}^{t}=\bar{c}_{k}^ {t},\ k\in\mathbb{Z}^{d}.\] Let \(\mathcal{D}=[-1,1)^{d}\) (\(t=0,1,2\cdots\)), and by the Parseval's identity, we immediately have the following estimator (of course, different from the one given in the last section): \[f_{N}^{\circ}(x)=\sum_{|k|<N}\tilde{c}_{k}e^{ik\cdot x}\quad\text{for}\quad \tilde{c}_{k}=\int_{[-1,1)^{d}}f^{\circ}(x)e^{-ik\cdot x}dx.\] Then we obtain the following counterexample, which clarifies the crucial difference between the shallow and deep neural networks. Counterexample.Let \(d\geq 5\) and \(\mathcal{D}=[-1,1)^{d}\). Then, for any \(x\in\mathbb{Q}^{d}\cap[-1,1)^{d}\), \[f_{N}^{\circ}(x)-f^{\circ}(x)\quad\text{diverges as}\quad N\to\infty.\] The proof is just direct consequence of Kuratsubo [15] (see also [16, 17]). Thus we omit its detail. In contrast with the Fourier series case (shallow neural network), we will show pointwise convergence to \(f_{N}^{\circ}\) which is already given in the last section. Let \(N=2^{n}\) (\(n\in\mathbb{N}\)) and let us now construct a deep neural network \(f_{N}\). For the initial layer, we define \[z^{1}:=h(w^{1}x+b^{1}):=\begin{pmatrix}h(w_{1}^{1}\cdot x+b_{1}^{1})\\ \vdots\\ h(w_{2^{n}}^{1}\cdot x+b_{2^{n}}^{1})\end{pmatrix}\] for \(x\in[-1,1)^{d}\), \(w^{1}:=\{w_{j}^{1}\}_{j=1}^{2^{n}}:=\{w_{ji}^{1}\}_{ji}\in\mathbb{R}^{2^{n} \times d}\), \(b^{1},z^{1}\in\mathbb{R}^{2^{n}}\). Recall that \(w\) is the weight and \(b\) is the bias. For the \(2k\)-th layer, we set \[z^{2k}:=h(w^{2k}z^{2k-1}+b^{2k})\] for \(w^{2k}\in\mathbb{R}^{3\cdot 2^{n-k}\times 2^{n-k+1}}\), \(b^{2k},z^{2k}\in\mathbb{R}^{3\cdot 2^{n-k}}\). Moreover, we impose the following sparsity condition: for \(J=1,2,\cdots,2^{n-k}\) and \(1\leq k\leq n\), \[\begin{split} z_{3J-2}^{2k}&=h(w_{3J-2,2J-1}^{2k}z_{ 2J-1}^{2k-1}+w_{3J-2,2J}^{2k}z_{2J}^{2k-1}),\\ z_{3J-1}^{2k}&=h(w_{3J-1,2J-1}^{2k}z_{2J-1}^{2k-1}+w_{3J-1,2J}^ {2k}z_{2J}^{2k-1}),\\ z_{3J}^{2k}&=h(w_{3J,2J-1}^{2k}z_{2J-1}^{2k-1}+w_{3J,2J}^ {2k}z_{2J}^{2k-1}),\end{split} \tag{3}\] where \(z^{2k}=\{z_{j}^{2k}\}_{j}\), \(b^{2k}=\{b_{j}^{2k}\}_{j}\), and \(w^{2k}=\{w_{ji}^{2k}\}_{ji}\), and also impose the following restriction: \[\begin{split} w_{3J-2,2J-1}^{2k}&=w_{3J-1,2J-1}^{2k} =-w_{3J,2J-1}^{2k},\\ w_{3J-2,2J}^{2k}&=-w_{3J-1,2J}^{2k}=w_{3J,2J}^{2k}. \end{split} \tag{4}\] For the \(2k+1\) layer, we set \[z^{2k+1}=w^{2k+1}z^{2k},\] \(w^{2k+1}\in\mathbb{R}^{2^{n-k}\times 3\cdot 2^{n-k}}\), \(z^{2k+1}\in\mathbb{R}^{2^{n-k}}\). In this layer, we impose the following restriction: for \(J=1,2,\cdots 2^{n-k}\), \[z_{J}^{2k+1}=z_{3J-2}^{2k}-z_{3J-1}^{2k}-z_{3J}^{2k}.\] Then we see that, in the \(2n+1\) layer, \(z^{2n+1}\) becomes a real number. In the final layer, we apply the following clipping: \[f_{N}:=z^{2n+2} =\max\{z^{2n+1},1\}\] \[=1-h(1-z^{2n+1}).\] **Remark 3**.: In this paper we employed ReLU function as the activate function, for simplicity. Of course, employing the sigmoid function case is also attractive problem. 
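To make this layer-by-layer construction concrete, here is a small NumPy sketch of the forward pass (our own illustration; the per-block weights \(m^{1}_{k,J}\), \(m^{0}_{k,J}\) follow the simplified form of (3)-(4), the biases of the even layers are omitted as in (3), and the final layer uses the clipping \(1-h(1-z^{2n+1})\)):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, w1, b1, m1, m0):
    """Forward pass of the sparse ReLU network constructed above.

    x  : input point in R^d
    w1 : (2^n, d) first-layer weights;  b1 : (2^n,) first-layer biases
    m1, m0 : lists of length n; m1[k-1], m0[k-1] hold the 2^(n-k) block
             weights m^1_{k,J}, m^0_{k,J} of the 2k-th layer.
    """
    z = relu(w1 @ x + b1)                        # layer 1, size 2^n
    n = len(m1)
    for k in range(1, n + 1):                    # layers 2k and 2k+1
        z_odd, z_even = z[0::2], z[1::2]         # z_{2J-1} and z_{2J}
        a = relu(m1[k - 1] * z_odd + m0[k - 1] * z_even)    # z_{3J-2}
        b = relu(m1[k - 1] * z_odd - m0[k - 1] * z_even)    # z_{3J-1}
        c = relu(-m1[k - 1] * z_odd + m0[k - 1] * z_even)   # z_{3J}
        z = a - b - c                            # z^{2k+1}, size 2^(n-k)
    z = z.item()                                 # z^{2n+1} is a scalar
    return 1.0 - relu(1.0 - z)                   # final clipping 1 - h(1 - z)

# toy usage: n = 2 (so 2^n = 4 first-layer units), d = 2, arbitrary parameters
rng = np.random.default_rng(0)
n, d = 2, 2
w1 = rng.normal(size=(2 ** n, d))
b1 = rng.normal(size=2 ** n)
m1 = [rng.normal(size=2 ** (n - k)) for k in range(1, n + 1)]
m0 = [rng.normal(size=2 ** (n - k)) for k in range(1, n + 1)]
print(forward(rng.normal(size=d), w1, b1, m1, m0))
```

Here `m1[k-1]` and `m0[k-1]` hold the \(2^{n-k}\) block weights of level \(k\); the parameter values in the toy usage are arbitrary and serve only to exercise the architecture.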
Then the main theorem is as follows: **Theorem 2**.: _Assume that the initial function \(f_{N}(W^{t=0})\) is already close to the target function \(f^{\circ}\), namely, \(W^{t=0}\) satisfies the initial conditions (8) and (9). Let \(\epsilon=\gamma^{2}\) (\(\gamma\) is given in Proposition 9). Then, by choosing \(\mathcal{D}\) appropriately, and by suitable change of variables: \(W^{t}\mapsto(\alpha^{t},\beta^{t})\), then \(f_{N}(\alpha^{t},\beta^{t})\) converges to \(f^{\circ}_{N}\) pointwisely (as \(t\to\infty\)). The change of variables are explicitly written as_ \[\alpha_{j}:=m_{j}^{2}|w_{j}^{1}|^{2}\quad\text{and}\quad\beta_{j}=m_{j}(w_{j}^ {1}\cdot c_{ji}+b_{j}^{1}),\] _where the definition of \(m_{j}\) is given in (5). Moreover we have the following convergence rate:_ \[\|f_{N}(\alpha^{t},\beta^{t})-f^{\circ}_{N}\|_{L^{r}}^{r}\lesssim_{d}t^{-1/3} \quad\text{for}\quad 1\leq r<\infty.\] **Remark 4**.: It is an open question whether or not the original coefficient \(W^{t}\) case is also converging to the same estimator \(f^{\circ}_{N}\). **Remark 5**.: The initial conditions (8) and (9) are just for the technical reason. We can relax these conditions further. _Proof._ First we consider a pair of \(2k-1\), \(2k\) and \(2k+1\) layers. Let us rewrite (3) in the simpler description as follows: \[\begin{cases}z_{3J-2}^{2k}=h(m^{1}z_{2J-1}^{2k-1}+m^{0}z_{2J}^{2k-1}),\\ z_{3J-1}^{2k}=h(m^{1}z_{2J-1}^{2k-1}-m^{0}z_{2J}^{2k-1}),\\ z_{3J}^{2k}=h(-m^{1}z_{2J-1}^{2k-1}+m^{0}z_{2J}^{2k-1}),\end{cases}\] where \[m_{k,J}^{1}=m^{1} :=w_{3J-2,2J-1}^{2k}=w_{3J-1,2J-1}^{2k}=-w_{3J,2J-1}^{2k},\] \[m_{k,J}^{0}=m^{0} :=w_{3J-2,2J}^{2k}=-w_{3J-1,2J}^{2k}=w_{3J,2J}^{2k}.\] Recall that \[z_{J}^{2k+1}=z_{3J-2}^{2k}-z_{3J-1}^{2k}-z_{3J}^{2k}.\] Taking a derivative, we have \[\partial_{z_{2J-1}^{2k-1}}z_{J}^{2k+1}= m^{1}\partial h(m^{1}z_{2J-1}^{2k-1}+m^{0}z_{2J}^{2k-1})\] \[-m^{1}\partial h(m^{1}z_{2J-1}^{2k-1}-m^{0}z_{2J}^{2k-1})\] \[+m^{1}\partial h(-m^{1}z_{2J-1}^{2k-1}+m^{0}z_{2J}^{2k-1}).\] Due to the cancellation of Heaviside functions in the following domain, \[D_{k,J}^{0}:=\left\{x:m^{0}z_{2J}^{2k-1}<m^{1}z_{2J-1}^{2k-1}\right\},\] we have \[\partial_{z_{2J-1}^{2k-1}}z_{J}^{2k+1}=0\quad\text{for}\quad x\in D_{k,J}^{0}.\] Note that, rigorously saying, \(z^{2k-1}:=z^{2k-1}\circ z^{2k}\circ\cdots\circ z^{1}\). To the contrary, there is no cancellation of Heaviside functions in the following domain: \[D_{k,J}^{1}:=\{x:m^{0}z_{2J}^{2k-1}>m^{1}z_{2J-1}^{2k-1}\}.\] In other words, \[\partial_{z_{2J-1}^{2k-1}}z_{J}^{2k+1}=2m^{1}\quad\text{for}\quad x\in D_{k,J} ^{1}.\] The same argument goes through also in the case \(\partial_{z^{2k-1}_{2J}}z^{2k+1}_{J}\) (omit its detail). In this case, we have \[\partial_{z^{2k-1}_{2J}}z^{2k+1}_{J} =2m^{0}\quad\text{for}\quad x\in D^{0}_{k,J},\] \[\partial_{z^{2k-1}_{2J}}z^{2k+1}_{J} =0\quad\text{for}\quad x\in D^{1}_{k,J}.\] We apply this property inductively in the reverse direction (as the back propergation), and we divide the non-zero region \(\{x:f_{N}(W^{t},x)>0\}\) into several parts appropriately. To do that, we suitably rewrite the natural number \(j\in\{1,2,\cdots,2^{n}\}\) as follows: \[j=\delta^{j}_{1}+2\delta^{j}_{2}+2^{2}\delta^{j}_{3}+\cdots+2^{n-1}\delta^{j} _{n},\] where \(\delta^{j}_{k}\in\{0,1\}\). 
Let \[D_{j}:=\bigcap_{k=1}^{n}D^{\delta^{j}_{k}}_{k,J^{j}_{k}}\quad\text{for}\quad J ^{j}_{k}:=\sum_{\ell=k}^{n}2^{\ell-k}\delta^{j}_{\ell}.\] By using this \(D_{j}\), the derivative formula becomes much simpler: \[\partial_{x}z^{2n+1}(x)=m_{j}w^{1}_{j}\quad\text{for}\quad x\in D_{j},\quad \text{where}\quad m_{j}:=\prod_{k=1}^{n}\left(2m^{\delta^{j}_{k}}_{k,J^{j}_{k} }\right). \tag{5}\] By the construction of \(D_{j}\), we observe that \[\{x:f_{N}=0\}\cap\partial D_{j}\subset\{x:w^{1}_{j}\cdot x+b^{1}_{j}=0\},\] then, by the fundamental theorem of calculus, we have \[z^{2n+1}(x)=\sum_{j=1}^{2^{n}}\left(h(m_{j}w^{1}_{j}\cdot x+m_{j}b^{1}_{j}) \chi_{D_{j}}(x)\right).\] Therefore we obtain the following explicit formula: \[f_{N}(x)=\max\left\{\sum_{j=1}^{2^{n}}\left(h(\tilde{w}^{1}_{j}\cdot x+\tilde{ b}^{1}_{j})\chi_{D_{j}}(x)\right),1\right\}, \tag{6}\] where \(\tilde{w}^{1}_{j}:=m_{j}w^{1}_{j}\) and \(\tilde{b}^{1}_{j}:=m_{j}b^{1}_{j}\). Then we can apply Lemma 4 in the next section, and complete the proof. ## 4. key lemma for pointwise convergence In this section we give several assumptions and a geometric a-priori region, just for providing much simpler argument. First let us assume \[\tilde{w}^{t=0}_{j}\perp\partial H^{\circ}_{N^{-\frac{2}{d-1}}}(y_{j},\tau_{j }). \tag{7}\] To give the geometric a-priori region, we use parametrized hyper planes. For \(r=\{r_{ji}\}_{ji}\in(-1,1)\) (\(i=1,\cdots,d\), \(j=1,\cdots,2^{n}\)), let \(h_{j}(r)\) be a unique hyper plane composed of points which are also linearly independent (due to the assumption (7)): \[\{\tilde{c}_{ji}(r)\}_{i=1}^{d}:=\left\{2\frac{\tilde{w}_{j}}{|\tilde{w}_{j}| ^{2}}r_{ji}+c_{ji}\right\}_{i=1}^{d}.\] To be more precise. By the Gram-Schmidt process, there is \(\tilde{\tau}_{j}\in\mathbb{S}^{d-1}\) such that \[(\tilde{c}_{ji}(r)-\tilde{c}_{ji^{\prime}}(r))\cdot\tilde{\tau}_{j}=0\quad(i \neq i^{\prime}),\] and then we define the hyper plane \(h_{j}(s)\) as follows: \[h_{j}(r):=\{x:(x-\tilde{c}_{j1}(r))\cdot\tilde{\tau}_{j}=0\}.\] By using this \(h_{j}(s)\), we now define the a-priori region \(\mathcal{L}_{j}\) as follows: \[\mathcal{L}_{j}:=\bigcup_{r_{j1}\in(-1,1)}\bigcup_{r_{j2}\in(-1,1)}\cdots \bigcup_{r_{jd}\in(-1,1)}h_{j}(r).\] Before we state the key lemma, we need the following proposition. **Proposition 3**.: _Assume_ \[\{c_{ji}\}_{i=1}^{d}\subset D_{j}\quad\text{and}\quad\tilde{w}_{j}\perp \partial H^{\circ}_{N^{-\frac{2}{d-1}}}(y_{j},\tau_{j}). \tag{8}\] _Then there exists \(\gamma>0\) such that if_ \[|\tilde{w}_{j}|>\gamma, \tag{9}\] _then_ \[\ell_{ji}(s):=\tilde{w}_{j}^{1}s+c_{ji}\in D_{j}\setminus(\cup_{j\neq j^{ \prime}}\mathcal{L}_{j^{\prime}})\quad\text{for}\quad s\in[-2|\tilde{w}_{j}|^ {-2},2|\tilde{w}_{j}|^{-2}]. \tag{10}\] We need this (10) for providing the simple induction argument (see the proof of Lemma 4). This means that, by a careful computation, we may be able to relax this (10) further. Proof.: The case \(|\tilde{w}_{j}|=\infty\) automatically satisfies (10) and then we just apply the continuity argument. **Lemma 4**.: _Assume that the initial weight and bias \(W^{t=0}\) satisfy (8) and (9). Let \(\epsilon=\gamma^{2}\). Then, by choosing \(\mathcal{D}\subset[-1,1)^{d}\) appropriately, and by suitable change of variables: \(W^{t}\mapsto(\alpha^{t},\beta^{t})\), \(f_{N}(\alpha^{t},\beta^{t})\) converges to \(f_{N}^{\circ}\) pointwisely (as \(t\to\infty\)). 
The change of variables are explicitly written as_ \[\alpha_{j}:=m_{j}^{2}|w_{j}^{1}|^{2}\quad\text{and}\quad\beta_{j}=m_{j}(w_{j} ^{1}\cdot c_{ji}+b_{j}^{1}),\] _where the definition of \(m_{j}\) is given in (5). Also we have the following convergence rate:_ \[\|f_{N}(\alpha^{t},\beta^{t})-f_{N}^{\circ}\|_{L^{r}}^{r}\lesssim t^{-1/3}.\] **Remark 6**.: Formally, the coefficients of \(f_{N}^{\circ}\) include infinity. But this is rather reasonable, since we need to express discontinuity by using finite times composite function of the ReLU function. Proof of Lemma.For \(t\) times gradient descent, we choose \(2^{n}\cdot d\) elements of straight lines passing through \(\{c_{ji}\}_{ji}\), and we denote them \(\ell_{ji}^{t}\) (\(i=1,2,\cdots,d,j=1,\cdots,2^{n}\)): \[\ell_{ji}^{t}(s)=\tilde{w}_{j}^{1,t}s+c_{ji}^{t}.\] Rigorously this \(\tilde{w}_{j}^{1,t}\) is frozen. More precisely, if we take derivative in \(w_{j}^{1}\) or \(b_{j}^{1}\), we regard this \(\tilde{w}_{j}^{1,t}\) as a constant, not variable. Let \(\mathcal{D}\) be such that \[\mathcal{D}:=\bigcup_{k=1}^{d}\bigcup_{j=1}^{2^{n}}\bigcup_{s=-2\gamma^{-2}}^ {2\gamma^{-2}}\ell_{ji}(s).\] First we show that this \(\mathcal{D}\) is independent of \(t\). We plug this lines into (6), and introduce the new variables \(\alpha_{j}\), \(\beta_{ji}\): \[(\tilde{w}_{j}^{1}\ell_{ji}(s)+\tilde{b}_{j}^{1}) =|\tilde{w}_{j}^{1}|^{2}s+(\tilde{w}_{j}^{1}\cdot c_{ji}+\tilde{b} _{j}^{1})\] \[=:\alpha_{j}s+\beta_{j}. \tag{11}\] Note that \(\tilde{w}_{j}^{1}\cdot c_{ji}\) is independent of \(i\). This means that, \[\text{if}\quad\tilde{w}_{j}^{1,t}\perp\partial H^{\circ}_{N^{-\frac{2}{d-1}}} (y_{j},\tau_{j}),\quad\text{then}\quad\tilde{w}_{j}^{1,t+1}\perp\partial H^{ \circ}_{N^{-\frac{2}{d-1}}}(y_{j},\tau_{j}).\] Now we rewrite the error function \(E\) as follows: \[E(\alpha,\beta) :=\frac{d}{2|\mathcal{D}|}\sum_{j=1}^{2^{n}}E_{j}(\alpha_{j}, \beta_{j})\] \[:=\frac{d}{2|\mathcal{D}|}\sum_{j=1}^{2^{n}}\left(\int_{-2\gamma }^{0}(\alpha_{j}s+\beta_{j})^{2}ds+\int_{0}^{2\gamma}(1-(\alpha_{j}s+\beta_{j} ))^{2}ds\right)\] \[=\frac{d}{2|\mathcal{D}|}\sum_{j=1}^{2^{n}}\left(\int_{-\frac{ \beta_{j}}{\alpha_{j}}}^{0}(\alpha_{j}s+\beta_{j})^{2}ds+\int_{0}^{\frac{1- \beta_{j}}{\alpha_{j}}}(1-(\alpha_{j}s+\beta_{j}))^{2}ds\right).\] Direct calculations yield \(|\mathcal{D}|=4d2^{n}/\gamma^{2}\) and \[E_{j}(\alpha_{j},\beta_{j})=\frac{1-3\beta_{j}+3\beta_{j}^{2}}{3\alpha_{j}}.\] Then we have \[\partial_{\alpha_{i}}E_{j}=-\frac{1}{3\alpha_{j}^{2}}\left(1-3\beta_{j}+3 \beta_{j}^{2}\right)\quad\text{and}\quad\partial_{\beta_{j}}E_{j}=-\frac{1}{ \alpha_{j}}(1-2\beta_{j}).\] Since \(1-3\beta_{j}+3\beta_{j}^{2}\geq 1/4>0\) for any \(\beta_{j}\in\mathbb{R}\), we always have \(\partial_{\alpha_{j}}E<0\) and \[\alpha_{j}^{t+1}=\alpha_{j}^{t}-\epsilon\partial_{\alpha_{j}}E\geq\alpha_{j}^ {t}+\frac{\gamma^{2}}{24|\mathcal{D}|(\alpha_{j}^{t})^{2}}.\] Thus \(\alpha^{t+1}>\alpha^{t}\) and then we have \(|\tilde{w}_{j}^{1,t+1}|>\gamma\) inductively. By directly solving the ODE: \(\frac{d}{dt}g(t)=1/g(t)^{2}\), applying the mean-value theorem and the comparison principle, we have \(\alpha_{j}^{t}\gtrsim t^{1/3}\). Next we consider \(\beta_{j}\). First we show \(0<\beta_{j}<1\). By \[\partial_{\beta_{j}}E=-\frac{1}{\alpha_{j}^{t}}(1-2\beta_{j}^{t})\in(0,1/ \alpha_{j}^{t})\quad\text{for}\quad 1/2<\beta_{j}<1,\] and \(\epsilon=\gamma^{2}<\alpha_{j}^{t}\), we have \(\beta_{j}^{t}>\beta_{j}^{t+1}>0\). To the contrary, if \(\beta_{j}^{t}\in(0,1/2)\), then \(\beta_{j}^{t}<\beta_{j}^{t+1}<1\). 
Thus \(\beta_{j}^{t}\in(0,1)\). This means that \(\{c_{ji}\}_{i=1}^{d}\subset D_{j}^{t+1}\) inductively. Thus the next step \((\alpha^{t+1},\beta^{t+1})\) also satisfies (10) inductively. Moreover, since \(|\beta_{j}^{t+1}-\beta_{j}^{t}|\to 0\), \(\beta_{j}^{t}\) converges. Since \(\beta_{j}^{t+1}-\beta_{j}^{t}=0\) if and only if \(\beta_{j}^{t}=1/2\), \(\beta_{j}^{t}\) converges to \(1/2\). Therefore \(f_{N}(\alpha^{t},\beta^{t})\) converges to \(f_{N}^{\circ}\) pointwise. Moreover we immediately have the following estimate: \[\|f_{N}(\alpha^{t},\beta^{t})-f_{N}^{\circ}\|_{L^{r}}^{r}\lesssim t^{-\frac{1}{3}}|\partial\Omega_{N}^{\circ}|\lesssim t^{-1/3}.\] This is the desired estimate. ## 5. Conclusion In previous approximation error analyses, ReLU deep neural networks were crucially applied to construct one-dimensional polynomials (spline functions), which are needed for wavelet expansions. In contrast, in this paper we found a ReLU DNN architecture that is suitable for capturing the convex shape of the discontinuity of indicator functions (target functions), accompanied by pointwise convergence. Our next question is what kind of ReLU-DNN architectures actually attain pointwise convergence (or fail to attain it) for mixed concave and convex discontinuities; this is left for future work. **Acknowledgments.** I am grateful to Professors Masaharu Nagayama, Eiichi Nakai, Kengo Nakai, Yoshitaka Saiki and Yuzuru Sato for valuable comments. Research of TY was partly supported by the JSPS Grants-in-Aid for Scientific Research 20H01819 and 21K03304. This paper was part of the lecture notes for the class Mathematical Analysis I (spring semester 2023) for undergraduate/graduate courses at Hitotsubashi University.
2307.11173
The Completely Hackable Amateur Radio Telescope (CHART) Project
We present the Completely Hackable Amateur Radio Telescope (CHART), a project that provides hands-on radio instrumentation and design experience to undergraduates while bringing accessible radio astronomy experiments to high school students and teachers. Here we describe a system which can detect 21-cm emission from the Milky Way which is optimized for cost and simplicity of construction. Software, documentation, and tutorials are all completely open source to improve the user experience and facilitate community involvement. We demonstrate the design with several observations which we compare with state-of-the-art surveys. The system is shown to detect galactic 21-cm emission in both rural and urban settings.
Lindsay M. Berkhout, Adam P. Beardsley, Daniel C. Jacobs, Raven Braithwaite, Bryanna Gutierrez-Coatney, Arib Islam, Ahlea Wright
2023-07-20T18:16:09Z
http://arxiv.org/abs/2307.11173v2
# The Completely Hackable Amateur Radio Telescope (CHART) Project ###### Abstract We present the Completely Hackable Amateur Radio Telescope (CHART), a project that provides hands-on radio instrumentation and design experience to undergraduates while bringing accessible radio astronomy experiments to high school students and teachers. Here we describe a system which can detect 21-cm emission from the Milky Way which is optimized for cost and simplicity of construction. Software, documentation, and tutorials are all completely open source to improve the user experience and facilitate community involvement. We demonstrate the design with several observations which we compare with state-of-the-art surveys. The system is shown to detect galactic 21-cm emission in both rural and urban settings. ## 1 Introduction The Completely Hackable Amateur Radio Telescope (CHART) Project provides a platform and tutorials for amateur radio astronomy, with the intent of broadening access to radio science at the secondary school and early undergraduate level. A radio telescope is an excellent educational or amateur astronomy project for several reasons. Optical astronomy is popularized by high profile telescopes such as the Hubble Space Telescope or the James Webb Space Telescope [1] and there are multiple paths for an interested amateur to obtain their own backyard optical telescope. However, optical observations are best done under clear skies from dark rural locations, and most people live in cities where air quality can be low or clouds are common. About 80% of North Americans cannot see the Milky Way from their homes [2]. On the other hand, while cities also pose noise challenges, with careful design it is possible to see the Milky Way at radio frequencies from even a large city. Additionally, radio frequency observations can illuminate different properties of astronomical objects that cannot be observed with low-cost optical instruments, such as Doppler shift from motion and spectral properties of galaxies. Lastly, in a benefit particularly useful to k-12 schools, radio observations can be performed in the day time when class is in session. Though several previous projects have described amateur-grade systems using consumer grade electronics, the part selection, signal processing, and software analysis details are usually left to the user. This increases barriers to participation for those with less technical experience. Here we describe a radio telescope kit which, through a combination of documentation and testing, can be built by a typical high school science teacher or someone with similar experience. The base platform targets measurements of the 21-cm line of neutral hydrogen. Low cost, easy to obtain materials are used for the design, and the open source system design and software are available online2. The total cost currently runs at about $300 of materials, with efforts being made to reduce this further. The design is "hackable" in the sense that enough documentation is provided for users to make improvements or branch out in new directions. The project is also educational for undergraduate students at participating universities who do most of the development work. This contributes further to CHART's educational goals helping the next generation of astronomers get hands-on experience with instruments as well as improves representation at the interface for kit users. 
Footnote 2: [https://www.cds.org/](https://www.cds.org/) Here we demonstrate CHART with a short primer on our object of choice (Sec. 2), a description of the hardware and software design (Sec. 3), and example observing sessions in multiple settings (Sec. 4). ## 2 Observing the 21 cm Line Atomic neutral hydrogen emits a spectral line when the electron transitions between two hyper-fine levels of the ground state. This "spin-flip" transition occurs when the proton and electron spins go from being aligned to antialigned, emitting a photon with rest wavelength 21 cm, or frequency 1420.4 MHz. A observed deviation from this frequency is a Doppler shift caused by motion. The electromagnetic energy at this particular part of the spectrum can easily penetrate through cosmic dust and Earth's atmosphere, making it an easy target for ground observations. 1420 MHz is in a protected frequency band for radio astronomy, encompassing 1400-1427 MHz in the United States3, and therefore should not be subject to radio frequency interference (RFI) from other sources. For these reasons, radio astronomy with the 21 cm line is an ideal tool for students to learn about emission spectra, galactic motion, or even evidence for dark matter with rotation curves [3]. Footnote 3: [https://www.ntia.doc.gov/files/ntia/publications/2003-allocht.pdf](https://www.ntia.doc.gov/files/ntia/publications/2003-allocht.pdf) Galactic hydrogen has been observed many times with many instruments, beginning with Ewen & Purcell in 1951[4], and followed by a number of more recent surveys [e.g. 5, 6, 7]. Recently hobbyist interest in 21cm radio science has picked up, and there are many avenues to participate at the amateur level, such as the Society for Amateur Radio Astronomy (SARA) grants 4, the Goldstone Apple Valley Radio Telescope [8], and the SALSA project [9], but these opportunities often either assume pre-existing radio science literacy or provide access to a telescope for use, where one does not design or build their own instrument. There is still a significant gap to bridge between the amateur radio astronomy community and high school level physics and astronomy. Footnote 4: [https://www.radio-astronomy.org/grants](https://www.radio-astronomy.org/grants) Regarding 21-cm amateur measurements specifically, there are a number of projects with similar goals. Each project takes a unique approach with differing goals and learning outcomes, and we include a few of the most similar works here. The Digital Signal Processing in Radio Astronomy (DSPIRA) 5 project focuses on teaching signal processing and fourier analysis in this regime. The Physics Open Lab website reports measurements of the Milky way made with an amateur telescope 6. The PICTOR telescope 7 offers measurements of the Milky Way from a remote setup. The BHARAT [10] telescope uses a similar off-the-shelf principle to construct an amateur telescope for use in undergraduate labs. Footnote 5: [https://wwwarial.org/lightwork/](https://wwwarial.org/lightwork/) Footnote 6: [https://physicsopenlab.org/2020/09/08/milky-way-structure-detected-with-the-21-cm-neutral-hydrogen-emission/](https://physicsopenlab.org/2020/09/08/milky-way-structure-detected-with-the-21-cm-neutral-hydrogen-emission/) Footnote 7: [https://pictortelescope.com/#specifications](https://pictortelescope.com/#specifications) The CHART project aims to build on these initiatives by targeting its content towards secondary school teachers as an audience, and providing detailed, open source materials and code. 
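As a concrete companion to the 21-cm primer in Section 2: the mapping from an observed frequency to a line-of-sight velocity is the non-relativistic Doppler relation \(v = c\,(f_{\rm rest}-f_{\rm obs})/f_{\rm rest}\). A minimal sketch is given below; the sample frequencies are made up for illustration, and nothing beyond NumPy is assumed.

```python
import numpy as np

C_KM_S = 299792.458          # speed of light in km/s
F_REST_MHZ = 1420.40575      # 21-cm rest frequency in MHz

def doppler_velocity(f_obs_mhz):
    """Line-of-sight velocity (km/s) from an observed frequency (MHz).

    Positive values mean the gas is receding (the line is shifted to lower
    frequency). Non-relativistic approximation, fine for galactic HI.
    """
    return C_KM_S * (F_REST_MHZ - f_obs_mhz) / F_REST_MHZ

# Gas observed ~0.3 MHz below the rest frequency is receding at ~65 km/s.
print(doppler_velocity(np.array([1420.40575, 1420.1, 1420.7])))
```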
The project is organized as a "follow along" set of tutorials taking a user from building their own horn to a rotation curve constructed with their data. We use accessible materials for building a low-cost instrument and expect no pre-existing radio science expertise. ## 3 System Design A summary of a fully tested example system is included in figure 1. Figure 1: The full system diagram for a CHART setup. Electromagnetic radiation enters the horn, and is picked up by the probe inside the waveguide. The probe connects to a combination low noise amplifier (LNA) and filter via coaxial cable. The LNA/filter connects to an RTL-SDR software defined radio which is read out by a Raspberry-Pi processor. The RTL provides power to the LNA as a DC bias on the RF cable. A monitor is needed to view the Pi interface, and battery power is needed for both the Pi and monitor. Due to the intended reconfigurability of CHART, many substitute components and designs could be used. This iteration is suggested as an easy, low-cost starting point, and has been well tested by the project participants. The design has been optimized to minimize part count, which improves portability and ease of setup. ### Feed-Horn and Antenna The design of the CHART front-end was optimized for observations at the rest frequency of neutral hydrogen, as well as cost-effectiveness and ease of construction. A feed-horn and antenna configuration was chosen as it can be easily constructed from aluminium-wrapped cardboard and a length of wire. The purpose of the feed horn is to provide directional gain, acting as a "funnel" for the desired radiation. The dimensions for the horn were chosen using electromagnetic simulations. The optimized parameter was the beam directivity, or the concentration of an antenna's radiation pattern in a particular direction, at 1420 MHz. Figure 2 shows the dimensions of the horn used for the measurements in this paper. A wire antenna of length 6.3 cm is soldered into a coaxial connector and installed in the side of the waveguide (the rectangular portion at the bottom of the horn), using a soup can lid as a structural support and grounding point. A photo of the fully constructed front end is included in figure 3. ### Electronics The analog signal chain filters out unwanted signals and amplifies the radiation picked up at the antenna. A few different amplification and filtering schemes have been tested. The current suggested hardware, based on performance, cost, and ease of use, is a Nooelec brand combination bias-tee enabled low noise amplifier (LNA) and filter. The Nooelec module combines the filtering and amplification into one component and is easily obtained for minimal cost from consumer-oriented sellers. Additionally, the module can be powered using a bias-tee, meaning it can be powered by a USB radio over the RF coaxial cable instead of needing a separate power source. The module performs well, with about 40 dB of gain over 1375-1450 MHz and a noise figure of 1.05 dB, or 79 K. The attenuation outside of the 65 MHz pass-band is between 40 and 60 dB. ### Mixing and Digitization The output of the amplified and filtered analog signal is mixed down to a lower frequency and digitized. The digital voltage samples are read out by a computer, transformed into a spectrum, and averaged. Here we have used the "RTL-SDR Blog V3," a popular hobbyist software defined radio which is based on the RTL2832U and Rafael Micro R820T2 chipsets.
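As a quick sanity check on the amplifier figures quoted in the Electronics subsection above, the noise figure can be converted to an equivalent noise temperature with the standard relation \(T = T_{0}(10^{NF/10}-1)\) with \(T_{0}=290\) K. The sketch below also includes the textbook Friis cascade formula as general background; the 6 dB second-stage noise figure is an assumed placeholder, not a measured value for the SDR. It illustrates why 40 dB of gain ahead of the SDR lets the LNA dominate the system noise.

```python
import math

def noise_temperature(nf_db, t0=290.0):
    """Equivalent noise temperature (K) for a stage with noise figure nf_db (dB)."""
    return t0 * (10 ** (nf_db / 10.0) - 1.0)

def cascaded_noise_figure(nf1_db, gain1_db, nf2_db):
    """Friis formula for two stages, in dB, referred to the first-stage input."""
    f1 = 10 ** (nf1_db / 10.0)
    g1 = 10 ** (gain1_db / 10.0)
    f2 = 10 ** (nf2_db / 10.0)
    return 10 * math.log10(f1 + (f2 - 1.0) / g1)

print(noise_temperature(1.05))                 # ~79 K, matching the quoted spec
print(cascaded_noise_figure(1.05, 40.0, 6.0))  # ~1.05 dB: the LNA sets the noise floor
```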
The RTL-SDR model is a USB dongle that can be obtained for approximately $30. It includes a tunable mixer, adjustable gain blocks, digitizer, and bias-tee for powering an external amplifier. The frequency range of operation is 24 - 1766 MHz, although with direct sampling it can go as low as 500 kHz, and the maximum sample rate (without dropped samples) is 2.56 MS/s. We use a Raspberry Pi to process the samples from the SDR, and our tutorials include simple instructions to install and configure the necessary software. This setup provides easy-to-set-up, low-cost computing and file storage until the data can be transferred onto a personal computer or server, although any machine that can support a USB SDR and the GNURadio software described in section 3.4 could be substituted for the Pi. Figure 3: A fully constructed CHART horn on an observing trip at a weeklong workshop for High School teachers at Winona State University. The CHART documentation kit includes instructions for making a pyramidal horn made from cardboard and tin foil, as well as for setting up the electronics and software. Figure 2: The dimensions and template for the CHART horn construction. ### Software The open source and free GNURadio8 provides the basis of our data collection. One can use GNURadio's graphical user interface (GUI) directly, building a signal processing flowchart to acquire a spectrum and write it to a file. We have also written wrappers to streamline the data collection in our open source software package9 - either in a Linux command line mode, or our dedicated GUI. This is simpler to run, but it does obscure the data taking flow from the user. Footnote 8: gnuradio.org Footnote 9: [https://github.com/astrochart](https://github.com/astrochart) The data taking options allow for flexibility depending on the need and expertise of the user. Those who work directly with GNURadio will interact with the engineering aspects of radio instrumentation and learn more signal processing. For those who are more interested in the astronomical analysis of the data, the CHART software provides a quicker method of obtaining data. A custom Python package provides analysis functions. Functions include coordinate conversions, bandpass calibration, and plotting in a Doppler velocity frame. ### Tutorials To make the platform as user friendly as possible, and to provide usability for the widest range of experience levels, detailed tutorials are available on the CHART website. There are videos and text walkthroughs of the full system setup. As this setup is reasonably complex and users are not expected to be familiar with advanced computing and coding, the tutorials also cover setting up the Raspberry Pi computer, basics of the Linux operating system, and use of the software. Analysis of the collected data is demonstrated in a Jupyter notebook. This notebook can be run directly on the Raspberry Pi, on any personal computer, or on a community server described below. ### Data Storage and Analysis Server We created a Microsoft Azure server to store data and perform analysis to facilitate community engagement, data sharing, and ease of use. For the pilot implementation, we allocated a virtual machine with 8 CPUs (3rd generation AMD Milan), 32 GB RAM, and a 128 GB hard drive with the option to expand as needed. This should comfortably meet the needs of about ten users. Participants can contact us to acquire an account on the server. With an account, users can upload their data directly from the CHART observer GUI.
The server accepts the data upload and adds it to a database which includes metadata about each observation (e.g., location, date, observed frequencies). All uploaded data is visible to all users so they can easily exchange observations and compare results. Once uploaded, users can look at their data in a JupyterHub10 instance running on the server. User accounts are created with all necessary software for data analysis, including the CHART python package, and an example analysis Jupyter Notebook which users can use as a starting point and which serves as a self-documented tutorial. This setup avoids initial setup difficulties which are often a source of friction for new users. Footnote 10: [https://jupyterhub.readthedocs.io/](https://jupyterhub.readthedocs.io/) ## 4 Example Observations While unwanted radio frequency interference (RFI) can be filtered out with a number of analog and digital techniques, interfering signals common in urban environments can be loud enough to cause distortion in the signal chain that can alter the astronomical signal irrevocably. This makes doing radio astronomy in populated areas more difficult, but not impossible. Results are presented here for two places with differing population densities. This setup has been tested by students near Winona State University in Winona, Minnesota, and in downtown Phoenix near Arizona State University. ### Methods Observations were conducted by several groups of undergraduate students. In each observation the time and orientation were noted. In most cases orientations were chosen towards the galactic plane to maximize detection probability. Observations included a wide scan across 20 MHz (2 MHz at a time due to the limits of the SDR) surrounding the 21 cm line to assess the interference environment. The center of the telescope pointing was determined by estimating the right ascension and declination of the horn pointing from measured azimuth and elevation. This was then converted to galactic coordinates, and the pointings will be labelled as such. In galactic coordinates, \(l\) is the galactic longitude and \(b\) is the galactic latitude, i.e., degrees away from the galactic plane. Figure 4 shows the line of sight for the galactic longitudes (\(l\)) used in our analysis. Figure 4: An artist's concept showing principal structures of the Milky Way and illustrating observed sight lines. The labels of the sight lines correspond to the locations of our observations in sections 4.2 and 4.3. Image credit: NASA/JPL-Caltech/R. Hurt (SSC-Caltech) with this link: [https://www.nasa.gov/jpl/charting-the-milky-way-from-the-inside-out](https://www.nasa.gov/jpl/charting-the-milky-way-from-the-inside-out). An example view on the sky of one of the pointings shown in figure 4 is included in figure 5. This snapshot from the Stellarium software11 is of the sky above Winona at the time of observing and approximates the area on the sky and location encompassed by one of our pointings. All measurements were taken with the horn described in section 3, which has a beam width of approximately 25 degrees. Footnote 11: stellarium.org Post observation, the data must be preprocessed for analysis. The bandpass of the anti-aliasing filter in the Software Defined Radio must be calibrated out. The response of the RTL-SDR varies as a function of frequency. We measure this spectrum using an "off tuning", where we expect no astronomical signal, and calibrate it out of the data.
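The pointing bookkeeping described in the Methods above (measured azimuth and elevation converted to galactic \(l\), \(b\) for the time and place of an observation) can be done in a few lines with astropy. The site coordinates, observing time, and pointing below are placeholders for illustration only, not the values behind any figure in this paper.

```python
import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, SkyCoord
from astropy.time import Time

# Placeholder site (roughly south-eastern Minnesota), UTC time, and pointing.
site = EarthLocation(lat=44.0 * u.deg, lon=-91.6 * u.deg, height=200 * u.m)
when = Time("2022-07-15 21:00:00")

pointing = SkyCoord(az=180 * u.deg, alt=55 * u.deg,
                    frame=AltAz(obstime=when, location=site))
gal = pointing.galactic  # transform horizon coordinates to galactic l, b
print(f"l = {gal.l.deg:.2f} deg, b = {gal.b.deg:.2f} deg")
```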
In order to compare with established survey data, we must convert our data to a common reference frame. Here we use the velocity at local standard of rest (VLSR) observing frame. The LSR follows the mean motion of material in the local Milky Way, defined as stars in radius 100 pc from the Sun [11]. This frame has two parts: the motion of the Sun relative to the LSR, and the orbital motion of the Earth. We use the 'astropy' software package for this conversion [12]. To calibrate and compare our data, we use a data-based simulation of the expected 21 cm spectrum from the EU-HOU project[7], using their web simulator.12 The simulator has a maximum beam width of 20 degrees, which is smaller than the CHART horn beam by about 5 degrees, but provides a reasonable approximation of the field of view. Footnote 12: [https://www.astro.uni-bonn.de/hisurvey/euhou/LABprofile/index.php](https://www.astro.uni-bonn.de/hisurvey/euhou/LABprofile/index.php) The model from the EU-HOU project provides a useful amplitude for calibration of the raw CHART measurements. A two component model for the analog system is described by Eq. 1 which relates the measured spectrum (\(d(\nu)\)) to the true sky (\(m(\nu)\)) via an unknown multiplicative gain (\(g\)) and an unknown additive noise level (\(n\)). \[d(\nu)=g\cdot m(\nu)+n \tag{1}\] Figure 5: A screencap from the Stellarium web simulator of the sky above Winona, Minnesota at the time of data collection. The circle corresponds to the center of the pointing labelled as \(l=117.75\) in figure 4. The size of the circled area approximates the CHART beam size. Figure 6: _Top:_ The uncalibrated CHART data for galactic coordinates \(l=117.75^{\circ}\) and \(b=-3.25^{\circ}\). The individual spectra represent separate 2 MHz tunings of the SDR. The spectrometer scans across the frequency range in 2 MHz tunings, with a step of 1 MHz so that there is some overlap between tunings. The repeating structure in every tuning is due to the anti-aliasing filter of the SDR. _Bottom:_ Calibrated data after dividing out the bandpass, subtracting noise, and correcting for the overall gain. Here we calibrate these two quantities for the CHART data by assuming the EU-HOU simulation is a good model for the true sky. We first estimate the noise where the model predicts low power (\(d(\nu)\approx n\)), then we subtract this noise level and scale the residual to estimate the gain where the model is largest (\(g\approx(d(\nu)-n)/m(\nu)\)). We do not correct for second order effects, and estimate only the noise and gain in a small range around the 21-cm line. We then apply these parameters across the spectrum. This method turns the uncalibrated CHART data into Brightness Temperature (\(T_{B}\)) in Kelvin units. The raw, uncalibrated and uncorrected data for the sight line of \(l=117.75^{\circ}\) taken in Winona is included in figure 6 (top). The shape of the spectrum is dominated by the anti-aliasing filter which is applied to each 2MHz tuning. This shape is divided out in the calibration post-processing step. Sporadic RFI can be seen in a few tunings, and the 21cm line is visible slightly to the left of the rest frequency indicated as a bold vertical line. The whole frequency range has a slight slope caused by the uneven spectral response of the electronics. The vertical axis units are arbitrary and have no correspondence to a physical unit before calibration. Figure 6 (bottom) shows the result of the calibration process. 
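One way the gain \(g\) and noise \(n\) of Eq. (1) could be estimated is sketched below: the noise is read off where the model predicts essentially no emission, and the gain where the model is brightest. The thresholds and the synthetic example are illustrative only and are not the exact procedure used to produce Figure 6.

```python
import numpy as np

def two_point_calibration(d_raw, model_tb):
    """Solve d = g * m + n (Eq. 1) for a scalar gain g and noise offset n.

    d_raw:    bandpass-corrected but uncalibrated spectrum (arbitrary units)
    model_tb: simulated brightness temperature on the same frequency axis (K)
    """
    low = model_tb < 0.05 * model_tb.max()          # channels with ~no predicted signal
    n = np.median(d_raw[low])                       # there, d is approximately n
    high = model_tb > 0.9 * model_tb.max()          # channels where the model peaks
    g = np.median((d_raw[high] - n) / model_tb[high])
    return (d_raw - n) / g, g, n                    # calibrated spectrum in kelvin

# Toy check: a 40 K Gaussian line observed with g = 0.002 and n = 0.05.
rng = np.random.default_rng(1)
model = 40 * np.exp(-0.5 * (np.linspace(-120, 120, 256) / 15) ** 2)
raw = 0.002 * model + 0.05 + rng.normal(scale=1e-3, size=model.size)
cal, g, n = two_point_calibration(raw, model)
print(round(g, 4), round(n, 3))                     # recovers roughly 0.002 and 0.05
```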
There is still an overall 'u' shape to the bandpass as we do not correct for second order effects when removing noise. ### Results from Winona, Minnesota Figure 7 shows the velocity profile for three different sets of galactic coordinates, following the corresponding lines of sight in figure 4. The data was taken near Winona State University in Winona, Minnesota. The CHART data is plotted against the EU-HOU simulation for the same set of galactic coordinates. Our data largely agree with known results for the selected galactic coordinates. There is minor variation in the shapes of the profiles, which could result from a variety of factors. Likely culprits are pointing accuracy errors or variation in the field of view of the data and models. Pointing accuracy is limited by the observer's ability to estimate the coordinates of the center of their pointing with a protractor and phone compass, and the beam size and shape are not exactly matched between the EU-HOU simulation and the real horn. Figure 7: Subfigures show a radial velocity vs brightness temperature measurement taken for 3 different sets of galactic coordinates, rounded to the nearest quarter degree. The coordinates follow the lines of sight in figure 4. The data labelled "CHART" correspond to the profile from our CHART telescope, and the "Model" is calculated with the EU-HOU web simulator. ### Results from Phoenix, Arizona This section presents a more challenging RFI test environment than Winona. Measurements were made in downtown Phoenix, Arizona in the daytime. In this location we expect interference at frequencies close to the HI band (including cell phones, FM radio, and other transmitters) at levels strong enough to overcome the bandpass filter and possibly to saturate the amplifier. The data was taken at galactic coordinates \(l=72^{\circ}\) and \(b=0^{\circ}\), following the \(l=72^{\circ}\) line of sight in figure 4. For this data, a few extra analysis steps were taken to remove contaminants. Using a reference pointing with the horn covered, the data was masked for the contaminants that show up in both the control data and the sky data. We know sources that show up with the horn covered to be self-generated or environmental interference rather than astronomical in nature, and can safely remove them from the data. We also applied a Fourier domain filter to remove any frequency dependent systematics. The filtered and calibrated signal is seen in figure 8. Figure 8: Observed Brightness Temperature (\(T_{B}\)) spectrum as compared to the simulation, with a pointing of galactic coordinates \(l=72^{\circ}\), \(b=0^{\circ}\). This follows the \(l=72^{\circ}\) line in figure 4. This data has been masked and filtered in the Fourier domain. The observed data peak width and center appear to be well matched to the model. There is a ripple across the data on the level of the high negative velocity clouds, so the match of the non-zero velocity peaks cannot be determined from this data. The data appears well matched in width and peak to the simulation. The noise level of this data is high, so we do not interpret the negative velocity peak as a true match. There is a ripple across the band on the level of this peak, as seen in the bump at positive velocities of 75 km/s. However, the main lobe of our emission line seems to be an excellent fit and is well above the noise level of the ripple. ## 5 Conclusion The CHART platform offers a classroom-accessible method for students and educators to get involved in radio astronomy.
Here we have demonstrated detection and analysis of the 21-cm line from local neutral hydrogen using a cardboard horn, low cost electronics and freely available software. The base platform can be modified and extended by advanced users. Future directions include further analysis to create galactic rotation curves demonstrating the necessity of dark matter and alternative radio targets at other bands. With some component changes, the platform is easily extensible to other projects including solar observations, radio Jove (Jupiter), pulsars, and Cosmic Microwave Background (CMB) observations. Additionally, with multiple CHART systems, interferometry is possible. The documentation and construction of this project is designed to be easily accessible to secondary school level educators. CHART has been used at summer workshops for High School teachers, as well as by local teachers in the areas surrounding participating universities. We hope to expand project engagement even further as the documentation and science results grow. ## Acknowledgements LMB acknowledges that this material is based upon work supported by a National Science Foundation Graduate Research Fellowship under Grant No. 2233001. APB acknowledges support from an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-1701440, and a grant from the Mt. Cuba Astronomical Foundation. APB and AW acknowledge support from NSF grant AST-2108348. DCJ acknowledges NSF CAREER, grant #2144995. For the modeling data in this paper, we acknowledge the EU-HOU project and the Comenius grant.
2302.05900
Investigating the Effect of Relative Positional Embeddings on AMR-to-Text Generation with Structural Adapters
Text generation from Abstract Meaning Representation (AMR) has substantially benefited from the popularized Pretrained Language Models (PLMs). Myriad approaches have linearized the input graph as a sequence of tokens to fit the PLM tokenization requirements. Nevertheless, this transformation jeopardizes the structural integrity of the graph and is therefore detrimental to its resulting representation. To overcome this issue, Ribeiro et al. have recently proposed StructAdapt, a structure-aware adapter which injects the input graph connectivity within PLMs using Graph Neural Networks (GNNs). In this paper, we investigate the influence of Relative Position Embeddings (RPE) on AMR-to-Text, and, in parallel, we examine the robustness of StructAdapt. Through ablation studies, graph attack and link prediction, we reveal that RPE might be partially encoding input graphs. We suggest further research regarding the role of RPE will provide valuable insights for Graph-to-Text generation.
Sebastien Montella, Alexis Nasr, Johannes Heinecke, Frederic Bechet, Lina M. Rojas-Barahona
2023-02-12T12:43:36Z
http://arxiv.org/abs/2302.05900v1
Investigating the Effect of Relative Positional Embeddings on AMR-to-Text Generation with Structural Adapters ###### Abstract Text generation from Abstract Meaning Representation (AMR) has substantially benefited from the popularized Pretrained Language Models (PLMs). Myriad approaches have linearized the input graph as a sequence of tokens to fit the PLM tokenization requirements. Nevertheless, this transformation jeopardizes the structural integrity of the graph and is therefore detrimental to its resulting representation. To overcome this issue, Ribeiro et al. (2021) have recently proposed StructAdapt, a structure-aware adapter which injects the input graph connectivity within PLMs using Graph Neural Networks (GNNs). In this paper, we investigate the influence of Relative Position Embeddings (RPE) on AMR-to-Text, and, in parallel, we examine the robustness of StructAdapt. Through ablation studies, graph attack and link prediction, we reveal that RPE might be partially encoding input graphs. We suggest further research regarding the role of RPE will provide valuable insights for Graph-to-Text generation. ## 1 Introduction Earliest works on AMR-to-Text generation were mostly based on statistical methods. A common practice was to convert AMR-to-Text task into an already studied problems such as Tree-to-Text Flanigan et al. (2016); Lampouras and Vlachos (2017), aligned text-to-text Pourdamghani et al. (2016), Travel Sales Problems Song et al. (2016) or Grammatical Framework Ranta (2011). Recently, most methods are neural-centered with an encoder-decoder architecture Sutskever et al. (2014) as a backbone Konstas et al. (2017); Takase et al. (2016); Cao and Clark (2019). Unfortunately, this architecture coerces the AMR to be linearized as a sequence of tokens. This ends up in structural information loss. To tackle this issue, several strategies have attempted to integrate structure using message propagation Song et al. (2018); Guo et al. (2019); Damonte and Cohen (2019); Ribeiro et al. (2019); Zhang et al. (2020); Zhao et al. (2020). A limitation of those is the absence of pretraining, as demonstrated by Ribeiro et al. (2021). To this end, Ribeiro et al. (2021) introduced StructAdapt for lightweight AMR-to-Text with structural adapters. As linearization and tokenization of the input graph are mandatory steps for PLMs, StructAdapt first defines a new graph where nodes are the resulting subwords from the tokenization. As a result, adapter can henceforth include GNN layers operating on the subsequent graph while leveraging pretrained representations. However, although studies have been made to probe position embeddings Wang and Chen (2020); Wang et al. (2021); Dufter et al. (2022), their role on graph encoding has remained unanswered. In this paper, we are particularly interested in measuring the saliency of RPE with StructAdapt for AMR-to-Text generation. Our novelty is not in proposing a new method to encode graphs such as (Schmitt et al., 2021) but rather in revealing the interesting behaviours of RPE along with StructAdapt. ## 2 StructAdapt: A Structural Adapter A major issue in AMR-to-Text, and more generally Graph-to-Text with Transformers (Vaswani et al., 2017), is the linearization of the input structure. The linearization of the graph returns a sequence of node and edge labels according to a certain traversal of the graph. Nonetheless, adjacent nodes in the graph may be at multiple positions away from one another in the final serialization. To counteract this, Ribeiro et al. 
(2021) introduced StructAdapt, a structure-aware (encoder) adapter. It solves the problem of segmented node labels by reconstructing a new graph from the resulting subwords. More specifically, the relations are first reified as new nodes in the AMR graph. Furthermore, the labels of those reified relations are added to the vocabulary as new tokens and therefore will not be decomposed into subwords. However, the labels of the original nodes can still be chunked. To deal with this, each subword node is connected independently to the reified relation of the initial (non-chunked) node. An example is outlined in Figure 1. Figure 1: Examples of AMR tokenization for the sentence _"Occidentalism and Orientialism."_. The resulting input graph in (c) contains 8 nodes. As a consequence, the vanilla adapter can now integrate any GNN-based neural network which operates on the newly constructed graph (Figure 1), where nodes are the input tokens. Concretely, StructAdapt replaces the first stacked MLP of the vanilla adapter with a GNN-based model, as shown in Figure 2. Figure 2: Vanilla Adapter vs StructAdapt. For AMR-to-Text, only the encoder is equipped with StructAdapt in order to encode the AMR structure. The decoder layers adopt vanilla adapters. In our study, we consider three different GNN-based models: Graph Convolutional Network (GCN) (Kipf and Welling, 2017), Graph Attention Network (GAT) (Velickovic et al., 2017) and Relational Graph Convolutional Network (RGCN) (Schlichtkrull et al., 2018). GCN computes a representation for each node \(a\) which is a (normalized) aggregation of the representations of its neighbor nodes, noted \(\mathcal{N}(a)\). GAT is akin to GCN but differs in that the aggregation of neighbor embeddings is weighted using an attention mechanism. Unlike GAT and GCN, RGCN further captures the type of the relation between two nodes. In our case, as AMR relations are reified and stand for new nodes of the graph, our new relations can either be of _direct_ (\(a\xrightarrow{d}b\)) or _reverse_ (\(a\xleftarrow{r}b\)) connection type, as in (Ribeiro et al., 2021). The details of the representation computation for each model can be found in Appendix A. The returned node embeddings are then given as input features to the following MLP. ## 3 Relative Position Embeddings Instead of adding Absolute Position Embeddings (APE) directly to the token embedding as in the standard Transformer model, some models such as T5 make use of relative position embeddings inspired by Shaw et al. (2018). As an alternative to APE, RPE offer interesting features. A noteworthy limitation of APE is the need to set a limit on the number of available positions. Therefore, long sequences may have to be segmented. Furthermore, APE are directly added to the token representation, leading to information inconsistency, i.e. position versus semantic information. To this end, Shaw et al. (2018) introduced relative position encodings, which are supplied to the self-attention mechanism by simply adding a scalar to the logits encoding the supposed relation between a current token \(i\) and a token \(j\). ## 4 Experiments Throughout our experiments, we make use of the LDC2020T02 dataset (AMR 3.0 release)1 and use the T5\({}_{\rm base}\) model which employs RPE. The training and evaluation details can be found in Appendix B and C, respectively.
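For concreteness, the sketch below shows the flavour of relative position information described in Section 3: a learned scalar bias, indexed by the (clipped) offset \(j-i\), is added to the attention logits of a single head. This is a simplified illustration in the spirit of Shaw et al. (2018) and of T5's bucketed relative bias, not the exact implementation inside T5\({}_{\rm base}\); the clipping distance and shapes are arbitrary.

```python
import numpy as np

def attention_with_relative_bias(Q, K, V, rel_bias, max_dist=8):
    """Single-head self-attention with a learned scalar bias per clipped offset.

    Q, K, V:  (seq_len, d) arrays.
    rel_bias: (2 * max_dist + 1,) learned scalars, one per clipped offset j - i.
    """
    n, d = Q.shape
    logits = Q @ K.T / np.sqrt(d)                             # content term
    offsets = np.arange(n)[None, :] - np.arange(n)[:, None]   # offset j - i
    offsets = np.clip(offsets, -max_dist, max_dist) + max_dist
    logits = logits + rel_bias[offsets]                       # position term: one scalar per pair
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)                     # row-wise softmax
    return w @ V

rng = np.random.default_rng(0)
n, d = 6, 16
out = attention_with_relative_bias(rng.normal(size=(n, d)), rng.normal(size=(n, d)),
                                   rng.normal(size=(n, d)), rng.normal(size=(17,)))
print(out.shape)  # (6, 16)
```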
Footnote 1: [https://catalog.ldc.upenn.edu/LDC2020T02](https://catalog.ldc.upenn.edu/LDC2020T02) ### Exploring the Salience of RPE In this section, we investigate the influence of RPE on the generation quality using structural adapter. RPE are computed in each Transformer head. Position information is then forwarded to the adapter module on top (Figure 2). However, since connections between input nodes (i.e. tokens) are already given to structural adapters in encoder, it is legitimate to question the necessity for RPE on the encoder part but also how would the generation quality vary without such information. Hence, we propose to remove the RPE from the encoder heads to gauge their salience to structure encoding for downstream language generation. Since decoder is not encoding any graph structure, we leave RPE in decoder untouched. For better readability, MLP, GCN, GAT, and RGCN respectively denote: the vanilla adapter, StructAdapt with a GCN layer, StructAdapt with a GAT layer and StructAdapt with a RGCN layer. MLP-based adapter with RPE is our baseline. Results are given in Table 1. A human evaluation is also provided for some encoder adapters in Table 2. First, it is apparent that using RPE systematically yields better generation performances. For the vanilla adapter (i.e. our baseline), we note a 25.3% absolute drop in BLEU when removing RPE. This can also be seen on human evaluation. More than one point is lost toward meaning preservation. The downturn for linguistic correctness is less important since T5 is pretrained and thus rarely prone to syntax errors. Such a result is not surprising for MLP-based adapter since it solely relies on RPE to differentiate tokens at different positions in the linearized AMR. However, a striking observation is that getting rid of RPE for GNN-based adapters leads to lower performances than our baseline. Indeed, when removing RPE when using structural adapter, we would have expected GNN-based approaches to be as competitive as a MLP-based adapter with RPE. We report a relative drop of 12.5 points in BLEU from the baseline. The same conclusion can be drawn from Table 2. This indicates that RPE are capturing relevant information for final generation. To further assess the impact and the role of RPE, we conduct a _graph attack_ experiment. Instead of conveying the correct adjacency matrix, we propose to corrupt connectivity information. We randomly generate an adjacency matrix such that generated matrix does not contain any actual connection. We suppose that without RPE, structure-aware adapter will lead to a significant decrease in generated text due to the absence of information about the graph nor the position of nodes in the input sequence. We are especially interested to measure to which extent RPE might be able to take over the encoding of the graph for generation. 
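A sketch of how such a corrupted adjacency matrix could be generated is given below: every sampled edge is drawn from node pairs that are not connected in the true graph, so no actual connection survives. This is only an illustration of the attack described above, not necessarily the exact sampling code used in our runs; an undirected graph would additionally require symmetrizing the result.

```python
import numpy as np

def corrupt_adjacency(adj, rng):
    """Random adjacency with (roughly) the same edge count and no true edge kept."""
    n = adj.shape[0]
    n_edges = int(adj.sum())
    forbidden = adj | np.eye(n, dtype=bool)     # exclude real edges and self-loops
    candidates = np.argwhere(~forbidden)        # all remaining (i, j) pairs
    keep = rng.choice(len(candidates), size=min(n_edges, len(candidates)), replace=False)
    fake = np.zeros_like(adj)
    fake[candidates[keep, 0], candidates[keep, 1]] = True
    assert not (fake & adj).any()               # the corrupted graph shares no edge with adj
    return fake

rng = np.random.default_rng(0)
adj = np.zeros((6, 6), dtype=bool)
adj[[0, 1, 2], [1, 2, 3]] = True                # a small toy token graph
print(corrupt_adjacency(adj, rng).astype(int))
```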
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{**Adapter**} & **BLEU** & **1** & **METEOR** & **Cluff++** & **TER** & **BERTScore** & \(\mathcal{M}\) & \(\mathcal{F}\) \\ \hline \multirow{2}{*}{**MLP**} & **w/ RPE** & **41.4\(\pm\)**0.3** & **56.5\(\pm\)**0.3** & **76.0\(\pm\)**0.2** & **45.9\(\pm\)**0.5** & **954.0\(\pm\)**0.0** & **84.2\(\pm\)**0.1** & **78.0\(\pm\)**0.5** \\ & **w/o RPE** & 16.3\(\pm\)0.5 & 38.5\(\pm\)0.8 & 49.9\(\pm\)1.1 & 85.5\(\pm\)0.9 & 91.8\(\pm\)0.2 & 81.9\(\pm\)1.1 & 76.2\(\pm\)0.6 \\ \hline \multirow{2}{*}{**GCN**} & **w/ RPE** & **42.6\(\pm\)**0.8** & **56.7\(\pm\)**0.4** & **71.0\(\pm\)**0.5** & **44.8\(\pm\)**0.7** & 95.7\(\pm\)**0.1** & **84.6\(\pm\)**0.3** & **79.0\(\pm\)**0.6** \\ & **w/o RPE** & 34.4\(\pm\)0.8 & 52.0\(\pm\)0.7 & 64.8\(\pm\)0.6 & 55.8\(\pm\)1.2 & 64.6\(\pm\)0.1 & 79.2\(\pm\)0.6 & 75.2\(\pm\)1.0 \\ \hline \multirow{2}{*}{**GAT**} & **w/ RPE** & **42.8\(\pm\)**0.1 & **57.0\(\pm\)**0.1 & **71.1\(\pm\)**0.4** & **44.3\(\pm\)**0.3** & **95.8\(\pm\)**0.0** & **84.8\(\pm\)**0.1** & **78.5\(\pm\)**0.8** \\ & **w/o RPE** & 34.8\(\pm\)1.1 & 52.3\(\pm\)0.7 & 64.8\(\pm\)0.7 & 54.9\(\pm\)1.4 & 94.6\(\pm\)0.2 & 79.6\(\pm\)0.8 & 75.6\(\pm\)0.4 \\ \hline \multirow{2}{*}{**RGCN**} & **w/ RPE** & **44.7\(\pm\)**0.6** & **58.2\(\pm\)**0.3 & **72.5\(\pm\)**0.3 & **42.6\(\pm\)**0.4 & 96.0\(\pm\)**0.0** & **85.5\(\pm\)**0.2** & 79.6\(\pm\)0.7 \\ & **w/o RPE** & 39.9\(\pm\)0.8 & 55.7\(\pm\)0.5 & 68.9\(\pm\)0.8 & 48.8\(\pm\)1.3 & 95.3\(\pm\)0.1 & 83.1\(\pm\)0.4 & 78.0\(\pm\)0.6 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparing impact of Relative Positional Embeddings (RPE) on generation. We report mean performances (\(\pm\)s.d.) over 3 seeds. \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{**Adapter**} & **Meaning** & **Linguistic** \\ & **Preservation** & **Correctness** \\ \hline \multirow{2}{*}{**MLP w/ RPE**} & **4.8\(\pm\)**1.2 & **5.5\(\pm\)**0.9 \\ \cline{2-3} & **MLP w/o RPE** & 3.5\(\pm\)**1.5 & 5.2\(\pm\)**1.2 \\ \hline GCN w/ RPE** & **5.0\(\pm\)**1.2 & **5.6\(\pm\)**0.8 \\ \cline{2-3} **GCN w/o RPE** & 4.7\(\pm\)**1.3 & 5.5\(\pm\)**1.0 \\ \hline **RGCN w/ RPE** & **5.2\(\pm\)**1.1 & **5.6\(\pm\)**0.8 \\ \cline{2-3} **RGCN w/o RPE** & 4.7\(\pm\)**1.3 & 5.4\(\pm\)**1.0 \\ \hline \hline \end{tabular} \end{table} Table 2: Human Evaluation. Mean scores (\(\pm\)s.d.) Results are shown in Table 3. Human evaluation for _attacked_ GCN and RGCN adapters is given in Table 4. As hypothesized, providing erroneous connectivity without any position embeddings makes structural adapters no longer compelling. We observe that StructAdapt with RGCN is significantly more affected compared to GAT and GCN based adapters. Since RGCN adds direction information for each edge (_direct_ and _reverse_), we conjecture that RGCN is much more bewildered. Interestingly, using RPE with corrupted graph (Table 3) leads to similar performance than using graph information without RPE (Table 1). This strongly demonstrates the usefulness of RPE to carry out the generation. We additionally provide a _position attack_ experiment in Appendix E where RPE are shuffled randomly. Accordingly, we can further identify the saliency of RPE despite the available GNN. This raises the question of RPE encoding the input graph. ### Can the Graphs Be Reconstructed? As shown in Section 4.1, RPE seem to be as competitive as applying GNNs alone. If claiming that RPE also encode graphs is tempting, no strong evidence has been revealed. 
Indeed, better generation quality is not necessarily a consequence of better graph encoding. Therefore, we probe whether graphs can indeed be reconstructed from the learned hidden representations. To do so, we train a logistic regression, i.e. our probe, to perform link prediction as a binary classification. More specifically, given two nodes representations at a given layer \(l\), our probe returns the probability that nodes are connected. To train our logistic regression model, we sample \(k\) positive connections, i.e. two connected nodes, and \(k\) negative connections, i.e. two non-connected nodes, for each sample of training and test sets.2 For our experiment, we choose \(k=2\) which leads to 109,490 and 3,770 samples for each class for training and testing respectively. We plot results in Figure 3. Firstly, we observe very high accuracy for the vanilla adapter without RPE. Accuracies over 60% are easily reached while no structure encoding nor positions information are supplied. This might be a side effect of our probe training. Nevertheless, this gives a lower bound for our experiment. We can see that adding RPE increases the link prediction performance for MLP, GCN and GAT-based adapters. We observe a constant gap of about 3.5% on average. However, we remark that RGCN is able to reconstruct edges on its own with best accuracy. We report a maximum accuracy about 76% while other models are not reaching 73%. This strengthens the idea that giving information of reverse connection may add robustness to graph encoding as shown in (Beck et al., 2018). Generally, we observe that the deeper the representation, the better the link prediction. We notice however that after the 10\({}^{th}\) layer, significant drops in link prediction arise regardless \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{**Adapter**} & **Meaning** & **Linguistic** \\ & **Preservation** & **Correctness** \\ \hline **GCN w/ RPE** & 5.0\(\pm\)1.2 & 5.6\(\pm\)0.8 \\ **RGCN w/ RPE** & 5.2\(\pm\)1.1 & 5.6\(\pm\)0.8 \\ \hline \hline \end{tabular} \end{table} Table 4: _Graph Attack_ - Human Evaluation. Mean scores (\(\pm\)s.d.) Figure 3: _Probing_ - Link Prediction Results using hidden representations \(\tilde{h}_{i}\) at different layers \(i\). 
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{2}{c}{**Adapter**} & **BLEU** & **\(\uparrow\)** & **METER \(\uparrow\)** & **TER \(\downarrow\)** & **BFScore** & \(\mathcal{M}\uparrow\) & \(\mathcal{F}\uparrow\) \\ \hline **GCN** & **w/ RPE** & 35.1\(\pm\)0.5 & **52.0\(\pm\)0.4** & **65.7\(\pm\)0.5** & **53.6\(\pm\)0.9** & **94.8\(\pm\)0.1** & **80.5\(\pm\)0.4** & 74.1\(\pm\)1.2 \\ & **w/o RPE** & 15.3\(\pm\)0.5 & 35.2\(\pm\)0.6 & 46.7\(\pm\)0.5 & 86.5\(\pm\)2.7 & 91.1\(\pm\)0.1 & 77.1\(\pm\)14.4 & **78.0\(\pm\)2.6** \\ \hline **GAT** & **w/ RPE** & **38.6\(\pm\)0.6** & **52.3\(\pm\)0.5** & **67.0\(\pm\)0.5** & **51.1\(\pm\)0.9** & **95.0\(\pm\)0.1** & **81.6\(\pm\)0.5** & 75.7\(\pm\)0.3 \\ & **w/ RPE** & 13.1\(\pm\)2.8 & 34.1\(\pm\)3.6 & 45.2\(\pm\)4.2 & 89.1\(\pm\)0.6 & 90.7\(\pm\)0.1 & 68.9\(\pm\)1.1 & 77.4\(\pm\)1.9 \\ \hline **RGCN** & **w/ RPE** & **38.0\(\pm\)0.9** & **54.1\(\pm\)0.6** & **67.6\(\pm\)0.6** & **49.3\(\pm\)0.8** & **95.2\(\pm\)0.1** & **81.9\(\pm\)0.7** & **75.9\(\pm\)0.8** \\ & **w/o RPE** & 11.3\(\pm\)1.3 & 30.1\(\pm\)1.1 & 41.6\(\pm\)1.0 & 87.2\(\pm\)1.3 & 90.0\(\pm\)0.2 & 66.8\(\pm\)16.3 & 75.5\(\pm\)5.2 \\ \hline \hline \end{tabular} \end{table} Table 3: _Graph Attack_ - We corrupt the structureal adopters in encoder. We report mean performances (\(\pm\)s.d.) over 3 seeds. of adapter type. We assume representations should lose some information about the structure to perform language generation. This may indicate that encoded representation for Graph-to-Text is not just graph-centered. Although counter-intuitive, encoder representations given to the decoder part may not have to encode input graph efficiently in order to verbalize it. We leave this research question for future work. We further provide an analysis on self-attention matrices in Appendix F. ## 5 Conclusion In this paper, we have explored the effect of relative position embeddings on AMR-to-Text generation using structural adapters. We have shown that the generation process could be enabled by relative position embeddings when structure is erroneous or missing. In addition, we have demonstrated the capacity of those representations to encode the input graph to some extent. We have further revealed interesting robustness of RGCN model in graph reconstruction ability. For future work, we believe further experiments on other pretrained models and Graph-to-Text tasks may shed more light on the role of position embeddings. ## Limitations A limitation of our study is that we focus on the T5 model only. Since adapters are additional modules to add, it is required to manually implement and directly modify the original code of the pretrained model which is not easily scalable. In addition, we only evaluate on the LDC2020T02 dataset which is the cleanest AMR dataset available. ## Acknowledgements We would like to thank annotators and reviewers for taking the time and effort necessary to share our contribution. This work was partially funded by the ANR Cifre conventions N\({}^{\circ}\)2020/0400 and Orange Innovation Research.
2301.05288
An Approach to Stochastic Dynamic Games with Asymmetric Information and Hidden Actions
We consider in discrete time, a general class of sequential stochastic dynamic games with asymmetric information with the following features. The underlying system has Markovian dynamics controlled by the agents' joint actions. Each agent's instantaneous utility depends on the current system state and the agents' joint actions. At each time instant each agent makes a private noisy observation of the current system state and the agents' actions in the previous time instant. In addition, at each time instant all agents have a common noisy observation of the current system state and their actions in the previous time instant. Each agent's actions are part of his private information. The objective is to determine Bayesian Nash Equilibrium (BNE) strategy profiles that are based on a compressed version of the agents' information and can be sequentially computed; such BNE strategy profiles may not always exist. We present an approach/methodology that achieves the above-stated objective, along with an instance of a game where BNE strategy profiles with the above-mentioned characteristics exist. We show that the methodology also works for the case where the agents have no common observations.
Yi Ouyang, Hamidreza Tavafoghi, Demosthenis Teneketzis
2023-01-12T20:51:44Z
http://arxiv.org/abs/2301.05288v1
# An Approach to Stochastic Dynamic Games with Asymmetric Information and Hidden Actions ###### Abstract We consider in discrete time, a general class of sequential stochastic dynamic games with asymmetric information with the following features. The underlying system has Markovian dynamics controlled by the agents' joint actions. Each agent's instantaneous utility depends on the current system state and the agents' joint actions. At each time instant each agent makes a private noisy observation of the current system state and the agents' actions in the previous time instant. In addition, at each time instant all agents have a common noisy observation of the current system state and their actions in the previous time instant. Each agent's actions are part of his private information. The objective is to determine Bayesian Nash Equilibrium (BNE) strategy profiles that are based on a compressed version of the agents' information and can be sequentially computed; such BNE strategy profiles may not always exist. We present an approach/methodology that achieves the above-stated objective, along with an instance of a game where BNE strategy profiles with the above-mentioned characteristics exist. We show that the methodology also works for the case where the agents have no common observations. Dynamic games, asymmetric information, hidden actions, common information, information compression, sequential decomposition ## 1 Introduction We study, in discrete time, a general class of sequential stochastic dynamic games with asymmetric information. We consider a setting where the underlying system has Markovian dynamics controlled by the agents' joint actions. Each agent's instantaneous utility depends on the agents' joint actions and the system state. At each time instant each agent makes a private noisy observation that depends on the current system state and the agents' actions in the previous time instant. In addition, at each time instant all agents may have a common noisy observation of the system state and their actions in the previous time instant. The agents' actions are hidden, that is, each agent's actions are not directly observable by the other agents. Therefore, at every time instant agents have asymmetric and imperfect information about the game's history. Dynamic games with the above features arise in engineering (cybersecurity, transportation, energy markets), in economics (industrial organization), and in socio-technological applications. As pointed out in Tang et al (2022), the key challenges in the study of dynamic games with asymmetric information are: (i) The domain of agents' strategies increases with time, as the agents acquire information over time. Thus, the computational complexity of the agents' strategies increases with time. (ii) Due to signaling1(Ho, 1980), in many instances an agent's assessment of the game's status at time \(t\), therefore his strategy at time \(t\), depends on the strategies of agents who acted before him. Consequently, we cannot obtain the standard sequential decomposition (that sequentially determines the components of an equilibrium strategy profile) of the kind provided by the standard dynamic programming algorithm (where the agent's optimal strategy at any time \(t\) does not depend on past strategies (Kumar and Varaiya, 1986, Chapter 6.5)). Footnote 1: Signaling in games is more complex than signaling in teams because the agents have diverging incentives and their strategies are their own private information. 
To address these challenges, we can look for equilibrium strategy profiles that are based on a compressed version of the agents' information and can be sequentially computed. However, such equilibrium strategy profiles may not exist. In this paper we propose an approach, described in detail in Section 3, that addresses the above-stated challenges. According to this approach, we first compress the agents' private and common information at each time instant. Then, we define strategies based on the compressed information and show that Bayesian Nash Equilibria (BNE) based on these strategies can be determined sequentially in time moving backwards, if each step of this backwards procedure has a solution. Finally, we provide an example where a BNE strategy profile based on compressed information exists. We show that the proposed approach works for the case where the agents have no common observations and their actions are hidden. ### Related Literature Dynamic games with asymmetric information have been extensively investigated in the literature in the context of repeated discounted games; see Zamir (1992); Forges (1992); Aumann et al (1995); Mailath and Samuelson (2006) and the references therein. The key feature of these games is the absence of a dynamic system. Moreover, the works on repeated games study primarily their asymptotic properties when the horizon is infinite and agents are sufficiently patient (i.e. the discount factor is close one). In repeated games, agents play a stage (static) game repeatedly over time. The main objective of this strand of literature is to explore situations where agents can form self-enforcing punishment/reward mechanisms so as to create additional equilibria that improve upon the payoffs they can get by simply playing an equilibrium of the stage game over time. Recent works (see Horner et al (2011); Escobar and Toikka (2013); Sugaya (2012)) adopt approaches similar to those used in repeated games to study infinite horizon dynamic games with asymmetric information when there is an underlying dynamic Markovian system. Under certain conditions on the system dynamics and information structure, the authors of Horner et al (2011); Escobar and Toikka (2013); Sugaya (2012) characterize a set of asymptotic equilibria attained when the agents are sufficiently patient. The problem we study in this paper is different from the ones in Zamir (1992); Forges (1992); Aumann et al (1995); Mailath and Samuelson (2006); Horner et al (2011); Escobar and Toikka (2013); Sugaya (2012) in two aspects. First, we consider a class of dynamic games where the underlying system has general Markovian dynamics and a general information structure, and we do not restrict attention to asymptotic behaviors when the horizon is infinite and the agents are sufficiently patient. Second, we study situations where the decision problem that each agent faces, in the absence of strategic interactions with other agents, is a Partially Observed Markov Decision Process (POMDP), which is a complex problem to solve by itself. Therefore, reaching (and computing) a set of equilibrium strategies, which take into account the strategic interactions among the agents, is a very challenging task. As a result, it is not very plausible for the agents to seek reaching equilibria that are generated by the formation of self-enforcing punishment/reward mechanisms similar to those used in infinitely repeated games. 
We believe that our results provide new insight into the behavior of strategic agents in complex and dynamic environments, and complement the existing results in the repeated games literature. Stochastic dynamic zero-sum games with asymmetric information have been studied in Renault (2006); Cardaliaguet et al (2015); Gensbittel and Renault (2015); Li et al (2017); Kartik and Nayyar (2021); Zheng and Castanon (2013); Li and Shamma (2014). The authors of Renault (2006); Cardaliaguet et al (2015); Zheng and Castanon (2013); Li and Shamma (2014) study zero-sum games with Markovian dynamics and lack of information on one side (i.e. one informed and one uninformed agent). The authors of Gensbittel and Renault (2015); Li et al (2017); Kartik and Nayyar (2021) study zero-sum games with Markovian dynamics and lack of information on both sides. The works of Renault (2006); Cardaliaguet et al (2015); Gensbittel and Renault (2015); Li et al (2017); Kartik and Nayyar (2021); Zheng and Castanon (2013); Li and Shamma (2014) consider specific information structures. Specifically: the actions of both agents are publicly observed; in Renault (2006); Cardaliaguet et al (2015); Zheng and Castanon (2013); Li and Shamma (2014) the informed agent observes perfectly the state of the dynamic system, the other agent has no direct observation of the system's state; in Gensbittel and Renault (2015); Li et al (2017) each agent observes perfectly part of the system's state and the states observed by the two agents are either independent or conditionally independent (given the observed actions). The authors of Kartik and Nayyar (2021) consider a general information structure where each agent has some private information and the agents share some information about the dynamic system's state and their actions. The authors of Renault (2006); Cardaliaguet et al (2015); Gensbittel and Renault (2015); Li et al (2017); Kartik and Nayyar (2021); Zheng and Castanon (2013); Li and Shamma (2014) derive their results by taking advantage of properties of zero-sum games such as the interchangeability of equilibrium strategies and the unique value of the game. These properties do not extend to non-zero sum games. We study a general class of stochastic dynamic games that include zero-sum stochastic dynamic games with asymmetric information as a special case. We consider general Markovian dynamics for the underlying system in contrast to Renault (2006); Cardaliaguet et al (2015); Gensbittel and Renault (2015); Li et al (2017); Zheng and Castanon (2013); Li and Shamma (2014), where the system has the special structure described above. We consider a general information structure that allows us to capture scenarios with unobservable actions and imperfect observations that are not captured by Renault (2006); Cardaliaguet et al (2015); Gensbittel and Renault (2015); Li et al (2017); Zheng and Castanon (2013); Li and Shamma (2014). The problems investigated in Tang et al (2022); Nayyar et al (2014); Gupta et al (2014); Ouyang et al (2015, 2017); Vasal and Anastasopoulos (2016); Sinha and Anastasopoulos (2016); Gupta et al (2016); Nayyar et al (2013a) are the most closely related to our problem. The authors of Nayyar et al (2014); Gupta et al (2014, 2016); Nayyar et al (2013a) study a class of dynamic games where the agents' common information based belief (defined in Nayyar et al (2014)) is independent of their strategies, that is, there is no signaling among them. 
This property allows them to apply ideas from the common information approach developed in Nayyar et al (2011, 2013b), and define an equivalent dynamic game with symmetric information among fictitious agents. Consequently, they characterize a class of equilibria for dynamic games called Common Information based Markov Perfect Equilibria. Our results are different from those in Nayyar et al (2014); Gupta et al (2014, 2016); Nayyar et al (2013a) in two aspects. First, we consider a general class of dynamic games where the agents' CIB beliefs are strategy-dependent, thus, signaling is present. Second, the proposed approach in Nayyar et al (2014); Gupta et al (2014, 2016); Nayyar et al (2013a) requires the agents to keep track of all of their private information over time. We propose an approach to effectively compress the agents' private information, and consequently, reduce the number of variables which the agents need to form CIB beliefs. The authors of Tang et al (2022); Ouyang et al (2015, 2017); Vasal and Anastasopoulos (2016); Sinha and Anastasopoulos (2016) study a class of dynamic games with asymmetric information where signaling occurs. When the horizon in finite, the authors of Ouyang et al (2015, 2017) introduce the notion of Common Information Based Perfect Bayesian Equilibrium, and provide a sequential decomposition of the game over time. The authors of Vasal and Anastasopoulos (2016); Sinha and Anastasopoulos (2016) extend the results of Ouyang et al (2015, 2017) to finite horizon Linear-Quadratic-Gaussian (LQG) dynamic games and infinite horizon dynamic games, respectively. The work of Tang et al (2022) extends the model of Ouyang et al (2017) to games among teams of agents. Each agent has his own private information which he shares with the members of his own team with delay \(d\); teams also have common information. The authors of Tang et al (2022) consider two classes of strategies: sufficient private information based (SPIB) strategies, which only compress private information, and sufficient private and common information based (SPCIB) strategies, which compress both common and private information. They show that SPIB-strategy-based BNE exist and the set of payoff profiles of such equilibria is the same as the set of all BNE. They develop a backward inductive sequential procedure, whose solution, if it exists, provides a SPCIB BNE, and identify instances which guarantee the existence of SPCIB BNE. The class of dynamic games studied in Tang et al (2022); Ouyang et al (2015, 2017); Vasal and Anastasopoulos (2016); Sinha and Anastasopoulos (2016) satisfy the following assumptions: (i) agents' actions are observable (ii) each agent has a perfect observation of his own local states/-type (iii) conditioned on the agents' actions, the evolution of the local states are independent. We relax assumptions (i)-(iii) of Tang et al (2022); Ouyang et al (2015, 2017); Vasal and Anastasopoulos (2016); Sinha and Anastasopoulos (2016), and study a general class of dynamic games with asymmetric information, hidden actions, imperfect observations, and controlled and coupled dynamics. ### Contribution We study/analyze, in discrete time, a general class of sequential stochastic dynamic games with asymmetric information, where the underlying system is dynamic, the information structure is non-classical, at each time instant the agents have private and common information and their actions are hidden (each agent's actions are not directly observable by the other agents). 
Our key contribution is a methodology for the discovery of Bayesian Nash Equilibrium (BNE) strategy profiles that are based on the agents' compressed private and common information and can be determined sequentially in time moving backwards, if each step of this backward procedure has a solution. We present an example where such a BNE strategy profile exists. We show that our methodology works also for the case where the agents have no common observations and their actions are hidden. ### Organization The rest of the paper is organized as follows: We present the game's model along with the equilibrium concept in Section 2. We state our objective and present the methodology that achieves it in Section 3. In Section 4 we first introduce compressed versions of the agents' private and common information that are sufficient for decision making purposes; then we define Sufficient Information Based (SIB) strategies that are based on the agents' compressed information. In Section 5 we first introduce Sufficient Information Based Bayesian Nash Equilibrium (SIB-BNE); then we present a sequential decomposition of the game, that is, a backward inductive procedure that determines SIB-BNE if each step of this procedure has a solution. In Section 6 we present an example that highlights our solution methodology and where a SIB-BNE exists. In Section 7 we show that our solution methodology works for stochastic dynamic games where the agents have no common observations and each agent's actions are part of his private information. The comparison of the definitions of compressed private information as it appears in this paper and in Tavafoghi et al (2022), along with some of the technical details related to the existence of SIB-BNE for the example of Section 6 are presented in the Appendices. ## 2 Model We present our model for dynamic decision problems with strategic agents (dynamic games) below; this model is an analogue to the model of Tavafoghi et al (2022) for dynamic decision problems with non-strategic agents. ### System Dynamics There are \(N\) strategic agents who live in a dynamic Markovian world over horizon \(\mathcal{T}\!:=\!\{1,2,...,T\}\), \(T\!<\!\infty\). Let \(X_{t}\!\in\!\mathcal{X}_{t}\) denote the state of the world at \(t\!\in\!\mathcal{T}\). At time \(t\), each agent, indexed by \(i\!\in\!\mathcal{N}\!:=\!\{1,2,...,N\}\), chooses an action \(a_{t}^{i}\!\in\!\mathcal{A}_{t}^{i}\), where \(\mathcal{A}_{t}^{i}\) denotes the set of available actions to him at \(t\). Given the collective action profile \(A_{t}\!:=\!(A_{t}^{1},...,A_{t}^{N})\), the state of the world evolves according to the following stochastic dynamic equation, \[X_{t+1}=f_{t}(X_{t},A_{t},W_{t}^{x}), \tag{1}\] where \(W_{1:T-1}^{x}\) is a sequence of independent random variables. The initial state \(X_{1}\) is a random variable that has a probability distribution \(\mu_{0}\in\Delta(\mathcal{X}_{1})\). At every time \(t\in\mathcal{T}\), before taking an action, agent \(i\) receives a noisy private observation \(Y_{t}^{i}\in\mathcal{Y}_{t}^{i}\) of the current state of the world \(X_{t}\) and the action profile \(A_{t-1}\), given by \[Y_{t}^{i}=O_{t}^{i}(X_{t},A_{t-1},W_{t}^{i}), \tag{2}\] where \(W_{1:T}^{i}\), \(i\in\mathcal{N}\), are sequences of independent random variables. 
Moreover, at every \(t\in\mathcal{T}\), all agents receive a common observation \(Z_{t}\in\mathcal{Z}_{t}\) of the current state of the world \(X_{t}\) and the action profile \(A_{t-1}\), given by \[Z_{t}=O_{t}^{c}(X_{t},A_{t-1},W_{t}^{c}), \tag{3}\] where \(W_{1:T}^{c}\), is a sequence of independent random variables. We assume that the random variables \(X_{1}\), \(W_{1:T-1}^{x}\), \(W_{1:T}^{c}\), and \(W_{1:T}^{i}\), \(i\in\mathcal{N}\) are mutually independent. To avoid measure-theoretic technical difficulties and for clarity and convenience of exposition, we assume that all the random variables take values in finite sets. **Assumption 1**: _(finite game) The sets \(\mathcal{N}\), \(\mathcal{X}_{t}\), \(\mathcal{Z}_{t}\), \(\mathcal{Y}_{t}^{i}\), \(\mathcal{A}_{t}^{i}\), \(i\in\mathcal{N}\), are finite._ ### Information Structure Let \(H_{t}\) denote the aggregate information of all agents at time \(t\). Assuming that agents have perfect recall, we have \(H_{t}=\{Z_{1:t},Y_{1:t}^{1:N},A_{1:t-1}^{1:N}\}\), _i.e._\(H_{t}\) denotes the set of all agents' past and present observations and all agents' past actions. The set of all possible realizations of the agents' aggregate information is given by \(\mathcal{H}_{t}:=\prod_{\tau\leq t}\mathcal{Z}_{\tau}\times\prod_{i\in \mathcal{N}}\prod_{\tau\leq t}\mathcal{Y}_{\tau}^{i}\times\prod_{i\in\mathcal{ N}}\prod_{\tau<t}\mathcal{A}_{\tau}^{i}\). At time \(t\!\in\!\mathcal{T}\), the aggregate information \(H_{t}\) is not fully known to all agents. Let \(C_{t}:=\{Z_{1:t}\}\!\in\!\mathcal{C}_{t}\) denote the agents' common information about \(H_{t}\) and \(P_{t}^{i}:=\{Y_{1:t}^{i},A_{1:t-1}^{i}\}\backslash C_{t}\in\mathcal{P}_{t}^{i}\) denote agent \(i\)'s private information about \(H_{t}\), where \(\mathcal{P}_{t}^{i}\) and \(\mathcal{C}_{t}\) denote the set of all possible realizations of agent \(i\)'s private and common information at time \(t\), respectively. We assume that observations \(Y_{\tau}^{i}\), \(\tau\in\{1,2...,t\}\), and actions \(A_{\tau}^{i}\), \(\tau\in\{1,2...,t-1\}\), are known to agent \(i\) but are not necessarily fully known to all other agents, denoted by \(-i\), at \(t\in\mathcal{T}\). Therefore, we have \(P_{t}^{i}\subseteq\{Y_{1:t}^{i},A_{1:t-1}^{i}\}\) for all \(i\in\mathcal{N}\), and \(H_{t}=\left(\bigcup_{i\in\mathcal{N}}P_{t}^{i}\right)\cup C_{t}\) for all \(t\in\mathcal{T}\). As such, \(\left\{C_{t},P_{t}^{i},i\in\mathcal{N}\right\}\) form a partition of \(\mathcal{H}_{t}\) at every time \(t\in\mathcal{T}\). In Section 2.5, we discuss several instances of information structures that can be captured as special cases of our model. ### Strategies and Utilities: Let \(H_{t}^{i}:=\{C_{t},P_{t}^{i}\}\in\mathcal{H}_{t}^{i}\) denote the information available to agent \(i\) at \(t\), where \(\mathcal{H}_{t}^{i}\) denote the set of all possible realizations of agent \(i\)'s information at \(t\). Agent \(i\)'s _behavioral strategy_ at \(t\), denoted by \(g_{t}^{i}\), is defined by \[g_{t}^{i}:\mathcal{H}_{t}^{i}\rightarrow\Delta(\mathcal{A}_{t}^{i}) \tag{4}\] where \(\Delta(\mathcal{A}_{t}^{i})\) is the set of Probability Mass Functions (PMFs) on \(\mathcal{A}_{t}^{i}\). We denote by \[g^{i}:=(g_{1}^{i},g_{2}^{i},\ldots,g_{T}^{i}) \tag{5}\] a strategy of agent \(i\); \(g^{i}\in\mathcal{G}^{i}\), where \(\mathcal{G}^{i}\) is the set of admissible strategies described by (4)-(5). 
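To fix ideas, the following is a minimal simulation sketch of the primitives in (1)-(4): a Markovian state driven by the joint action profile, noisy private and common observations of the current state and the previous action profile, and behavioral strategies that map an agent's information to a PMF over his actions. The two-element state and action sets, the noise levels, the particular transition rule, and the placeholder uniform policy are illustrative assumptions and are not part of the model.

```python
import random

T, N = 3, 2                       # horizon and number of agents (illustrative)
STATES, ACTIONS = [0, 1], [0, 1]  # finite X_t and A_t^i (Assumption 1)

def f(x, a):                      # X_{t+1} = f_t(X_t, A_t, W_t^x): toy controlled dynamics
    w = random.random() < 0.1     # W_t^x: rare exogenous flip
    flip = (a[0] == a[1]) != w
    return x ^ 1 if flip else x

def obs_private(x, a_prev):       # Y_t^i = O_t^i(X_t, A_{t-1}, W_t^i): state seen with 20% noise
    return x if random.random() < 0.8 else x ^ 1

def obs_common(x, a_prev):        # Z_t = O_t^c(X_t, A_{t-1}, W_t^c): noisier public signal
    return x if random.random() < 0.6 else x ^ 1

def behavioral_strategy(h_i):     # g_t^i : H_t^i -> Delta(A_t^i); placeholder uniform policy
    return [0.5, 0.5]

x = random.choice(STATES)         # X_1 ~ mu_0 (uniform here)
a_prev = (0, 0)                   # dummy A_0; no action precedes t = 1
common, private = [], [[] for _ in range(N)]   # C_t and P_t^i (own past actions omitted)
for t in range(T):
    common.append(obs_common(x, a_prev))
    for i in range(N):
        private[i].append(obs_private(x, a_prev))
    a_prev = tuple(random.choices(ACTIONS,
                                  weights=behavioral_strategy((common, private[i])))[0]
                   for i in range(N))
    x = f(x, a_prev)
print("common history C_T:", common)
```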
We denote a strategy profile \(g\) by \[g:=(g^{1},g^{2},\ldots,g^{N}) \tag{6}\] \(g\in\mathcal{G}\), where \(\mathcal{G}\) is the set of admissible strategy profiles described by (4)-(6). We denote by \[g^{-i}:=(g^{1},\ldots,g^{i-1},g^{i+1},\ldots,g^{N}) \tag{7}\] Agent \(i\)'s instantaneous utility at \(t\) depends on the system state \(X_{t}\) and the collective action profile \(A_{t}\), and is given by \(u_{t}^{i}(X_{t},A_{t})\). Agent \(i\)'s total utility over horizon \(\mathcal{T}\), is given by, \[U^{i}(X_{1:T},A_{1:T})=\sum_{t\in\mathcal{T}}u_{t}^{i}(X_{t},A_{t}). \tag{8}\] ### Equilibrium Concept: We consider Bayesian Nash Equilibrium (BNE) as the solution concept (Fudenberg and Tirole, 1991). A strategy profile \(g^{*}=(g^{*1},g^{*2},\ldots,g^{*N})\) is a BNE if for all \(i\in\mathcal{N}\) \[\mathbb{E}^{g^{*}}\{U^{i}(X_{1:T},A_{1:T})\}\geq\mathbb{E}^{g^{*-i },\hat{g}^{i}}\{U^{i}(X_{1:T},A_{1:T})\},\ \ \forall\hat{g}^{i}\in\mathcal{G}^{i}. \tag{9}\] ### Special Cases We discuss several instances of dynamic games with asymmetric information that are special cases of the general model described above. _1) Nested information structure:_ Consider a two-player game with one informed player and one uninformed player and general Markovian dynamics. At every time \(t\!\in\!\mathcal{T}\), the informed player makes a private perfect observation of the state \(X_{t}\), _i.e._\(Y_{t}^{1}\!=\!X_{t}\). The uninformed player does not have any observation of the state \(X_{t}\). Both the informed and uninformed players observe each others' actions, _i.e._\(Z_{t}\!=\!\{A_{t-1}\}\). Therefore, we have \(P_{t}^{1}=\{X_{1:t}\}\), \(P_{t}^{2}=\emptyset\), and \(C_{t}\!=\!\{A_{1:t-1}^{1},A_{1:t-1}^{2}\}\) for all \(t\!\in\!\mathcal{T}\). The above nested information structure corresponds to dynamic games considered in Renault (2006); Cardaliaguet et al (2015); Renault (2012); Li and Shamma (2014, 2017); Zheng and Castanon (2013), where in Renault (2012); Li and Shamma (2017) the state \(X_{t}\) is static. _2) Delayed sharing information structure:_ Consider a \(N\)-player game with observable actions where agents observe each others' observations with \(d\)-step delay. That is, \(P_{t}^{i}=\{Y_{t-d+1:t}^{i}\}\) and \(C_{t}=\{Y_{1:t-d},A_{1:t-1}\}\). We note that in our model we assume that the agents' common observation \(Z_{t}\) at \(t\) is only a function of \(X_{t}\) and and \(A_{t-1}\). Therefore, to describe the game with delayed sharing information structure within the context of our model we need to augment our state space to include the agents' last \(d\) observations as part of the augmented state. Define \(\tilde{X}_{t}:=\{X_{t},M_{t}^{1},M_{t}^{2},...,M_{t}^{d}\}\) as the augmented system state where \(M_{t}^{i}:=\{A_{t-i},Y_{t-i}\}\in\mathcal{A}_{t-i}\times\mathcal{Y}_{t-i}\), \(i\in\mathcal{N}\); that is, \(M_{t}^{i}\) serves as a temporal memory for the agents' observation \(Y_{t-i}\) at \(t-i\). Then, we have \(\tilde{X}_{t+1}=\{X_{t+1},M_{t+1}^{1},M_{t+1}^{2},...,M_{t+1}^{d}\}=\{f_{t}(X _{t},A_{t},W_{t}^{x}),(Y_{t}),M_{t}^{1},...,M_{t}^{d-1}\}\) and \(Z_{t}=\{M_{t}^{d},A_{t-1}\}=\{Y_{t-d},A_{t-1}\}\). The above environment captures a connection between the symmetric information structure and asymmetric information structure. The information asymmetry among the agents increases as \(d\) increases. The above delayed sharing information structure corresponds to the dynamic game considered in Tavafoghi et al (2016). 
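As an aside, the following is a minimal sketch of the state augmentation just described for the delayed sharing case: the augmented state carries the \(d\) memory slots \(M_{t}^{k}=(A_{t-k},Y_{t-k})\) and the common observation exposes \(Z_{t}=(Y_{t-d},A_{t-1})\). The container types, the dummy transition \(f\), and the toy trajectory with \(d=2\) are assumptions chosen only for illustration.

```python
from collections import deque

def make_augmented_state(x1, d):
    """X~_t = (X_t, M_t^1, ..., M_t^d); each M_t^k stores (A_{t-k}, Y_{t-k})."""
    return {"x": x1, "memory": deque([None] * d, maxlen=d)}  # newest slot at index 0

def step(aug, a_t, y_t, f):
    """Advance the augmented state: push (A_t, Y_t) as the new M^1 and drop the old M^d."""
    aug["memory"].appendleft((a_t, y_t))
    aug["x"] = f(aug["x"], a_t)
    return aug

def common_observation(aug, a_prev):
    """Z_t = (Y_{t-d}, A_{t-1}): the oldest memory slot plus the last action profile."""
    oldest = aug["memory"][-1]            # M_t^d = (A_{t-d}, Y_{t-d}), or None early on
    y_delayed = None if oldest is None else oldest[1]
    return (y_delayed, a_prev)

# toy usage with d = 2 and a trivial deterministic transition
aug = make_augmented_state(x1=0, d=2)
f = lambda x, a: (x + sum(a)) % 2
a_prev = (0, 0)                           # dummy A_0
for t, (a_t, y_t) in enumerate([((0, 1), (0, 0)), ((1, 1), (1, 1))], start=1):
    print("Z_%d =" % t, common_observation(aug, a_prev))
    aug = step(aug, a_t, y_t, f)
    a_prev = a_t
```

Under this bookkeeping the common observation is again a function of the augmented state and the last action profile, as required by the form of (3).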
_3) Perfectly controlled dynamics with hidden actions:_ Consider a \(N\)-player game where the state \(X_{t}:=(X_{t}^{1},X_{t}^{2},...,X_{t}^{N})\) has \(N\) components. Agent \(i\), \(i\in\mathcal{N}\), perfectly controls \(X_{t}^{i}\), _i.e._\(X_{t+1}^{i}=A_{t}^{i}\). Agent \(i\)'s actions \(A_{t}^{i}\), \(t\in\mathcal{T}\), are not observable by all other agents \(-i\). Every agent \(i\), \(i\in\mathcal{N}\), makes a noisy private observation \(Y_{i}^{t}(X_{t},W_{t}^{i})\) of the system state at \(t\in\mathcal{T}\). Therefore, we have \(P_{t}^{i}:=\{A_{1:t},Y_{1:t}^{i}\}\), \(C_{t}=\emptyset\). ## 3 Objective and Methodology ### Objective Our objective is twofold: (i) To determine BNE strategy profiles that are based on compressed versions of the agents' private and common information. (ii) To compute the above-mentioned strategy profiles by a sequential decomposition of the game, that is, by a backward inductive sequential procedure that identifies an equilibrium strategy profile when every step of the procedure has a solution. ### Methodology We present a methodology that achieves the above-state objective and proceeds as follows: * Step 1. We determine a mutually consistent compression of the agents' private information that is sufficient for decision-making purposes (such a mutually consistent compression may not be unique). Based on this compression we introduce the Sufficient Private Information Based (SPIB) belief system. * Step 2. Based on the result of Step 1, we determine a compression of the agents' common information that is sufficient for decision-making purposes by defining the Common Information Based (CIB) belief system. The CIB belief system ensures that at each time instant each agent's CIB belief is consistent with his SPIB belief even when the agent deviates from his equilibrium strategy and plays an arbitrary strategy. Such a consistency implies that each agent forms his own CIB belief system, and each agent's CIB belief system is common knowledge among all agents. * Step 3. Based on the compression of the agents' private and common information we introduce Sufficient Information Based (SIB) strategies for each agent (i.e., strategies that depend at each time on the agent's sufficient private information and the CIB belief system) and SIB BNE. We show that SIB strategies satisfy a key closedness of best response property. Based on this property we provide a sequential decomposition of the game, that is, a backward inductive sequential procedure that determines a SIB BNE if each step of the procedure has a solution. * Step 4. We provide an example of a stochastic dynamic game with asymmetric information and hidden/unobservable actions where a SIB BNE exists. ## 4 Compression of Private and Common Information In Section 4.1 we characterize/determine mutually consistent compressions of all agents' private information that are sufficient for decision-making purposes. In Section 4.2 we introduce the common information based belief, a compressed version of the agents' common information, that is sufficient for decision making purposes. ### Sufficient private information (Step 1) We present/consider a compression of the agents' private information that is done in a mutually consistent manner so that the compressed information is sufficient for decision making purposes. **Definition 1** (Sufficient private information).: _We say that \(S_{t}^{i},i=1,\ldots,N\), is sufficient private information for the agents if_ 1. 
\(S_{t}^{i}\) _is a function of_ \(H_{t}^{i}\) _such that_ \(S_{t}^{i}=\zeta_{t}^{i}(H_{t}^{i})\) _for some commonly known functions_ \(\zeta_{t}^{i},i=1,2,\ldots,N\)_._ 2. \(S_{t}^{i}\) _can be sequentially updated as_ \(S_{t}^{i}=\phi_{t}^{i}(S_{t-1}^{i},Y_{t}^{i},Z_{t},A_{t-1}^{i})\) _using some commonly known functions_ \(\phi_{t}^{i},i=1,2,\ldots,N\)_._ 3. _For any realization_ \(x_{t},p_{t}^{-i},p_{t}^{i},c_{t}\)_, and the corresponding_ \(s_{t}^{-i}=\zeta_{t}^{-i}(p_{t}^{-i},c_{t})\) _and_ \(s_{t}^{i}=\zeta_{t}^{i}(p_{t}^{i},c_{t})\)_, and any strategy profile_ \(g\)_, where_ \(g_{t}^{i}:\mathcal{S}_{t}^{i}\times C_{t}\rightarrow\Delta(\mathcal{A}_{t}^{i}),\forall i,\forall t\)_, such that_ \(\mathbb{P}^{g}(p_{t}^{i},c_{t})>0\)_,_ \[\mathbb{P}^{g}(x_{t},s_{t}^{-i}\mid s_{t}^{i},c_{t})=\mathbb{P}^{g}(x_{t},s_{t }^{-i}\mid p_{t}^{i},c_{t})\] (10) **Remark 1**.: _A similar definition of sufficient private information for dynamic teams appears in (Tavafoghi et al, 2022, Definition 2). This definition is slightly different from Definition 1 above because the objectives in Tavafoghi et al (2022) and this paper are different. In Appendix.1 we show that sufficient private information satisfying Definition 1 may violate condition (ii) of Definition 2 in Tavafoghi et al (2022). In Tavafoghi et al (2022) the compression of private (and common) information must entail no loss in performance, that is, we must be able to determine globally optimal team strategy profiles that are based on compressed private and common information. In this paper the goal is to determine BNE strategy profiles that are based on compressed information and be sequentially computed (if such BNE strategy profiles exist). We are not concerned about the equilibria we may lose when we compress information; therefore, we don't need condition (ii) of Definition 2 in Tavafoghi et al (2022). Definition 1 characterizes a set of compressions for agents' private information. In the following, we show the set of sufficient private information \(S_{t}^{i}\), \(i\in\mathcal{N}\), \(t\in\mathcal{N}\), is rich enough to form belief systems on information sets of realizations with positive or zero probability. Let \(\tilde{g}^{i}\) denote the uniform strategy that assigns equal probability to every action of agent \(i\in\mathcal{N}\). Below we show that the policy-independence property of belief (Tavafoghi et al, 2022, Theorem 1) for agent \(i\) is still true when the private information \(p_{t}^{i}\) is replaced with the sufficient private information \(s_{t}^{i}\). That is, \(\mathbb{P}^{\tilde{g}^{i},g^{-i}}(x_{t},x_{t}^{-i}\mid s_{t}^{i},c_{t})\) constructed by \((\tilde{g}^{i},g^{-i})\) captures agent \(i\)'s belief based on \(h_{t}^{i}\) even when he plays an arbitrary strategy \(\hat{g}^{i}\), not necessarily the same as \(g^{i}\) or \(\tilde{g}^{i}\), provided that agents \(-i\) play \(g^{-i}\). Lemma 1: _For \(h_{t}^{i}\) such that \(\mathbb{P}^{\tilde{g}^{i},g^{-i}}(h_{t}^{i})>0\), we have \(\mathbb{P}^{\tilde{g}^{i},g^{-i}}(h_{t}^{i})>0\) and_ \[\mathbb{P}^{\tilde{g}^{i},g^{-i}}(x_{t},s_{t}^{-i}\mid h_{t}^{i})=\mathbb{P} ^{\tilde{g}^{i},g^{-i}}(x_{t},s_{t}^{-i}\mid h_{t}^{i})=\mathbb{P}^{\tilde{g} ^{i},g^{-i}}(x_{t},s_{t}^{-i}\mid s_{t}^{i},c_{t}). \tag{11}\] Proof: Note that \(\mathbb{P}^{\tilde{g}^{i}}(a_{t}^{i})=1/|\mathcal{A}_{t}^{i}|\), so \(\mathbb{P}^{\tilde{g}^{i},g^{-i}}(h_{t}^{i})>0\) given that \(\mathbb{P}^{g}(h_{t}^{i})>0\). 
Then from part (i) of the definition of sufficient private information and part (i) of Theorem 1 in Tavafoghi et al (2022) we have \[\mathbb{P}^{\tilde{g}^{i},g^{-i}}(x_{t},s_{t}^{-i}\mid h_{t}^{i}) =\sum_{h_{t}^{-i}:\zeta_{t}^{-i}(h_{t}^{-i})=s_{t}^{-i}}\mathbb{P }^{\tilde{g}^{i},g^{-i}}(x_{t},h_{t}^{-i}\mid h_{t}^{i})\] \[=\sum_{h_{t}^{-i}:\zeta_{t}^{-i}(h_{t}^{-i})=s_{t}^{-i}}\mathbb{P }^{\tilde{g}^{i},g^{-i}}(x_{t},h_{t}^{-i}\mid h_{t}^{i})\] \[=\mathbb{P}^{\tilde{g}^{i},g^{-i}}(x_{t},s_{t}^{-i}\mid h_{t}^{i }). \tag{12}\] Furthermore, from condition (iii) of the definition of sufficient private information we have \[\mathbb{P}^{\tilde{g}^{i},g^{-i}}(x_{t},s_{t}^{-i}\mid h_{t}^{i})=\mathbb{P} ^{\tilde{g}^{i},g^{-i}}(x_{t},s_{t}^{-i}\mid s_{t}^{i},c_{t}). \tag{13}\] ### CIB Belief System (Step 2) Given the compressed private information, we next compress the agents' common information in the form of a belief system. We call such a compressed belief system the Common Information Based (CIB) belief system. Similar to Tang et al (2022); Ouyang et al (2017), the CIB belief system is sufficient for decision-making if it is common knowledge among all agents, and every agent \(i\) can compute his belief about the system state and the other agents' sufficient private information using the CIB belief system and his compressed private information. More specifically, agent \(i\) should be able to compute \(\mathbb{P}^{\hat{g}^{i},g^{-i}}(x_{t},s_{t}\mid h_{t}^{i})\) using the CIB belief system and his sufficient private information \(s_{t}^{i}\) whenever other agents follow the strategy profile \(g^{-i}\) and agent \(i\) plays an arbitrary strategy \(\hat{g}^{i}\). To determine a CIB belief system that satisfies the above sufficiency requirement we proceed as follows. We first define \(N\) CIB belief systems \(\Pi^{\psi}:=\{\Pi^{\psi,1},\Pi^{\psi,2},\ldots,\Pi^{\psi,N}\}\), one for each agent (Definition 2 below). Each belief system \(\Pi^{\psi,i}\) consists of a sequence of PMFs on \(\mathcal{X}_{t}\times\mathcal{S}_{t}\) that are sequentially updated according to an update rule \(\psi=(\psi^{1},\psi^{2},\ldots,\psi^{N})\) that is common knowledge among the agents; for each realization \(c_{t}\) of the common information available at \(t\), \(\pi_{t}^{\psi,i}\) describes the belief on \(\mathcal{X}_{t}\times\mathcal{S}_{t}\) based on \(c_{t}\) from agent \(i\)'s point of view. We want \(\pi_{t}^{\psi,i}\), combined with \(s_{t}^{i}\), to enable agent \(i\) to form his own sufficient information-based private belief (given by \(\mathbb{P}^{\hat{g}^{i},g^{\star-i}}(x_{t},s_{t}\mid s_{t}^{i},c_{t})\)) about the current status of the game. Furthermore, we want the CIB belief system to capture the current status of the game when agents utilize strategies based on \((S_{t},\Pi_{t}^{\psi})\). For that matter, we define the notion/concept of Sufficient Information Based (SIB) strategy profile \(\sigma:=(\sigma^{i},i\in\mathcal{N})\), \(\sigma^{i}:=(\sigma^{i}_{t},t\in\mathcal{T}),i\in\mathcal{N}\). Each component \(\sigma^{i}_{t}\) of \(\sigma\) is a function of \(s_{t}^{i}\), agent \(i\)'s sufficient private information at \(t\), and \(\pi_{t}^{\psi}=(\pi_{t}^{\psi,i},i\in\mathcal{N})\) (see Definition 3 below). 
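Before stating the formal definitions, the following sketch records the bookkeeping implied by condition (ii) of Definition 1 and by the update plan above: each agent advances his sufficient private information through the maps \(\phi_{t}^{i}\) from \((S_{t-1}^{i},Y_{t}^{i},Z_{t},A_{t-1}^{i})\), while the CIB beliefs \(\Pi_{t}^{\psi,i}\) are advanced from the previous beliefs and the new common observation. The placeholder compression that keeps only the latest private observation, and the dummy update rules, are assumptions for illustration; whether such a compression satisfies condition (iii) depends on the specific game.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Tuple

# phi_t^i: (S_{t-1}^i, Y_t^i, Z_t, A_{t-1}^i) -> S_t^i   (Definition 1, condition (ii))
PrivateUpdate = Callable[[Any, Any, Any, Any], Any]
# psi_t^i: (Pi_{t-1}, Z_t) -> Pi_t^i                      (update plan for the CIB beliefs)
BeliefUpdate = Callable[[List[Dict[Tuple, float]], Any], Dict[Tuple, float]]

@dataclass
class AgentState:
    s: Any                # current sufficient private information S_t^i
    phi: PrivateUpdate

    def advance(self, y_t, z_t, a_prev_i):
        self.s = self.phi(self.s, y_t, z_t, a_prev_i)
        return self.s

# Placeholder compression: keep only the most recent private observation.
# This is an assumption for illustration; it need not satisfy condition (iii) in general.
keep_last_observation: PrivateUpdate = lambda s_prev, y_t, z_t, a_prev_i: y_t

def advance_cib(pi_prev: List[Dict[Tuple, float]], z_t, psi: List[BeliefUpdate]):
    """Pi_t^{psi,i} = psi_t^i(Pi_{t-1}^psi, Z_t): one belief per agent, driven by Z_t."""
    return [psi_i(pi_prev, z_t) for psi_i in psi]

# toy usage: two agents, beliefs over (x, s^1, s^2) left untouched by dummy update rules
agents = [AgentState(s=None, phi=keep_last_observation) for _ in range(2)]
pi = [{(0, None, None): 1.0} for _ in range(2)]
dummy_psi = [lambda prev, z, i=i: prev[i] for i in range(2)]   # placeholder psi_t^i
pi = advance_cib(pi, z_t=1, psi=dummy_psi)
s_new = [ag.advance(y_t=y, z_t=1, a_prev_i=0) for ag, y in zip(agents, [0, 1])]
print(pi, s_new)
```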
Using the \(N\) CIB belief systems and the SIB strategy profile \(\sigma\) we define update equations for each \(\pi_{t}^{\psi,i}\) so that each \(\pi_{t}^{\psi,i}\) is consistent with \(s_{t}^{i}\) and with agent \(i\)'s sufficient private information-based belief \(\mathbb{P}^{\hat{g}^{i},g^{\star-i}}(x_{t},s_{t}\mid s_{t}^{i},c_{t})\), defined in Section 4.1 (Definition 1), and each \(\pi_{t}^{\psi,i}\) is common knowledge among all agents (see Definition 4 below). We proceed with the (formal) definitions. **Definition 2** (Common information based (CIB) belief system).: _Given a sequence of update functions \(\psi=\{\psi_{t}^{i},i\in\mathcal{N},t\in\mathcal{T}\}\) that are common knowledge among the \(N\) agents, sequentially define_ \[\Pi_{t}^{\psi,i}=\psi_{t}^{i}(\Pi_{t-1}^{\psi},Z_{t}),\quad i\in\mathcal{N},t\in\mathcal{T} \tag{14}\] _where_ \[\Pi_{t}^{\psi}:=\begin{bmatrix}\Pi_{t}^{\psi,1}\\ \vdots\\ \Pi_{t}^{\psi,N}\end{bmatrix},t\in\mathcal{T} \tag{15}\] \[\Pi_{0}^{\psi}:=\left[\begin{array}{c}\mu_{0}\\ \vdots\\ \mu_{0}\end{array}\right] \tag{16}\] _The sequence \(\Pi_{1:T}^{\psi}=(\Pi_{1}^{\psi},\Pi_{2}^{\psi},\ldots,\Pi_{T}^{\psi})\) defines a CIB belief system; \(\Pi_{t}^{\psi,i}\) denotes the CIB belief over \(\mathcal{X}_{t}\times\mathcal{S}_{t}\) based on \(C_{t}\) from agent \(i\)'s point of view._ **Definition 3** (SIB strategy).: _Given a CIB belief system \(\Pi_{1:T}^{\psi}\), we define a Sufficient Information Based (SIB) strategy profile \(\sigma:=(\sigma^{1},\sigma^{2},\ldots,\sigma^{N})\), \(\sigma^{i}:=(\sigma_{1}^{i},\sigma_{2}^{i},\ldots,\sigma_{T}^{i})\) by the maps_ \[\sigma_{t}^{i}:\mathcal{S}_{t}^{i}\times[\Delta(\mathcal{X}_{t}\times\mathcal{S}_{t})]^{N}\rightarrow\Delta(\mathcal{A}_{t}^{i}),\quad t=1,2,\ldots,T,\ i=1,2,\ldots,N. \tag{17}\] Based on Definitions 2 and 3 we present a set of conditions that an individual CIB belief system \((\Pi_{t}^{\psi,i},t\in\mathcal{T})\) must satisfy so as to ensure that each agent \(i\) can form his own (private) belief about the current status of the game, given by \((X_{t},S_{t})\), using \(\Pi_{t}^{\psi}\) and \(S_{t}^{i}\) when all other agents \(-i\) employ SIB strategies \(\sigma^{-i}\). This set of conditions describes a sequential update rule for \(\Pi_{t}^{\psi,i}\); the update rule depends on whether or not the (new) common observation at \(t\) is feasible under the agents' strategies. **Definition 4** (Consistent CIB belief system).: _Consider a SIB strategy profile \(\sigma\). Let \(F_{t}^{i}(x_{t+1},s_{t+1},z_{t+1})(\pi_{t}^{\psi};\ \sigma_{t}^{-i})\) denote the CIB belief about \((x_{t+1},s_{t+1},z_{t+1})\) constructed recursively by assuming that (i) \((x_{t},s_{t})\) is distributed according to \(\pi_{t}^{\psi,i}\), (ii) agent \(i\) employs the uniform strategy \(\tilde{g}^{i}\) at \(t\) (i.e., the strategy that chooses every action \(a_{t}^{i}\in\mathcal{A}_{t}^{i}\) with equal probability), and (iii) agents \(-i\) play according to \(\sigma_{t}^{-i}\). 
That is,_ \[F_{0}^{i}(x_{1},s_{1},z_{1})= \sum_{y_{1}}\left[\mathbb{P}\{z_{1},y_{1}\mid x_{1}\}\mu_{0}(x_{1 })\left(\prod_{j}\mathbb{1}\{s_{1}^{j}=\phi_{1}^{j}(z_{1},y_{1}^{j})\}\right)\right] \tag{18}\] _at \(t=1\), and for \(t\geq 1\)._ \[F_{t}^{i}(x_{t+1},s_{t+1},z_{t+1})(\pi_{t}^{\psi};\ \sigma_{t}^{-i})\] \[= \sum_{y_{t+1},x_{t},s_{t},a_{t}}\left[\mathbb{P}\{z_{t+1},y_{t+1},x_{t+1}\mid x_{t},a_{t}\}\left(\prod_{j}\mathbb{1}\{s_{t+1}^{j}=\phi_{t+1}^{ j}(s_{t}^{j},z_{t+1},y_{t+1}^{j},a_{t}^{j})\}\right)\right.\] \[\left.\left(\frac{1}{\mid A_{t}^{i}}\prod_{j\neq i}\sigma_{t}^{j}( a_{t}^{j})(\pi_{t}^{\psi},s_{t}^{j})\right)\pi_{t}^{\psi,i}(x_{t},s_{t})\right] \tag{19}\] _We define the update rule \(\psi^{\sigma}=(\psi_{t}^{\sigma,i},i\in\mathcal{N},t\in\mathcal{T})\) and the corresponding CIB belief system \(\Pi_{1:T}^{\psi^{\sigma}}\) as follows. At any \(t\)_ 1. _If_ \(\sum_{\hat{x}_{t+1},\hat{s}_{t+1}}F_{t}^{i}(\hat{x}_{t+1},\hat{s}_{t+1},z_{t+1})( \pi_{t}^{\psi^{\sigma}};\ \sigma_{t}^{-i})>0\) _(i.e. the new common observation_ \(z_{t+1}\) _is feasible from the agent_ \(i\)_'s point of view), then_ \(\pi_{t+1}^{\psi^{\sigma},i}\) _can be updated recursively as_ \[\pi_{t+1}^{\psi^{\sigma},i}(x_{t+1},s_{t+1})=\frac{F_{t}^{i}(x_{t+1},s_{t+1},z_ {t+1})(\pi_{t}^{\psi^{\sigma}};\ \sigma_{t}^{-i})}{\sum_{\hat{x}_{t+1},\hat{s}_{t+1}}F_{t}^{i}( \hat{x}_{t+1},\hat{s}_{t+1},z_{t+1})(\pi_{t}^{\psi^{\sigma}};\ \sigma_{t}^{-i})},\] (20) _via Bayes rule._ 2. _If_ \(\sum_{\hat{x}_{t+1},\hat{s}_{t+1}}F_{t}^{i}(\hat{x}_{t+1},\hat{s}_{t+1},z_{t+1} )(\pi_{t}^{\psi^{\sigma}};\ \sigma_{t}^{-i})=0\) _(i.e. the new common observation_ \(z_{t+1}\) _is infeasible from the agent_ \(i\)_'s point of view), then the update rule is_ \[\pi_{t+1}^{\psi^{\sigma},i}(x_{t+1},s_{t+1})=\frac{1}{|\mathcal{X}_{t+1}\times \mathcal{S}_{t+1}|}.\] (21) Based on (20) and (21) we can write \[\Pi_{t+1}^{\psi^{\sigma},i}=\psi_{t+1}^{\sigma,i}(\Pi_{t}^{\psi^{ \sigma}},Z_{t+1}). \tag{22}\] \[\Pi_{t+1}^{\psi^{\sigma}}=\psi_{t+1}^{\sigma}(\Pi_{t}^{\psi^{ \sigma}},Z_{t+1}). \tag{23}\] Furthermore, for all \(i\in\mathcal{N}\), each agent can determine if \(\sum_{\hat{x}_{t+1},\hat{s}_{t+1}}F_{t}^{i}(\hat{x}_{t+1},\hat{s}_{t+1},z_{t+1 })(\pi_{t}^{\sigma^{\psi}};\ \sigma_{t}^{-i})\) is positive or zero; thus each agent knows how agent \(i\) computes \(\pi_{t+1}^{\psi^{\sigma},i}\) from \(\sigma_{t}^{i},z_{t+1},\sigma_{t}^{-i}\) and \(\psi^{\sigma}\). Therefore, \(\pi_{t}^{\psi^{\sigma},i}\) (hence \(\pi_{t}^{\psi^{\sigma}}\)) is common knowledge among all agents. We call \(\Pi_{1:T}^{\psi^{\sigma}}\) the CIB belief system consistent with the SIB strategy profile \(\sigma\). **Remark 2**.: _Since the sufficient private information is a function of the agent's available information, a SIB strategy \(\sigma_{t}^{i}\) corresponds to a strategy \(g_{t}^{i,\sigma}\) given by \(g_{t}^{i,\sigma}(h_{t}^{i}):=\sigma_{t}^{i}(\zeta_{t}^{i}(h_{t}^{i}),\pi_{t}^ {\psi^{\sigma}})\). 
Therefore, in the rest of the paper we use the following convention: \(\mathbb{P}^{\sigma}(\cdot)=\mathbb{P}^{\sigma^{\prime}}(\cdot)\) and \(\mathbb{E}^{\sigma}[\cdot]=\mathbb{E}^{g^{\sigma}}[\cdot]\)._ **Remark 3**.: _There are many alternative specifications of the update rule \(\psi_{t}^{\sigma},t\in\mathcal{T}\) defined by (22)-(23), that result in consistent CIB belief systems, that is, CIB belief systems which ensure that (i) agent \(i\) can form his private belief over \((X_{t},S_{t}^{-i})\) by incorporating his private sufficient information \(S_{t}^{i}\) into his CIB belief \(\Pi_{t}^{\psi^{\sigma},i}\) given that agents \(-i\) play according to \(\sigma^{-i}\), (ii) agent \(i\)'s private belief formed according to \(i\) is identical to the probability distribution over \((X_{t},S_{t}^{-i})\) conditional on his complete history \(H_{t}^{i}\) even when he plays an arbitrary strategy \(\hat{g}^{i}\) different from \(\sigma^{i}\). An example of such an alternative update rule is described by (20) (Bayes' rule) when \(\sum_{\hat{x}_{t+1},\hat{s}_{t+1}}F_{t}^{i}(\hat{x}_{t+1},\hat{s}_{t+1},z_{t+1 })(\pi_{t}^{\psi^{\sigma}};\ \sigma_{t}^{-i})>0\) and a arbitrary PMF \(\pi_{t+1}^{\psi^{\sigma},i}(\cdot,\cdot)\) on \(X_{t+1}\times S_{t+1}\) when \(\sum_{\hat{x}_{t+1},\hat{s}_{t+1}}F_{t}^{i}(\hat{x}_{t+1},\hat{s}_{t+1},z_{t+1 })(\pi_{t}^{\psi^{\sigma}};\ \sigma_{t}^{-i})=0\)._ Definition 4 ensures that agent \(i\) can form his beliefs over \((X_{t},S_{t}^{-i})\) by incorporating his sufficient private information \(S_{t}^{i}\) into his CIB belief \(\Pi_{t}^{\psi^{\sigma},i}\) given that agents \(-i\) play according to \(\sigma^{-i}\). Moreover, this belief is sufficient to compute the probability distribution over \((X_{t},S_{t}^{-i})\) conditional on his complete history \(H_{t}^{i}\) even when he plays an arbitrary strategy \(\hat{g}^{i}\) different from \(\sigma^{i}\). We formalize the above discussion in Lemma 2 below, by using the notation \(\mathbb{P}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}(\cdot)\) to indicate the belief resulting when agent \(i\) plays \(\hat{g}^{i}\) and agents \(-i\) play \(g^{-i,\sigma}(h_{t}^{-i})=\sigma_{t}^{-i}(\zeta_{t}^{-i}(h_{t}^{-i}),\pi_{t}^{ \psi^{\sigma}})\) using the update rule \(\psi^{\sigma}\). **Lemma 2**.: _Consider a SIB strategy profile \(\sigma\), along with an associated consistent CIB belief system \(\Pi_{t}^{\psi^{\sigma}}\). Suppose \((x_{t},h_{t}^{i},h_{t}^{-i}\) is a realization with positive probability under \((\hat{g}^{i},\sigma^{-i})\), where \(\hat{g}^{i}\) denotes an arbitrary strategy for agent \(i\). Let \(s_{t}^{i}=\zeta_{t}^{i}(h_{t}^{i})\) and \(s_{t}^{-i}=\zeta_{t}^{-i}(h_{t}^{-i})\) be the associated sufficient private information. Then agent \(i\)'s belief at time \(t\) can be computed using \(\pi_{t}^{\psi^{\sigma}}\) as_ \[\mathbb{P}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}(x_{t},s_{t}^{-i}\mid h_{t}^ {i})=\frac{\pi_{t}^{\psi^{\sigma},i}(x_{t},s_{t})}{\sum_{s_{t}^{-i},x_{t}}\pi_ {t}^{\psi^{\sigma},i}(x_{t},s_{t}^{i},s_{t}^{-i})} \tag{24}\] Proof.: From Lemma 1 we have \[\mathbb{P}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}(x_{t},s_{t}^{-i}\mid h_{t}^ {i})=\mathbb{P}^{\bar{g}^{i},\sigma^{-i},\psi^{\sigma}}(x_{t},s_{t}^{-i}\mid h _{t}^{i}).=\mathbb{P}^{\bar{g}^{i},\sigma^{-i},\psi^{\sigma}}(x_{t},s_{t}^{-i} \mid c_{t},s_{t}^{i}). 
\tag{25}\] By Bayes' rule we obtain \[\mathbb{P}^{\bar{g}^{i},\sigma^{-i},\psi^{\sigma}}(x_{t},s_{t}^{-i}\mid c_{t},s_{t}^{i})=\frac{\mathbb{P}^{\bar{g}^{i},\sigma^{-i},\psi^{\sigma}}(x_{t},s_ {t}\mid c_{t})}{\mathbb{P}^{\bar{g}^{i},\sigma^{-i},\psi^{\sigma}}(s_{t}^{i} \mid c_{t})}=\frac{\pi_{t}^{\psi^{\sigma},i}(x_{t},s_{t})}{\sum_{s_{t}^{-i},x_ {t}}\pi_{t}^{\psi^{\sigma},i}(x_{t},s_{t}^{i},s_{t}^{-i})}. \tag{26}\] Combination of (25) and (26) establishes the assertion of Lemma 2. **Remark 4**.: _Suppose \(X_{t}=(X_{t}^{1},X_{t}^{2},\ldots,X_{t}^{N})\) and we have the conditional independence property, namely, that for any strategy profile \(g\)\(\mathbb{P}^{g}(x_{t},s_{t}\mid c_{t})=\prod_{i}\mathbb{P}^{g^{i}}(x_{t}^{i},s_ {t}^{i}\mid c_{t})\). Then one can show for any \(i\) that_ \[\pi_{t}^{\psi^{\sigma},i}(x_{t},s_{t})=\prod_{j}\pi^{\psi^{\sigma},i}(x_{t}^{j },s_{t}^{j})=\mathbb{P}^{\bar{g}_{t}^{i}}(x_{t}^{i},s_{t}^{i}\mid c_{t})\prod _{j\neq i}\mathbb{P}^{\sigma^{j}}(x_{t}^{j},s_{t}^{j}\mid c_{t})\] _Therefore, for settings with the conditional independence property as in Tang et al (2022); Ouyang et al (2017), one can use the simplified beliefs \(\mathbb{P}^{\bar{g}_{t}^{i}}(x_{t}^{i},s_{t}^{i}\mid c_{t})\) and \(\mathbb{P}^{\sigma^{j}}(x_{t}^{j},s_{t}^{j}\mid c_{t})\) as the compressed common information to compute the CIB belief \(\pi_{t}^{\psi^{\sigma},i}(x_{t},s_{t})\). The conditional independence among the system components in the models of Tang et al (2022); Ouyang et al (2017) could be lost when the agents' actions are not observable._ ## 5 Sequential decomposition (Step 3) In this section we present a sequential decomposition of the game, that is, a backward inductive sequential procedure that determines a Sufficient Information Based Bayesian Nash Equilibrium (SIB-BNE), defined below, if each step of this procedure has a solution. We proceed as follows. We first establish a key closedness of best response property (Section 5.1); we use this property to provide a sequential decomposition of the game (Section 5.2) Definition 5 (Sib-Bne): Consider a SIB strategy profile \(\sigma^{*}=(\sigma^{*1},\sigma^{*2},\ldots,\sigma^{*n})\) and its corresponding consistent update rule \(\psi^{\sigma^{*}}\). The SIB strategy profile \(\sigma^{*}\) is a SIB-BNE if it is a BNE of the dynamic game. That is, for all \(i\in\mathcal{N}\), \[\mathbb{E}^{\hat{g}^{i},\sigma^{*-i},\psi^{\sigma^{*}}}\{U^{i}(X_ {1:T},A_{1:T})\}\leq\mathbb{E}^{\sigma^{*},\psi^{\sigma^{*}}}\{U^{i}(X_{1:T},A _{1:T})\},\] \[\text{for all strategies (not necessarily SIB strategies) }\hat{g}^{i}. \tag{27}\] ### Closedness of best response The key result of this subsection is presented in the following theorem. Theorem 5.1: _Consider a fixed and known SIB strategy profile \(\sigma\) and the corresponding update rule \(\psi^{\sigma}\). Suppose agents \(-i\) use \(\sigma^{-i}\) with \(\psi^{\sigma}\). Then, there exists a SIB strategy \(\hat{\sigma}^{i}\) that uses \(\psi^{\sigma}\) and is a best response to \(\sigma^{-i}\) with \(\psi^{\sigma}\)._ The proof is based on Lemmas 3, 4, and 5 that we state and prove below. 
Lemma 3: _Consider a SIB strategy profile \(\sigma\) and the corresponding update rule \(\psi^{\sigma}\) along with the consistent CIB belief system \(\Pi^{\psi^{\sigma}}_{1:T}\)._ _If agents \(-i\) play according to the SIB strategies \(\sigma^{-i}\) and use the update rule \(\psi^{\sigma}\), the best response problem for agent \(i\) is a POMDP with state and observation processes_ \[\tilde{X}_{t}=(S_{t},\Pi^{\psi^{\sigma}}_{t},X_{t}),t\in\mathcal{T} \tag{28}\] \[\tilde{Y}_{t}=(Y^{i}_{t},Z_{t}),t\in\mathcal{T} \tag{29}\] _respectively, and instantaneous utility_ \[\tilde{u}^{i}_{t}(\tilde{X}_{t},A^{i}_{t})=\sum_{a^{-i}_{t}}\big{(}\prod_{j \neq i}\sigma^{j}_{t}(a^{j}_{t}\mid S^{j}_{t},\Pi^{\psi^{\sigma}}_{t})\big{)} u^{i}_{t}(X_{t},a^{-i}_{t},A^{i}_{t}),t\in\mathcal{T} \tag{30}\] The assertion of Lemma 3 is a direct consequence of Lemmas 4 and 5. Lemma 4: _Consider a SIB strategy profile \(\sigma\) and the corresponding update rule \(\psi^{\sigma}\). Suppose agents \(-i\) play according to the SIB strategies \(\sigma^{-i}\) using \(\psi^{\sigma}\) _and agent \(i\) follows an arbitrary strategy \(\hat{g}^{i}\) (not necessarily a SIB strategy). Then_ \[\mathbb{P}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}(\tilde{x}_{t+1},\tilde{y}_{t+ 1}\mid\tilde{x}_{1:t},\tilde{y}_{1:t},a^{i}_{1:t})=\mathbb{P}^{\hat{g}^{i}\sigma ^{-i},\psi^{\sigma}}(\tilde{x}_{t+1},\tilde{y}_{t+1}\mid\tilde{x}_{t},a^{i}_{ t}) \tag{31}\] Proof.: The probability for the next state and observation \(\tilde{x}_{t+1},\tilde{y}_{t+1}\) can be computed by \[\mathbb{P}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}(\tilde{x}_{t+ 1},\tilde{y}_{t+1}\mid\tilde{x}_{1:t},\tilde{y}_{1:t},a^{i}_{1:t})\] \[= \mathbb{P}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}(x_{t+1},\pi^{ \psi^{\sigma}}_{t+1},s_{t+1},y^{i}_{t+1},z_{t+1}\mid x_{1:t},\pi^{\psi^{\sigma }}_{1:t},s_{1:t},y^{i}_{1:t},z_{1:t},a^{i}_{1:t})\] \[= \sum_{y^{-i}_{t+1},a^{-i}_{t}}\mathbb{P}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}(x_{t+1},\pi^{\psi^{\sigma}}_{t+1},s_{t+1},y_{t+1},z_{t+1},a^{ -i}_{t}\mid x_{1:t},\pi^{\psi^{\sigma}}_{1:t},s_{1:t},y^{i}_{1:t},z_{1:t},a^{ i}_{1:t})\] \[= \sum_{y^{-i}_{t+1},a^{-i}_{t}}\big{(}\prod_{j}\mathds{1}(s^{j}_{ t+1}=\phi^{j}_{t+1}(s^{j}_{t},y^{j}_{t+1},z_{t+1},a^{j}_{t}))\big{)}\mathbb{P} \{z_{t+1},y_{t+1},x_{t+1}\mid x_{t},a_{t}\}\] \[\mathds{1}(\pi^{\psi^{\sigma}}_{t+1}=\psi^{\sigma}_{t+1}(\pi^{\psi ^{\sigma}}_{t},z_{t+1}))\big{(}\prod_{j\neq i}\sigma^{j}_{t}(a^{j}_{t}\mid s^ {j}_{t},\pi^{\psi^{\sigma}}_{t})\big{)} \tag{32}\] where the last equality follows from the system dynamics, part (ii) of Definition 1, Definition 4, and the form of SIB strategies of agents \(-i\). Since the right hand side of (32) depends only on \((\tilde{x}_{t},a^{i}_{t})\) we conclude that \[\mathbb{P}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}(\tilde{x}_{t+1},\tilde{y}_ {t+1}\mid\tilde{x}_{1:t},\tilde{y}_{1:t},a^{i}_{1:t})=\mathbb{P}^{\hat{g}^{i}, \sigma^{-i},\psi^{\sigma}}(\tilde{x}_{t+1},\tilde{y}_{t+1}\mid\tilde{x}_{t},a^ {i}_{t}) \tag{33}\] Lemma 4 shows that \(\{\tilde{X}_{t},\tilde{Y}_{t},t\in\mathcal{T}\}\) is a Markov process conditional on \(\{A^{i}_{t},t\in\mathcal{T}\}\) **Lemma 5**.: _Consider a SIB strategy profile \(\sigma\) and the corresponding update rule \(\psi^{\sigma}\). Suppose agents \(-i\) follow the SIB strategies \(\sigma^{-i}\) using \(\psi^{\sigma}\) and agent \(i\) follows an arbitrary strategy \(\hat{g}^{i}\) (not necessarily a SIB strategy). 
Then there are utility functions \(\tilde{u}^{i}_{t}\) such that \(\mathbb{E}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}[\tilde{u}^{i}_{t}(\tilde{X}_ {t},A^{i}_{t})]=\mathbb{E}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}[u^{i}_{t}(X_ {t},A_{t})]\) for all \(t\in\mathcal{T}\)._ Proof.: Recall that \(\tilde{X}_{t}=(S_{t},\Pi^{\psi^{\sigma}}_{t},X_{t})\). Then \[\mathbb{E}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}[u^{i}_{t}(X_{t },A_{t})]\] \[= \mathbb{E}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}[u^{i}_{t}(X_{t },A^{-i}_{t},A^{i}_{t})]\] \[= \mathbb{E}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}\big{[}\, \mathbb{E}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}[u^{i}_{t}(X_{t},A^{-i}_{t},A^ {i}_{t})\mid\tilde{X}_{t},A^{i}_{t}]\big{]}\] \[= \mathbb{E}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}\big{[}\, \sum_{a^{-i}_{t}}\mathbb{P}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}(a^{-i}_{t} \mid S_{t},\Pi^{\psi^{\sigma}}_{t},X_{t},A^{i}_{t})u^{i}_{t}(X_{t},a^{-i}_{t}, A^{i}_{t})]\big{]}\] \[= \mathbb{E}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}\big{[}\,\sum_{a ^{-i}_{t}}\big{(}\prod_{j\neq i}\sigma^{j}_{t}(a^{j}_{t}\mid S^{j}_{t},\Pi^{ \psi^{\sigma}}_{t})\big{)}u^{i}_{t}(X_{t},a^{-i}_{t},A^{i}_{t})]\big{]} \tag{34}\] Therefore, we establish the claim of the lemma by defining \[\tilde{u}_{t}^{i}(\bar{X}_{t},A_{t}^{i})=\sum_{a_{t}^{-i}}\big{(}\prod_{j\neq i} \sigma_{t}^{j}(a_{t}^{j}\mid S_{t}^{j},\Pi_{t}^{\psi^{\sigma}})\big{)}u_{t}^{i} (X_{t},a_{t}^{-i},A_{t}^{i})] \tag{35}\] Proof of Theorem 1.: From Lemma 3 we conclude that the best response of agent \(i\) to \(\sigma^{-i}\) is a POMDP with state \(\bar{X}_{t}\). From the theory of POMDP (Kumar and Varaiya, 1986, Chapter 6) we know that: (i) the belief on the state \(\bar{X}_{t}=(S_{t},\Pi_{t}^{\psi^{\sigma}},X_{t})\) conditioned on available information \(h_{t}^{i}\) is an information state for the agent; (ii) for each \(t\in\mathcal{T}\) there exists an optimal strategy for agent \(i\) that is a function of the information state at \(t\). We now prove that \((S_{t}^{i},\Pi_{t}^{\psi^{\sigma}})\) is an information state for agent \(i\) at \(t,t\in\mathcal{T}\). We note that \(S_{t+1}^{i}=\phi_{t}^{i}(S_{t}^{i},Y_{t+1}^{i},Z_{t+1},A_{t}^{i})\) from part (ii) of Definition 1, and \(\Pi_{t+1}^{\psi^{\sigma}}=\psi_{t+1}^{\sigma}(\Pi_{t}^{\psi^{\sigma}},Z_{t+1})\) from (23). Thus, we only need to show that for any strategy \(\hat{g}^{i}\) and any realization \(h_{t}^{i}\) such that \(\mathbb{P}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}(h_{t}^{i})>0\) the following equality is true: \[\mathbb{P}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}(s_{t},\pi_{t}^{\psi^{ \sigma}},x_{t}\mid h_{t}^{i})=\mathbb{P}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma }}(s_{t},\pi_{t}^{\psi^{\sigma}},x_{t}\mid s_{t}^{i},\pi_{t}^{\psi^{\sigma}}) \tag{36}\] For that matter, we note that \(s_{t}^{i},\pi_{t}^{\psi^{\sigma}}\) are perfectly known to agent \(i\). Furthermore, from the definition of sufficient private information and Lemma 2 we have \[\mathbb{P}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}(s_{t}^{-i},x_{t}\mid h_{t} ^{i})=\frac{\pi_{t}^{\psi^{\sigma},i}(s_{t},x_{t})}{\sum_{s_{t}^{-i},x_{t}} \pi_{t}^{\psi^{\sigma},i}(s_{t}^{i},s_{t}^{-i},x_{t})}, \tag{37}\] which is a function of \((s_{t}^{i},\pi_{t}^{\psi^{\sigma}})\). 
Therefore, \[\mathbb{P}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}(s_{t},\pi_{t}^{\psi^{ \sigma}},x_{t}\mid h_{t}^{i})=\mathds{1}(s_{t}^{i}=\zeta_{t}^{i}(h_{t}^{i})) \mathds{1}(\pi_{t}^{\psi^{\sigma}}=\gamma^{\psi^{\sigma}}(h_{t}^{i}))\, \mathbb{P}^{\hat{g}^{i},\sigma^{-i},\psi^{\sigma}}(s_{t}^{-i},x_{t}\mid p_{t}^ {i},c_{t}) \tag{38}\] where \(\gamma^{\psi^{\sigma}}(h_{t}^{i})=\psi_{t}^{\sigma}(\psi_{t-1}^{\sigma},\cdots)\) is the composition of \(\psi^{\sigma}\) from \(1\) to \(t\). Then, equation (36) is true because of (37) and (38). Consequently, \((S_{t}^{i},\Pi_{t}^{\psi^{\sigma}}),t\in\mathcal{T}\) is an information state for the best response problem for agent \(i\) and the assertion of Theorem 1 is true. As a result of Theorem 1, a definition of SIB BNE equivalent to Definition 5 is the following **Definition 6** (Equivalent definition of SIB BNE).: _Consider a SIB strategy profile \(\sigma^{*}=(\sigma^{*1},\sigma^{*2},\ldots,\sigma^{*n})\) and its corresponding consistent update rule \(\psi^{\sigma^{*}}\). The SIB strategy profile \(\sigma^{*}\) is a SIB BNE if for all \(i\in\mathcal{N}\),_ \[\mathbb{E}^{\sigma^{i},\sigma^{*-i},\psi^{\sigma^{*}}}\{U^{i}(X_{1:T},A_{1:T}) \}\leq\mathbb{E}^{\sigma^{*},\psi^{\sigma^{*}}}\{U^{i}(X_{1:T},A_{1:T})\} \tag{39}\] _for all \(\sigma^{i}\in\Lambda^{i}\) where \(\Lambda^{i}\) is the set of SIB strategy profiles of agent \(i\)._ A consequence of Lemmas 3-5 and Theorem 1 is the following. Consider a SIB strategy profile \(\sigma\), the corresponding update rule \(\psi^{\sigma}\) along with the consistent CIB belief system \(\Pi^{\psi^{\sigma}}_{1:T}\); if agents \(-i\) play according to \(\sigma^{-i}\), then the best response of agent \(i\) could be determined by the dynamic program \[\tilde{V}^{i}_{T+1}(\cdot,\cdot)=0\text{ for all }i \tag{40}\] \[\tilde{V}^{i}_{t}(\pi^{\psi^{\sigma}}_{t},s^{i}_{t})=\max_{\hat{ \sigma}^{i}_{t}\in\Lambda^{i}_{t}}\mathbb{E}^{\tilde{s}^{i}_{t},\sigma^{-i}_{ t},\psi^{\sigma}}\{u^{i}_{t}(X_{t},A_{t})+\tilde{V}^{i}_{t+1}(\psi^{\sigma}_{t+1} (\pi^{\psi^{\sigma}}_{t},Z_{t+1}),S^{i}_{t+1})\mid s^{i}_{t}\},\] \[\forall\pi^{\psi^{\sigma}}_{t}\in\Delta(\mathcal{X}_{t}\times \mathcal{S}_{t})^{N},\forall s^{i}_{t}\in\mathcal{S}^{i}_{s},t\in\mathcal{T} \tag{41}\] where \(\Lambda^{i}_{t}\) is the set of SIB strategies of agent \(i\) at time \(t\). ### Sequential decomposition Given a set of value functions \(V_{t+1}=\{V^{i}_{t+1}:\boldsymbol{\Pi}_{t+1}\times\mathcal{S}^{i}_{t+1}\to \mathbb{R},i\in\mathcal{N}\}\), a SIB strategy profile \(\sigma\), the corresponding update rule \(\psi^{\sigma}_{t+1}\) defined by (23), and the consistent CIB belief \(\pi^{\psi^{\sigma}}_{t}\), define the stage-game \(G_{t}(V_{t+1},\pi^{\psi^{\sigma}}_{t})\) as follows. (i) There are \(N\) agents. (ii) The system state is \(X_{t}\). (iii) Each agent \(i\) observes private information \(S^{i}_{t}\) and common information \(\pi^{\psi^{\sigma}}_{t}\). (iv) Agent \(i\)'s belief about the state \(X_{t}\) and other agents' private information \(S^{-i}_{t}\) is given by \(\pi^{\psi^{\sigma,i}}_{t}(x_{t},s^{-i}_{t})\), that is, \[\pi^{\psi^{\sigma},i}_{t}(x_{t},s^{-i}_{t})\in\Delta(\mathcal{X}_{t}\times \mathcal{S}^{-i}_{t}). 
\tag{42}\] (v) Each agent \(i\) selects action \(A^{i}_{t}\) based on his available information; let \(\hat{\sigma}^{i}_{t}\) denote agent \(i\)'s strategy for this stage-game; then, \[\mathbb{P}^{\hat{\sigma}_{t},\psi^{\sigma}}(A^{i}_{t}=a^{i}_{t}\mid s^{i}_{t},\pi^{\psi^{\sigma}}_{t})=\hat{\sigma}^{i}_{t}(a^{i}_{t}\mid s^{i}_{t},\pi^{ \psi^{\sigma}}_{t}). \tag{43}\] (vi) Each agent \(i\) has utility \[U^{i}_{G_{t}(V_{t+1},\pi^{\psi^{\sigma}}_{t})}=u^{i}_{t}(X_{t},A_{t})+V^{i}_{t+ 1}(\psi^{\sigma}_{t+1}(\pi^{\psi^{\sigma}}_{t},Z_{t+1}),S^{i}_{t+1}) \tag{44}\] where \((Z_{t+1},S^{i}_{t+1})\) conditioned on \((X_{t},S_{t},A_{t})\) follows the conditional probability \(\sum_{x_{t+1},s^{-i}_{t+1}}\mathbb{P}(z_{t+1},x_{t+1},s_{t+1}\mid x_{t},s_{t},a_{t})\) and the conditional probability \(\mathbb{P}(z_{t+1},x_{t+1},s_{t+1}\mid x_{t},s_{t},a_{t})\) is given by \[\mathbb{P}(z_{t+1},x_{t+1},s_{t+1}\mid x_{t},s_{t},a_{t})\] \[= \sum_{y_{t+1}}\mathbb{P}\{x_{t+1}\mid x_{t},a_{t}\}\mathbb{P}\{z_ {t+1},y_{t+1}\mid x_{t+1},a_{t}\}\] \[\left(\prod_{j}\mathbbm{1}\left\{s^{j}_{t+1}=\phi^{j}_{t+1}(s^{j}_ {t},z_{t+1},y^{j}_{t+1},a^{j}_{t})\right\}\right) \tag{45}\] (vii) Given a strategy profile \(\hat{\sigma}_{t}\) for the stage-game, the expected utility of each player \(i\) is given by \[\mathbb{E}^{\hat{\sigma}_{t},\psi^{\sigma}}[U^{i}_{G_{t}(V_{t+1}, \pi_{t}^{\psi^{\sigma}})}\mid s^{i}_{t}]\] \[= \sum_{x_{t},s_{t}^{-i},a_{t},z_{t+1},x_{t+1},s_{t+1}}\pi_{t}^{ \psi^{\sigma},i}(x_{t},s_{t}^{-i})\prod_{j}\hat{\sigma}_{t}^{j}(a^{i}_{t}\mid s ^{i}_{t},\pi_{t}^{\psi^{\sigma}})\,\mathbb{P}(z_{t+1},x_{t+1},s_{t+1}\mid x_{t },s_{t},a_{t})\] \[(u^{i}_{t}(x_{t},a_{t})+V^{i}_{t+1}(\psi^{\sigma}_{t+1}(\pi_{t}^{ \psi^{\sigma}},z_{t+1}),s^{i}_{t+1})) \tag{46}\] Note that all the random variables of the stage-game \(G_{t}(V_{t+1},\pi_{t}^{\psi^{\sigma}})\) may not necessarily be the same as their counterparts in the original dynamic game since each agent \(i\) is allowed to choose an arbitrary SIB strategy \(\hat{\sigma}_{t}^{i}\) which may be different from \(\sigma_{t}^{i}\) specified by the SIB strategy profile \(\sigma\). The stage-game random variables will coincide with their counterparts in the original game if all agents follow \(\sigma\). **Theorem 2** (Sequential decomposition).: _Consider a SIB strategy profile \(\sigma=\{\sigma_{t},t\in\mathcal{T}\}\) and the corresponding update rule \(\psi^{\sigma}=\{\psi^{\sigma}_{t},t\in\mathcal{T}\}\) defined by (22)-(23). Define_ \[V^{i}_{T+1}(\cdot,\cdot)=0\text{ for all }i \tag{47}\] \[V^{i}_{t}(\pi^{\psi^{\sigma}_{t}},s^{i}_{t})=\mathbb{E}^{\sigma_{t},\psi^{ \sigma}}[U^{i}_{G_{t}(V_{t+1},\pi_{t}^{\psi^{\sigma}})}\mid s^{i}_{t}] \tag{48}\] _where the right hand side of (48) is given by (46). 
If for all \(t\in\mathcal{T}\), there is a SIB strategy profile \(\hat{\sigma}_{t}\) such that \(\hat{\sigma}_{t}\) is a BNE of the stage-game \(G_{t}(V_{t+1},\pi_{t}^{\psi^{\sigma}})\), that is,_ \[\mathbb{E}^{\hat{\sigma}_{t}^{i},\hat{\sigma}_{t}^{-i},\psi^{\sigma}}[U^{i}_{G_{t}(V_{t+1},\pi_{t}^{\psi^{\sigma}})}\mid s^{i}_{t}]=\max_{\tilde{\sigma}_{t}^{i}\in\Lambda_{t}^{i}}\mathbb{E}^{\tilde{\sigma}_{t}^{i},\hat{\sigma}_{t}^{-i},\psi^{\sigma}}[U^{i}_{G_{t}(V_{t+1},\pi_{t}^{\psi^{\sigma}})}\mid s^{i}_{t}] \tag{49}\] _for all \(i\in\mathcal{N}\), where \(\Lambda_{t}^{i}\) is the set of SIB strategies of agent \(i\) at time \(t\), and_ \[\hat{\sigma}_{t}=\sigma_{t}, \tag{50}\] _then the SIB strategy profile \(\sigma\) is a SIB-BNE of the original dynamic game._ Proof.: Suppose that for all \(t\in\mathcal{T}\) there is a SIB strategy profile \(\hat{\sigma}_{t}=(\hat{\sigma}_{t}^{1},\hat{\sigma}_{t}^{2},\ldots,\hat{\sigma}_{t}^{N})\) that is a BNE of the stage-game \(G_{t}(V_{t+1},\pi_{t}^{\psi^{\sigma}})\). Then for all \(\pi_{t}^{\psi^{\sigma}}\in\Delta(\mathcal{X}_{t}\times\mathcal{S}_{t})^{N}\) and \(s_{t}^{i}\in\mathcal{S}_{t}^{i}\), \[\mathbb{E}^{\hat{\sigma}_{t}^{i},\hat{\sigma}_{t}^{-i},\psi^{\sigma}}[U^{i}_{G_{t}(V_{t+1},\pi_{t}^{\psi^{\sigma}})}\mid s_{t}^{i}]=\max_{\tilde{\sigma}_{t}^{i}\in\Lambda_{t}^{i}}\mathbb{E}^{\tilde{\sigma}_{t}^{i},\hat{\sigma}_{t}^{-i},\psi^{\sigma}}[u^{i}_{t}(X_{t},A_{t})+V^{i}_{t+1}(\psi^{\sigma}_{t+1}(\pi_{t}^{\psi^{\sigma}},Z_{t+1}),S^{i}_{t+1})\mid s_{t}^{i}]. \tag{51}\] Equation (51) holds for all \(t\in\mathcal{T}\) with \(V^{i}_{T+1}(\cdot,\cdot)=0\) and for all \(i\in\mathcal{N}\). When \(\hat{\sigma}_{t}=\sigma_{t}\) for all \(t\in\mathcal{T}\), Equation (51) gives, for all \(\pi_{t}^{\psi^{\sigma}}\in\Delta(\mathcal{X}_{t}\times\mathcal{S}_{t})^{N}\) and \(s_{t}^{i}\in\mathcal{S}_{t}^{i}\), \[V^{i}_{t}(\pi_{t}^{\psi^{\sigma}},s_{t}^{i})=\mathbb{E}^{\sigma_{t}^{i},\sigma_{t}^{-i},\psi^{\sigma}}[U^{i}_{G_{t}(V_{t+1},\pi_{t}^{\psi^{\sigma}})}\mid s_{t}^{i}]=\max_{\hat{\sigma}_{t}^{i}\in\Lambda_{t}^{i}}\mathbb{E}^{\hat{\sigma}_{t}^{i},\sigma_{t}^{-i},\psi^{\sigma}}[u^{i}_{t}(X_{t},A_{t})+V^{i}_{t+1}(\psi^{\sigma}_{t+1}(\pi_{t}^{\psi^{\sigma}},Z_{t+1}),S^{i}_{t+1})\mid s_{t}^{i}] \tag{52}\] for all \(i\in\mathcal{N}\). By induction, (52), and the fact that the update rule \(\psi^{\sigma}\) is consistent with \(\sigma\), we have, for all \(i\in\mathcal{N}\) and \(t\in\mathcal{T}\), \[\mathbb{E}^{\hat{\sigma}_{t:T}^{i},\sigma_{t:T}^{-i},\psi^{\sigma}}[\sum_{\tau=t}^{T}u^{i}_{\tau}(X_{\tau},A_{\tau})\mid s_{t}^{i}]\leq\mathbb{E}^{\sigma_{t:T}^{i},\sigma_{t:T}^{-i},\psi^{\sigma}}[\sum_{\tau=t}^{T}u^{i}_{\tau}(X_{\tau},A_{\tau})\mid s_{t}^{i}] \tag{53}\] Then (53) at time \(t=1\) gives \[\mathbb{E}^{\hat{\sigma}^{i},\sigma^{-i},\psi^{\sigma}}\{U^{i}(X_{1:T},A_{1:T})\}\leq\mathbb{E}^{\sigma,\psi^{\sigma}}\{U^{i}(X_{1:T},A_{1:T})\} \tag{54}\] for all \(\hat{\sigma}^{i}\in\Lambda^{i}\) and all \(i\in\mathcal{N}\). Therefore, the strategy profile \(\sigma\) is a SIB-BNE of the original dynamic game (cf. Definition 6). **Remark 5**.: _Note that even when the stage-game \(G_{t}(V_{t+1},\pi_{t}^{\psi^{\sigma}})\) has a BNE \(\hat{\sigma}_{t}\), it is possible that \(\hat{\sigma}_{t}\neq\sigma_{t}\). Thus, the existence of a BNE for every stage-game \(G_{t}(V_{t+1},\pi_{t}^{\psi^{\sigma}})\) is not sufficient to establish the existence of a BNE for the original dynamic game._ **Remark 6**.: _In the model of Tang et al (2022), when each team consists of one agent, a SIB BNE coincides with a SPCIB BNE introduced in Tang et al (2022) with an appropriate mapping of the information state, as discussed in Remark 4._ **Remark 7**.: _There may not be a solution for the set of value functions in the sequential decomposition equations described by (47)-(50) for all \(i\in\mathcal{N}\) and for all \(t\in\mathcal{T}\)._ **Remark 8**.: _In Definition 4, (21) could be defined differently; different choices of (21) lead to different update rules \(\psi\), and for any such choice the claim of Theorem 2 still holds._ **Remark 9**.: _The value functions of the sequential decomposition equations defined by Theorem 2 (Eqs. (47)-(50) for all \(i\in\mathcal{N},t\in\mathcal{T}\)) may not be continuous in the CIB belief \(\Pi_{t}^{\psi^{\sigma}}\)._ ## 6 An illustrative example (Step 4) In Section 5 we argued (cf. 
Remark 7) that the sequential decomposition equations defined by (47)-(50) for all \(i\in\mathcal{N},t\in\mathcal{T}\) may not have a solution, and that the value functions defined by (47)-(50) may not be continuous in the CIB belief \(\Pi_{t}^{\psi^{\sigma}}\) (cf. Remark 9). In this section we present an example that illustrates/highlights the above remarks. In the example, a two-stage stochastic dynamic game, the agents' utilities depend on a parameter \(c\). We show that: (i) the value functions of the corresponding sequential decomposition equations are not continuous in the CIB belief \(\Pi_{t}^{\psi^{\sigma}}\); (ii) for certain values of \(c\) a SIB-BNE exists. ### Model We consider the following two-stage stochastic dynamic game. There are two players/agents, Alice and Bob. At stage one, \(t=1\), the system's state \(X_{1}\) is distributed on \(\{-1,1\}\) with \(\mu_{0}(-1)=\mathbb{P}(X_{1}=-1)=0.5\) and \(\mu_{1}(1)=\mathbb{P}(X_{1}=1)=0.5\). Alice observes perfectly \(X_{1}\), i.e., \(Y_{1}^{Alice}=X_{1}\), and takes action \(A_{1}^{Alice}\in\{-1,1\}\); \(A_{1}^{Alice}\) is not observable by Bob and \(Y_{1}^{Bob}=\emptyset\). Bob does not act at \(t=1\). At stage 2, \(t=2\), the system state is \(X_{2}=X_{1}A_{1}^{Alice}\). Alice and Bob have a common observation \(Z_{2}=X_{2}A_{1}^{Alice}W_{1}=X_{1}W_{1}\), where \(W_{1}\in\{-1,1\}\) and \(\mathbb{P}(Z=i\mid X_{1}=i)=1-p=0.8,i\in\{-1,1\}\), and there are no private observations, i.e., \(Y_{2}^{Alice}=Y_{2}^{Bob}=\emptyset\). Here \(p=0.2=\mathbb{P}(W_{1}=-1)\). Bob acts at \(t=2\). Alice does not act at \(t=2\). Bob's action \(A_{2}^{Bob}\in\{-1,1\}\). Alice's payoffs at \(t=1\) and \(t=2\) are \[u_{1}^{Alice}(X_{1},A_{1})= \left\{\begin{array}{ll}c&\text{if }A_{1}^{Alice}=1\\ 0&\text{if }A_{1}^{Alice}=-1\end{array}\right. \tag{55}\] and \[u_{2}^{Alice}(X_{2},A_{2})= \left\{\begin{array}{ll}2&\text{if }X_{2}=1,A_{2}^{Bob}=1\\ 1&\text{if }X_{2}=-1,A_{2}^{Bob}=-1\\ 0&\text{otherwise}\end{array}\right. \tag{56}\] respectively. Bob's payoffs are \(u_{t}^{Bob}(X_{t},A_{t})=-u_{t}^{Alice}(X_{t},A_{t}),t=1,2\). The game's information structure is \[H_{1}^{Alice}= \{X_{1}\} \tag{57}\] \[H_{2}^{Alice}= \{X_{1},A_{1}^{Alice},X_{2},Z_{2}\}\] (58) \[H_{1}^{Bob}= \emptyset\] (59) \[H_{2}^{Bob}= \{Z_{2}\} \tag{60}\] where \(H_{t}^{Alice},H_{t}^{Bob},t=1,2\), describe the information available to Alice and Bob, respectively, at stages 1 and 2. _Dynamic Games with Asymmetric Information and Hidden Actions_ This example has the same dynamics and utility functions as Example 3 in Tang et al (2022), but Bob doesn't observe Alice's action as in (Tang et al, 2022, Example 3). ### Sequential decomposition Since Alice perfectly observes the state at both times, i.e., \(Y_{1}^{Alice}=X_{1}\) and \(Y_{2}^{Alice}=X_{2}\), and Bob doesn't have private information, \(S_{1}^{Alice}=X_{1},S_{1}^{Bob}=\emptyset\) are sufficient private information for Alice and Bob at stage \(t=1\), respectively, and \(S_{2}^{Alice}=X_{2},S_{2}^{Bob}=\emptyset\) are sufficient private information for Alice and Bob, respectively, at stage \(t=2\) according to Definition 1. Suppose \(\sigma=(\sigma_{1},\sigma_{2})=(\sigma_{1}^{Alice},\sigma_{2}^{Bob})\) is a SIB strategy and \(\psi^{\sigma}\) is the corresponding update rule. Here \(\sigma\) is an equilibrium strategy candidate which serves as the strategy prediction for Alice and Bob. 
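Before analyzing this example, note that the model above is small enough to simulate directly. The following Monte-Carlo sketch (ours, in NumPy; not part of the original analysis) estimates Alice's total expected payoff for given behavioral strategies, with \(p=0.2\) hard-coded as in the model; Alice's stage-1 strategy is parametrized by \((\alpha_{1},\alpha_{2})=(\sigma_{1}^{Alice}(-1\mid-1),\sigma_{1}^{Alice}(1\mid 1))\), the parametrization used below.

```python
import numpy as np

def alice_payoff(alpha, bob, c, p=0.2, n=200_000, seed=0):
    """Monte-Carlo estimate of Alice's total expected payoff in the
    two-stage game of Section 6.1 (Bob's payoff is the negative).

    alpha = (alpha1, alpha2): alpha1 = P(A1=-1 | X1=-1), alpha2 = P(A1=1 | X1=1).
    bob   : function z -> P(A2 = 1 | Z = z) for z in {-1, 1}.
    """
    rng = np.random.default_rng(seed)
    x1 = rng.choice([-1, 1], size=n)                      # X1 uniform on {-1, 1}
    u = rng.random(n)
    a1 = np.where(x1 == -1,                               # Alice mixes according to alpha
                  np.where(u < alpha[0], -1, 1),
                  np.where(u < alpha[1], 1, -1))
    x2 = x1 * a1                                          # X2 = X1 * A1
    w1 = np.where(rng.random(n) < p, -1, 1)               # P(W1 = -1) = p
    z = x1 * w1                                           # common observation Z = X1 * W1
    prob_a2 = np.where(z == 1, bob(1), bob(-1))
    a2 = np.where(rng.random(n) < prob_a2, 1, -1)
    u1 = np.where(a1 == 1, c, 0.0)                        # stage-1 utility (55)
    u2 = np.where((x2 == 1) & (a2 == 1), 2.0,
                  np.where((x2 == -1) & (a2 == -1), 1.0, 0.0))  # stage-2 utility (56)
    return float((u1 + u2).mean())

# e.g. Alice always plays A1 = 1 and Bob always plays A2 = -1:
# alice_payoff((0.0, 1.0), bob=lambda z: 0.0, c=1.0)
```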
Note that \(\Pi_{1}^{\psi^{\sigma},Alice}(x_{1})=\mu_{0}(x_{1})\) and \(\Pi_{1}^{\psi^{\sigma},Bob}(x_{1})=\mu_{0}(x_{1})\) for all \(x_{1}\in\mathcal{X}_{1}\). To get a BNE using the sequential decomposition of Theorem 2, we first consider the stage-game \(G_{2}(0,\pi_{2}^{\psi^{\sigma}})\) at time \(2\). Since Bob is the only agent who acts at time \(2\) and \(S_{2}^{Bob}=\emptyset\), any BNE \(\sigma_{2}\) of \(G_{2}(0,\pi_{2}^{\psi^{\sigma}})\) must satisfy \[\hat{\sigma}_{2}^{Bob}= \operatorname*{arg\,max}_{\hat{\sigma}_{2}^{Bob}}\,\mathbb{E}^{ \hat{\sigma}_{2}^{Bob},\psi^{\sigma}}[u_{2}^{Bob}(X_{2},A_{2})]\] \[= \operatorname*{arg\,max}_{\hat{\sigma}_{2}^{Bob}}\,\Big{(}-2\, \mathbb{P}^{\hat{\sigma}_{2}^{Bob},\,\psi^{\sigma}}(X_{2}=A_{2}^{Bob}=1)- \mathbb{P}^{\hat{\sigma}_{2}^{Bob},\psi^{\sigma}}(X_{2}=A_{2}^{Bob}=-1)\Big{)}\] \[= \operatorname*{arg\,max}_{\hat{\sigma}_{2}^{Bob}}\,\Big{(}-2\pi _{2}^{\psi^{\sigma},Bob}(1)\tilde{\sigma}_{2}^{\psi^{\sigma},Bob}(1\mid\pi_{2 }^{\psi^{\sigma}})\] \[\qquad\qquad\qquad-(1-\pi_{2}^{\psi^{\sigma},Bob}(1))(1-\tilde{ \sigma}_{2}^{\psi^{\sigma},Bob}(1\mid\pi_{2}^{\psi^{\sigma}}))\Big{)} \tag{61}\] From (61) we conclude that one of the equilibrium SIB strategies is given by \[\sigma_{2}^{Bob}(\pi_{2}^{\psi^{\sigma}})=1,\,\text{if}\,\pi_{2} ^{\psi^{\sigma},Bob}(1)\leq 1/3,\] \[\sigma_{2}^{Bob}(\pi_{2}^{\psi^{\sigma}})=0,\,\text{if}\,\pi_{2} ^{\psi^{\sigma},Bob}(1)>1/3,\] or equivalently \[\sigma_{2}^{Bob}(\pi_{2}^{\psi^{\sigma}})=\mathds{1}(\pi_{2}^{ \psi^{\sigma},Bob}(1)\leq 1/3) \tag{62}\] Note that \(\sigma_{2}^{Bob}(\pi_{2}^{\psi^{\sigma}})\) can take any value in \([0,1]\) if \(\pi_{2}^{\psi^{\sigma},Bob}(1)=1/3\) and \(\sigma_{2}\) is still a BNE of the stage-game. Alice's sufficient private information at time \(2\) is \(S_{2}^{Alice}=X_{2}\). With the stage-game equilibrium SIB strategy \(\sigma_{2}^{Bob}(\pi_{2})\) given by (62), the value function for Alice at \(t=2\) is then given, according to (48), by \[V_{2}^{Alice}(\pi_{2}^{\psi^{\sigma}},x_{2})= \,\mathbb{E}^{\sigma_{2},\psi^{\sigma}}[u_{2}^{Alice}(X_{2},A_{2}) \mid x_{2}]\] \[= \left\{\begin{array}{ll}2\mathds{1}(\pi_{2}^{\psi^{\sigma},Bob}(1) \leq 1/3)&\text{if }x_{2}=1\\ 1-\mathds{1}(\pi_{2}^{\psi^{\sigma},Bob}(1)\leq 1/3)&\text{if }x_{2}=-1\end{array}\right. \tag{63}\] Given the above value functions at time \(t=2\), we now consider the stage-game \(G_{1}(V_{2},\pi_{1}^{\psi^{\sigma}})\) at time \(t=1\). The utility for the stage-game for Alice is given as follows. 
\[U_{G_{1}(V_{2},\pi_{1}^{\psi^{\sigma}})}^{Alice}=u_{1}^{Alice}(X_{1},A_{1})+V_{ 2}^{Alice}(\psi_{2}^{\sigma}(\pi_{1},Z),X_{2}) \tag{64}\] If Alice uses the SIB strategy \(\tilde{\sigma}_{1}^{Alice}\), the expected utility of the stage-game can be calculated for \(X_{1}=-1\) and \(X_{1}=1\), according to (46), by \[\mathbb{E}^{\tilde{\sigma}_{1}^{Alice},\psi^{\sigma}}[U_{G_{1}(V _{2},\pi_{1}^{\psi^{\sigma}})}^{Alice}\mid X_{1}=-1]\] \[= c\tilde{\sigma}_{1}^{Alice}(1\mid-1)+\mathbb{E}^{\tilde{\sigma} _{1}^{Alice},\psi^{\sigma}}[V_{2}^{A}(\psi_{2}^{\sigma}(\pi_{1}^{\psi^{ \sigma}},X_{1}W_{1}),X_{1}A_{1}^{Alice})\mid X_{1}=-1]\] \[= (1+c)(1-\tilde{\alpha}_{1})+(3\tilde{\alpha}_{1}-1)((1-p)\mathds{ 1}(q_{-1}\leq 1/3)+p\mathds{1}(q_{1}\leq 1/3))\] \[=: r_{-1}^{A}(\tilde{\alpha}_{1},q) \tag{65}\] \[\mathbb{E}^{\tilde{\sigma}_{1}^{Alice},\psi^{\sigma}}[U_{G_{1}(V _{2},\pi_{1}^{\psi^{\sigma}})}^{Alice}\mid X_{1}=1]\] \[= c\tilde{\sigma}_{1}^{Alice}(1\mid 1)+\mathbb{E}^{\tilde{\sigma} _{1}^{Alice},\psi^{\sigma}}[V_{2}^{A}(\psi_{2}^{\sigma}(\pi_{1}^{\psi^{ \sigma}},X_{1}W_{1}),X_{1}A_{1}^{Alice})\mid X_{1}=1]\] \[= 1+(c-1)\tilde{\alpha}_{2}+(3\tilde{\alpha}_{2}-1)((1-p)\mathds{ 1}(q_{1}\leq 1/3)+p\mathds{1}(q_{-1}\leq 1/3))\] \[=: r_{1}^{A}(\tilde{\alpha}_{2},q) \tag{66}\] where \(q=(q_{-1},q_{1})\), \(q_{-1}=\psi_{2}^{\sigma,Bob}(\pi_{1}^{\psi^{\sigma}},-1)(1)\) and \(q_{1}=\psi_{2}^{\sigma,Bob}(\pi_{1}^{\psi^{\sigma}},1)(1)\) are the CIB beliefs \(\pi_{2}^{\psi^{\sigma},Bob}(1)\) of \(\{X_{2}=1\}\) when \(Z=-1\) and \(Z=1\), respectively, and \(\tilde{\alpha}=(\tilde{\alpha}_{1},\tilde{\alpha}_{2})\), \(\tilde{\alpha}_{1}=\tilde{\sigma}_{1}^{Alice}(-1\mid-1),\tilde{\alpha}_{2}= \tilde{\sigma}_{1}^{Alice}(1\mid 1)\) represents Alice's SIB strategy \(\tilde{\sigma}_{1}^{Alice}\). 
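The closed forms (65) and (66) are used repeatedly below, so we record a direct transcription into Python (a minimal sketch of ours, with \(p=0.2\) as in the model); the indicators \(\mathds{1}(q\leq 1/3)\) encode Bob's stage-2 best response (62).

```python
def r_minus(alpha1, q, c, p=0.2):
    """Alice's stage-1 expected utility r_{-1}^A(alpha1, q), Eq. (65); q = (q_{-1}, q_1)."""
    ind = (1 - p) * (q[0] <= 1/3) + p * (q[1] <= 1/3)
    return (1 + c) * (1 - alpha1) + (3 * alpha1 - 1) * ind

def r_plus(alpha2, q, c, p=0.2):
    """Alice's stage-1 expected utility r_1^A(alpha2, q), Eq. (66)."""
    ind = (1 - p) * (q[1] <= 1/3) + p * (q[0] <= 1/3)
    return 1 + (c - 1) * alpha2 + (3 * alpha2 - 1) * ind
```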
Note that from Bayes' rule in Definition 4, under the SIB strategy \(\sigma_{1}^{Alice}\), represented by \(\alpha_{1}=\sigma_{1}^{Alice}(-1\mid-1)\) and \(\alpha_{2}=\sigma_{1}^{Alice}(1\mid 1)\), we have \[q_{-1}=\psi_{2}^{\psi^{\sigma},Bob}(\pi_{1}^{\psi^{\sigma}},-1)(1 )=\frac{\mathbb{P}^{\alpha}(X_{2}=1,Z=-1)}{\mathbb{P}^{\alpha}(Z=-1)}=\alpha_{ 2}p+\alpha_{1}(1-p) \tag{67}\] \[q_{1}=\psi_{2}^{\psi^{\sigma},Bob}(\pi_{1}^{\psi^{\sigma}},1)(1 )=\frac{\mathbb{P}^{\alpha}(X_{2}=1,Z=1)}{\mathbb{P}^{\alpha}(Z=1)}=\alpha_{2} (1-p)+\alpha_{1}p \tag{68}\] Therefore, a SIB strategy \(\hat{\sigma}_{1}^{Alice}\), represented by \(\hat{\alpha}_{1}=\hat{\sigma}_{1}^{Alice}(-1\mid-1)\) and \(\hat{\alpha}_{2}=\hat{\sigma}_{1}^{Alice}(1\mid 1)\), is a BNE of the stage-game \(G_{1}(V_{2},\pi_{1}^{\psi^{\sigma}})\) at time \(t=1\) if \[\hat{\alpha}_{1}\in\underset{\tilde{\alpha}_{1}}{\arg\max}\ r_{-1}^{A}(\tilde{ \alpha}_{1},(\alpha_{2}p+\alpha_{1}(1-p),\alpha_{2}(1-p)+\alpha_{1}p)) \tag{69}\] \[\hat{\alpha}_{2}\in\underset{\tilde{\alpha}_{2}}{\arg\max}\ r_{1}^{A}( \tilde{\alpha}_{2},(\alpha_{2}p+\alpha_{1}(1-p),\alpha_{2}(1-p)+\alpha_{1}p)) \tag{70}\] _Dynamic Games with Asymmetric Information and Hidden Actions_ Consequently, the SIB strategy \(\sigma_{1}^{Alice}\), represented by \(\alpha_{1}=\sigma_{1}^{Alice}(-1\mid-1)\) and \(\alpha_{2}=\sigma_{1}^{Alice}(1\mid 1)\) will satisfy the sequential decomposition equations (49)-(50) if \[\alpha_{1}\in\operatorname*{arg\,max}_{\tilde{\alpha}_{1}}\,r_{-1}^{A}( \tilde{\alpha}_{1},(\alpha_{2}p+\alpha_{1}(1-p),\alpha_{2}(1-p)+\alpha_{1}p)) \tag{71}\] \[\alpha_{2}\in\operatorname*{arg\,max}_{\tilde{\alpha}_{2}}\,r_{1}^{A}(\tilde{ \alpha}_{2},(\alpha_{2}p+\alpha_{1}(1-p),\alpha_{2}(1-p)+\alpha_{1}p)) \tag{72}\] **Remark 10**.: _Note that the functions \(r_{-1}^{A}(\tilde{\alpha}_{1},q)\) and \(r_{1}^{A}(\tilde{\alpha}_{2},q)\) are not continuous in \(q\). Thus existence of equilibria cannot be established by the standard method relying on the continuity of the utility functions, and there may not no equilibria in the general case._ ### Existence of SIB-BNE under conditions on the instantaneous utility. The stage-game \(G_{1}(V_{2},\pi_{1}^{\psi^{\sigma}})\) is a normal-form game with a fixed \(\sigma_{1}\). According to Remark 5, a BNE \(\hat{\sigma}\) of \(G_{1}(V_{2},\pi_{1}^{\psi^{\sigma}})\) could be different from \(\sigma_{1}\) and the existence of a regular BNE of \(G_{1}(V_{2},\pi_{1}^{\psi^{\sigma}})\) is not sufficient to satisfy (50) at time \(t=1\). In order to apply equilibrium existence results for normal-form games to the sequential decomposition at time \(t=1\), we introduce an agent \(0\) who picks the \(q\)-belief \(q=(q_{-1},q_{1})\) so that (50) is satisfied. Formally, we construct an augmented stage-game \(\hat{G}_{1}\) between Alice and agent \(0\). Alice chooses \(\tilde{\alpha}=(\tilde{\alpha}_{1},\tilde{\alpha}_{2})\) and agent \(0\) chooses \(\tilde{q}=(\tilde{q}_{-1},\tilde{q}_{1})\). Alice's utility is \[r_{1}^{A}(\tilde{\alpha},\tilde{q})= 0.5r_{-1}^{A}(\tilde{\alpha}_{1},\tilde{q})+0.5r_{1}^{A}(\tilde {\alpha}_{2},\tilde{q})\] \[= 0.5c(1-\tilde{\alpha}_{1}+\tilde{\alpha}_{2})+0.5(2-\tilde{ \alpha}_{1}-\tilde{\alpha}_{2})\] \[+0.5(3(\tilde{\alpha}_{2}p+\tilde{\alpha}_{1}(1-p))-1)\mathds{1} (\tilde{q}_{-1}\leq 1/3)\] \[+0.5(3(\tilde{\alpha}_{2}(1-p)+\tilde{\alpha}_{1}p)-1)\mathds{1} (\tilde{q}_{1}\leq 1/3). 
\tag{73}\] Agent \(0\)'s utility is \[r_{1}^{0}(\tilde{\alpha},\tilde{q})=-(\tilde{q}_{-1}-\tilde{\alpha}_{2}p- \tilde{\alpha}_{1}(1-p))^{2}-(\tilde{q}_{1}-\tilde{\alpha}_{2}(1-p)-\tilde{ \alpha}_{1}p)^{2}. \tag{74}\] Both Alice and agent \(0\) are utility maximizers. The game \(\hat{G}_{1}\) with utilities (74)-(73) is a normal-form game with strategies \(\tilde{\alpha}=(\tilde{\alpha}_{1},\tilde{\alpha}_{2})\)\(\tilde{q}=(\tilde{q}_{-1},\tilde{q}_{1})\). Since the utility (74) of agent \(0\) is a quadratic function, any best response by agent \(0\) must satisfy \(\tilde{q}_{-1}=\tilde{\alpha}_{2}p+\tilde{\alpha}_{1}(1-p),\tilde{q}_{1}= \tilde{\alpha}_{2}(1-p)+\tilde{\alpha}_{1}p\). Note that in the augmented stage-game \(\hat{G}_{1}\), the utility function \(r_{1}^{A}(\tilde{\alpha},\tilde{q})\) is not continuous in \(\tilde{q}\). To show the existence of a Nash equilibrium for \(\hat{G}_{1}\), we proceed to apply existence results for games with discontinuous utilities in Barelli and Meneghel (2013). Specifically, Proposition 2.4 of Barelli and Meneghel (2013) guarantees the existence of a Nash equilibrium for games satisfying the generalized better reply secure property. From Definition 2.3 in Barelli and Meneghel (2013), the stage game is generalized better reply secure if for any \((\bar{\alpha},\bar{q})\) not an equilibrium, at least one of the followings is true * We can find an \(\epsilon>0\) and a closed correspondence \(\phi^{0}(\tilde{\alpha},\tilde{q})\) such that \[r_{1}^{0}(\tilde{\alpha},\phi^{0}(\tilde{\alpha},\tilde{q}))\geq r_{1}^{0}( \bar{\alpha},\bar{q})+\epsilon\] (75) for all \(\tilde{\alpha}_{1}\in(\bar{\alpha}_{1}-\epsilon,\bar{\alpha}_{1}+\epsilon)\), \(\tilde{\alpha}_{2}\in(\bar{\alpha}_{2}-\epsilon,\bar{\alpha}_{2}+\epsilon)\), \(\tilde{q}_{-1}\in(\bar{q}_{-1}-\epsilon,\bar{q}_{-1}+\epsilon)\), \(\tilde{q}_{1}\in(\bar{q}_{1}-\epsilon,\bar{q}_{1}+\epsilon)\) * We can find an \(\epsilon>0\) and a closed correspondence \(\phi^{A}(\tilde{\alpha},\tilde{q})\) such that \[r_{1}^{A}(\phi^{A}(\tilde{\alpha},\tilde{q}),\tilde{q})\geq r_{1}^{A}(\bar{ \alpha},\bar{q})+\epsilon\] (76) for all \(\tilde{\alpha}_{1}\in(\bar{\alpha}_{1}-\epsilon,\bar{\alpha}_{2}+\epsilon)\), \(\tilde{\alpha}_{2}\in(\bar{\alpha}_{2}-\epsilon,\bar{\alpha}_{2}+\epsilon)\), \(\tilde{q}_{-1}\in(\bar{q}_{-1}-\epsilon,\bar{q}_{-1}+\epsilon)\), \(\tilde{q}_{1}\in(\bar{q}_{1}-\epsilon,\bar{q}_{1}+\epsilon)\) In Appendix.2, we show that when \(c>24\) the augmented stage-game \(\hat{G}_{1}\) is generalized better reply secure. Thus, there exists a Nash equilibrium of the augmented state-game \(\hat{G}_{1}\) according to (Barelli and Meneghel, 2013, Proposition 2.4). Consider any Nash equilibrium \((\alpha,q)\) of \(\hat{G}_{1}\). 
Since \(q\) is a best response to \(\alpha\) for agent 0, from agent 0's utility (74) we have \[q_{-1}=\alpha_{2}p+\alpha_{1}(1-p) \tag{77}\] \[q_{1}=\alpha_{2}(1-p)+\alpha_{1}p \tag{78}\] Furthermore, since \(\alpha\) is a best response to \(q\) for Alice in \(\hat{G}_{1}\), \[\alpha\in \operatorname*{arg\,max}_{\tilde{\alpha}}\,\left(0.5r_{-1}^{A}( \tilde{\alpha}_{1},q)+0.5r_{1}^{A}(\tilde{\alpha}_{2},q)\right)\] \[= \operatorname*{arg\,max}_{\tilde{\alpha}}\,\left(0.5r_{-1}^{A}( \tilde{\alpha}_{1},(\alpha_{2}p+\alpha_{1}(1-p),\alpha_{2}(1-p)+\alpha_{1}p))\right.\] \[\qquad\qquad\qquad+0.5r_{1}^{A}(\tilde{\alpha}_{2},(\alpha_{2}p+ \alpha_{1}(1-p),\alpha_{2}(1-p)+\alpha_{1}p))\Big{)}\] \[= \Big{(}\operatorname*{arg\,max}_{\tilde{\alpha}_{1}}\,r_{-1}^{A}( \tilde{\alpha}_{1},(\alpha_{2}p+\alpha_{1}(1-p),\alpha_{2}(1-p)+\alpha_{1}p)),\] \[\qquad\qquad\operatorname*{arg\,max}_{\tilde{\alpha}_{2}}\,r_{1}^ {A}(\tilde{\alpha}_{2},(\alpha_{2}p+\alpha_{1}(1-p),\alpha_{2}(1-p)+\alpha_{1} p))\Big{)} \tag{79}\] Therefore, (71)-(72) hold for \(\alpha\), and consequently the sequential decomposition requirement (49)-(50) is satisfied at \(t=1\) by the SIB strategy \(\sigma_{1}^{Alice}\) represented by \(\alpha\), and we establish the existence of a SIB equilibrium based on Theorem 2. _Dynamic Games with Asymmetric Information and Hidden Actions_ ## 7 The case with no common observations We consider the model of Section 2 but we assume that the agents have no common observations, that is, \[Z_{t}=\emptyset\quad\forall t\in\mathcal{T}. \tag{80}\] The system's dynamics, the agents' private observations, the functional form of the agents' strategies, their utilities, and the equilibrium concept (BNE) remain the same as in Section 2. Even though the agents have no common observations in this special case, we can still define SIB strategies by Definition 3, and construct the consistent CIB belief system according to Definition 4 with \(Z_{t}=\emptyset\,\forall t\in\mathcal{T}\). Since there is no common observations, for any realization we always have \[\sum_{\hat{x}_{t+1},\hat{s}_{t+1}}F^{i}_{t}(\hat{x}_{t+1},\hat{s}_ {t+1},z_{t+1})(\pi^{\psi^{\sigma}}_{t};\;\sigma^{-i}_{t})\] \[= \sum_{\hat{x}_{t+1},\hat{s}_{t+1}}F^{i}_{t}(\hat{x}_{t+1},\hat{s} _{t+1})(\pi^{\psi^{\sigma}}_{t};\;\sigma^{-i}_{t})=1>0 \tag{81}\] Therefore, case (ii) in Definition 4 would never happen, and (20) can be simplified to \[\pi^{\psi^{\sigma},i}_{t+1}(x_{t+1},s_{t+1})\] \[= \frac{F^{i}_{t}(x_{t+1},s_{t+1})(\pi^{\psi^{\sigma}}_{t};\; \sigma^{-i}_{t})}{\sum_{\hat{x}_{t+1},\hat{s}_{t+1}}F^{i}_{t}(\hat{x}_{t+1}, \hat{s}_{t+1})(\pi^{\psi^{\sigma}}_{t};\;\sigma^{-i}_{t})}\] \[= F^{i}_{t}(x_{t+1},s_{t+1})(\pi^{\psi^{\sigma}}_{t};\;\sigma^{-i}_ {t})\] \[= \sum_{y_{t+1},x_{t},s_{t},a_{t}}\bigg{[}\mathbb{P}\{y_{t+1},x_{t+ 1}\mid x_{t},a_{t}\}\left(\prod_{j}\mathbb{1}\{s^{j}_{t+1}=\phi^{j}_{t+1}(s^{ j}_{t},y^{j}_{t+1},a^{j}_{t})\}\right)\] \[\qquad\qquad\left(\frac{1}{|A^{i}_{t}|}\prod_{j\neq i}\sigma^{j}_ {t}(a^{j}_{t})(\pi^{\psi}_{t},s^{j}_{t})\right)\pi^{\psi,i}_{t}(x_{t},s_{t}) \bigg{]}. \tag{82}\] Based on (82) we can write \[\Pi^{\psi^{\sigma},i}_{t+1} =\psi^{\sigma,i}_{t+1}(\Pi^{\psi^{\sigma}}_{t})\quad\forall i\in \mathcal{N}, \tag{83}\] \[\Pi^{\psi^{\sigma}}_{t+1} =\psi^{\sigma}_{t+1}(\Pi^{\psi^{\sigma}}_{t}). 
\tag{84}\] In other words, given a SIB strategy \(\sigma\), the update rule \(\psi^{\sigma}\) are deterministic functions given by (84), and the corresponding consistent CIB belief system \(\Pi^{\psi^{\sigma}}_{t},t\in\mathcal{T}\), evolves in a deterministic manner. Furthermore, since case (ii) in Definition 4 never happens without common observations, the update rule \(\psi_{t+1}^{\sigma,i}\) given by (82) becomes exactly the Bayes rule. As a result, the CIB belief \(\Pi_{t}^{\psi^{\sigma,i}}\) becomes a regular PMF given by \[\Pi_{t}^{\psi^{\sigma,i}}(x_{t},s_{t})=\mathbb{P}^{\tilde{g}^{i}, \sigma^{-i}}(x_{t},s_{t})\quad\forall i\in\mathcal{N} \tag{85}\] where \(\tilde{g}^{i}\) denotes the uniform strategy (i.e., the strategy that chooses every action \(a_{t}^{i}\in\mathcal{A}_{t}^{i}\) with equal probability for all \(t\in\mathcal{T}\)). **Remark 11**.: _If the \(N\) agents have identical utilities, i.e. we have a dynamic team problem, then \(\Pi_{t}^{\psi^{\sigma}},t\in\mathcal{T}\) is similar to the common knowledge that appears in Witsenhausen (1973) where a dynamic team is analyzed. The common knowledge in Witsenhausen (1973) is a sequence (over time) of PMFs on the system's history \(H_{t},t\in\mathcal{T}\). These PMFs evolve in a deterministic manner, similar to (82) for \(\Pi_{t}^{\psi^{\sigma}},t\in\mathcal{T}\), in the model of this section._ For this special case with no common observations, Theorem 2 becomes **Corollary 1**.: _Consider a SIB strategy profile \(\sigma=\{\sigma_{t},t\in\mathcal{T}\}\) and the corresponding update rule \(\psi^{\sigma}=\{\psi_{t}^{\sigma},t\in\mathcal{T}\}\) defined by (83)-(84) for the model of this section. Define_ \[V_{T+1}^{i}(\cdot,\cdot)=0\text{ for all }i \tag{86}\] \[V_{t}^{i}(\pi^{\psi_{t}^{\sigma}},s_{t}^{i})=\mathbb{E}^{\sigma_ {t},\psi^{\sigma}}[U_{G_{t}(V_{t+1},\pi^{\psi_{t}^{\sigma}})}^{i}\mid s_{t}^{ i}] \tag{87}\] _where \(U_{G_{t}(V_{t+1},\pi^{\psi^{\sigma}}_{t})}^{i}=u_{t}^{i}(X_{t},A_{t})+V_{t+1}^ {i}(\psi_{t+1}^{\sigma}(\pi^{\psi^{\sigma}}_{t}),S_{t+1}^{i})\), and in the conditional expectation \(\mathbb{E}^{\sigma_{t},\psi^{\sigma}}[\cdot]\), the distribution of \((X_{t},S_{t})\) conditioned on \(S_{t}^{i}\) is given by \(\pi_{t}^{\psi^{\sigma},i}(x_{t},s_{t}^{-i})\), \(A_{t}^{i},i\in\mathcal{N}\), are generated by \(\sigma_{t}^{i}(a_{t}^{i}\mid s_{t}^{i},\pi_{t}^{\psi^{\sigma}})\), \(S_{t+1}^{i}\) conditioned on \((X_{t},S_{t},A_{t})\) follows the conditional probability \(\sum_{x_{t+1},s_{t+1}^{-i}}\mathbb{P}(x_{t+1},s_{t+1}\mid x_{t},s_{t},a_{t})\) given by_ \[\mathbb{P}(x_{t+1},s_{t+1}\mid x_{t},s_{t},a_{t})\] \[= \sum_{y_{t+1}}\mathbb{P}\{x_{t+1}\mid x_{t},a_{t}\}\mathbb{P}\{y_ {t+1}\mid x_{t+1},a_{t}\}\left(\prod_{j}\mathbbm{1}\{s_{t+1}^{j}=\phi_{t+1}^{ j}(s_{t}^{j},y_{t+1}^{j},a_{t}^{j})\}\right).\] _If for all \(t\in\mathcal{T}\), there is a SIB strategy profile \(\hat{\sigma}_{t}\) such that \(\hat{\sigma}_{t}\) is a BNE of the stage-game \(G_{t}(V_{t+1},\pi^{\psi_{t}^{\sigma}})\), that is,_ \[\mathbb{E}^{\hat{\sigma}_{t}^{i},\hat{\sigma}_{t}^{-i},\psi^{ \sigma}}[U_{G_{t}(V_{t+1},\pi^{\psi_{t}^{\sigma}}_{t})}^{i}\mid s_{t}^{i}]=\max _{\hat{\sigma}_{t}^{i}\in\Lambda_{t}^{i}}\mathbb{E}^{\hat{\sigma}_{t}^{i}, \hat{\sigma}_{t}^{-i},\psi^{\sigma}}[U_{G_{t}(V_{t+1},\pi^{\psi^{\sigma}}_{t}) }^{i}\mid s_{t}^{i}] \tag{89}\] _Dynamic Games with Asymmetric Information and Hidden Actions for all \(i\in\mathcal{N}\), and_ \[\hat{\sigma}_{t}=\sigma_{t}, \tag{90}\] _then the SIB strategy profile \(\sigma\) is a SIB-BNE of the 
dynamic game without common observations defined in this section._ **Remark 12**.: _The SIB-BNE strategy profiles \(\{\sigma_{t},t\in\mathcal{T}\}\) determined by sequential decomposition in Corollary 1, along with the beliefs \(\{\Pi_{t}^{\psi^{*}},t\in\mathcal{T}\}\) are also Perfect Bayesian Equilibria (PBE) Fudenberg and Tirole (1991). This is true because \(\{\sigma_{t},t\in\mathcal{T}\}\) satisfy sequential rationality (Eq. (89)) and consistency holds because the beliefs \(\{\Pi_{t}^{\psi^{*}},t\in\mathcal{T}\}\) are always updated by Bayes rule._ ## 8 Conclusion We considered stochastic dynamic games where the underlying system is dynamic, the strategic agents' actions are hidden (not observable) and their information is asymmetric. We presented an approach for the computation of BNE strategy profiles that are based on a compressed version of the agents' information and can be determined sequentially in time moving backwards, if each step of this backward procedure has a solution. The approach highlights: (i) the importance of common information/common knowledge in identifying BNE strategy profiles that can be sequentially computed; (ii) the difference between common information that is sufficient for decision-making purposes in games and common information that is sufficient for decision-making purposes in teams. The difference is due to the fact that agents have an incentive to deviate from their predicted strategies in games whereas they don't have such an incentive in teams. As a consqence of this incentive, at each time instant each agent has his own view/belief of the game's status based on the common information, but all these different views/beliefs are common knowledge among all agents. As a result the CIB belief system is described by the sequence \(\Pi_{1:T}^{\psi}\) specified by Definition 2. Our investigation focused on determining SIB-BNE strategy profiles for the games under consideration. We note that the SIB-BNE strategy profiles determined by our methodology are also Perfect Bayesian Equilibrium (PBE) strategy profiles when the agents have no common observations (i.e., for the model of Section 7), but this is not true when the agents have common observations (the general model of Section 2). Determining PBE strategy profiles for the general model of Section 2 is an interesting problem worthy of investigation. ### Sufficient Information We compare conditions (i)-(iii) of Definition 1 to the conditions of Definition 2 in Tavafoghi et al (2022); for ease of readability, we include the definition from Tavafoghi et al (2022) below. **Definition 7** (Sufficient private information Tavafoghi et al (2022)).: _We say \(S^{i}_{t}=\zeta^{i}_{t}(P^{i}_{t},C_{t};\;g_{1:t-1})\), \(i\in\mathcal{N}\), \(t\in\mathcal{T}\), is sufficient private information for the agents if,_ 1. _it can be updated recursively as_ \[S^{i}_{t}=\phi^{i}_{t}(S^{i}_{t-1},H^{i}_{t}\backslash H^{i}_{t-1};\;g_{1:t-1}) \text{ for }t\in\mathcal{T}\backslash\{1\},\] (91) 2. _for any strategy profile_ \(g\) _and for all realizations_ \(\{c_{t},p_{t},p_{t+1},z_{t+1},a_{t}\}\in\mathcal{C}_{t}\times\mathcal{P}_{t} \times\mathcal{P}_{t+1}\times\mathcal{Z}_{t+1}\) _of positive probability,_ \[\mathbb{P}^{g_{1:t}}\left\{\!\!\left\{s_{t+1},z_{t+1}\,\right|\,p_{t},c_{t},a_{ t}\!\right\}\!\!=\!\mathbb{P}^{g_{1:t}}\left\{\!\!\left\{s_{t+1},z_{t+1}\,\right|\, s_{t},c_{t},a_{t}\!\right\},\] (92) _where_ \(s^{1:N}_{\tau}=\zeta^{1:N}_{\tau}(p^{1:N}_{\tau},c_{\tau};\;g_{1:\tau-1})\) _for_ \(\tau\in\mathcal{T}\)_;_ 3. 
_for every strategy profile_ \(\tilde{g}\) _of the form_ \(\tilde{g}\!:=\!\{\tilde{g}^{i}_{t}\!:\mathcal{S}^{i}_{t}\times\mathcal{C}_{t} \rightarrow\Delta(\mathcal{A}^{i}_{t}),i\!\in\!\mathcal{N},\!t\!\in\!\mathcal{ T}\}\) _and_ \(a_{t}\!\in\!\mathcal{A}_{t}\)_,_ \(t\!\in\!\mathcal{T}\)_;_ \[\mathbb{E}^{\tilde{g}_{1:t-1}}\left\{\!\!\left\{u^{i}_{t}(X_{t},A_{t})\,\right| \,c_{t},p^{i}_{t},a_{t}\!\right\}\!\!=\!\mathbb{E}^{\tilde{g}_{1:t-1}}\left\{\! \!\left\{u^{i}_{t}(X_{t},A_{t})\,\right|\,c_{t},s^{i}_{t},a_{t}\!\right\}\!\!,\] (93) _for all realizations_ \(\{c_{t},p^{i}_{t}\}\!\in\!\mathcal{C}_{t}\times\mathcal{P}^{i}_{t}\) _of positive probability where_ \(s^{1:N}_{\tau}\!=\!\zeta^{1:N}_{\tau}(p^{1:N}_{\tau},\!c_{\tau};\;\tilde{g}_{1: \tau-1})\) _for_ \(\tau\in\mathcal{T}\)_;_ 4. _given an arbitrary strategy profile_ \(\tilde{g}\) _of the form_ \(\tilde{g}\!:=\{\tilde{g}^{i}_{t}:\mathcal{S}^{i}_{t}\times\mathcal{C}_{t} \rightarrow\Delta(\mathcal{A}^{i}_{t}),i\!\in\!\mathcal{N},t\!\in\!\mathcal{T}\}\)_,_ \(i\!\in\!\mathcal{N}\)_, and_ \(t\!\in\!\mathcal{T}\)_,_ \(\!\in\!\mathcal{T}\)_,_ \(\!\)__ \[\mathbb{P}^{\tilde{g}_{1:t-1}}\left\{\!\!\left\{s^{-i}_{t}\,\right|\,p^{i}_{t},c _{t}\!\right\}\!\!=\!\mathbb{P}^{\tilde{g}_{1:t-1}}\left\{\!\!\left\{s^{-i}_{t }\,\right|\,s^{i}_{t},c_{t}\!\right\},\right.\] (94) _for all realizations_ \(\{c_{t},p^{i}_{t}\}\!\in\!\mathcal{C}_{t}\!\times\!\mathcal{P}^{i}_{t}\) _of positive probability where_ \(s^{1:N}_{\tau}\!=\!\zeta^{1:N}_{\tau}(p^{1:N}_{\tau},\!c_{\tau};\;\tilde{g}_{1: \tau-1})\) _for_ \(\tau\in\mathcal{T}\)_._ Condition (i) of Definition 1 appears in the definition of \(S^{i}_{t}\) in Definition 7, and condition (ii) of Definition 1 on recursive update is the same as condition (i) in Definition 7. Condition (iii) of Definition 1 directly leads to (iii) and (iv) of Definition 7; the utility \(u^{i}_{t}(X_{t},A_{t})\) in condition (iii) and the random variable \(s^{-i}_{t}\) in condition (\(iv\)) of Definition 7 are functions of \((x_{t},s_{t})\) whose distribution conditioned on \((p^{i}_{t},c_{t})\) is the same as conditioned on \((s^{i}_{t},c_{t})\) under condition (iii) of Definition 1. However, condition (ii) of Definition 7 may not hold for sufficient private information satisfying Definition 1. Consider the following example. Suppose \(X_{1}=Y^{1}_{1}\) XOR \(Y^{2}_{1}\), and \(Y^{1}_{1},Y^{2}_{1}\) takes values in \(\{0,1\}\) with equal probability. \(Z_{1}=\emptyset\) and \(Z_{2}=X_{1}\). Then \(S^{1}_{1}=S^{2}_{1}=\emptyset\) satisfies Definition 1 because \(\mathbb{P}(x_{1},s^{-i}_{1}\mid p^{i}_{1},c_{1})=\mathbb{P}(x_{1}\mid y^{i}_{1 })=0.5=\mathbb{P}(x_{1},s^{-i}_{1}\mid s^{i}_{1},c_{1})\). However, they don't satisfy condition (ii) of Definition 7 because \(\mathbb{P}(z_{2}\mid p_{1},c_{1},a_{1})=\mathbb{P}(x_{1}\mid y^{1}_{1},y^{2}_{1 })=\mathds{1}(x_{1}=y^{1}_{1}\) XOR \(y^{2}_{1})\neq\mathbb{P}(z_{2}\mid s_{1},c_{1},a_{1})=\mathbb{P}(x_{1})=0.5\). ### Proof of the generalized better reply secure property for the augmented stage-game We show that when \(c>24\) the augmented stage-game \(\hat{G}_{1}\) in Section 6 is generalized better reply secure. For that matter, we set \(\beta^{*}(q)=\mathds{1}(q\leq 1/3)\) and consider the following five cases. 1. \(r_{1}^{0}(\bar{\alpha},\bar{q})\neq 0\). In this case Bayes' rule doesn't hold at \((\bar{\alpha},\bar{q})\). 
We focus on agent \(0\) and select the belief to satisfy Bayes' rule as follows: \[\phi^{0}(\bar{\alpha},\bar{q})=(\tilde{\alpha}_{2}p+\tilde{\alpha}_{1}(1-p), \tilde{\alpha}_{2}(1-p)+\tilde{\alpha}_{1}p)\] (95) Then this \(\phi^{0}\) is a closed correspondence. From this construction of \(\phi^{0}\), we can pick \(\epsilon>0\) such that \[r_{1}^{0}(\tilde{\alpha},\phi^{0}(\tilde{\alpha},\tilde{q}))=0>r_{1}^{0}(\bar {\alpha},\bar{q})+\epsilon\] 2. \(r_{1}^{0}(\bar{\alpha},\bar{q})=0\), and \(\bar{\pi}_{-1}\neq 1/3\) and \(\bar{\pi}_{1}\neq 1/3\). Since \(\beta^{*}(q)=1\) if \(q<1/3\), \(\beta^{*}(q)=0\) if \(q>1/3\), \(\beta^{*}(\cdot)\) is continuous at points where \(q\neq 1/3\). Hence, we can find \(\epsilon>0\) s.t. \(\beta^{*}(\tilde{q}_{-1})=\beta^{*}(\bar{q}_{-1})\) for all \(\tilde{q}_{-1}\in(\bar{q}_{-1}-\epsilon,\bar{q}_{-1}+\epsilon)\), and \(\beta^{*}(\tilde{q}_{1})=\beta^{*}(\bar{q}_{1})\) for all \(\tilde{q}_{1}\in(\bar{q}_{1}-\epsilon,\bar{q}_{1}+\epsilon)\). In this region we have \[r_{1}^{A}(\alpha,\tilde{q})=r_{1}^{A}(\alpha,\bar{q})\] (96) for all \(\alpha\). Let \[\phi^{A}(\tilde{\alpha},\tilde{q})=\operatorname*{arg\,max}_{\alpha}\,r_{1}^{ A}(\alpha,\tilde{q})\] (97) Because \(r_{1}^{A}(\cdot)\) is continuous in the region under consideration, \(\phi^{A}(\cdot)\) has a closed graph from Berge's maximum theorem. Note that for all \(\tilde{q}_{1}\in(\bar{q}_{1}-\epsilon,\bar{q}_{1}+\epsilon)\), \(\tilde{q}_{1}\in(\bar{q}_{1}-\epsilon,\bar{q}_{1}+\epsilon)\) \[r_{1}^{A}(\phi^{A}(\tilde{\alpha},\tilde{q}),\tilde{q})=\max_{\alpha}r_{1}^{A} (\alpha,\tilde{q})=\max_{\alpha}r_{1}^{A}(\alpha,\bar{q})\] (98) If \(\max_{\alpha}r_{1}^{A}(\alpha,\bar{q})>r_{1}^{A}(\bar{\alpha},\bar{q})\) we can find \(\epsilon>0\) such that for \(\tilde{q}_{1}\in(\bar{q}_{1}-\epsilon,\bar{q}_{1}+\epsilon)\), \(\tilde{q}_{1}\in(\bar{q}_{1}-\epsilon,\bar{q}_{1}+\epsilon)\), \(r_{1}^{A}(\phi^{A}(\tilde{\alpha},\tilde{q}),\tilde{q})=\max_{\alpha}r_{1}^{A} (\alpha,\tilde{q})\geq r_{1}^{A}(\bar{\alpha},\bar{q})+\epsilon\). If \(\max_{\alpha}r_{1}^{A}(\alpha,\bar{q})=r_{1}^{A}(\bar{\alpha},\bar{q})\), then Alice has no profitable deviation. Furthermore, since \(r_{1}^{0}(\bar{\alpha},\bar{q})=0\), agent \(0\) has no profitable deviation. Consequently, \((\bar{\alpha},\bar{q})\) is an equilibrium if \(max_{\alpha}r_{1}^{A}(\alpha,\bar{q})=r_{1}^{A}(\bar{\alpha},\bar{q})\). 3. \(r_{1}^{0}(\bar{\alpha},\bar{q})=0\), \(\bar{\pi}_{-1}=1/3\) and \(\bar{\pi}_{1}\neq 1/3\). Note that \(\bar{q}_{-1}=0.8\bar{\alpha}_{1}+0.2\bar{\alpha}_{2}=1/3\) and \(\beta^{*}(\bar{q}_{-1})=1/3\). Since \(\bar{\pi}_{1}\neq 1/3\), we can find \(\epsilon>0\) s.t. \(\beta^{*}(\tilde{q}_{1})=\beta^{*}(\bar{q}_{1})\) for all \(\tilde{q}_{1}\in(\bar{q}_{1}-\epsilon,\bar{q}_{1}+\epsilon)\). Therefore, \[r_{1}^{A}(\bar{\alpha},\bar{q})=0.5c(1-\bar{\alpha}_{1}+\bar{\alpha}_{2})+0.5(2- \bar{\alpha}_{1}-\bar{\alpha}_{2})+0.5(3\bar{q}_{1}-1)\beta^{*}(\bar{q}_{1})\] Pick for Alice \[\phi^{A}(\tilde{\alpha},\tilde{q})=(0,1)\] (100) for all \(\tilde{\alpha}_{i}\in(\bar{\alpha}_{i}-\epsilon,\bar{\alpha}_{i}+\epsilon),i=1,2\), \(\tilde{q}_{i}\in(\bar{q}_{i}-\epsilon,\bar{q}_{i}+\epsilon),i=-1,1\). 
We get \[r_{1}^{A}(\phi^{A}(\tilde{\alpha},\tilde{q}),\tilde{q})= c+0.5+0.5(0.6-1)\beta^{*}(\tilde{q}_{-1})+0.5(2.4-1)\beta^{*}(\tilde{q}_{-1})\] \[= c+0.5-0.2\beta^{*}(\tilde{q}_{-1})+0.7\beta^{*}(\bar{q}_{-1})\] (101) and \[r_{1}^{A}(\phi^{A}(\tilde{\alpha},\tilde{q}),\tilde{q})-r_{1}^ {A}(\bar{\alpha},\bar{q})-\epsilon\] \[= 0.5c(1+\bar{\alpha}_{1}-\bar{\alpha}_{2})-0.5(1+\bar{\alpha}_{1 }+\bar{\alpha}_{2})\] \[-0.2\beta^{*}(\tilde{q}_{-1})+0.5(2.4-3\bar{q}_{1})\beta^{*}( \bar{q}_{-1})-\epsilon\] \[\geq 0.5c(1+\bar{\alpha}_{1}-\bar{\alpha}_{2})-0.5*3-0.2-0.5*0.6-\epsilon\] (102) When \(\bar{q}_{-1}=1/3\), then \(0.8\bar{\alpha}_{1}+0.2\bar{\alpha}_{2}=1/3\Rightarrow\bar{\alpha}_{1}=5/12- 3/12\bar{\alpha}_{2}\). Therefore, \[1+\bar{\alpha}_{1}-\bar{\alpha}_{2}=17/12-15/12\bar{\alpha}_{2}\geq 1/6\] (103) where the minimum is at \(\bar{\alpha}_{1}=1/6\) and \(\bar{\alpha}_{2}=1\). When \(c>24\), then \[0.5c(1+\bar{\alpha}_{1}-\bar{\alpha}_{2})\geq c/12>2\] (104) and \(r_{1}^{A}(\phi^{A}(\tilde{\alpha},\tilde{q}),\tilde{q})-r_{1}^{A}(\bar{\alpha },\bar{q})-\epsilon>0\). * \(r_{1}^{0}(\bar{\alpha},\bar{q})=0\), and \(\bar{\pi}_{1}=1/3\) and \(\bar{\pi}_{-1}\neq 1/3\). This case is similar to case (iii). Since \(\bar{\pi}_{-1}\neq 1/3\), we can find \(\epsilon>0\) s.t. \(\beta^{*}(\tilde{q}_{-1})=\beta^{*}(\bar{q}_{-1})\) for all \(\tilde{q}_{-1}\in(\bar{q}_{-1}-\epsilon,\bar{q}_{-1}+\epsilon)\). Furthermore, \[r_{1}^{A}(\bar{\alpha},\tilde{q})\] \[= 0.5c(1-\bar{\alpha}_{1}+\bar{\alpha}_{2})+0.5(2-\bar{\alpha}_{1} -\bar{\alpha}_{2})+0.5(3\bar{q}_{-1}-1)\beta^{*}(\bar{q}_{-1})\] Pick for Alice the closed correspondence (as in case (iii)) \[\phi^{A}(\tilde{\alpha},\tilde{q})=(0,1)\] (106) _Dynamic Games with Asymmetric Information and Hidden Actions_ \[\text{for all }\tilde{\alpha}_{i}\in(\bar{\alpha}_{i}-\epsilon,\bar{ \alpha}_{i}+\epsilon),i=1,2,\,\tilde{q}_{i}\in(\bar{q}_{i}-\epsilon,\bar{q}_{i}+ \epsilon),i=-1,1.\] Then \[r_{1}^{A}(\phi^{A}(\tilde{\alpha},\tilde{q}),\tilde{q})\] \[= c+0.5-0.2\beta^{*}(\bar{q}_{-1})+0.7\beta^{*}(\bar{q}_{-1})\] (107) and \[r_{1}^{A}(\phi^{A}(\tilde{\alpha},\tilde{q}),\tilde{q})-r_{1}^{A }(\bar{\alpha},\bar{q})-\epsilon\] \[= 0.5c(1+\bar{\alpha}_{1}-\bar{\alpha}_{2})-0.5(1+\bar{\alpha}_{1 }+\bar{\alpha}_{2})\] \[+0.5(0.6-3\bar{q}_{-1})\beta^{*}(\bar{q}_{-1})+0.7\beta^{*}(\bar{ q}_{-1})-\epsilon\] \[\geq 0.5c(1+\bar{\alpha}_{1}-\bar{\alpha}_{2})-0.5*3-0.5*2.4-\epsilon\] (108) When \(\bar{q}_{1}=1/3,\,0.2\bar{\alpha}_{1}+0.8\bar{\alpha}_{2}=1/3\Rightarrow \bar{\alpha}_{2}=5/12-3/12\bar{\alpha}_{1}.\) Therefore, \[1+\bar{\alpha}_{1}-\bar{\alpha}_{2}=7/12+15/12\bar{\alpha}_{1}\geq 7/12.\] (109) When \(c>24\), then \[0.5c(1+\bar{\alpha}_{1}-\bar{\alpha}_{2})\geq 7/24c>2.7\] (110) and \(r_{1}^{A}(\phi^{A}(\tilde{\alpha},\tilde{q}),\tilde{q})-r_{1}^{A}(\bar{ \alpha},\bar{q})-\epsilon>0.\) Case (v) \(r_{1}^{0}(\bar{\alpha},\bar{q})=0,\) and \(\bar{\pi}_{1}=1/3\) and \(\bar{\pi}_{-1}=1/3.\) We have \[r_{1}^{A}(\bar{\alpha},\bar{q})=0.5c(1-\bar{\alpha}_{1}+\bar{\alpha}_{2})+0.5 (2-\bar{\alpha}_{1}-\bar{\alpha}_{2})\] (111) Pick for Alice the closed correspondence (as in cases (iii) and (iv)) \[\phi^{A}(\tilde{\alpha},\tilde{q})=(0,1)\] (112) for all \(\tilde{\alpha}_{i}\in(\bar{\alpha}_{i}-\epsilon,\bar{\alpha}_{i}+\epsilon),i= 1,2,\,\tilde{q}_{i}\in(\bar{q}_{i}-\epsilon,\bar{q}_{i}+\epsilon),i=-1,1.\) Then \[r_{1}^{A}(\phi^{A}(\tilde{\alpha},\tilde{q}),\tilde{q})-r_{1}^{A }(\bar{\alpha},\bar{q})-\epsilon\] \[= 
0.5c(1+\bar{\alpha}_{1}-\bar{\alpha}_{2})-0.5(1+\bar{\alpha}_{1 }+\bar{\alpha}_{2})-0.2\beta^{*}(\tilde{q}_{-1})+0.7\beta^{*}(\tilde{q}_{-1})-\epsilon\] \[\geq 0.5c(1+\bar{\alpha}_{1}-\bar{\alpha}_{2})-0.5*3-0.2-\epsilon\] (113) Then we have \(r_{1}^{A}(\phi^{A}(\tilde{\alpha},\tilde{q}),\tilde{q})-r_{1}^{A}(\bar{ \alpha},\bar{q})-\epsilon>0\) following the steps in (iv).
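The better-reply-security argument above only yields existence of an equilibrium of the augmented stage-game for \(c>24\); it does not exhibit one. The following brute-force sketch (ours, not part of the paper) checks the fixed-point conditions (71)-(72) numerically, with \(q\) generated from \(\alpha\) by Bayes' rule (77)-(78); the helper functions repeat (65)-(66) so that the sketch is self-contained.

```python
import numpy as np

def r_minus(alpha1, q, c, p=0.2):            # Eq. (65)
    ind = (1 - p) * (q[0] <= 1/3) + p * (q[1] <= 1/3)
    return (1 + c) * (1 - alpha1) + (3 * alpha1 - 1) * ind

def r_plus(alpha2, q, c, p=0.2):             # Eq. (66)
    ind = (1 - p) * (q[1] <= 1/3) + p * (q[0] <= 1/3)
    return 1 + (c - 1) * alpha2 + (3 * alpha2 - 1) * ind

def is_fixed_point(alpha, c, p=0.2, tol=1e-9):
    """Check (71)-(72): with q induced from alpha by (77)-(78),
    alpha1 must maximize r_{-1}^A(., q) and alpha2 must maximize r_1^A(., q)."""
    a1, a2 = alpha
    q = (a2 * p + a1 * (1 - p), a2 * (1 - p) + a1 * p)   # Eqs. (77)-(78)
    grid = np.linspace(0.0, 1.0, 1001)
    best1 = max(r_minus(t, q, c, p) for t in grid)
    best2 = max(r_plus(t, q, c, p) for t in grid)
    ok = (r_minus(a1, q, c, p) >= best1 - tol and
          r_plus(a2, q, c, p) >= best2 - tol)
    return ok, q

# is_fixed_point((0.0, 1.0), c=25.0)
```

For instance, with \(c=25\) the pure strategy \((\alpha_{1},\alpha_{2})=(0,1)\), i.e. Alice always playing \(A_{1}^{Alice}=1\), appears to pass this check, with induced beliefs \(q=(0.2,0.8)\). This is only a spot check at one parameter value; the existence claim for all \(c>24\) rests on the argument above.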
2307.01360
Fuchsian differential equations of order 3,...,6 with three singular points and an accessory parameter II, Equations of order 3
Fuchsian differential equations of order 3 with three singular points and with an accessory parameter are studied. When the local exponents are generic, no shift operator is found; none is found for codimension-1 subfamilies either. We find shift operators for several codimension-2 subfamilies whose accessory parameter is assigned as a cubic polynomial in the local exponents. The Dotsenko-Fateev equation is one of them.
Yoshishige Haraoka, Hiroyuki Ochiai, Takeshi Sasaki, Masaaki Yoshida
2023-07-03T21:30:29Z
http://arxiv.org/abs/2307.01360v1
Fuchsian differential equations of order 3,...,6 with three singular points and an accessory parameter II, Equations of order 3 ###### Abstract Fuchsian differential equations of order 3 with three singular points and with an accessory parameter are studied. When local exponents are generic, no shift operator is found, for codimension-1 subfamilies, neither. We found shift operators for several codimension-2 subfamilies of which accessory parameter is assigned as a cubic polynomial in the local exponents. The Dotsenko-Fateev equation is one of them. ###### Contents * 1 Equations \(H_{3}\), \(E_{6}\) and \(E_{3}\) * 1.1 Equation \(H_{3}\) * 1.2 Equation \(E_{6}\) found in [5] * 1.3 Definition of \(E_{3}\) as a middle convolution of \(E_{6}\) * 1.4 Symmetries of \(E_{3}\) * 1.5 Self-adjoint \(E_{3}\) * 2 Equation \(\mbox{\sl{SE}}_{3}\) * 2.1 The Dotsenko-Fateev equation * 2.2 Definition of \(\mbox{\sl{SE}}_{3}\) * 2.3 Shift operators of \(\mbox{\sl{SE}}_{3}\) and S-values * 2.4 Shift operators of \(\mbox{\sl{SE}}_{3}\) * 2.5 Reducible cases of \(\mbox{\sl{SE}}_{3}\) * 3 Equation \(Z_{3}\) * 3.1 Definition of \(Z_{3}\) * 3.2 Shift operators of \(Z_{3}\) * 3.3 S-values and reducibility conditions of \(Z_{3}\) * 3.4 Reducible cases of \(Z_{3}\) * 4 Symmetry of the cubic polynomial \(A_{00}(e)\) * 5 Other four specializations: \(E_{3a},E_{3b},E_{3c},E_{3d}\) * 5.1 \(S_{8}\)-orbits of \(S\!E_{3}\) and \(Z_{3}\) * 5.2 Equation \(E_{3a}\) * 5.3 Equation \(E_{3b}\) * 5.4 Equation \(E_{3c}\) * 5.5 Equation \(E_{3d}\) * 6 Other two specializations: \(E_{3e},E_{3f}\) * 6.1 Equation \(E_{3e}:\{e_{3}=e_{1}-e_{2},e_{4}=-e_{2}\}\) * 6.2 Equation \(E_{3f}:\{e_{2}=-e_{1}-e_{3}-e_{5}+1,e_{4}=e_{3}-e_{5}+1\}\) * 7 Some generalities * 7.1 Shift operators and shift relations * 7.2 S-values and reducibility conditions * 7.3 Reducibility type and shift operators **Subjectclass**[2020]: Primary 34A30; Secondary 34M35, 33C05, 33C20, 34M03. **Keywords**: Fuchsian differential equation, shift operators, reducibility, factorization, middle convolution, rigidity and accessory parameters, symmetry, hypergeometric differential equation, Dotsenko-Fateev equation. ## Introduction In this paper we study shift operators for Fuchsian differential equations of order 3 with three singular points and one accessory parameter. In the previous paper [5], we started from the equation of order 3 as above and lifted this equation, via addition and middle convolution, to a differential equation of order 6 with an accessory parameter. While studying this equation, we find that if the accessory parameter is assigned as a cubic polynomial in the local exponents, this equation has nice properties: shift operators and several symmetries. We push down this equation, via addition and middle convolution, to a differential equation of order 3, of which accessory parameter is now assigned as a cubic polynomial of the local exponents. This equation, called \(E_{3}\), is the differential equation we study in this paper. Though we can not find any shift operator for \(E_{3}\) if the local exponents are generic, we find several codimension-2 conditions (we could not find any codimension-1 condition) on the six local exponents of \(E_{3}\) to define differential equations with four free local exponents admitting shift operators for four independent shifts. The Dotsenko-Fateev equation is one of them. This paper is organized as follows: Section 1 explains how \(E_{3}\) is derived from the equation of order 6 found in [5]. 
The shift operators of the Dotsenko-Fateev equation are found in Section 2. The shift operators of the equation related to the equation of order four found and studied in [2] are obtained in Section 3. Section 4 studies the symmetry of the cubic polynomial \(A_{00}(e)\) in the local exponents. Thanks to this symmetry, we find other four codimension-2 restrictions of \(E_{3}\) admitting four independent shift operators. They are presented in Section 5. There are codimension-2 restrictions of \(E_{3}\) admitting less than four independent shift operators. Two examples are presented in Section 6. In order to define and study equations, we need several tools of investigation. We extract some of them from [5] and put in the last section. We acknowledge that we used the software Maple, especially DEtools -package for multiplication and division of differential operators. Interested readers may refer to our list of data written in text files of Maple format 1 for the differential equations and the shift operators treated in this document. Footnote 1: [http://www.math.kobe-u.ac.jp/OpenXM/Math/FDEdata](http://www.math.kobe-u.ac.jp/OpenXM/Math/FDEdata) ## 1 Equations \(H_{3}\), \(E_{6}\) and \(E_{3}\) In this section, we introduce a Fuchsian differential equation \(E_{3}\) of order \(3\) with three singular points, and with the unique accessory parameter specified as a cubic polynomial of the local exponents. We first recall a very symmetric Fuchsian differential equation \(E_{6}\) of order \(6\) found in [5], and transform it via addition and middle convolution to get \(E_{3}\). ### Equation \(H_{3}\) Any Fuchsian differential equation of order \(3\) with the Riemann scheme \[R_{3}(\epsilon)=\left(\begin{array}{ccc}0&\epsilon_{1}&\epsilon_{2}\\ 0&\epsilon_{3}&\epsilon_{4}\\ \epsilon_{5}&\epsilon_{6}&\epsilon_{7}\end{array}\right)\quad\epsilon_{1}+ \cdots+\epsilon_{7}=3\] can be written in \((x,\partial)\)-form 2 as Footnote 2: See [5] for the definition of \((x,\partial)\)-form and \((x,\theta,\partial)\)-form of differential operators. \[H_{3}:a_{3}\partial^{3}+a_{2}\partial^{2}+a_{1}\partial+a_{0},\quad\partial= \frac{d}{dx},\] where \[\begin{array}{rl}a_{3}&=&x^{2}(x-1)^{2},\ \ a_{2}=x(x-1)(a_{21}x+a_{20}),\\ a_{1}&=&a_{12}x^{2}+a_{11}x+a_{10},\ \ a_{0}=a_{01}x+a_{00},\\ a_{21}&=&6-\epsilon_{1}-\epsilon_{2}-\epsilon_{3}-\epsilon_{4},\ \ a_{20}=-3+ \epsilon_{1}+\epsilon_{2},\\ a_{12}&=&(\epsilon_{5}+\epsilon_{6}+1)\epsilon_{7}+(\epsilon_{6}+1)( \epsilon_{5}+1),\\ a_{11}&=&-(\epsilon_{5}+\epsilon_{6})\epsilon_{7}+(-\epsilon_{1}\epsilon_{2} +\epsilon_{3}\epsilon_{4}-\epsilon_{5}\epsilon_{6}+2\epsilon_{1}+2\epsilon_{2 }-4),\\ a_{10}&=&(\epsilon_{1}-1)(\epsilon_{2}-1),\ a_{01}\ =\ \epsilon_{5}\epsilon_{6} \epsilon_{7},\end{array}\] and in \((x,\theta,\partial)\)-form as 3 Footnote 3: The composition of two differential operators \(P\) and \(Q\) is denoted by \(P\circ Q\), often abbreviated as \(PQ\). \[xS_{n}+S_{0}+S_{1}\circ\partial,\quad\theta=x\partial,\] where \[\begin{array}{rl}S_{n}&=&(\theta+\epsilon_{5})(\theta+\epsilon_{6})(\theta+ \epsilon_{7}),\\ S_{0}&=&-2\theta^{3}+(2\epsilon_{1}+2\epsilon_{2}+\epsilon_{3}+\epsilon_{4}-3 )\theta^{2}\\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +(-\epsilon_{1}\epsilon_{2}+(\epsilon_{3}-1)( \epsilon_{4}-1)-\epsilon_{5}\epsilon_{6}-\epsilon_{6}\epsilon_{7}-\epsilon_{7} \epsilon_{5})\theta+a_{00},\\ S_{1}&=&(\theta-\epsilon_{1}+1)(\theta-\epsilon_{2}+1).\end{array}\] The coefficient \(a_{00}\) does not affect the local exponents. 
In this sense one can call this coefficient the accessory parameter. ### Equation \(E_{6}\) found in [5] Any Fuchsian differential equation of order \(6\) with the Riemann scheme \[R_{6}:\left(\begin{array}{cccccc}x=0:&0&1&2&e_{1}&e_{2}&e_{3}\\ x=1:&0&1&2&e_{4}&e_{5}&e_{6}\\ x=\infty:&s&s+1&s+2&e_{7}&e_{8}&e_{9}\end{array}\right),\quad e_{1}+\cdots+e_{9 }+3s=6,\] and with spectral type \((3111,3111,3111)\) can be written in \((\theta,\partial)\)-form as \[H_{6}=T_{0}(\theta)+T_{1}(\theta)\partial+T_{2}(\theta)\partial^{2}+T_{3}( \theta)\partial^{3}, \tag{1.1}\] where \[T_{0} = (\theta+2+s)(\theta+1+s)(\theta+s)B_{0},\quad B_{0}=(\theta+e_{7 })(\theta+e_{8})(\theta+e_{9}), \tag{1.2}\] \[T_{1} = (\theta+2+s)(\theta+1+s)B_{1},\quad B_{1}=T_{13}\theta^{3}+T_{12 }\theta^{2}+T_{11}\theta+T_{10},\] (1.3) \[T_{2} = (\theta+2+s)B_{2},\quad B_{2}=T_{23}\theta^{3}+T_{22}\theta^{2}+ T_{21}\theta+T_{20},\] (1.4) \[T_{3} = (-\theta-3+e_{1})(-\theta-3+e_{2})(-\theta-3+e_{3}), \tag{1.5}\] and \[T_{13} = -3,\quad T_{23}=3,\quad T_{12}=-9+s_{11}-2s_{13},\quad T_{22}=18+ s_{13}-2s_{11},\] \[T_{11} = -8+(s_{11}^{2}+2s_{11}s_{13}-s_{12}^{2}+s_{13}^{2})/3+s_{11}-5s_{ 13}-s_{21}+s_{22}-2s_{23},\] \[T_{21} = 35+(-s_{11}^{2}-2s_{11}s_{13}+s_{12}^{2}-s_{13}^{2})/3-7s_{11}+5 s_{13}+2s_{21}-s_{22}+s_{23},\] \[T_{20} = -T_{10}+19+(s_{11}^{2}s_{13}-s_{11}s_{12}^{2}+s_{11}s_{13}^{2}-s_ {12}^{2}s_{13})/9+(s_{13}^{3}+s_{11}^{3}-2s_{12}^{3})/27\] \[+(-2s_{11}^{2}-4s_{11}s_{13}+s_{11}s_{22}+2s_{12}^{2}+s_{22}s_{12} -2s_{13}^{2}+s_{22}s_{13})/3\] \[-5s_{11}+4s_{13}+3s_{21}-2s_{22}-s_{31}-s_{32}-s_{33},\] except \(T_{10}\), which does not affect the local exponents. In this sense, one can call this coefficient the accessory parameter. Here \(s_{*}\) are symmetric polynomials of the exponents: \[s_{11} =e_{1}+e_{2}+e_{3},\quad s_{12}=e_{4}+e_{5}+e_{6},\quad s_{13}=e_ {7}+e_{8}+e_{9},\] \[s_{21} =e_{1}e_{2}+e_{1}e_{3}+e_{2}e_{3},\quad s_{22}=e_{4}e_{5}+e_{4}e_{6 }+e_{5}e_{6}, \tag{1.6}\] \[s_{23} =e_{7}e_{8}+e_{7}e_{9}+e_{8}e_{9},\quad s_{31}=e_{1}e_{2}e_{3}, \quad s_{32}=e_{4}e_{5}e_{6},\] \[s_{33} =e_{7}e_{8}e_{9},\quad s=-(s_{11}+s_{12}+s_{13}-6)/3.\] **Definition 1.1**.: The differential equation \(H_{6}\) with the following cubic polynomial \(S_{10}\) as the coefficient \(T_{10}\) is called \(E_{6}(e)\). 
\[54S_{10}:=2s_{11}^{3}+6s_{11}^{2}s_{13}-2s_{12}^{3}-6s_{12}^{2}s_ {13}+9s_{11}^{2}+18s_{11}s_{13}-9s_{11}s_{21}\] \[+18s_{11}s_{23}-9s_{12}^{2}+9s_{12}s_{22}+9s_{13}^{2}-18s_{13}s_ {21}+18s_{13}s_{22}+9s_{13}s_{23}\] \[+18s_{11}-126s_{13}-27s_{21}+27s_{22}-135s_{23}+27s_{31}-27s_{32}- 81s_{33}-135\] ### Definition of \(E_{3}\) as a middle convolution of \(E_{6}\) The \((\theta,\partial)\)-form of \(E_{6}\) suggests, thanks to the formulae \[(\theta+3)(\theta+2)(\theta+1)=\partial^{3}x^{3},\ (\theta+3)(\theta+2)\partial= \partial^{3}x^{2},\ (\theta+3)\partial^{2}=\partial^{3}x,\ \theta\partial= \partial(\theta-1),\] to modify the expression by replacing \(\theta\) by \(\theta-t\) (middle convolution with parameter \(t\)), where \[t:=s-1,\quad s=2-(e_{1}+\cdots+e_{9})/3,\] so that \(E_{6}(\theta=\theta-t)\) is divisible by \(\partial^{3}\) from the left and, if we write the quotient by \(mcE_{6}=x^{3}(x-1)^{3}\partial^{3}+\cdots\), then its Riemann scheme is \[\left(\begin{array}{cccc}e_{1}+t&e_{2}+t&e_{3}+t\\ e_{4}+t&e_{5}+t&e_{6}+t\\ e_{7}-t&e_{8}-t&e_{9}-t\end{array}\right).\] We next make an addition: \(mcE_{6}\circ x^{t+\epsilon_{3}}(x-1)^{t+e_{6}}\); the Riemann scheme changes into \[\left(\begin{array}{cccc}0&e_{1}-e_{3}&e_{2}-e_{3}\\ 0&e_{4}-e_{6}&e_{5}-e_{6}\\ e_{7}+e_{3}+e_{6}+t&e_{8}+e_{3}+e_{6}+t&e_{9}+e_{3}+e_{6}+t\end{array}\right).\] Introduce parameters \(\epsilon_{1},...,\epsilon_{7}\) by \[\begin{array}{cccc}e_{1}-e_{3}&=\epsilon_{1},&e_{2}-e_{3}&=\epsilon_{2},\\ e_{4}-e_{6}&=\epsilon_{3},&e_{5}-e_{6}&=\epsilon_{4},\\ e_{3}+e_{6}+e_{7}+t&=\epsilon_{5},&e_{3}+e_{6}+e_{8}+t&=\epsilon_{6},&e_{3}+e_ {6}+e_{9}+t&=\epsilon_{7},\end{array}\] where \(\epsilon_{1}+\cdots+\epsilon_{7}=3\). Then it is the equation \(H_{3}(\epsilon)\), with the accessory parameter \(a_{00}\) (the constant term of this equation) replaced by a cubic polynomial \(A_{00}(\epsilon)\), where \[\begin{array}{rl}54A_{00}(\epsilon)&=&-4(\epsilon_{1}+\epsilon_{2}-\epsilon_ {3}-\epsilon_{4})^{3}-27\epsilon_{5}\epsilon_{6}\epsilon_{7}\\ &&+9(\epsilon_{1}+\epsilon_{2}-\epsilon_{3}-\epsilon_{4})(\epsilon_{5}\epsilon _{6}+\epsilon_{5}\epsilon_{7}+\epsilon_{6}\epsilon_{7}-2)\\ &&+9\epsilon_{1}\epsilon_{2}(\epsilon_{1}+\epsilon_{2}-1)+18(\epsilon_{1}+ \epsilon_{2}-1)(\epsilon_{3}^{2}+\epsilon_{3}\epsilon_{4}+\epsilon_{4}^{2})\\ &&-9\epsilon_{3}\epsilon_{4}(\epsilon_{3}+\epsilon_{4}-1)-18(\epsilon_{3}+ \epsilon_{4}-1)(\epsilon_{1}^{2}+\epsilon_{1}\epsilon_{2}+\epsilon_{2}^{2}), \end{array} \tag{1.7}\] In this way, we get a middle convolution of \(E_{6}(e)\), which we call \(E_{3}(\epsilon)\). Changing the notation of the local exponents from \(\epsilon\) to \(e\), we have **Definition 1.2**.: \(E_{3}(e)\) is the equation with the Riemann scheme \(R_{3}(e)\) and with the accessory parameter \(a_{00}=A_{00}(e)\). Unlike the equation \(E_{6}(e)\), studied in [5], for the equation \(E_{3}(e)\), no shift operator is found if \(e=(e_{1},\ldots,e_{7})\) are generic, no codimension-1 reducibility condition is found to the authors. 
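The exponent bookkeeping in the derivation of \(E_{3}\) is easy to machine-check. The short SymPy sketch below (ours, not part of the paper) verifies that the \(\epsilon\)'s defined above from \(e_{1},\ldots,e_{9}\) and \(t=s-1\) satisfy the Fuchs relation \(\epsilon_{1}+\cdots+\epsilon_{7}=3\) required of the Riemann scheme \(R_{3}(\epsilon)\).

```python
import sympy as sp

e = sp.symbols('e1:10')                  # e1, ..., e9
s = 2 - sum(e) / 3                       # from e1 + ... + e9 + 3s = 6
t = s - 1                                # middle-convolution parameter

eps = [e[0] - e[2], e[1] - e[2],         # eps1, eps2
       e[3] - e[5], e[4] - e[5],         # eps3, eps4
       e[2] + e[5] + e[6] + t,           # eps5
       e[2] + e[5] + e[7] + t,           # eps6
       e[2] + e[5] + e[8] + t]           # eps7

assert sp.simplify(sum(eps) - 3) == 0    # Fuchs relation for R_3(eps)
```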
### Symmetries of \(E_{3}\) **Proposition 1.3**.: * _Adjoint symmetry: the adjoint_ \(E_{3}^{*}\) _of_ \(E_{3}=a_{3}\partial^{3}+a_{2}\partial^{2}+a_{1}\partial+a_{0}\) _is by definition_ \(E_{3}^{*}:=\partial^{3}\circ a_{3}-\partial^{2}\circ a_{2}+\partial\circ a_{1} -a_{0}.\) _This is equal to_ \[E_{3}^{*}=E_{3}(-e_{1},\ldots,-e_{4},2-e_{5},2-e_{6}).\] * \((x\to 1-x)\)_-symmetry:_ \[E_{3}(e_{1},\ldots,e_{6})|_{x\to 1-x}=E_{3}(e_{3},e_{4},e_{1},e_{2},e_{5},e_{6}),\] * \((x\to 1/x)\)_-symmetry:_ \[x^{-s}E_{3}(e_{1},\ldots,e_{6}))|_{x\to 1/x}\circ x^{s}=E_{3}(e_{5}-s,e_{6}-s,e_{3},e _{4},e_{1}+s,e_{2}+s),\] _where_ \(E_{3}|_{x\to 1-x}\) _and_ \(E_{3}|_{x\to 1/x}\) _are_ \(E_{3}\) _after the coordinate changes_ \(x\to 1-x\) _and_ \(x\to 1/x\)_, respectively._ ### Self-adjoint \(E_{3}\) By Proposition 1.3, the equation \(E_{3}\) is self-adjoint if and only if \[e_{1}=e_{2}=e_{3}=e_{4}=0,\quad e_{4}=e_{5}=1.\] Its Riemann scheme of is \[\left(\begin{array}{cccc}x=0&0&0&0\\ x=1&0&0&0\\ x=\infty&1&1&1\end{array}\right),\] and the accessory parameter is \(A=-1/2\). It is irreduciblw and is solved by the square of the hypergeometric function \(F(1/2,1/2,1;x)^{2}\). ## 2 Equation \(\mbox{\it S\!E}_{3}\) \begin{tabular}{r l} \hline **2.1** & **The Dotsenko-Fateev equation** \\ **2.2** & **Definition of \(\mbox{\it S\!E}_{3}\)** \\ **2.3** & **Shift operators of \(\mbox{\it S\!E}_{3}\) and S-values** \\ **2.4** & **Shift operators of \(\mbox{\it S\!E}_{3}\)** \\ **2.5** & **Reducible cases of \(\mbox{\it S\!E}_{3}\)** \\ & 2.5.1 & \(a\in\mathbb{Z}\) \\ & 2.5.2 & \(g\in 2\mathbb{Z}+1\) \\ \hline \end{tabular} In this section, we first recall the Dotsenko-Fateev equation, of order 3, and find that it is a codimension-2 specialization of \(E_{3}\). We find for this equation shift operators for four independent shifts, the S-values (see Definition 7.5), and the reducible cases. ### The Dotsenko-Fateev equation The Dotsenko-Fateev equation is originally found as a differential equation satisfied by functions in \(x\) defined by the integral \[\int\omega(x),\quad\omega(x):=\prod_{i=1,2}t_{i}^{a}(t_{i}-1)^{b}(t_{i}-x)^{c} \cdot(t_{1}-t_{2})^{g}\ dt_{1}\wedge dt_{2} \tag{2.1}\] Consider in the real \((t_{1},t_{2})\)-plane the arrangement of seven lines: \[\prod_{i=1,2}t_{i}(t_{i}-1)(t_{i}-x)\cdot(t_{1}-t_{2})=0.\] Since the number of bounded chambers cut out by this arrangement is 6, if the exponents of the integrand is generic, functions defined by the above integral would satisfy a differential equation of order 6. But since the integrand is invariant under the change \(t_{1}\leftrightarrow t_{2}\), and the number of bounded chambers modulo this change is 3, the functions defined by the above integral satisfy an equation of order 3, which is the Dotsenko-Fateev equation. 
The Dotsenko-Fateev operator ([1]) is an operator of order 3 and is defined as \[D\!F(a,b,c,g)=x^{2}(x-1)^{2}\partial^{3}+D\!F_{1}\partial^{2}+D\!F_{2}\partial +D\!F_{3}, \tag{2.2}\] where \[D\!F_{1} = -(x-1)x(3ax+3bx+6cx+2gx-3a-3c-g),\] \[D\!F_{2} = 2a^{2}x^{2}+4abx^{2}+12acx^{2}+3agx^{2}+2b^{2}x^{2}+12bcx^{2}+3bgx^ {2}\] \[+12c^{2}x^{2}+8cgx^{2}+g^{2}x^{2}-4a^{2}x-4abx-16acx-4agx+ax^{2}-8 bcx\] \[-2bgx+bx^{2}-12c^{2}x-8cgx+6cx^{2}-g^{2}x+gx^{2}+2a^{2}+4ac+ag\] \[-2ax+2c^{2}+cg-6cx-gx+a+c,\] \[D\!F_{3} = c(2a+2b+2c+g+2)(-(2a+2b+4c+2g+2)x+2a+2c+g+1).\] The accessory parameter, the constant term of \(D\!F_{3}\), is \[c(2a+2c+g+1)(2a+2b+2c+g+2)\] and the Riemann scheme is \[R_{D\!F}=\left(\begin{array}{cccc}x=0&0&a+c+1&2a+2c+g+2\\ x=1&0&b+c+1&2b+2c+g+2\\ x=\infty&-2c&-a-b-2c-g-1&-2a-2b-2c-g-2\end{array}\right). \tag{2.3}\] ### Definition of \(S\!\!E_{3}\) **Theorem 2.1**.: _The Dotsenko-Fateev operator \(D\!F\) is equal to \(E_{3}\) with the Riemann scheme \(R_{D\!F}\)._ Proof.: Substitute \[\begin{array}{llll}&e_{1}&=a+c+1,&e_{2}&=2a+2c+g+2,\\ \mbox{$e$to$}g:&e_{3}&=b+c+1,&e_{4}&=2b+2c+g+2,\\ &e_{5}&=-2c,&e_{6}&=-a-b-2c-g-1,\end{array}\] in the expression of \(A_{00}(e)\) given in (1.7) to find \(A_{00}(etoag)=c(2a+2c+g+1)(2a+2b+2c+g+2)\). **Definition 2.2**.: The equation \(S\!\!E_{3}\) is a specialization of the equation \(E_{3}\) characterized by the system of two equations \[2e_{1}-e_{2}=2e_{3}-e_{4}=-e_{5}+2e_{6}-e_{7}.\] When the exponents are parameterized by \(\{a,b,c,g\}\) as in \(R_{D\!F}\), \(S\!\!E_{3}\) coincides with the Dotsenko-Fateev equation \(D\!F(a,b,c,g)\). The constant term of \(S\!\!E_{3}\) is \(A_{00}(etoag)\). ### Shift operators of \(S\!\!E_{3}\) and S-values **Theorem 2.3**.: \(S\!\!E_{3}\) _admits shift operators for the shifts_ \[a\pm:a\to a\pm 1,\ b\pm:b\to b\pm 1,\ c\pm:c\to c\pm 1\quad\mbox{and}\quad g\pm:g \to g\pm 2.\] Table of the shift operators are in the next subsection. (cf SE3PQ) _Remark 2.4_.: For the shifts \(a-,b-,c-\) and \(g-\), the shift operators have denominators such as \(x,x-1,x(x-1)\). This is to be compared with that for the other equations. See \(P_{(1010)}\) in Proposition 3.3. Let \(P_{a+},P_{c\pm},P_{g\pm}\) be the shift operators for the shifts \(a\pm\), \(c\pm\), \(g\pm\), respectively. We normalize them as \[P_{a-} =(x-1)^{2}\partial^{2}+\cdots,\ \ \ P_{c-}=\partial^{2}+\cdots,\] \[P_{a+} =x^{2}(x-1)^{2}\partial^{2}+\cdots,\ P_{c+}=x^{2}(x-1)^{2} \partial^{2}+\cdots,\] \[P_{g+} =x^{2}(x-1)^{2}\partial^{2}+\cdots\] and \[P_{g-}=((2c+g)(a+b+g)x^{2}-(2c+g)(2b+g)x+(2b+g)(a+c+g))\partial^{2}+\cdots,\] and the same for the \(Q\)'s. Note that \(P_{g-}\) (as well as \(Q_{g-}\)) has a strange head. _Remark 2.5_.: The shifts \(g\to g\pm 1\) do not admit shift operators. Since the domains of integration (2.1) in the \((t_{1},t_{2})\)-plane is folded along the diagonal, \((t_{1}-t_{2})^{g}\) should be considered to be \(((t_{1}-t_{2})^{2})^{g/2}\). 
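The computation in the proof of Theorem 2.1 is a short check in a computer algebra system; the authors used Maple, and the sketch below (ours, in SymPy) reproduces it. It transcribes \(A_{00}\) from (1.7), substitutes the exponents of \(R_{D\!F}\) in (2.3), with the seventh exponent \(e_{7}=-2a-2b-2c-g-2\) read off as the remaining exponent at infinity, and confirms the factorization \(c(2a+2c+g+1)(2a+2b+2c+g+2)\).

```python
import sympy as sp

a, b, c, g = sp.symbols('a b c g')
e1, e2, e3, e4, e5, e6, e7 = sp.symbols('e1:8')

A00 = sp.Rational(1, 54) * (
    -4*(e1 + e2 - e3 - e4)**3 - 27*e5*e6*e7
    + 9*(e1 + e2 - e3 - e4)*(e5*e6 + e5*e7 + e6*e7 - 2)
    + 9*e1*e2*(e1 + e2 - 1) + 18*(e1 + e2 - 1)*(e3**2 + e3*e4 + e4**2)
    - 9*e3*e4*(e3 + e4 - 1) - 18*(e3 + e4 - 1)*(e1**2 + e1*e2 + e2**2))   # Eq. (1.7)

etoag = {e1: a + c + 1, e2: 2*a + 2*c + g + 2,        # exponents of R_DF, Eq. (2.3)
         e3: b + c + 1, e4: 2*b + 2*c + g + 2,
         e5: -2*c, e6: -a - b - 2*c - g - 1, e7: -2*a - 2*b - 2*c - g - 2}

assert sp.simplify(sum(etoag.values()) - 3) == 0      # the seven exponents sum to 3

target = c * (2*a + 2*c + g + 1) * (2*a + 2*b + 2*c + g + 2)
assert sp.simplify(A00.subs(etoag) - target) == 0     # A_{00}(etoag), as in Theorem 2.1
```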
**Proposition 2.6**.: _With the normalization above, we get the S-values:_ \[Sv_{a-} :=P_{a+}(a-1)\circ P_{a-}=a(2a+g)(a+1+g+b+c)(2a+2b+2c+2+g),\] \[Sv_{b-} :=P_{b+}(b-1)\circ P_{b-}=b(2b+g)(a+1+g+b+c)(2a+2b+2c+2+g),\] \[Sv_{c-} :=P_{c+}(c-1)\circ P_{c-}=c(2c+g)(a+1+b+c+g)(2a+2b+2c+2+g),\] \[Sv_{g-} :=P_{g+}(g-2)\circ P_{g-}=(g-1)(2a+g)(2b+g)(2c+g)(a+b+c+g+1)\] \[\times(a+b+c+g)(2a+2b+2c+g+2).\] _Here, for example, \(P_{a+}(a-1)\circ P_{a-}\) is an abbreviation of \(P_{a+}(a=a-1)\circ P_{a-}(a)\)._ ### Shift operators of \(S\!e_{3}\) For the shifts \(a+,c+,g+\), the operators are expressed in \((x,\theta)\)-form: \[P=x^{2}P_{nn}(\theta)+xP_{n}(x)+P_{0}(\theta),\] \[Q=x^{2}Q_{nn}(\theta)+xQ_{n}(x)+Q_{0}(\theta).\] For the shifts \(a-,c-,g-\), the operators are expressed in \((x,\partial)\)-form: \[P_{a-}: (x-1)^{2}\partial^{2}+(x-1)R_{1}/x\partial+R_{1}/x,\] \[P_{c-}: \partial^{2}+R_{1}/(x(x-1))\partial+R_{0}/(x(x-1)),\] \[P_{g-}: R_{2}\partial^{2}+R_{3}/(x(x-1))\partial+R_{2}/(x(x-1)),\] where \(R_{k}\) are used symbolically for polynomials of degree \(k\) in \(x\). Note that they have denominators. \[[a+] [e_{1}+1,e_{2}+2,e_{6}-1,e_{7}-2]\] \[\overline{P_{nn}} = (\theta+e_{5})(\theta+e_{6}),\] \[P_{n} = -2\theta^{2}+(4+4a+7c+2g+b)\theta-6ca-6c^{2}-2cb-3cg-6c,\] \[P_{0} = (\theta-e_{1})(\theta-e_{2}),\] \[Q_{nn} = (\theta+e_{5}+1)(\theta+e_{6}),\] \[Q_{n} = -2\theta^{2}+(4+4a+7c+2g+b)\theta-6ca-6c^{2}-2cb-3cg-6c+2+2a+g,\] \[Q_{0} = (\theta-e_{1})(\theta-e_{2}-1). \tag{2.4.1}\] \[[c+] [e_{1}+1,e_{2}+2,e_{3}+1,e_{4}+2,e_{5}-2,e_{6}-2,e_{7}-2]\] \[P_{nn} = \theta^{2}+(-5-3a-6c-2g-3b)\theta+10ca+10cb+3ag+7cg\] \[+8+10c^{2}+18c+2a^{2}+8a+2b^{2}+8b+g^{2}+6g+4ab+3bg,\] \[P_{n} = -2\theta^{2}+(8+6a+3b+9c+3g)\theta-6cb-14ca-8\] \[-7cg-2bg-4ab-4ag-18c-10c^{2}-12a-6g-4b-4a^{2}-g^{2},\] \[P_{0} = (\theta-e_{1})(\theta-e_{2}),\] \[Q_{nn} = P_{nn}[-1],\] \[Q_{n} = -2\theta^{2}+(3g+11+9c+3b+6a)\theta-6cb-14ca-14\] \[-7cg-2bg-4ab-4ag-24c-10c^{2}-16a-8g-6b-4a^{2}-g^{2},\] \[Q_{0} = (\theta-e_{1})(\theta-e_{2}-1). \tag{2.4.2}\] \[[g+] [e_{6}-2,e_{7}-2,e_{2}+2,e_{4}+2]\] \[P_{nn} = (k_{1}\theta+k_{0})(\theta+e_{5}),\] \[P_{n} = (6g+12+4b+4a+4c)\theta^{2}+(-6g^{2}-12ab-4b^{2}-16cb-24-8a^{2}-24cg\] \[-8bg-20ca-24g-46c-12c^{2}-16ag-32a-18b)\theta\] \[+8cb^{2}+28cb+36ca+12cbg+20acg+16abc+36c^{2}\] \[+8c^{3}+16bc^{2}+16c^{2}a+20c^{2}g+30cg+8ca^{2}+8cg^{2}+28c,\] \[P_{0} = (m_{1}(\theta-1)+m_{0})(\theta-e_{2}),\] \[Q_{nn} = (k_{1}(\theta-1)+k_{0})(\theta+e_{5}+1),\] \[Q_{n} = (6g+12+4b+4a+4c)\theta^{2}+(-6g^{2}-12ab-4b^{2}-16cb-24-8a^{2}-24cg\] \[-8bg-20ca-24g-46c-12c^{2}-16ag-32a-18b)\theta\] \[-20+8cb^{2}-28a+12cbg+26cg-8ab-12ag-4bg+8cg^{2}\] \[+28ca+20c-18g-8b-4g^{2}-8a^{2}+36c^{2}+28cb+16bc^{2}\] \[+20c^{2}g+16abc+20acg+8c^{3}+8ca^{2}+16c^{2}a,\] \[Q_{0} = (m_{1}\theta+m_{0})(\theta-e_{2}-1),\] \[k_{1}=-2a-2c-2b-3g-6,\quad k_{0}=4(a+b+c+g+2)^{2}+2c(g+1)-g-2,\] \[m_{1}=k_{1},\quad m_{0}=-2a-2c-8-g^{2}-2bg-4b-6g. \tag{2.4.3}\] \[[a-] [e_{1}-1,e_{2}-2,e_{6}+1,e_{7}+2]\] \[P_{an} = (x-1)^{2}\partial^{2}-(x-1)(3a+3b+4c+2g+2-(a+c)/x)\partial\] \[+(2a+2b+2c+g+2)(a+b+2c+g+1-c/x),\] \[Q_{an} = (x-1)^{2}\partial^{2}-(x-1)(1+2g+3b+4c+3a-(1+a+c)/x)\partial\] \[+(2a+2b+2c+g+1)(a+b+2c+g+1)\] \[-(c+1)(2a+2b+2c+g+2)/x+(a+c+1)/x^{2}. 
\tag{2.4.4}\]

\[[c-] [e_{1}-1,e_{2}-2,e_{3}-1,e_{4}-2,e_{5}+2,e_{6}+2,e_{7}+2]\] \[P_{c-} = \partial^{2}-(xa+xb+2xc-a-c)/x/(x-1)\,\partial +c(2a+2+2b+2c+g)/((x-1)x),\] \[Q_{c-} = \partial^{2}-(xb+2xc+2x+xa-a-1-c)/x/(x-1)\,\partial +((2+a+2c^{2}+2ca+4c+cg+b+2cb)x^{2}\] \[+(-2ca-2c^{2}-2a-4c-cg-2-2cb)x+c+1+a)/x^{2}/(x-1)^{2}. \tag{2.4.5}\]

\[[g-] [e_{6}+2,e_{7}+2,e_{2}-2,e_{4}-2]\] \[P_{g-}=cp_{2}\partial^{2}+cp_{1}/(x(x-1))\,\partial+cp_{0}/(x(x-1)),\] \[Q_{g-}=cq_{2}\partial^{2}+cq_{1}/(x(x-1))\,\partial+cq_{0}/(x(x-1))^{2}, \tag{2.4.6}\] where \[cp_{2}=(2c+g)(g+a+b)x^{2}-(g+2b)(2c+g)x+(g+2b)(g+a+c),\] \[cp_{1}=-(2c+g)(g+a+b)(3a+2g+4c+3b+2)x^{3}+\cdots;\] the remaining terms of \(cp_{1}\) and the coefficients \(cp_{0}\), \(cq_{2}\), \(cq_{1}\), \(cq_{0}\), which are longer polynomials in \(x\), are not reproduced here (cf. SE3PQ).

### Reducible cases of \(\mbox{\it S\!E}_{3}\)

#### 2.5.1 \(a\in\mathbb{Z}\)

Writing \(\mathit{DF}(-1)\) for \(\mathit{DF}(a=-1)\), \(P_{a-}(0)\) for \(P_{a-}(a=0)\), etc, we have shift relations \[(1)\ \mathit{DF}(-1)\circ P_{a-}(0)=Q_{a-}(0)\circ\mathit{DF}(0),\quad(2)\ \mathit{DF}(0)\circ P_{a+}(-1)=Q_{a+}(-1)\circ\mathit{DF}(-1),\] which factorize respectively as \[(1)\ [1,2]\ [1,1]=[1,1]\ 
[2,1],\qquad(2)\ [2,1]\ [2]=[2]\ [1,2].\] The equations and the shift operators factor as \[\mathit{DF}(-1)=Dn_{1}\circ Dn_{2}, P_{a-}(0)=P_{1}\circ P_{2},\] \[Q_{a-}(0)=Q_{1}\circ Q_{2},\ \mathit{DF}(0)=D0_{1}\circ D0_{2},\] where \[\begin{array}{rl}\mathit{Dn_{1}}&=(x-1)\partial-(2b+2c+g+1),\\ \mathit{Dn_{2}}&=x(\theta-2c)(\theta-b-2c-g)-(\theta-c)(\theta-2c-g),\\ P_{1}&=(x-1)\partial-(b+2c+g+1)+c/x,\quad P_{2}=Dn_{1}-1,\\ Q_{1}&=Dn_{1},\quad Q_{2}=P_{1}+1/x,\\ D0_{1}&=x(\theta-2c)(\theta-b-2c-g-1)-(\theta-c)(\theta-2c-g-1),\\ D0_{2}&=P_{2}.\end{array}\] When two operators \(R_{1}\) and \(R_{2}\) are related as \[R_{1}=x^{\mu_{0}}(x-1)^{\mu_{1}}\ R_{2}\circ x^{\nu_{0}}(x-1)^{\nu_{1}},\] we write \(R_{1}\sim R_{2}\). Since \[D0_{1}\sim Q_{a+}(-1),\quad D0_{2}\sim Dn_{1},\quad P_{a+}(-1)\sim Dn_{2},\] the relation (2) turns out to be a trivial identity. Since \[Dn_{1}\sim Q_{1},\quad P_{2}\sim D0_{2},\] cancelling them from both sides of (2), we get a relation of type \[[2]\ [1]=[1]\ [2],\] which is equivalent to a shift relation of the Gauss operator \(E_{2}\). The factorization above and Theorem 7.12 lead to **Proposition 2.8**.: _The reducible types of \(\mathit{DF}\) when \(a\in\mathbb{Z}\) are_ \[\begin{array}{ccccc}a=&-k&-2&-1&0&1&k\\ &[12]&[12]A0&[21]A0&[21]&[21]\end{array},\] _where \([12]\) means the equation factors as \(F_{1}\circ F_{2}\ (\mathrm{order}(F_{1})=1,\mathrm{order}(F_{2})=2\), and \(A0\) means the factors have no singularities out of \(\{0,1,\infty\}\)._ #### 2.5.2 \(g\in 2\mathbb{Z}+1\) Writing \(\mathit{DF}(-1)\) for \(\mathit{DF}(g=-1)\), \(P_{g-}(1)\) for \(P_{g-}(g=1)\), etc, we have shift relations \[(1)\ \mathit{DF}(-1)\circ P_{g-}(1)=Q_{g-}(1)\circ\mathit{DF}(1),\quad(2)\ \mathit{DF}(1)\circ P_{g+}(-1)=Q_{g+}(-1)\circ\mathit{DF}(-1),\] which factorize as \[(1)\ [1,2]\ [1,1]=[1,1]\ [2,1],\qquad(2)\ [2,1]\ [2]=[2]\ [1,2].\] The equations and the shift operators factor as \[\mathit{DF}(-1)=Dn_{1}\circ Dn_{2}, P_{g-}(1)=P_{1}\circ P_{2},\] \[Q_{g-}(1)=Q_{1}\circ Q_{2},\ \mathit{DF}(1)=D1_{1}\circ D1_{2},\] where \[Dn_{1} =x^{2}(x-1)^{2}\partial-x(x-1)(xa+xb+2xc-a-c-2x+1),\] \[Dn_{2} =\partial^{2}-2\frac{xa+xb+2xc-a-c}{x(x-1)}\partial+2c\frac{2a+1 +2b+2c}{x(x-1)},\] \[P_{1} =R\partial-\frac{\mathrm{Poly}_{3}}{x(x-1)},\quad P_{2}=\partial- \frac{xa+xb+2xc-a-c+2x-1}{x(x-1)},\] \[Q_{1} =R\partial-\frac{\mathrm{Poly}_{3}}{x(x-1)},\quad Q_{2}=\partial- \frac{\mathrm{Poly}_{3}}{x(x-1)R},\] \[D1_{1} =x^{2}(x-1)^{2}\partial^{2}-2x(x-1)(xa+xb+2xc-a-c)\partial\] \[+2((2c-1)(a+b+c+1)x^{2}+(-2ac-2bc-2c^{2}+2a-c+1)x-1-c-a),\] \[D1_{2} =\partial-\frac{xa+xb+2xc-a-c+2x-1}{x(x-1)},\] where \[R:=(2c+1)(a+b+1)x^{2}-(2c+1)(2b+1)x+(2b+1)(1+c+a),\] and \(\mathrm{Poly}_{3}\) stands symbolically for a polynomial of degree three of \(x\). Since \[D1_{1}\sim Q_{g+}(-1),\quad D1_{2}\sim Dn_{1},\quad P_{g+}(-1)\sim Dn_{2},\] the relation (2) turns out to be a trivial identity. 
We have \[P_{2}=D1_{2},\] but \(Dn_{1}\not\sim Q_{1}.\) Cancelling \(P_{2}\) and \(D1_{2}\) from both sides of (2), we get a mysterious relation of type \([1,2]\ [1]=[1,1]\ [2]:\) \[Dn_{1}\circ Dn_{2}\circ P_{1}=Q_{1}\circ Q_{2}\circ D1_{1}.\] **Proposition 2.9**.: _The reducible types of DF when \(g\in 2\mathbb{Z}+1\) are_ \[\begin{array}{cccccc}g=&-2k-1&-3&-1&1&3&2k+1\\ &[12]&[12]&[12]A0&[21]A0&[21]&[21],\end{array}\] ## 3 Equation \(Z_{3}\) \begin{tabular}{r l} **3.1** & **Definition of \(Z_{3}\)** \\ **3.2** & **Shift operators of \(Z_{3}\)** \\ **3.3** & **S-values and reducibility conditions of \(Z_{3}\)** \\ **3.4** & **Reducible cases of \(Z_{3}\)** \\ \end{tabular} In this section, we introduce another codimension-2 specialization of \(E_{3}\). We find for this equation shift operators for four independent shifts, S-values, and we study the reducible cases. ### Definition of \(Z_{3}\) The equation \(Z_{3}\) is defined as a specialization of \(E_{3}\) by the condition (a system of two equations) \[e_{1}+e_{2}+e_{5}=e_{3}+e_{4}+e_{5}=1.\] Parameterize the 4 (\(=6-2\)) exponents by \(A_{0},A_{1},A_{2},A_{3}\) as \[\left(\begin{array}{c}x=0:\quad 0\quad e_{1}\quad e_{2}\\ x=1:\quad 0\quad e_{3}\quad e_{4}\\ x=\infty:\quad e_{5}\quad e_{6}\quad e_{7}\end{array}\right)=\left(\begin{array} []{ccc}0&-A_{0}-A_{2}&A_{0}-A_{2}\\ 0&-A_{1}-A_{2}&A_{1}-A_{2}\\ 2A_{2}+1\;\;A_{3}+A_{2}+1\;\;-A_{3}+A_{2}+1\end{array}\right).\] _Remark 3.1_.: In [2], we encountered the Fuchsian equation \(Z(A)=Z(A_{0},\ldots,A_{3})\) of order 4. Define the equation \(Z_{4}(A,k)\) by \(\partial^{-k}\circ Z(A)\circ\partial^{k}\). Then \(Z_{4}(A,k=A_{2}+1/2)\) is reducible of type [1,3], and the second factor is our equation \(Z_{3}\). The equation \[Z_{3}=x^{2}(x-1)^{2}\partial^{3}+p_{2}\partial^{2}+p_{1}\partial+p_{0}\] is written as \[p_{2} = x(2x-1)(x-1)(3+2A_{2}),\] \[p_{1} = (5A_{2}^{2}-A_{3}^{2}+12A_{2}+7)x^{2}+(A_{0}^{2}-A_{1}^{2}-5A_{2} ^{2}+A_{3}^{2}-12A_{2}-7)x\] \[\quad-(A_{0}+1+A_{2})(A_{0}-1-A_{2}),\] \[p_{0} = (2A_{2}+1)(-A_{3}+A_{2}+1)(A_{3}+A_{2}+1)x\] \[\quad+(2A_{2}+1)(A_{0}^{2}-A_{1}^{2}-A_{2}^{2}+A_{3}^{2}-2A_{2}- 1)/2.\] The accessory parameter \(a_{00}\) defined in (1.7) is equal to the constant term of \(p_{0}\): \[a_{00}=(2A_{2}+1)(A_{0}^{2}-A_{1}^{2}-A_{2}^{2}+A_{3}^{2}-2A_{2}-1)/2.\] ### Shift operators of \(Z_{3}\) **Theorem 3.2**.: _The equation \(Z_{3}\) admits a shift operator for every even shift_ \[(A_{0},\ldots,A_{3})\rightarrow(A_{0}+n_{0},\ldots,A_{3}+n_{3})\quad n_{0}+ n_{1}+n_{2}+n_{3}\in 2\mathbb{Z}.\] The equation \(Z_{3}\) does not have the full symmetry relative to the parameters \(\{A_{0},\ldots,A_{3}\}\) that the equation \(Z_{4}\) has. Let \(P_{(n_{0},n_{1},n_{2},n_{3})}\) denote the shift operator for the even shift above. 
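Each such relation can be verified mechanically. As an illustration, the following sympy sketch checks the relation \(Z_{3}(A_{0}+1,A_{1}+1,A_{2},A_{3})\circ P_{(1100)}=Q_{(1100)}\circ Z_{3}(A)\) (in the sense of Definition 7.1) on a generic function, using the coefficients of \(Z_{3}\) given above and those of \((P_{(1100)},Q_{(1100)})\) listed in Proposition 3.3 below; if they are transcribed correctly, the difference expands to zero.

```python
import sympy as sp

x, A0, A1, A2, A3 = sp.symbols('x A0 A1 A2 A3')
f = sp.Function('f')

def Z3(B0, B1, B2, B3, y):
    """Apply Z_3(B0,B1,B2,B3) to the expression y (coefficients copied from Sec. 3.1)."""
    p2 = x*(2*x - 1)*(x - 1)*(3 + 2*B2)
    p1 = ((5*B2**2 - B3**2 + 12*B2 + 7)*x**2
          + (B0**2 - B1**2 - 5*B2**2 + B3**2 - 12*B2 - 7)*x
          - (B0 + 1 + B2)*(B0 - 1 - B2))
    p0 = ((2*B2 + 1)*(-B3 + B2 + 1)*(B3 + B2 + 1)*x
          + (2*B2 + 1)*(B0**2 - B1**2 - B2**2 + B3**2 - 2*B2 - 1)/2)
    return (x**2*(x - 1)**2*sp.diff(y, x, 3) + p2*sp.diff(y, x, 2)
            + p1*sp.diff(y, x) + p0*y)

# shift operators for sigma = (1100), i.e. (A0, A1) -> (A0 + 1, A1 + 1)
p2 = x*(x - 1)
p1 = -(A0 + A1 - 2*A2 - 2)*x + A0 - A2 - 1
p0 = (A0**2 + A1**2 + A2**2 - A3**2 + 1)/2 + A0*A1 - A2*(A0 + A1 - 1)
q1 = p1 - (2*x - 1)                    # p1 - q1 = 2x - 1
q0 = p0 - (2*A2 - A0 - A1)             # p0 - q0 = 2A2 - A0 - A1

P = lambda y: p2*sp.diff(y, x, 2) + p1*sp.diff(y, x) + p0*y
Q = lambda y: p2*sp.diff(y, x, 2) + q1*sp.diff(y, x) + q0*y

lhs = Z3(A0 + 1, A1 + 1, A2, A3, P(f(x)))   # Z_3(A_+) o P
rhs = Q(Z3(A0, A1, A2, A3, f(x)))           # Q o Z_3(A)
print(sp.expand(lhs - rhs))                 # expected: 0
```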
For \(j=0,1,3\), the shift operator for \(A_{j}\to A_{j}+1\) and that for \(A_{j}\to A_{j}-1\) are related by changing the sign of \(A_{j}\): \[P_{(\epsilon_{0}n_{0},\epsilon_{1}n_{1},n_{2},\epsilon_{3}n_{3})}=P_{(n_{0},n_{ 1},n_{2},n_{3})}(\epsilon_{0}A_{0},\epsilon_{1}A_{1},A_{2},\epsilon_{3}A_{3}), \quad n_{j}=\pm 1\ (j=0,1,3).\] Moreover, \[\begin{array}{rcl}P_{(0101)}(x,A)&=&-P_{(1001)}(x^{\prime},A^{\prime}),\\ P_{(01\bar{1}0)}(x,A)&=&P_{(10\bar{1}0)}(x^{\prime},A^{\prime}),\\ P_{(010)}(x,A)&=&-P_{(1010)}(x^{\prime},A^{\prime})\end{array}\] hold, where \((x,A)=(x,A_{0},A_{1},A_{2},A_{3})\), \((x^{\prime},A^{\prime})=(1-x,A_{1},A_{0},A_{2},A_{3})\), \(\bar{1}=-1\), and \((00\bar{1}1)\) stands for \((A_{0},A_{1},A_{2},A_{3})\to(A_{0},A_{1},A_{2}-1,A_{3}+1)\), etc. Up to this symmetry, we give shift operators for the six shifts \[(1100),\ (1001),\ (1010),\ (10\bar{1}0),\ (0011),\ (00\bar{1}1).\] **Proposition 3.3**.: _Let the shift operators \((P_{\sigma},Q_{\sigma})\), for a shift \(\sigma\), are given as_ \[P_{\sigma}=p_{2}\partial^{2}+p_{1}\partial+p_{0},\quad Q_{\sigma}=q_{2}\partial ^{2}+q_{1}\partial+q_{0},\quad p_{2}=q_{2}.\] _The coefficients_ \[\sigma:p_{2},\ p_{1},\ p_{0};\quad p_{1}-q_{1},\ p_{0}-q_{0}\] _are given as follows:_ \[\begin{array}{rl}(1100):&p_{2}=x(x-1),\\ &p_{1}=-(A_{0}+A_{1}-2A_{2}-2)x+A_{0}-A_{2}-1,\\ &p_{0}=(A_{0}^{2}+A_{1}^{2}+A_{2}^{2}-A_{3}^{2}+1)/2+A_{0}A_{1}-A_{2}(A_{0}+A _{1}-1),\\ &p_{1}-q_{1}=2x-1,\\ &p_{0}-q_{0}=2A_{2}-A_{0}-A_{1}.\\ \end{array}\] \[\begin{array}{rl}(1001):&p_{2}=x(x-1)^{2},\\ &p_{1}=(x-1)((3A_{2}+A_{3}+3)x+A_{0}-A_{2}-1),\\ &p_{0}=(2A_{2}+1)(A_{2}+A_{3}+1)x+(A_{0}^{2}-A_{1}^{2}-3A_{2}^{2}\\ &\quad+A_{3}^{2}+2A_{0}A_{2}-2A_{2}A_{3}+2A_{0}A_{3}+2A_{0}-4A_{2}-1)/2,\\ &p_{1}-q_{1}=-(x-1)(x+1),\\ &p_{0}-q_{0}=-(2A_{2}+1)x-A_{0}-A_{3}-1.\\ \end{array}\] \[\begin{array}{rl}(1010):&p_{2}=x-1,\\ &p_{1}=((-A_{0}^{2}+A_{1}^{2}+5A_{2}^{2}-A_{3}^{2}+4A_{0}A_{2}+4A_{0}+10A_{2}+ 5)x\\ &\quad-2(A_{0}+A_{2}+1)(A_{2}-A_{0}+1))/(2x(A_{0}+1+A_{2})),\\ &p_{0}=-((A_{0}^{2}-A_{1}^{2}-A_{2}^{2}+A_{3}^{2}-2A_{2}-1)(2A_{2}+1))/(2x(A_{0 }+1+A_{2})),\\ &p_{1}-q_{1}=(x-2)/x,\\ &p_{0}-q_{0}=(2A_{2}x+A_{0}-A_{2}+x+1)/x^{2}.\\ \end{array}\] \[\begin{array}{rl}(10\bar{1}0):&p_{2}=x^{2}(x-1)^{2},\\ &p_{1}=x(x-1)(2A_{2}x+3x-2),\\ &p_{0}=-(A_{2}+1+A_{3})(A_{3}-1-A_{2})x^{2}+(A_{0}^{2}-A_{1}^{2}+A_{2}^{2}+A_{3} ^{2}\\ &\quad-2A_{2}A_{0}+A_{0}-3A_{2}-1)x-(A_{2}-A_{0})(A_{2}-A_{0}-1),\\ &p_{1}-q_{1}=0,\\ &p_{0}-q_{0}=-2(x-1)(A_{0}+1-A_{2}).\end{array}\] \[(0011):\ p_{2}=x(A_{3}+A_{2}+1)(x-1),\] \[p_{1}=(A_{3}+3A_{2}+3)(A_{2}+1+A_{3})x\] \[\quad-(A_{0}^{2}-A_{1}^{2}+3A_{2}^{2}+A_{3}^{2}+4A_{2}A_{3}+6A_{2} +4A_{3}+3)/2,\] \[p_{0}=((2A_{2}+1)(A_{3}+A_{2}+1))/2;\] \[p_{1}-q_{1}=-(2x-1)(A_{3}+A_{2}+1),\] \[p_{0}-q_{0}=-(3A_{2})/2-A_{3}/2-3/2.\] \[(00\bar{1}1):\ p_{2}=x^{2}(x-1)^{2},\] \[p_{1}=x(2x-1)(x-1)(2A_{2}+1),\] \[p_{0}=-(A_{2}+1+A_{3})(A_{3}-3A_{2})x^{2}+(A_{0}^{2}-A_{1}^{2}-3 A_{2}^{2}+A_{3}^{2}\] \[\quad-2A_{2}A_{3}-3A_{2}+A_{3})x+(A_{2}-A_{0})(A_{2}+A_{0}),\] \[p_{1}-q_{1}=0,\] \[p_{0}-q_{0}=-2x(x-1)(A_{2}-A_{3}-1).\] Note that the shift operator for the shift \((1010)\) has poles. ### S-values and reducibility conditions of \(Z_{3}\) **Proposition 3.4**.: _For the shift operator \(P_{\sigma}\), write the S-value \(P_{-\sigma}(\sigma A)\circ P_{\sigma}\) mod \(Z_{3}\) by \(Sv_{\sigma}\). 
The S-values for the six shifts above are given as follows:_ \[Sv_{(1100)} =(A_{01\bar{2}\bar{3}}+1)(A_{01\bar{2}3}+1)(A_{012\bar{3}}+1)(A_{01 23}+1),\] \[Sv_{(1001)} =(A_{01\bar{2}3}+1)(A_{01\bar{2}3}+1)(A_{0123}+1)(A_{01\bar{2}3}+1),\] \[Sv_{(1010)} =(2A_{2}+1)(A_{0123}+1)(A_{012\bar{3}}+1)(A_{012\bar{3}}+1)(A_{01 2\bar{3}}+1)/(A_{0}+A_{2}+1),\] \[Sv_{(10\bar{1}0)} =(2A_{2}-1)(A_{01\bar{2}\bar{3}}+1)(A_{01\bar{2}\bar{3}}+1)(A_{0 1\bar{2}3}+1)(A_{01\bar{2}3}+1)/(A_{0}-A_{2}+1),\] \[Sv_{(0011)} =(2A_{2}+1)(A_{01\bar{2}\bar{3}}-1)(A_{01\bar{2}\bar{3}}-1)(A_{0 1\bar{2}3}+1)(A_{0123}+1)/(A_{3}+A_{2}+1),\] \[Sv_{(00\bar{1}1)} =(2A_{2}-1)(A_{01\bar{2}3}+1)(A_{012\bar{3}}-1)(A_{01\bar{2}3}+1) (A_{01\bar{2}\bar{3}}-1)/(A_{2}-A_{3}-1),\] _where \(A_{01\bar{2}\bar{3}}=A_{0}+A_{1}-A_{2}-A_{3}\), and so on._ **Theorem 3.5**.: _If one of_ \[2A_{2}+1,\quad\epsilon_{0}A_{0}+\epsilon_{1}A_{1}+\epsilon_{2}A_{2}+A_{3}+1 \quad(\epsilon_{0},\epsilon_{1},\epsilon_{2}=\pm 1)\] _is an even integer, then the equation \(Z_{3}\) is reducible._ ### Reducible cases of \(Z_{3}\) **Proposition 3.6**.: _When the equation \(Z_{3}\) is reducible, it factors as_ \[A_{2}= \cdots -3/2 -1/2 1/2 3/2 \cdots\] \[\cdots [21] [21]A0 [12]A0 [12] \cdots\] \[\epsilon_{0}A_{0}+\epsilon_{1}A_{1}+A_{2}+A_{3}= \cdots -3 -1 1 3 \cdots\] \[\cdots [12] [12]A0 [21]A0 [21] \cdots\] \[\epsilon_{0}A_{0}+\epsilon_{1}A_{1}-A_{2}+A_{3}= \cdots -3 -1 1 3 \cdots\] \[\cdots [21] [21]A0 [12]A0 [12] \cdots\] _Here \(\epsilon_{0},\epsilon_{1}=\pm 1\). When it appears no apparent singular point, the factor \([2]\) is equivalent to \(E_{2}\)._ We omit the proof. ## 4 Symmetry of the cubic polynomial \(A_{00}(e)\) In this section, we study the cubic polynomial \(A_{00}(e)\) given in (1.7) with \(\epsilon\) replaced by \(e\). It is invariant under permutations of exponents at \(0,1\) and at \(\infty\): \[\{e_{1},e_{2}\},\quad\{e_{3},e_{4}\},\quad\text{and}\quad\{e_{5},e_{6},e_{7}\};\] which we call _obvious symmetry_. We show that \(A_{00}\) is invariant under an action of the symmetry group of degree eight; much bigger symmetry than the obvious one. 
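The linear change of coordinates given in Theorem 4.1 below can be checked mechanically. The following sympy sketch verifies, from the formulas exactly as printed there, that the \(y_{i}\) sum to zero identically and that substituting the \(y_{i}(e)\) into the converse formulas returns \(6\,(e_{0},\ldots,e_{7})\); the overall factor \(6\) is harmless, since the coordinates are projective.

```python
import sympy as sp

e0, e1, e2, e3, e4, e5, e6 = sp.symbols('e0:7')
e7 = 3*e0 - (e1 + e2 + e3 + e4 + e5 + e6)        # Fuchs relation e1 + ... + e7 = 3 e0

# y-coordinates of Theorem 4.1 (1-based indexing, y[0] unused)
y = [0]*9
y[1] = -3*e0 - 2*e1 + 4*e2 + 2*e3 + 2*e4
y[2] = -3*e0 - 2*e2 + 4*e1 + 2*e3 + 2*e4
y[3] =  3*e0 + 2*e4 - 4*e3 - 2*e1 - 2*e2
y[4] =  3*e0 + 2*e3 - 4*e4 - 2*e1 - 2*e2
y[5] = -9*e0 + 4*e1 + 4*e2 + 2*e3 + 2*e4 + 6*e5
y[6] = -9*e0 + 4*e1 + 4*e2 + 2*e3 + 2*e4 + 6*e6
y[7] =  9*e0 - 4*e3 - 4*e4 - 2*e1 - 2*e2 - 6*e5 - 6*e6
y[8] =  9*e0 - 4*e1 - 4*e2 - 2*e3 - 2*e4
print(sp.simplify(sum(y[1:])))                    # 0: the y_i sum to zero identically

# converse substitution of Theorem 4.1
E = {
 'e1': -2*y[1] - y[2] - 3*y[3] - 3*y[4] - y[5] - y[6] - y[7],
 'e2': -2*y[2] - y[1] - 3*y[3] - 3*y[4] - y[5] - y[6] - y[7],
 'e3': -2*y[3] - y[4] - y[5] - y[6] - y[7],
 'e4': -2*y[4] - y[3] - y[5] - y[6] - y[7],
 'e5': -y[1] - y[2] - y[3] - y[4] - y[6] - y[7],
 'e6': -y[1] - y[2] - y[3] - y[4] - y[7] - y[5],
 'e7': -y[1] - y[2] - y[3] - y[4] - y[5] - y[6],
 'e0': -2*y[1] - 2*y[2] - 4*y[3] - 4*y[4] - 2*y[5] - 2*y[6] - 2*y[7],
}
target = {'e0': e0, 'e1': e1, 'e2': e2, 'e3': e3,
          'e4': e4, 'e5': e5, 'e6': e6, 'e7': e7}
print(all(sp.simplify(E[k] - 6*target[k]) == 0 for k in E))   # True
```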
The following theorem is suggested by Eiichi Sato, to whom the authors are grateful: **Theorem 4.1**.: _Put_ \[F_{8}:=y_{1}^{3}+\cdots+y_{8}^{3},\quad\text{where}\quad y_{1}+\cdots+y_{8}=0,\] _and define \(hA_{00}(e_{0},\ldots,e_{6})\), the homogenized \(A_{00}(e_{1},\ldots,e_{6})\), by_ \[hA_{00}:=e_{0}^{3}\cdot A_{00}(e_{1}/e_{0},\ldots,e_{6}/e_{0}).\] _Put_ \[\begin{array}{rcl}y_{1}&=&-3e_{0}-2e_{1}+4e_{2}+2e_{3}+2e_{4},\\ y_{2}&=&-3e_{0}-2e_{2}+4e_{1}+2e_{3}+2e_{4},\\ y_{3}&=&3e_{0}+2e_{4}-4e_{3}-2e_{1}-2e_{2},\\ y_{4}&=&3e_{0}+2e_{3}-4e_{4}-2e_{1}-2e_{2},\\ y_{5}&=&-9e_{0}+4e_{1}+4e_{2}+2e_{3}+2e_{4}+6e_{5},\\ y_{6}&=&-9e_{0}+4e_{1}+4e_{2}+2e_{3}+2e_{4}+6e_{6},\\ y_{7}&=&9e_{0}-4e_{3}-4e_{4}-2e_{1}-2e_{2}-6e_{5}-6e_{6},\\ y_{8}&=&9e_{0}-4e_{1}-4e_{2}-2e_{3}-2e_{4},\quad y_{1}+\cdots+y_{8}=0,\end{array}\] _then we have_ \[F_{8}(ytoe)=-2^{4}\cdot 3^{4}\cdot hA_{00}.\] _Conversely, put_ \[\begin{array}{rcl}e_{1}&=&-2y_{1}-y_{2}-3y_{3}-3y_{4}-y_{5}-y_{6}-y_{7},\\ e_{2}&=&-2y_{2}-y_{1}-3y_{3}-3y_{4}-y_{5}-y_{6}-y_{7},\\ e_{3}&=&-2y_{3}-y_{4}-y_{5}-y_{6}-y_{7},\\ e_{4}&=&-2y_{4}-y_{3}-y_{5}-y_{6}-y_{7},\\ e_{5}&=&-y_{1}-y_{2}-y_{3}-y_{4}-y_{6}-y_{7},\\ e_{6}&=&-y_{1}-y_{2}-y_{3}-y_{4}-y_{7}-y_{5},\\ e_{7}&=&-y_{1}-y_{2}-y_{3}-y_{4}-y_{5}-y_{6},\\ e_{0}&=&-2y_{1}-2y_{2}-4y_{3}-4y_{4}-2y_{5}-2y_{6}-2y_{7},\quad e_{1}+\cdots +e_{7}=3e_{0},\end{array}\] _then we have_ \[hA_{00}(etoy)=-F_{8}/6.\] _The obvious symmetry above is translated in terms of the \(y\)-coordinates as the permutations of_ \[\{y_{1},y_{2}\},\quad\{y_{3},y_{4}\},\quad\text{and}\quad\{y_{5},y_{6},y_{7}\}.\] Once the transformation was found then proof is just a computation. The last statement is checked as follows: If we substitute \(e_{6}\) by \(3-(e_{1}+\cdots+e_{6})\) (\(=e_{7}\)) in \(y_{6}=-9e_{0}+4e_{1}+4e_{2}+2e_{3}+2e_{4}+6e_{6}\), then it changes into \(y_{7}=9e_{0}-2e_{1}-2e_{2}-4e_{3}-4e_{4}-6e_{5}-6e_{6}\). \(F_{8}\) is invariant under the permutations of \(\{y_{1},\ldots,y_{8}\}\). Thus \(A_{00}\) admits the action of the symmetry group of degree eight, which we denote by \(S_{8}\). **Definition 4.2**.: Let \(X_{5}\) be the 5-dimensional subvariety defined by \[F_{8}=0,\quad y_{1}+\cdots+y_{8}=0\] in the seven dimensional projective space coordinatized by \(y_{1}:\cdots:y_{8}\). **Lemma 4.3**.: _The singular points of \(X_{5}\) are isolated; they are_ \[\varepsilon_{1}:\cdots:\varepsilon_{8},\quad\varepsilon_{i}=\pm 1,\quad \varepsilon_{1}+\cdots+\varepsilon_{8}=0.\] The 3-dimensional linear subspaces \[y_{a}+y_{b}=y_{c}+y_{d}=y_{e}+y_{f}=0,\quad\{a,\ldots,f\}\subset\{1,\ldots,8\}\] live in \(X_{5}\). _Remark 4.4_ (Private communication with E. Sato).: They are the maximal dimensional linear subspaces in \(X_{5}\). Conjecture: The maximal dimensional linear subspaces in \[X_{n}:y_{1}^{3}+\cdots+y_{n+3}^{3}=y_{1}+\cdots+y_{n+3}=0,\quad n:\text{odd}\] in \(\mathbf{P}^{n+2}\) is the \(S_{n+3}\)-orbit of the linear subspace \[y_{1}+y_{2}=\cdots=y_{n}+y_{n+1}=0.\] It is proved only when \(n=1,3,5,7\). The following is easy to show. 
**Proposition 4.5**.: _The four dimensional linear subspaces spanned by one of the 3-dimensional subspaces in \(X_{5}\) and one of the singular points are of two types_ \[y_{a}+y_{b}=y_{c}+y_{d}=0\quad\text{and}\quad y_{a}+y_{b}+y_{c}+y_{d}=y_{a}+y_ {b}+y_{e}+y_{f}=0.\] ### \(S_{8}\)-orbits of \(\mbox{\it S}\mbox{\it E}_{3}\) and \(Z_{3}\) In the previous subsections, we studied the two restrictions of the exponents \[e_{1}-e_{2}-e_{6}+1-e_{3}=e_{4}+e_{1}+e_{6}-1-e_{3}=0\quad\mbox{and}\quad e_{1}+e _{5}+e_{2}-1=e_{3}+e_{4}+e_{5}-1=0,\] defining \(\mbox{\it S}\mbox{\it E}_{3}\) and \(Z_{3}\). In terms of the \(y\)-coordinates, these restrictions turn out to be \[y_{1}+y_{4}=y_{3}+y_{6}=0\quad\mbox{and}\quad y_{1}+y_{2}+y_{3}+y_{4}=y_{3}+y_{ 4}+y_{6}+y_{7}=0.\] respectively. We consider restrictions of \(E_{3}(e)\) corresponding to the two types of the 4-dimensional subspaces above (the \(S_{8}\)-orbits of the two subspaces above), and look for equations admitting four independent shift operators \((P,Q)\) of type \[P,Q=\frac{R_{k-2}\cdot dx^{2}}{x^{m}(x-1)^{m}}+\frac{R_{k-1}\cdot dx}{x^{m+1}( x-1)^{m+1}}+\frac{R_{k}}{x^{m+2}(x-1)^{m+2}},\quad k=18,\ m=4,\] where \(R_{k}\) is used symbolically for a polynomial of degree \(k\) in \(x\). Up to the obvious symmetry, we find four codimension-2 restrictions \[E_{3a},\quad E_{3b},\quad E_{3c},\quad E_{3d}\] of \(E_{3}(e)\), defined by the following four subspaces \[E_{3a}:y_{1}+y_{4}=y_{2}+y_{3}=0,\] \[E_{3b}:y_{3}+y_{5}=y_{4}+y_{6}=0,\] \[E_{3c}:y_{1}+y_{5}+y_{2}+y_{8}=y_{1}+y_{5}+y_{3}+y_{4}=0,\] \[E_{3d}:y_{1}+y_{5}+y_{3}+y_{4}=y_{1}+y_{5}+y_{6}+y_{8}=0,\] in terms of the \(e\)-coordinates, they are \[E_{3a}:e_{3}=e_{1},\quad e_{4}=e_{2},\] \[E_{3b}:e_{2}=-e_{3}-e_{5}-2e_{6}+3-e_{1},\quad e_{4}=e_{3}-e_{5} +e_{6},\] \[E_{3c}:e_{2}=2e_{1}+e_{3}+e_{4},\quad e_{5}=1-e_{1}-e_{3}-e_{4},\] \[E_{3d}:e_{2}=-e_{3}/2-e_{4}/2+3/2-e_{1}-(3e_{6})/2,\quad e_{5}=e _{1}+e_{6}.\] ### Equation \(E_{3a}\) In this subsection we study a specialization \(E_{3a}=E_{3a}(e_{1},e_{2},e_{5},e_{6})\) of \(E_{3}\) with the condition \[e_{3}=e_{1},\ e_{4}=e_{2}.\] The accessory parameter takes the value \[A_{00}=-e_{5}e_{6}e_{7}.\qquad e_{7}=s=-(2e_{1}+2e_{2}+e_{5}+e_{6}-3).\] #### 5.2.1 Shift operators of \(E_{3a}\) **Theorem 5.1**.: _The equation \(E_{3a}\) admits a shift operator for every shift in \(e_{i}\to e_{i}\pm 1,\ e_{j}\to e_{j}\pm 2\quad(i=1,2,\ j=5,6)\)._ **Shift operators for \(e_{1}\to e_{1}\pm 1\):** The shift relation \(E_{3as}\circ P_{1p}=Q_{1p}\circ E_{3a},\ E_{3as}=E_{3a}(e_{1}=e_{1}+1)\) is solved by \[P_{1p} = x^{2}(x-1)^{2}\partial^{2}+1/2x(2x-1)(x-1)(e_{5}+e_{6}+1)\partial+e _{5}e_{6}x(x-1)\] \[-e_{1}(e_{5}+e_{6}+2e_{1}-1)/2,\] \[Q_{1p} = x^{2}(x-1)^{2}\partial^{2}+1/2x(2x-1)(x-1)(e_{5}+e_{6}+3)\partial\] \[+(e_{5}+1)(e_{6}+1)x(x-1)-e_{1}(e_{5}+e_{6}+2e_{1}+1)/2,\] and the shift relation \(E_{3as}\circ P_{1n}=Q_{1n}\circ E_{3a},\ E_{3as}=E_{3a}(e_{1}=e_{1}-1)\) by \[P_{1n} = x(x-1)\partial^{2}-(2x-1)(e_{2}-1)\partial-(2e_{2}+e_{7}-1)e_{7},\] \[Q_{1n} = x(x-1)\partial^{2}-(2x-1)(e_{2}-1)\partial-(2e_{2}+e_{7})(e_{7}+1).\] **Shift operators for \(e_{5}\to e_{5}\pm 2\):** The shift relation \(E_{3as}\circ P_{5pp}=Q_{5pp}\circ E_{3a},\ E_{3as}=E_{3a}(e_{5}=e_{5}+2)\) for the shift \(e_{5}\to e_{5}+2\) is solved by \[P_{5pp} = x^{2}(x-1)^{2}\partial^{2}+(1/2)x(2x-1)(x-1)(1+e_{5}+e_{6})\partial\] \[+e_{5}e_{6}x(x-1)+(1/4)e_{5}(2-e_{7}-2e_{1})(2-e_{7}-2e_{2})/(2-e _{7}+e_{5}),\] \[Q_{5pp} = 
x^{2}(x-1)^{2}\partial^{2}+(1/2)x(2x-1)(x-1)(e_{5}+e_{6}+5)\partial\] \[+(e_{5}+3)(e_{6}+1)x(x-1)+(1/16)\left\{(e_{5}+2)e_{7}^{2}+2(e_{5}+ 2)(e_{1}+e_{2}-3)e_{7}\right.\] \[\left.+4(4+e_{1}e_{2}e_{5}+3e_{5}-2(e_{1}+e_{2})(e_{5}+1))\right\} /(2-e_{7}+e_{5}),\] and the shift relation \(E_{3as}\circ P_{5nn}=Q_{5nn}\circ E_{3a},\ E_{3as}=E_{3a}(e_{5}=e_{5}-2)\) for the shift \(e_{5}\to e_{5}-2\) by \[P_{5nn} = x^{2}(x-1)^{2}\partial^{2}-1/2x(2x-1)(x-1)(-4+2e_{1}+2e_{2}+e_{5 })\partial\] \[+e_{7}e_{6}x(x-1)+(1/4)e_{7}(2e_{2}+e_{5}-2)(2e_{1}+e_{5}-2)/(2+e _{7}-e_{5}),\] \[Q_{5nn} = x^{2}(x-1)^{2}\partial^{2}-1/2x(2x-1)(x-1)(-4+2e_{1}+2e_{2}+e_{5 })\partial\] \[+(e_{7}+3)(e_{6}+1)x(x-1)-(1/4)\left\{(2(4-e_{5})(e_{1}+e_{2})-e_ {5}^{2}+6e_{5}\right.\] \[\left.-4e_{1}e_{2}-12)e_{7}+2(e_{5}-2)(e_{7}+e_{6}+1)\right\}/(2+ e_{7}-e_{5}),\] #### 5.2.2 S-values and reducibility conditions of \(E_{3a}\) **Proposition 5.2**.: _The S-values for the shifts above:_ \[Sv_{1p} = P_{1n}(e_{1}=e_{1}+1)\circ P_{1p}\] \[= -(2e_{1}+e_{6}-2)(2e_{1}+e_{5}-2)(2e_{1}+e_{5}+e_{6}-3)e_{7}/4,\] \[Sv_{5pp} = P_{5nn}(e_{5}=e_{5}+2)\circ P_{5pp}\] \[= e_{5}(2e_{1}+e_{5})(2e_{2}+e_{5})(2-e_{7})(2e_{1}+e_{7}-2)(2e_{2 }+e_{7}-2)\] \[/(16(2-e_{7}+e_{5})^{2}).\] Hence, we have the following theorem. **Theorem 5.3**.: \(E_{3a}\) _is reducible if one of_ \[e_{j},\quad 2e_{i}+e_{j},\quad 2e_{i}+e_{56}+1,\quad 2e_{12}+e_{56}+1\quad(i=1,2, \ j=5,6)\] \((e_{12}=e_{1}+e_{2},\ e_{56}=e_{5}+e_{6})\) _is an even integer._ #### 5.2.3 Reducible cases of \(E_{3a}\) **Proposition 5.4**.: _When the equation is reducible, it factors as_ \[e_{5},\ 2e_{1}+e_{5}= \cdots -2\ \ \ 0\ \ \ \ \ \ 2\ \ \ \ 4\ \cdots\] \[\cdots \ [21]\ \ [21]A0\ \ [12]A0\ \ [12]\ \cdots\] \[2e_{1}+e_{56}+1,\ 2e_{12}+e_{56}+1= \cdots -2\ \ \ 0\ \ \ \ \ \ 2\ \ \ 4\ \cdots\] \[\cdots \ [12]\ \ [12]A0\ \ [21]A0\ \ [21]\ \cdots\] _When apparent singular point does not appear, the factor \([2]\) is equivalent to \(E_{2}\)._ We omit the proof. For \(E_{3b}\),..., when the equation is reducible, it factors of type \(\{1,2\}\). We do not give details. ### Equation \(E_{3b}\) In this subsection we study a specialization \(E_{3b}=E_{3b}(e_{1},e_{3},e_{5},e_{6})\) of \(E_{3}\) with the condition \[e_{2}=-e_{3}-e_{5}-2e_{6}+3-e_{1},\ \ \ e_{4}=e_{3}-e_{5}+e_{6}.\] The accessory parameter takes the value \[A_{00}=-((e_{3}-e_{5})(e_{1}-1)(e_{1}+e_{3}+e_{5}+2e_{6}-2))/2.\] #### 5.3.1 Shift operators of \(E_{3b}\) **Theorem 5.5**.: _The equation \(E_{3b}\) admits a shift operator for every shift \(e_{i}\to e_{i}\pm 2,\ e_{6}\to e_{6}\pm 1\ \ \ (i=1,3,5)\)._ Since the operators \(P\) and \(Q\) are very long, we give only the coefficients of \(\partial^{2}\), and the denominators of the other coefficients. For full expression, see \(E3bPQ.txt\) in _FDEdata_ mentioned in the end of Introduction. **Shift operators \(P_{1pp},Q_{1pp},P_{1nn},Q_{1nn}\) for \(e_{1}\to e_{1}\pm 2\):** \[P_{1pp},\ P_{1nn}:(x-1)^{2}\partial^{2}+\frac{R_{1}(x-1)}{x} \partial+\frac{R_{1}}{x},\] \[Q_{1pp},\ Q_{1nn}:(x-1)^{2}\partial^{2}+\frac{R_{1}(x-1)}{x} \partial+\frac{R_{2}}{x^{2}},\] where \(R_{k}\) is used symbolically for a polynomial of degree \(k\) in \(x\). 
**Shift operators \(P_{3pp},Q_{3pp},P_{3nn},Q_{3nn}\) for \(e_{3}\to e_{3}\pm 2\):** \[P_{3pp}:(x-1)^{2}R_{2}\partial^{2}+\frac{(x-1)R_{3}}{x}\partial+ \frac{R_{3}}{x},\] \[Q_{3pp}:(x-1)^{2}R_{2}^{2}\partial^{2}+\frac{(x-1)R_{3}}{x} \partial+\frac{R_{4}}{x^{2}},\] \[P_{3nn}:x^{2}\partial^{2}+\frac{xR_{1}}{x-1}+\frac{R_{1}}{x-1},\] \[Q_{3nn}:x^{2}\partial^{2}+\frac{xR_{1}}{x-1}+\frac{R_{2}}{(x-1) ^{2}},\] where \(R_{k}\) is used symbolically for a polynomial of degree \(k\) in \(x\). **Shift operators \(P_{5pp},Q_{5pp},P_{5nn},Q_{5nn}\) for \(e_{5}\to e_{5}\pm 2\):** \[P_{5pp}:R_{2}\partial^{2}+\frac{R_{3}}{(x-1)x}\partial+\frac{R_{2} }{(x-1)x},\] \[Q_{5pp}:R_{2}\partial^{2}+\frac{R_{3}}{(x-1)x}\partial+\frac{R_{4 }}{(x-1)^{2}x^{2}},\] \[P_{5nn},\ Q_{5nn}:x^{2}(x-1)^{2}\partial^{2}+x(x-1)R_{1}\partial +R_{2},\] where \(R_{k}\) is used symbolically for a polynomial of degree \(k\) in \(x\). **Shift operators \(P_{6p},Q_{6p},P_{6n},Q_{6n}\) for \(e_{6}\to e_{6}\pm 1\):** \[P_{6p} =(x-1)^{2}\partial^{2}-1/2(x-1)(e_{1}x+2e_{3}x-2e_{5}x-2e_{1}-3x+2 )/x\partial\] \[-1/2(e_{1}e_{6}x+2e_{3}e_{6}x-2e_{5}e_{6}x+2e_{6}^{2}x+e_{1}e_{3}- e_{1}e_{5}-e_{6}x-e_{3}+e_{5})/x,\] \[Q_{6p} =(x-1)^{2}\partial^{2}-1/2(x-1)(e_{1}x+2e_{3}x-2e_{5}x-2e_{1}-x-2 )/x\partial\] \[-1/2(e_{1}e_{6}x^{2}+2e_{3}e_{6}x^{2}-2e_{5}e_{6}x^{2}+2e_{6}^{2}x ^{2}+e_{1}e_{3}x-e_{1}e_{5}x+e_{6}x^{2}+2e_{1}x\] \[+e_{3}x-e_{5}x-2e_{1}+2x-2)/x^{2},\] \[P_{6n} =(x-1)x^{2}\partial^{2}-(e_{3}x-2e_{5}x+2e_{5}-x)x\partial-e_{3}e _{5}x+e_{5}^{2}x+e_{1}^{2}\] \[+2e_{1}e_{3}+4e_{1}e_{6}+e_{3}^{2}+4e_{3}e_{6}-e_{5}^{2}+4e_{6}^{ 2}-5e_{1}-5e_{3}+e_{5}-10e_{6}+6,\] \[Q_{6n} =(x-1)x^{2}\partial^{2}-(e_{3}x-2e_{5}x+2e_{5}-x)x\partial-e_{3}e _{5}x+e_{5}^{2}x+e_{1}^{2}\] \[+2e_{1}e_{3}+4e_{1}e_{6}+e_{3}^{2}+4e_{3}e_{6}-e_{5}^{2}+4e_{6}^{ 2}-7e_{1}-7e_{3}+e_{5}-14e_{6}+12.\] #### 5.3.2 S-values and reducibility conditions of \(E_{3b}\) **Proposition 5.6**.: _The numerators of the S-values for the shifts above:_ \[Sv_{1pp} = (-1+e_{1}+2e_{6})(e_{1}+2e_{5}-1)(e_{1}-1+2e_{3}+2e_{6})(e_{1}+e_ {5}+e_{3})(-e_{3}+e_{1}+e_{5})\] \[\times(e_{1}+e_{3}+2e_{6}-e_{5}),\] \[Sv_{3pp} = (e_{3}+e_{5})(e_{3}+2-e_{5})(e_{3}+2e_{6}-e_{5})(e_{1}+1+2e_{3}+2 e_{6})(e_{1}-1+2e_{3}+2e_{6})\] \[\times(e_{1}+e_{5}+e_{3})(e_{1}-2+e_{5}-e_{3})(e_{1}+e_{3}+2e_{6}- e_{5}),\] \[Sv_{5pp} = (e_{3}+e_{5})(e_{3}-e_{5}-2+2e_{6})(e_{1}+2e_{5}+1)(e_{1}+2e_{5}-1 )(e_{1}+e_{5}+e_{3})\] \[\times(-e_{3}+e_{1}+e_{5})(e_{1}+e_{3}-e_{5}-2+2e_{6})(e_{3}-e_{5 }),\] \[Sv_{6p} = (e_{3}+2e_{6}-e_{5})(-1+e_{1}+2e_{6})(e_{1}-1+2e_{3}+2e_{6})(e_{1} +e_{3}+2e_{6}-e_{5}).\] Note that the factors of the numerators of the S-values above are \(\mathbb{Z}\)-linear forms in \[1,\quad e_{1},\quad e_{3},\quad e_{5},\quad 2e_{6}.\] Hence, we have the following theorem. **Theorem 5.7**.: _Let \(\varphi(e_{1},e_{3},e_{5},e_{6})\) be one of the factors of the numerators of the S-values above. If \(\varphi\) is an even integer, then \(E_{3b}\) is reducible._ #### 5.3.3 Detour Expressions of the shift operators for the shifts \(e_{i}\to e_{i}\pm 2\)\((i=1,3,5)\) are fairly long. But those for the shifts \((e_{3},e_{5})\to(e_{3}\pm 1,e_{5}\pm 1),(e_{1},e_{6})\to(e_{1}\pm 2,e_{6}\mp 1)\) have relatively short expressions (see \(E3cPQ.txt\) in the list of \(\mathit{FDEdata}\)). They together with the shift operators for \(e_{6}\to e_{6}\pm 1\) generate the shifts \(e_{i}\to e_{i}\pm 2\)\((i=1,3,5)\). 
For example, \[\begin{array}{ll}P_{3p5p}&=(x-1)^{2}\partial^{2}-1/2(x-1)(e_{1}x+2e_{3}x-2e_{ 5}x-2e_{1}-3x+2)/x\partial\\ &-1/2(e_{1}e_{5}x+2e_{3}e_{5}x+e_{1}e_{3}-e_{1}e_{5}-e_{5}x-e_{3}+e_{5})/x,\\ P_{3n5n}&=(x-1)x^{2}\partial^{2}-(e_{3}x-e_{5}x-e_{6}x+2e_{6}-x)x\partial-e_{3} e_{6}x+e_{5}e_{6}x+e_{1}^{2}+2e_{1}e_{3}\\ &+2e_{1}e_{5}+2e_{1}e_{6}+e_{3}^{2}+2e_{3}e_{5}+2e_{3}e_{6}+e_{5}^{2}+2e_{5}e_{6 }-5e_{1}-5e_{3}-5e_{5}-4e_{6}+6,\\ P_{1pp6n}&=(x-1)x^{2}\partial^{2}-(e_{3}x-2e_{5}x+2e_{5}-x)x\partial-e_{3}e_{5} x+e_{5}^{2}x+e_{1}^{2}+2e_{1}e_{5}-e_{1},\\ \\ Sv3p5p&=(e_{3}+e_{5})(e_{1}+2e_{5}-1)(e_{1}+e_{3}+e_{5})(e_{1}+2e_{3}+2e_{6}-1),\\ Sv3p5n&=(e_{3}+2-e_{5})(e_{3}-e_{5}+2e_{6})(e_{1}+2e_{5}-3)(e_{1}+2e_{3}+2e_{6 }-1)\\ &\times(e_{1}-e_{3}+e_{5}-2)(e_{1}+e_{3}-e_{5}+2e_{6})/(e_{3}-e_{5}+e_{6}+1)^{ 2},\\ Sv1pp6n&=(e_{1}+2e_{5}-1)(e_{1}+e_{3}+e_{5})(e_{1}-e_{3}+e_{5})(e_{3}-e_{5}+2e _{6}-2).\end{array}\] ### Equation \(E_{3c}\) In this subsection we study a specialization \(E_{3c}=E_{3c}(e_{1},e_{3},e_{4},e_{6})\) of \(E_{3}\) with the condition \[e_{2}=2e_{1}+e_{3}+e_{4},\quad e_{5}=1-e_{1}-e_{3}-e_{4}.\] The accessory parameter takes the value \[A_{00}=-((2e_{1}+e_{3}+e_{4}-1)(2e_{1}e_{6}+e_{3}e_{4}+e_{3}e_{6}+e_{4}e_{6}+e _{6}^{2}-2e_{6}))/2.\] #### 5.4.1 Shift operators of \(E_{3c}\) **Theorem 5.8**.: _The equation \(E_{3c}\) admits a shift operator for every shift \(e_{1}\to e_{1}\pm 1,\ e_{i}\to e_{i}\pm 2\quad(i=3,4,6)\)._ We give only the coefficients of \(\partial^{2}\), and the denominators of the other coefficients. For full expression, see \(E3cPQ.txt\) in the list of _FDEdata_. **Shift operators \(P_{1p},Q_{1p},P_{1n},Q_{1n}\) for \(e_{1}\to e_{1}\pm 1\):** \[P_{1p},\ Q_{1p}:x^{2}(x-1)^{2}\partial^{2}+x(x-1)R_{1}\partial+R_{2},\] \[P_{1n}::(x-1)^{2}\partial^{2}+\frac{(x-1)R_{1}}{x}+\frac{R_{1}}{x},\] \[Q_{1n}::(x-1)^{2}\partial^{2}+\frac{(x-1)R_{1}}{x}+\frac{R_{2}}{x^{2}},\] where \(R_{k}\) is used symbolically for a polynomial of degree \(k\) in \(x\). **Shift operators \(P_{3pp},Q_{3pp},P_{3nn},Q_{3nn}\) for \(e_{3}\to e_{3}\pm 2\):** \[P_{3pp},\ Q_{3pp}:x^{2}(x-1)^{e}\partial^{2}+x(x-1)R_{1}\partial+R_{2},\] \[P_{3nn}:R_{2}\partial^{2}+\frac{R_{3}}{x(x-1)}+\frac{R_{2}}{x(x-1)},\] \[Q_{3nn}:R_{2}\partial^{2}+\frac{R_{3}}{x(x-1)}+\frac{R_{4}}{x^{2}(x-1)^{2}},\] where \(R_{k}\) is used symbolically for a polynomial of degree \(k\) in \(x\). The shift operators for \(e_{4}\to e_{4}\pm 2\) are similar to those for \(e_{3}\to e_{3}\pm 2\). **Shift operators \(P_{6pp},Q_{6pp},P_{6nn},Q_{6nn}\) for \(e_{6}\to e_{6}\pm 2\):** \[P_{6pp},\ Q_{6pp},\ P_{6nn},\ Q_{6nn}:x^{2}(x-1)^{2}\partial^{2}+x(x-1)R_{1} \partial+R_{2},\] where \(R_{k}\) is used symbolically for a polynomial of degree \(k\) in \(x\). #### 5.4.2 S-values and reducibility conditions of \(E_{3c}\) **Proposition 5.9**.: _The numerators of the S-values for the shifts above:_ \[Sv_{1p} = (2e_{1}+e_{4}+e_{6})(2e_{1}+e_{3}+e_{6})(2e_{1}+2e_{3}+e_{4}+e_{6})( 2e_{1}+e_{3}+2e_{4}+e_{6}),\] \[Sv_{3pp} = (e_{3}+e_{6})(e_{3}+e_{4}+1)(2e_{1}+e_{3}+e_{6})(2e_{1}+2e_{3}+e_{4 }+e_{6}+2)\] \[\times(2e_{1}+2e_{3}+e_{4}+e_{6})(2e_{1}+e_{3}+2e_{4}+e_{6}),\] \[Sv_{6pp} = (e_{4}-e_{6})(e_{4}+e_{6})(e_{3}-e_{6})(e_{3}+e_{6})(2e_{1}+e_{4}+ e_{6})(2e_{1}+e_{3}+e_{6})\] \[\times(2e_{1}+2e_{3}+e_{4}+e_{6})(2e_{1}+e_{3}+2e_{4}+e_{6}).\] Note that the factors of the numerators of the S-values above are \(\mathbb{Z}\)-linear forms in \[1,\quad 2e_{1},\quad e_{3},\quad e_{4},\quad e_{6}.\] Hence, we have the following theorem. 
**Theorem 5.10**.: _Let \(\varphi(e_{1},e_{3},e_{4},e_{6})\) be one of the factors of the numerators of the S-values above. If \(\varphi\) is an even integer, then \(E_{3c}\) is reducible._ ### Equation \(E_{3d}\) In this subsection we study a specialization \(E_{3d}=E_{3d}(e_{1},e_{3},e_{4},e_{6})\) of \(E_{3}\) with the condition \[e_{2}=-e_{3}/2-e_{4}/2+3/2-e_{1}-(3e_{6})/2,\quad e_{5}=e_{1}+e_{6}.\] The accessory parameter takes the value \[A_{00}=e_{6}(2e_{1}^{2}+e_{1}e_{3}+e_{1}e_{4}+3e_{1}e_{6}+e_{3}e_{4}+e_{3}e_{6 }+e_{4}e_{6}+e_{6}^{2}-3e_{1}-e_{3}-e_{4}-3e_{6}+1)/2.\] #### 5.5.1 Shift operators of \(E_{3d}\) **Theorem 5.11**.: _The equation \(E_{3d}\) admits a shift operator for every shift \(e_{1}\to e_{1}\pm 1,\ e_{i}\to e_{i}\pm 2\quad(i=3,4,6)\)._ We give only the coefficients of \(\partial^{2}\), and the denominators of the other coefficients. For full expression, see \(E3dPQ.txt\) in the list of \(\mathit{FDE}data\). **Shift operators \(P_{1p},Q_{1p},P_{1n},Q_{1n}\) for \(e_{1}\to e_{1}\pm 1\):** \[P_{1p},\ Q_{1p},\ P_{1n},\ Q_{1n}:x(x-1)^{2}\partial^{2}+(x-1)R_{1}\partial+R_ {1},\] where \(R_{k}\) is used symbolically for a polynomial of degree \(k\) in \(x\). **Shift operators \(P3p,Q3p,P3n,Q3n\) for \(e_{3}\to e_{3}\pm 2\):** \[P_{3pp},\ Q_{3pp}:x(x-1)^{2}\partial^{2}+(x-1)R_{1}\partial+R_{1},\] \[P_{3nn}:x^{2}\partial^{2}+\frac{xR_{1}}{x-1}\partial+\frac{R_{1} }{x-1},\] \[Q_{3nn}:x^{2}\partial^{2}+\frac{xR_{1}}{x-1}\partial+\frac{R_{2} }{(x-1)^{2}},\] where \(R_{k}\) is used symbolically for a polynomial of degree \(k\) in \(x\). The shift operators for \(e_{4}\to e_{4}\pm 2\) are similar to those for \(e_{3}\to e_{3}\pm 2\). **Shift operators \(P_{6p},Q_{6p},P_{6n},Q_{6n}\) for \(e_{6}\to e_{6}\pm 2\):** \[P_{6p}:\frac{(x-1)^{2}R_{2}}{x}\partial^{2}+\frac{(x-1)R_{3}}{x^{2 }}+\frac{R_{3}}{x^{2}},\] \[Q_{6p}:\frac{(x-1)^{2}R_{2}}{x}\partial^{2}+\frac{(x-1)R_{3}}{x^ {2}}+\frac{R_{4}}{x^{3}},\] \[P_{6nn},\ Q_{6nn}:x^{2}(x-1)^{2}\partial^{2}+x(x-1)R_{1}\partial +R_{2},\] where \(R_{k}\) is used symbolically for a polynomial of degree \(k\) in \(x\). #### 5.5.2 S-values and reducibility conditions of \(E_{3d}\) **Proposition 5.12**.: _The numerators of the S-values for the shifts above:_ \[Sv_{1p} : (2e_{1}+e_{6})(2e_{1}+e_{4}+e_{6})(2e_{1}-1+e_{4}+2e_{6})(2e_{1}+e _{3}+e_{6})(2e_{1}+2e_{6}+e_{3}-1)\] \[\times(2e_{1}-1+2e_{6}+e_{3}+e_{4}),\] \[Sv_{3pp} : (e_{6}+e_{3})(2e_{1}+2e_{6}+e_{3}-1)(2e_{1}+e_{3}+e_{6})(2e_{1}-1+ 2e_{6}+e_{3}+e_{4}),\] \[Sv_{6pp} : e_{6}(e_{4}+e_{6})(e_{6}+e_{3})(2e_{1}+e_{6})(2e_{1}+e_{4}+e_{6})( 2e_{1}+1+2e_{6}+e_{4})\] \[\times(2e_{1}-1+e_{4}+2e_{6})(2e_{1}+e_{3}+e_{6})(2e_{1}+1+e_{3}+2 e_{6})(2e_{1}+2e_{6}+e_{3}-1)\] \[\times(2e_{1}+1+e_{3}+2e_{6}+e_{4})(2e_{1}-1+2e_{6}+e_{3}+e_{4}),\] Note that the factors of the numerators of the S-values above are \(\mathbb{Z}\)-linear forms in \[1,\quad 2e_{1},\quad e_{3},\quad e_{4},\quad e_{6}.\] Hence, we have the following theorem. **Theorem 5.13**.: _Let \(\varphi(e_{1},e_{3},e_{4},e_{6})\) be one of the factors of the numerators of the S-values above. 
If \(\varphi\) is an even integer, then \(E_{3d}\) is reducible._ ## 6 Other two specializations: \(E_{3e},E_{3f}\) \begin{tabular}{r l} \hline **6.1** & **Equation**\(E_{3e}:\{e_{3}=e_{1}-e_{2},e_{4}=-e_{2}\}\) \\ **6.2** & **Equation**\(E_{3f}:\{e_{2}=-e_{1}-e_{3}-e_{5}+1,e_{4}=e_{3}-e_{5}+1\}\) \\ \hline \end{tabular} We found two specializations admitting shift operators which are not in the \(S_{8}\) orbits of \[y_{1}+y_{4}=y_{3}+y_{6}=0\quad\text{nor}\quad y_{1}+y_{2}+y_{3}+y_{4}=y_{3}+y _{4}+y_{6}+y_{7}=0.\] The six specializations we treated in the previous sections, they have shift operators for four independent shifts. But the two equations in this section have less than four independent shifts. ### Equation \(E_{3e}:\{e_{3}=e_{1}-e_{2},e_{4}=-e_{2}\}\) In \(y\)-coordinates: \(\{y_{1}=-y_{4},y_{6}=-2y_{4}-2y_{3}-y_{5}-y_{7}-y_{2}\}\). **Theorem 6.1**.: _The equation \(E_{3e}\) admits a shift operator for every shift \(e_{1}\to\pm 1\), \(e_{j}\to\pm 2\) (\(j=5,6\)),_ **Shift operator for \(e_{1}\to e_{1}+1\)**: \[P =x^{2}(x-1)^{2}dx^{2}+x(x-1)((e_{5}+e_{6}+1)x-1/2+e_{2}-e_{5}/2-e_{6}/2)dx\] \[+e_{5}e_{6}x^{2}+(-1/2e_{2}+1/2e_{5}e_{2}+1/2e_{6}e_{2}-e_{5}e_{6})x\] \[+e_{1}/2-e_{1}^{2}+e_{1}e_{2}-e_{5}e_{1}/2-e_{6}e_{1}/2,\] \[Q =x^{2}(x-1)^{2}dx^{2}+x(x-1)((e_{5}+e_{6}+3)x+e_{2}-e_{5}/2-e_{6}/2 -3/2)dx\] \[+(e_{5}e_{6}+e_{5}+e_{6}+1)x^{2}+(1/2e_{5}e_{2}+1/2e_{6}e_{2}-e_{5} e_{6}+1/2e_{2}-e_{5}-e_{6}-1)x\] \[-e_{1}^{2}+e_{1}e_{2}-e_{5}e_{1}/2-e_{6}e_{1}/2-e_{1}/2\] S-value: \[(2e_{1}+e_{5}+e_{6}-1)(2e_{1}-e_{2}+e_{6})(2e_{1}-e_{2}+e_{5})(2e_{1}-2e_{2}+e _{5}+e_{6}-1).\] **Shift operator for \(e_{5}\to e_{5}+2\)**: \[P =x^{2}(x-1)^{2}dx^{2}+x(x-1)((e_{5}+e_{6}+1)x-1/2+e_{2}-e_{5}/2-e_ {6}/2)dx+e_{5}e_{6}x^{2}\] \[+(-1/2e_{2}+1/2e_{5}e_{2}+1/2e_{6}e_{2}-e_{5}e_{6})x-(2e_{1}e_{2} e_{5}+2e_{1}e_{2}e_{6}-2e_{1}e_{5}^{2}\] \[-2e_{1}e_{5}e_{6}-2e_{2}^{2}e_{5}-2e_{2}^{2}e_{6}+3e_{2}e_{5}^{2}+ 4e_{2}e_{5}e_{6}+e_{2}e_{6}^{2}-e_{5}^{3}\] \[-2e_{5}^{2}e_{6}-e_{5}e_{6}^{2}-2e_{1}e_{2}+2e_{1}e_{5}+2e_{2}^{2} -4e_{2}e_{5}-2e_{2}e_{6}+2e_{5}^{2}+2e_{5}e_{6}+e_{2}-e_{5})\] \[/(4(2e_{1}-e_{2}+2e_{5}+e_{6}-1)),\] \[Q =x^{2}(x-1)^{2}dx^{2}+x(x-1)((e_{5}+e_{6}+5)x+e_{2}-e_{5}/2-e_{6}/ 2-5/2)dx+(e_{5}e_{6}\] \[+e_{5}+3e_{6}+3)x^{2}+(-3+1/2e_{5}e_{2}-e_{5}e_{6}+1/2e_{6}e_{2}-3 e_{6}-e_{5}+3/2e_{2})x\] \[-(2e_{1}e_{2}e_{5}+2e_{1}e_{2}e_{6}-2e_{1}e_{5}^{2}-2e_{1}e_{5}e_{ 6}-2e_{2}^{2}e_{5}-2e_{2}^{2}e_{6}\] \[+3e_{2}e_{5}^{2}+4e_{2}e_{5}e_{6}+e_{2}e_{6}^{2}-e_{5}^{3}-2e_{5} ^{2}e_{6}-e_{5}e_{6}^{2}+6e_{1}e_{2}-2e_{1}e_{5}\] \[-4e_{1}e_{6}-2e_{2}^{2}+6e_{2}e_{5}+4e_{2}e_{6}-2e_{5}^{2}-4e_{5}e _{6}-2e_{6}^{2}-4e_{1}-e_{2}-3e_{5}+2)\] \[/(4(2e_{1}-e_{2}+2e_{5}+e_{6}-1)),\] S-value: \[(e_{2}-e_{5})(e_{2}+e_{5})(e_{5}+e_{6}-1)(2e_{1}+e_{5}+e_{6}-1)(2e_{1}-e_{2}+e _{5})(2e_{1}-2e_{2}+e_{5}+e_{6}-1).\] The shift operators for \(e_{6}\to e_{6}+2\) is obtained from above by the change \(e_{5}\leftrightarrow e_{6}\). ### Equation \(E_{3f}:\{e_{2}=-e_{1}-e_{3}-e_{5}+1,e_{4}=e_{3}-e_{5}+1\}\) In \(y\)-coordinates: \(\{y_{3}=-y_{5},y_{6}=-y_{2}-y_{1}+2y_{5}-2y_{4}-y_{7}\}\). 
**Theorem 6.2**.: _The equation \(E_{3f}\) admits a shift operator for every shift \((e_{1},e_{6})\rightarrow(e_{1}\pm 1,e_{6}\pm 1)\)._ **Shift operator for \((e_{1},e_{6})\rightarrow(e_{1}+1,e_{6}+1)\)**: \[P =x(x-1)^{2}dx^{2}+(x-1)((e_{5}+e_{6}+1)x+e_{1}-1)dx+e_{5}e_{6}x-e_ {1}/2-e_{6}/2+e_{1}^{2}/2\] \[+e_{1}e_{3}/2+e_{5}e_{1}/2+e_{6}e_{1}+e_{6}e_{3}/2-e_{5}e_{6}/2+e _{6}^{2}/2:\] \[Q =x(x-1)^{2}dx^{2}+(x-1)((e_{5}+e_{6}+2)x+e_{1})dx+(e_{5}e_{6}+e_ {5})x+e_{1}^{2}/2+e_{1}e_{3}/2\] \[+e_{5}e_{1}/2+e_{6}e_{1}+e_{6}e_{3}/2-e_{5}e_{6}/2+e_{6}^{2}/2+e _{1}/2+e_{6}/2\] S-value: \[(e_{1}+e_{6})(e_{1}+2e_{3}+e_{6})(e_{1}+e_{3}+e_{5}+e_{6}-1)(e_{6}+1-e_{5}+e_ {3}+e_{1}).\] **Shift operator for \((e_{1},e_{6})\to(e_{1}-1,e_{6}+1)\)**: \[\begin{array}{ll}P&=x(x-1)^{2}dx^{2}+(x-1)((e_{5}+e_{6}+1)x-e_{1}-e_{3}-e_{5}) dx+e_{5}e_{6}x\\ &-e_{1}/2+e_{6}/2+e_{1}^{2}/2+e_{1}e_{3}/2+e_{5}e_{1}/2-e_{6}e_{1}-e_{6}e_{3}/2 -(3e_{5}e_{6})/2+e_{6}^{2}/2,\\ Q&=x(x-1)^{2}dx^{2}+(x-1)((e_{5}+e_{6}+2)x-e_{1}-e_{3}-e_{5}+1)dx+(e_{5}e_{6}+e_{5 })x\\ &+e_{1}^{2}/2+e_{1}e_{3}/2+e_{5}e_{1}/2-e_{6}e_{1}-e_{6}e_{3}/2-(3e_{5}e_{6})/2 +e_{6}^{2}/2-(3e_{1})/2\\ &-e_{3}-e_{5}+(3e_{6})/2+1\end{array}\] S-value: \[(e_{1}-e_{6})(e_{1}+2e_{5}-e_{6}-2)(e_{1}+e_{3}+e_{5}-e_{6}-1)(-e_{6}-1+e_{5}-e _{3}+e_{1}).\] ## 7 Some generalities \begin{tabular}{|c c|} \hline **7.1** & **Shift operators and shift relations** \\ **7.2** & **S-values and reducibility conditions** \\ **7.3** & **Reducibility type and shift operators** \\ \hline \end{tabular} In this section we extract some definitions and theorems from [5] needed in this paper. Let \(D:=\mathbb{C}(x)[\partial]\). ### Shift operators and shift relations Let \(E(e)\) be a Fuchsian differential equation of order \(3\), and \(e=(e_{1},e_{2}\dots)\) a system of local exponents. Assume \(E(e)\) is irreducible for generic \(e\). **Definition 7.1**.: Let \(\mathrm{Sol}(E(e))\) be the solution space of \(E(e)\). For a shift \(sh_{+}:e\to e_{+}\): \[sh_{+}:(e_{+})_{i}=e_{i}+n_{i},\quad n_{i}\in\mathbb{Z},\] a non-zero operator \(P\in D\) of order lower than \(3\) sending \(\mathrm{Sol}(E(e))\) to \(\mathrm{Sol}(E(e_{+}))\) is called a _shift operator_ for the shift \(sh_{+}\) and is denoted by \(P_{+}\). A shift operator for the shift \(sh_{-}:(e_{-})_{i}=e_{i}-n_{i}\) is denoted by \(P_{-}\). Suppose a shift operator \(P_{+}\in D\) for a shift \(sh_{+}\) exists. Since \(E(e_{+})\circ P_{+}\) is divisible from right by \(E(e)\), there is an operator \(Q_{+}\in D\) satisfying the _shift relation_: \[(EPQE):\quad E(e_{+})\circ P_{+}=Q_{+}\circ E(e).\] Conversely, if there is a pair of non-zero operators \((P_{+},Q_{+})\in D^{2}\) of order smaller than \(n\) satisfying this relation, then \(P_{+}\) is a shift operator for the shift \(sh_{+}\). We often call also the pair \((P_{+},Q_{+})\) the shift operator for \(sh_{+}\). **Proposition 7.2**.: _Notation as above, if \(P_{+}\) exists, then \(P_{-}\) exists, and vice versa. If \(P_{+}\) exists, then it is unique up to multiplicative constant (independent of \((x,\partial)\)). For every shift operator, we can assume that the coefficients are polynomials of \(e\) free of common factors._ _Remark 7.3_.: When a differential _equation_ in question is \(Eu=0\), by multiplying a non-zero polynomial to the _operator_\(E\), we can assume that \(E\) has no poles. However, shift operators may have poles as functions of \(x\). 
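The simplest classical example is the Gauss operator, written in the standard form \(E_{2}(a,b,c)=x(1-x)\partial^{2}+(c-(a+b+1)x)\partial-ab\): since \(\frac{d}{dx}F(a,b,c;x)=\frac{ab}{c}F(a+1,b+1,c+1;x)\), the operator \(P_{+}=\partial\) is a shift operator for the shift \((a,b,c)\to(a+1,b+1,c+1)\). A small sympy sketch (an illustration only) checks this on the truncated Gauss series.

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')
N = 12   # truncation order of the power series

# truncated Gauss series  F(a,b,c;x) = sum_n (a)_n (b)_n / ((c)_n n!) x^n
F = sum(sp.rf(a, n)*sp.rf(b, n)/(sp.rf(c, n)*sp.factorial(n))*x**n for n in range(N))

def E2(a_, b_, c_, y):
    # Gauss operator E_2(a,b,c) = x(1-x) d^2 + (c - (a+b+1)x) d - ab
    return x*(1 - x)*sp.diff(y, x, 2) + (c_ - (a_ + b_ + 1)*x)*sp.diff(y, x) - a_*b_*y

# F is annihilated by E_2(a,b,c) up to the truncation order ...
r1 = sp.expand(E2(a, b, c, F))
print(all(sp.simplify(r1.coeff(x, n)) == 0 for n in range(N - 1)))      # True

# ... and P_+ = d/dx maps it to a solution of the shifted equation
r2 = sp.expand(E2(a + 1, b + 1, c + 1, sp.diff(F, x)))
print(all(sp.simplify(r2.coeff(x, n)) == 0 for n in range(N - 2)))      # True
```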
**Proposition 7.4**.: _If an operator \(E(e)\) with the adjoint symmetry \(E(e)^{*}=\)\(E(adj(e))\) admits a shift relation \(E(\sigma(e))\circ P=Q\circ E(e)\), then_ \[Q=(-)^{\nu}P(adj\circ\sigma(e))^{*},\quad\nu=\mathrm{order}(P).\] ### S-values and reducibility conditions Let a shift \(sh_{+}:e\to e_{+}\) and its inverse \(sh_{-}:e\to e_{-}\) admit shift operators \[P_{+}(e):\operatorname{Sol}(E(e))\to\operatorname{Sol}(E(e_{+}))\] and \[P_{-}(e):\operatorname{Sol}(E(e))\to\operatorname{Sol}(E(e_{-})).\] Consider compositions of shift operators: \[P_{+}(e_{-})\circ P_{-}(e):\operatorname{Sol}(E(e)\to\operatorname{Sol}(E(e_{- })\to\operatorname{Sol}(E(e)),\] and \[P_{-}(e_{+})\circ P_{+}(e):\operatorname{Sol}(E(e)\to\operatorname{Sol}(E(e_{+ })\to\operatorname{Sol}(E(e));\] these are constants (times the identity). **Definition 7.5**.: These constants will be called the _S-values_ for \(sh_{\mp}\), and are denoted as \[Sv_{sh_{-}}=P_{+}(e_{-})\circ P_{-}(e)\mod E(e)\] and \[Sv_{sh+}=P_{-}(e_{+})\circ P+(e)\mod E(e).\] **Proposition 7.6**.: _The two S-values are related as_ \[Sv_{sh_{-}}(e)=Sv_{sh_{+}}(e_{-}).\] **Proposition 7.7**.: _If for some \(e=\epsilon\), \(Sv_{sh_{+}}(\epsilon)=0\)\((\)resp. \(Sv_{sh_{-}}(\epsilon)=0)\), then \(E(\epsilon)\) and \(E(\epsilon_{+})\)\((\)resp. \(E(\epsilon_{-}))\) are reducible. If \(Sv_{sh_{+}}(\epsilon)\neq 0\)\((\)resp. \(Sv_{sh_{-}}(e)\neq 0)\), then \(P_{sh_{+}}\)\((\)resp. \(P_{sh_{-}})\) gives an isomorphism: \(\operatorname{Sol}(E(\epsilon))\to\operatorname{Sol}(E(\epsilon_{+}))\)\((\)resp. \(\operatorname{Sol}(E(\epsilon))\to\operatorname{Sol}(E(\epsilon_{-})))\)._ ### Reducibility type and shift operators We discuss factorization of Fuchsian operators in \(D=\mathbb{C}(x)[\partial]\). **Definition 7.8**.: When \(E\in D\) is reducible and factorizes as \[E=F_{1}\circ\cdots\circ F_{r},\quad F_{j}\in D,\quad 0<\operatorname{order}(F_ {j})=n_{j},\ (j=1,\ldots,r),\] we say \(E\) is _reducible of type \([n_{1},\ldots,n_{r}]\)_; we sometimes call \([n_{1},\ldots,n_{r}]\) the _type of factors_. We often forget commas, for example, we write [12] in place of [1,2]. When only a set of factors matters, we say \(E\) is _reducible of type \(\{n_{1},\ldots,n_{r}\}\)_. Note that even if the equation \(E\) has singularity only at \(S=\{0,1,\infty\}\), the factors may have singularities out of \(S\). **Proposition 7.9**.: _If \(E\) has singularity only at \(S\), then the singular points of \(F_{1}\) and \(F_{r}\) out of \(S\) are apparent._ _Remark 7.10_.: The way of factorization is far from unique. When we discuss the singularity of the factors of a decomposition, we usually choose the factors so that they have least number of singular points. **Proposition 7.11**.: _Suppose \(E(e)\) and \(E(e_{\pm})\) are connected by shift relations. If \(Sv_{+}(\epsilon)\neq 0\) (resp. \(Sv_{-}(\epsilon)\neq 0\)) for some \(e=\epsilon\), then \(E(\epsilon)\) and \(E(\epsilon_{+})\) (resp. 
\(E(\epsilon_{-})\) admit the factorization of the same type._ **Theorem 7.12**.: _If an equation \(E(e)\) admits a shift operator \(P_{+}\) for a shift \(sh_{+}:e\to e_{+}\), and if for some \(e=\epsilon\), \(E(\epsilon)\) is reducible of type \([1,2]\), then there are two cases:_ * \(E(\epsilon_{+})\) _is reducible of type_ \([1,2]\)_; in this case,_ \(E(sh_{+}^{n}(\epsilon))\) _is also reducible of type_ \([1,2]\)_, and_ \(Sv_{sh_{+}}(sh_{+}^{n}(\epsilon))\neq 0\) _for_ \(n=1,2,\dots\)_,_ * \(E(\epsilon_{+})\) _is reducible of type_ \([2,1]\)_; in this case,_ \(E(sh_{+}^{n}(\epsilon))\) _is reducible of type_ \([2,1]\)_, for_ \(n=1,2,\dots\) _and_ \(Sv_{sh_{+}}(\epsilon)=Sv_{sh_{+}}(\epsilon_{+})=0\)_, and_ \(Sv_{sh_{+}}(sh_{+}^{n}(\epsilon))\neq 0\)_, for_ \(n=2,3,\dots\)_._
2303.04089
Tidal Deformability of Fermion-Boson Stars: Neutron Stars Admixed with Ultra-Light Dark Matter
In this work we investigate the tidal deformability of a neutron star admixed with dark matter, modeled as a massive, self-interacting, complex scalar field. We derive the equations to compute the tidal deformability of the full Einstein-Hilbert-Klein-Gordon system self-consistently, and probe the influence of the scalar field mass and self-interaction strength on the total mass and tidal properties of the combined system. We find that dark matter core-like configurations lead to more compact objects with smaller tidal deformability, and dark matter cloud-like configurations lead to larger tidal deformability. Electromagnetic observations of certain cloud-like configurations would appear to violate the Buchdahl limit. The self-interaction strength is found to have a significant effect on both mass and tidal deformability. We discuss observational constraints and the connection to anomalous detections. We also investigate how this model compares to those with an effective bosonic equation of state and find the interaction strength where they converge sufficiently.
Robin Fynn Diedrichs, Niklas Becker, CΓ©dric Jockel, Jan-Erik Christian, Laura Sagunski, JΓΌrgen Schaffner-Bielich
2023-03-07T17:49:06Z
http://arxiv.org/abs/2303.04089v2
# Tidal Deformability of Fermion-Boson Stars: ###### Abstract In this work we investigate the tidal deformability of a neutron star admixed with dark matter, modeled as a massive, self-interacting, complex scalar field. We derive the equations to compute the tidal deformability of the full Einstein-Hilbert-Klein-Gordon system self-consistently, and probe the influence of the scalar field mass and self-interaction strength on the total mass and tidal properties of the combined system. We find that dark matter core-like configurations lead to more compact objects with smaller tidal deformability, and dark matter cloud-like configurations lead to larger tidal deformability. Electromagnetic observations of certain cloud-like configurations would appear to violate the Buchdahl limit. The self-interaction strength is found to have a significant effect on both mass and tidal deformability. We discuss observational constraints and the connection to anomalous detections. We also investigate how this model compares to those with an effective bosonic equation of state and find the interaction strength where they converge sufficiently. ## I Introduction Neutron stars are highly compact remnants of massive stars. Due to the high densities inside of neutron stars, they allow us to probe nuclear matter at high densities, a region that is not readily accessible with analytic techniques. The equation of state (EoS) describes the interplay between density and pressure, which is needed to close the Tolman-Oppenheimer-Volkoff (TOV) equations [1; 2] that describe the density profile of a spherically symmetric star and the curvature of space-time that is produced self-consistently. A significant constraint on the EoS is the mass value of the most massive known compact star. If an EoS is not able to generate a star of this mass, it cannot describe reality. There are multiple pulsars with masses at or above \(2\,\mathrm{M}_{\odot}\)[3; 4; 5; 6; 7]. Recently even a \(2.35\pm 0.17\,\mathrm{M}_{\odot}\) neutron star was reported by Romani et al. [8]. There is also some speculation that the lighter companion of the GW190814 gravitational wave event [9] was the most massive neutron star ever observed, with a mass of about \(2.6\,\mathrm{M}_{\odot}\). However, there is some evidence that the object should be considered the lightest observed black hole instead [10; 11; 12; 13; 14; 15]. Such high masses require stiff EoSs, where the energy density strongly rises with increasing pressure. This constraint is supported by the NICER measurements of the pulsars J0030+0451 [16; 17; 18] and J0740+6620 [19; 20; 21], which report quite large radii. The contrary is true for the neutron star merger event GW170817 detected by LIGO/Virgo [22; 23; 24], which favors more compact configurations generated by soft EoSs. It is additionally possible that neutron stars accumulate dark matter (DM) in a sufficient abundance to modify their observables, such as the mass, radius, and tidal deformability. These quantities have been measured in recent observations made by, e.g., NICER [16; 17; 18; 19; 20; 21] and the LIGO/Virgo collaborations, which thus allow to constraint the properties of DM. DM is an integral part of the \(\Lambda\)CDM model, which is the concordant model of cosmology [25]. Despite decades of searches, its nature and properties are still largely unknown [26; 27]. 
The connection between neutron stars - where the highest densities of matter are expected - and DM has also been explored in numerous publications and is an active area of research [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48]. DM could be around neutron stars as a _cloud_ or inside neutron stars as a _core_. Neutron stars with DM cores could form 1) from a DM 'seed' through accretion of baryonic matter [49], 2) through mergers of neutron- and boson stars, 3) through accretion and subsequent accumulation of DM inside the neutron star [28; 29; 30; 32; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45] or 4) through the decay of standard model particles inside the neutron star into DM [41; 42; 43; 46; 47]. The presence of DM clouds and cores in and around neutron stars will affect the observable properties of the neutron stars, thus making them indirect laboratories for DM properties. Present and future gravitational wave detectors have the potential to detect the possible presence of DM in merging neutron stars and to constrain the properties of DM, such as its mass and its self-interaction strength [28; 39; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65]. In this work, we model DM as a minimally coupled complex scalar field that only interacts with the standard model (SM) via gravity and study its impact on the neutron star observables. To this end, we construct equilibrium solutions and their first-order perturbations and solve the coupled Einstein-Hilbert-Klein-Gordon (EHKG) system of equations. Such systems, termed fermion-boson stars (FBS), were first introduced by Henriques et al. [66] and subsequently analytically studied in terms of stability under radial perturbations [67]. In [68] they were connected to current constraints on the masses and radii of NSs, and their dynamical properties were explored in [69; 70; 71; 72]. In all of these cases, these systems were investigated using a perfect fluid for the nuclear matter and a classical scalar field for the bosonic DM, which is an approach that we will also follow in this work. The described system is closely related to boson stars [73; 74; 75], as it can be seen as a boson star that coexists with a neutron star at the same location in space. The tidal deformability of such systems was first investigated in [50], where the authors considered scalar bosonic DM with masses in the MeV to GeV range, which is gauged by a U(1) vector boson, and focused on the parameter space that results in the formation of a dark halo. They further constructed an EoS for the bosonic sector by using mean field theory. In order to obtain solutions for their system, they thus extended the TOV equations to account for two fluids at the same time. This model was subsequently further investigated first in [76] in terms of detectability prospects and in [77], where the resulting tidal deformability was presented for a wider range of parameters that also include scenarios in which the DM forms a core. Similarly, in [78; 54], scalar DM that self-interacts via a quartic coupling was considered. Here, the authors used an effective EoS that was first derived in [75] and then also used the two-fluid approach. The utilized EoS is, however, only valid if the self-interactions are sufficiently strong. This paper is structured as follows: In section II we present the construction of equilibrium solutions and further extend these equations in section III to also include the first-order perturbations.
In section IV we present the resulting tidal deformability and compare it to observational constraints. In section V we compare the EHKG solutions to the two-fluid model. Finally, in section VI we summarize our findings. Throughout this work, we use units in which \(G=\mathrm{M}_{\odot}=c=1\). See also appendix A for information on the unit conversion. ## II Equilibrium solutions In this section, we discuss the construction of equilibrium solutions of FBS, which was first presented in [66]. We model dark matter as a massive and complex scalar field that only interacts with the SM via gravity, such that the action of the combined system is given by \[S=\int d^{4}x\sqrt{-g}\left[\frac{R}{16\pi}-\nabla_{\alpha}\bar{\Phi}\nabla^{ \alpha}\Phi-V(\bar{\Phi}\Phi)+\mathcal{L}_{m}\right], \tag{1}\] where \(\mathcal{L}_{m}\) is the Lagrangian describing nuclear matter and \(V(\bar{\Phi}\Phi)\) is the scalar field's potential. The scalar field is invariant under a global U(1) symmetry that gives rise to a conserved Noether current \[j_{\mu}=i\left(\bar{\Phi}\nabla_{\mu}\Phi-\Phi\nabla_{\mu}\bar{\Phi}\right), \tag{2}\] which allows to generally define the total number of bosons in the system as \[N_{\mathrm{b}}\equiv\int d^{3}x\sqrt{-g}g^{0\mu}j_{\mu}. \tag{3}\] The energy-momentum tensor for the scalar part is given by \[\begin{split} T^{(\Phi)}_{\mu\nu}=-g_{\mu\nu}\left(\partial_{ \alpha}\bar{\Phi}\partial^{\alpha}\Phi+V(\bar{\Phi}\Phi)\right)\\ +\partial_{\mu}\bar{\Phi}\partial_{\nu}\Phi+\partial_{\mu}\Phi \partial_{\nu}\bar{\Phi}.\end{split} \tag{4}\] Varying the action with respect to the scalar field results in the Klein-Gordon equation, \[\nabla_{\mu}\nabla^{\mu}\Phi=\Phi V^{\prime}(\bar{\Phi}\Phi),\text{ with }V^{\prime}(\bar{\Phi}\Phi):=\frac{dV}{d|\Phi|^{2}}. \tag{5}\] This equation directly implies that the energy-momentum tensor of the scalar field is separately conserved from the perfect fluid energy-momentum tensor. The energy-momentum tensor for nuclear matter is assumed to be of the perfect fluid form: \[T^{(\mathrm{NS})}_{\mu\nu}=[\rho(1+\epsilon)+P]u_{\mu}u_{\nu}+Pg_{\mu\nu}, \tag{6}\] where \(\rho\) is the rest-mass energy density and \(\epsilon\) is the internal energy density, such that \(\rho(1+\epsilon)\) describes the total energy density \(e\). Requiring that the Noether current is conserved, i.e. \(\nabla_{\mu}(\rho u^{\mu})=0\), allows to define the total number of baryons in the system generally as \[N_{\mathrm{f}}=\int d^{3}x\sqrt{-g}g^{0\mu}\rho u_{\mu}. \tag{7}\] We consider the system (for now) to be in spherical symmetric equilibrium, such that the metric can be written as \[g_{\mu\nu}=\mathrm{diag}\left(-e^{v(r)},e^{u(r)},r^{2},r^{2}\sin^{2}\theta \right). \tag{8}\] We further consider a static perfect fluid, such that \(u_{\mu}=(e^{v/2},0,0,0)\) and write the scalar field as \[\Phi(t,r)=\phi_{0}(r)e^{-i\omega t}, \tag{9}\] Using the spherical symmetric ansatz together with the Klein-Gordon equation results in an equation describing the radial dependence of the bosonic field \[\phi_{0}^{\prime\prime}=e^{u}\left(V^{\prime}(\phi_{0}^{2})-\omega^{2}e^{-v}\right) \phi_{0}+\left(\frac{u^{\prime}-v^{\prime}}{2}-\frac{2}{r}\right)\phi_{0}^{\prime}. 
\tag{10}\] Additionally, the Einstein equations simplify to two equations for the metric functions \(u(r)\) and \(v(r)\): \[u^{\prime}=8\pi re^{u}\left[\omega^{2}\phi_{0}^{2}e^{-v}+V(\phi_{0}^{2})+e^{-u}\phi_{0}^{\prime 2}+\rho(1+\epsilon)\right]-\frac{e^{u}-1}{r}, \tag{11}\] \[v^{\prime}=8\pi re^{u}\left[\omega^{2}\phi_{0}^{2}e^{-v}-V(\phi_{0}^{2})+e^{-u}\phi_{0}^{\prime 2}+P\right]+\frac{e^{u}-1}{r}. \tag{12}\] Also, the conservation of the energy-momentum tensor of nuclear matter, \(\nabla_{\mu}T^{\mu\nu\,\mathrm{(NS)}}=0\), provides a differential equation for \(P\): \[P^{\prime}=-[\rho(1+\epsilon)+P]\frac{v^{\prime}}{2}. \tag{13}\] This system of equations is closed by providing an EoS \(P(\rho,\epsilon)\) (or \(P(e)\)) for the nuclear matter part. Further, for the considered system, the expressions for the total number of bosons (dark matter) and fermions (nuclear matter) simplify to \[N_{\mathrm{b}}=8\pi\int_{0}^{\infty}dr\,r^{2}e^{(u-v)/2}\omega\phi_{0}^{2}, \tag{14}\] \[N_{\mathrm{f}}=4\pi\int_{0}^{R_{\mathrm{f}}}dr\,r^{2}e^{u/2}\rho, \tag{15}\] where \(R_{\mathrm{f}}\) denotes the fermionic radius, determined by the radial position at which the fermionic pressure \(P\) vanishes. The total gravitational mass of the system is given by \[M_{\mathrm{tot}}=\lim_{r\rightarrow\infty}\frac{r}{2}\left(1-e^{-u(r)}\right). \tag{16}\] In order to integrate these equations, it is still necessary to provide suitable initial conditions. We do this by enforcing asymptotic flatness and regularity at the origin, i.e. \[\begin{split}\lim_{r\rightarrow\infty}v(r)&=0,\qquad v(0)=v_{c},\\ \lim_{r\rightarrow\infty}u(r)&=0,\qquad u(0)=0,\\ \lim_{r\rightarrow\infty}\phi_{0}(r)&=0,\qquad\phi_{0}(0)=\phi_{c},\\ \phi_{0}^{\prime}(0)&=0,\qquad\rho(0)=\rho_{c}.\end{split} \tag{17}\] Asymptotic flatness generally requires fine-tuning \(v_{c}\) to some non-zero value. However, it is possible to absorb a constant shift in \(v(r)\) (i.e. \(v\to v-v_{c}\)) by rescaling \(\omega\rightarrow\omega^{\prime}=\omega e^{-v_{c}/2}\). This rescaling leaves the set of equations invariant and has the advantage that we automatically have \(v_{c}=0\). After integrating to obtain a solution, we can retrieve the physical values of \(\omega\) and \(v\) by applying the inverse transformation using the asymptotic value of \(v(r)\).

Figure 1: **Left panel:** Density plot that displays the total gravitational mass as a function of the central rest-mass density (\(\rho_{c}\)) and the central value of the scalar field (\(\phi_{c}\)). Additionally, it displays the stability curve as the solid black line calculated using Eq. (18), i.e. all configurations that lie within the bottom-left parameter region bordered by the black line are stable against radial perturbations. **Right panel:** Mass-radius diagram displaying the fermionic radius (the radius of the fermionic component) vs the total gravitational mass for configurations that are within the stable region displayed in the left panel. Each point corresponds to a single configuration and is color-coded according to the rest-mass fraction of the dark matter component. The solid black line shows the mass-radius curve for pure fermionic matter. For both plots, a massive scalar field with no self-interactions and mass \(m=1.3\times 10^{-10}\,\mathrm{eV}\) was considered, in addition to the DD2 EoS.
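To make the integration procedure concrete, a minimal sketch of the background system (10)-(13) is given below. It is an illustration only and not the code released with this work: a simple \(\Gamma=2\) polytrope stands in for the tabulated DD2 EoS, the boson mass is set to \(\sim 1\) in code units, and all names and numerical values (`K`, `GAMMA`, tolerances) are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative stand-ins (not the values used in the paper): a Gamma = 2
# polytrope replaces the tabulated DD2 EoS, and the boson mass is ~1 in
# code units (G = c = M_sun = 1), roughly corresponding to 1.34e-10 eV.
K, GAMMA = 100.0, 2.0
M_BOSON, LAM = 1.0, 0.0

def total_energy_density(P):
    """e = rho(1 + eps) for the polytropic stand-in P = K rho^Gamma."""
    if P <= 0.0:
        return 0.0
    rho = (P / K) ** (1.0 / GAMMA)
    return rho + P / (GAMMA - 1.0)

def V(phi2):
    return M_BOSON**2 * phi2 + 0.5 * LAM * phi2**2

def dV(phi2):
    return M_BOSON**2 + LAM * phi2

def background_rhs(r, y, omega):
    """Right-hand side of Eqs. (10)-(13); state y = (u, v, phi0, phi0', P)."""
    u, v, phi, dphi, P = y
    e = total_energy_density(P)
    eu, emv = np.exp(u), np.exp(-v)
    kinetic = omega**2 * phi**2 * emv + np.exp(-u) * dphi**2
    du = 8.0 * np.pi * r * eu * (kinetic + V(phi**2) + e) - (eu - 1.0) / r
    dv = 8.0 * np.pi * r * eu * (kinetic - V(phi**2) + P) + (eu - 1.0) / r
    ddphi = eu * (dV(phi**2) - omega**2 * emv) * phi \
            + (0.5 * (du - dv) - 2.0 / r) * dphi
    dP = -(e + P) * 0.5 * dv if P > 0.0 else 0.0
    return [du, dv, dphi, ddphi, dP]

def integrate_background(rho_c, phi_c, omega, r_max=100.0):
    """Integrate outward with v rescaled so that v(0) = 0 (the shift is
    absorbed into omega, as described above) and undo it later via v(r_max)."""
    y0 = [0.0, 0.0, phi_c, 0.0, K * rho_c**GAMMA]   # regularity at the origin
    return solve_ivp(background_rhs, (1e-6, r_max), y0, args=(omega,),
                     rtol=1e-10, atol=1e-12, dense_output=True)
```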
Finally, in order to obtain a solution to this system, it is now only necessary to find the value of \(\omega\), which results in profiles that fulfill all mentioned conditions eqs. (17), which we will denote by eigenvalues. In general, there are infinitely many possible profiles that are characterized by how many nodes (i.e. radial positions with \(\phi(r)=0\)) are present in the scalar field profile. In order to efficiently find solutions, we use the fact that the scalar field profile either diverges towards \(+\infty\) or \(-\infty\) and changes its direction of divergence, when \(\omega\) passes an eigenvalue. This provides us with a binary criterion and thus allows us to implement a binary search on \(\omega\), which converges exponentially fast. Once a sufficiently accurate \(\omega\) is found, we modify the integration, such that \(\phi_{0}\) is set to \(0\) at a finite radius \(r_{B}^{*}\) where \(\phi_{0}(r_{B}^{*})=0\). The convergence condition for this is \(\phi_{0}(r_{B}^{*})/\phi_{c}<10^{-4}\). This is necessary because otherwise, the numerical integration diverges at finite radii. Since we have the additional neutron matter component, in some part of the parameter space, the integration would diverge before \(P\) has converged to \(0\). Therefore, we artificially set \(\phi_{0}=0\) for \(r>r_{B}^{*}\), which allows us to circumvent the divergence. For some configurations, due to numerical precision limits, this condition cannot be fulfilled. This generally happens for small initial field values \(\phi_{c}\lesssim 10^{-4}\), where the bosonic cloud extends far outside the neutron star. In these cases, we extract the mass \(M_{\rm tot}\) at the point where its derivative has a global minimum. The radial profiles of pure neutron- and pure boson stars are completely given by the central value of the density and scalar field value, respectively. As such, each system is described solely by a single parameter. The stability of a certain solution can be investigated by finding the point at which the total gravitational mass reaches its maximum in regard to that parameter. However, since here we deal with a two-parameter family of solutions, the stability criterion gets slightly modified. In [67] the stability criterion for FBSs was first presented. It states that the transition between stable and unstable configurations is given by the point at which \[\frac{dN_{\rm f}}{d\sigma}=\frac{dN_{\rm b}}{d\sigma}=0, \tag{18}\] where \(d/d\sigma\) denotes the derivative in the direction of constant total mass, i.e. \[\frac{dN_{\rm f}}{d\sigma}=-\frac{\partial M_{\rm tot}}{\partial\rho_{c}} \frac{\partial N_{\rm f}}{\partial\phi_{c}}+\frac{\partial M_{\rm tot}}{ \partial\phi_{c}}\frac{\partial N_{\rm f}}{\partial\rho_{c}}. \tag{19}\] Figure 1 shows what configurations are stable depending on the central value of the rest mass density and the central value of the scalar field according to the above condition for the case of a massive scalar field with no self-interactions and the mass set to \(m=1.34\times 10^{-10}\) eV. Additionally, the resulting mass and radii for the stable configurations are also displayed. 
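The eigenvalue search and the stability criterion translate into a few lines of code. The sketch below reuses `integrate_background` and `numpy` from the previous snippet and is again illustrative: the bracket passed to the bisection is assumed to contain exactly one sign change (otherwise an excited mode with nodes may be picked up), and the derivative of Eq. (19) is evaluated with simple finite differences on the \((\rho_{c},\phi_{c})\) grid.

```python
def divergence_sign(rho_c, phi_c, omega):
    """Sign of phi0 at the last finite point of the integration; the
    ground-state eigenvalue sits where this sign flips with omega."""
    sol = integrate_background(rho_c, phi_c, omega)   # from the previous sketch
    finite = sol.y[2][np.isfinite(sol.y[2])]
    return 1.0 if finite[-1] > 0.0 else -1.0

def find_omega(rho_c, phi_c, omega_lo=0.5, omega_hi=10.0, tol=1e-14, max_iter=200):
    """Bisection on omega: halve the bracket until it is narrower than tol.
    The tolerance is limited by 64-bit floating-point precision, which is the
    bottleneck mentioned later for large self-interaction strengths."""
    s_lo = divergence_sign(rho_c, phi_c, omega_lo)
    for _ in range(max_iter):
        omega_mid = 0.5 * (omega_lo + omega_hi)
        if divergence_sign(rho_c, phi_c, omega_mid) == s_lo:
            omega_lo = omega_mid
        else:
            omega_hi = omega_mid
        if omega_hi - omega_lo < tol:
            break
    return 0.5 * (omega_lo + omega_hi)

def dNf_dsigma(M, Nf, drho, dphi):
    """dN_f/dsigma of Eq. (19) on a (rho_c, phi_c) grid via finite differences;
    M and Nf are 2D arrays indexed as [i_rho, i_phi]."""
    dM_drho, dM_dphi = np.gradient(M, drho, dphi)
    dNf_drho, dNf_dphi = np.gradient(Nf, drho, dphi)
    return -dM_drho * dNf_dphi + dM_dphi * dNf_drho
```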
## III Tidal deformability In order to obtain the tidal deformability, we will follow the same procedure that was used in [79] to obtain the tidal deformability of pure neutron stars and subsequently also applied to pure boson stars in [80; 81]: We are expanding the matter and gravitational field around a static, spherically symmetric configuration and then insert this expansion into the linearized Einstein equations to obtain a system of differential equations that allows solving for the linear perturbations, from which we then extract the tidal deformability. Applying an external quadrupolar tidal field \(\mathcal{E}_{ij}\) to a spherically symmetric star results in it developing a quadrupolar moment \(Q_{ij}\) as a response. At linear order, this response is proportional to the applied tidal field, such that \(Q_{ij}=-\lambda_{\rm tidal}\mathcal{E}_{ij}\), where \(\lambda_{\rm tidal}\) is the tidal deformability. The induced quadrupolar moment modifies the \(g_{tt}\) metric component, such that at leading order in the asymptotic rest frame at large radii [82] \[g_{tt}=-1+\frac{2M_{\rm tot}}{r}-\mathcal{E}_{ij}x^{i}x^{j}\left(1+\frac{3 \lambda_{\rm tidal}}{r^{5}}\right), \tag{20}\] where \(x^{i}\) define a Cartesian coordinate system with \(r^{2}=\delta_{ij}x^{i}x^{j}\). We now turn to explicitly deriving the equations governing the linear perturbations from the linearized Einstein equations. We focus on static, even-parity, and quadrupolar (\(l=2\)) metric perturbations, which we denote by \(h_{\mu\nu}\). Further, we choose to work in the Regge-Wheeler gauge, in which \(h_{\mu\nu}\) takes the form \[h_{\mu\nu}=Y_{20}(\theta,\varphi)\times \tag{21}\] \[\mathrm{diag}\left(-e^{v(r)}H_{0}(r),e^{u(r)}H_{2}(r),r^{2}K(r),r^ {2}K(r)\sin^{2}\theta\right),\] where \(H_{0}\), \(H_{2}\) and \(K\) describe the radial dependence of each perturbed metric component and \(Y_{20}\) is the \((l,m)=(2,0)\) spherical harmonic. At the same time, we expand the scalar field. We denote the first-order perturbation as \(\delta\Phi\) with \[\delta\Phi(t,r,\theta,\varphi)=\phi_{1}(r)\frac{e^{-i\omega t}}{r}Y_{20}( \theta,\varphi), \tag{22}\] where the same time dependence was chosen for the perturbations in order to ensure that the energy-momentum tensor remains static. We can obtain a set of differential equations that relate the perturbations to the background solutions by expanding the Einstein equations to first order in \(h_{\mu\nu}\) and \(\phi_{1}\). Using this expansion in the Klein-Gordon equation and only keeping terms linear in the perturbation results in \[\begin{split}\phi_{1}^{\prime\prime}&=\frac{u^{\prime }-v^{\prime}}{2}\phi_{1}^{\prime}+\left[-2\phi_{0}^{\prime}-r\phi_{0}^{\prime \prime}+\frac{v^{\prime}+u^{\prime}}{2}r\phi_{0}^{\prime}+\omega^{2}r\phi_{0}e^ {u-v}\right]H_{0}\\ &+\left[\frac{6e^{u}}{r^{2}}+\frac{v^{\prime}-u^{\prime}}{2r}+16 \pi\phi_{0}^{\prime 2}+e^{u}\left(V^{\prime}(\phi_{0}^{2})+2\phi_{0}^{2}V^{\prime \prime}(\phi_{0}^{2})-\omega^{2}e^{-v}\right)\right]\phi_{1}.\end{split} \tag{23}\] Similarly, we expand the Einstein equations, i.e. we look at \(\delta G_{\mu\nu}=8\pi\delta T_{\mu\nu}\). The perturbed energy-momentum tensor of the fermionic part is written as \(\delta T_{\nu}^{\mu(\text{NS})}=\text{diag}(-\delta P/c_{s}^{2},\delta P, \delta P,\delta P)\), where we used \(\delta e=\delta P\,\partial e/\partial P=\delta P/c_{s}^{2}\), with \(c_{s}\) the sound speed. The perturbed energy-momentum tensor of the scalar field is computed by expanding Eq. (4). 
Subtracting the \(\theta\theta\) from the \(\phi\phi\) component of the perturbed Einstein equations reveals \(H_{2}(r)=-H_{0}(r)\). Adding the \(\theta\theta\) component to the \(\phi\phi\) component allows to obtain an expression for \(\delta P\), which can be substituted into the \(tt\) minus the \(rr\) component to obtain a differential equation for \(H_{0}\): \[H_{0}^{\prime\prime}+\left(\frac{v^{\prime}-u^{\prime}}{2}+\frac {2}{r}\right)H_{0}^{\prime}\] \[+\left[-4\pi\frac{c_{s}^{2}+3}{c_{s}^{2}}\phi_{0}^{\prime 2}+4\pi \omega^{2}e^{u-v}\frac{c_{s}^{2}-1}{c_{s}^{2}}\phi_{0}^{2}-\frac{u^{\prime}v^ {\prime}+v^{\prime 2}}{2}+v^{\prime\prime}+\frac{3u^{\prime}+7v^{\prime}}{2r}+ \frac{u^{\prime}+v^{\prime}}{2rc_{s}^{2}}-\frac{6}{r}e^{u}\right]H_{0} \tag{24}\] \[=\left[-\frac{8\pi}{r}\frac{c_{s}^{2}+3}{c_{s}^{2}}\phi_{0}^{ \prime\prime}+\frac{4\pi}{r}\left(u^{\prime}+v^{\prime}+\frac{u^{\prime}-v^{ \prime}}{c_{s}^{2}}-\frac{4}{r}\frac{c_{s}^{2}+3}{c_{s}^{2}}\right)\phi_{0}^{ \prime}+\frac{8\pi}{r}e^{u}\left(\mu^{2}+\omega^{2}e^{-v}+\frac{V^{\prime}( \phi_{0}^{2})-\omega^{2}e^{-v}}{c_{s}^{2}}\right)\phi_{0}\right]\phi_{1}.\] Here primes denote derivatives with respect to the coordinate radius \(r\). The above equation contains a term depending on \(v^{\prime\prime}\), which is explicitly given by \[\begin{split} v^{\prime\prime}&=8\pi e^{u}\left(r Pu^{\prime}+rP^{\prime}+P\right)+8\pi r\phi_{0}^{\prime}\phi_{0}^{ \prime\prime}+16\pi\phi_{0}^{\prime 2}+16\pi re^{u}\left(-V^{\prime}(\phi_{0}^{2})+ \omega^{2}e^{-v}\right)\phi_{0}\phi_{0}^{\prime}\\ &\quad-8\pi e^{u}V(\phi_{0}^{2})\left(ru^{\prime}+1\right)+8\pi \omega^{2}e^{u}\left[r(u^{\prime}-v^{\prime})e^{-v}+e^{-v}\right]\phi_{0}^{2} +e^{u}\frac{ru^{\prime}-1}{r^{2}}+\frac{1}{r^{2}}.\end{split} \tag{25}\] As mentioned in [81], for radii larger than the typical size of the combined system, the differential equation for \(H_{0}\) reduces to \[H_{0}^{\prime\prime}+\left(\frac{2}{r}+e^{u}\frac{2M}{r^{2}}\right)H_{0}^{ \prime}-\left(\frac{6e^{u}}{r^{2}}+e^{2u}\frac{4M^{2}}{r^{4}}\right)H_{0}=0, \tag{26}\] which has a solution in terms of associate Legendre functions \[H_{0}\approx c_{1}Q_{2}^{2}\left(\frac{r}{M}-1\right)+c_{2}P_{2}^{2}\left( \frac{r}{M}-1\right). \tag{27}\] Expanding this equation in \(r/M\) and matching to Eq. (20) results in \[\lambda_{\text{tidal}} =\frac{16}{5}M^{5}(1-2\mathcal{C})^{2}[2+2\mathcal{C}(y-1)-y] \tag{28}\] \[\quad\times\{3(1-\mathcal{C})^{2}[2-y+2\mathcal{C}(y-1)]\log(1-2 \mathcal{C})\] \[\quad+2\mathcal{C}[6-3y+3\mathcal{C}(5y-8)]+4\mathcal{C}^{3}[13- 11y\] \[\quad+\mathcal{C}(3y-2)+2\mathcal{C}^{2}(1+y)]\}^{-1},\] where \(y\equiv r_{\text{ext}}H_{0}^{\prime}(r_{\text{ext}})/H_{0}(r_{\text{ext}})\), \(\mathcal{C}\equiv M_{\text{ext}}/r_{\text{ext}}\) and \(r_{\text{ext}}\) denotes radial position at which \(\lambda_{\text{tidal}}\) is calculated. The dimensionless tidal deformability is defined as \(\Lambda_{\text{tidal}}:=\lambda_{\text{tidal}}/M_{\text{tot}}^{5}\). We impose the boundary conditions \[\lim_{r\to 0}H_{0}\sim\tilde{H}_{0}r^{2},\qquad\lim_{r\to 0}H_{0}^{ \prime}\sim 2\tilde{H}_{0}r,\] \[\lim_{r\to\infty}\phi_{1}=0,\qquad\lim_{r\to 0}\phi_{1}^{ \prime}\sim 3\tilde{\phi}_{1}r^{2}, \tag{29}\] where the values superscribed by tilde are initial values to be determined. Now, we can use the fact that Eq. (24) is invariant under a simultaneous rescaling of \(\phi_{1}\) and \(H_{0}\). Due to this, we can rescale the equations to automatically have \(\tilde{H}_{0}=1\). 
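The final matching step can be written compactly through the standard quadrupolar Love number \(k_{2}\), from which \(\Lambda_{\rm tidal}=\lambda_{\rm tidal}/M^{5}=\tfrac{2}{3}k_{2}/\mathcal{C}^{5}\) follows. A minimal sketch (function names are ours):

```python
import numpy as np

def love_number_k2(C, y):
    """Quadrupolar tidal Love number from matching the interior H0 to the
    exterior Legendre solutions, cf. Eqs. (27)-(28)."""
    num = (8.0 / 5.0) * C**5 * (1.0 - 2.0 * C)**2 * (2.0 + 2.0 * C * (y - 1.0) - y)
    den = (2.0 * C * (6.0 - 3.0 * y + 3.0 * C * (5.0 * y - 8.0))
           + 4.0 * C**3 * (13.0 - 11.0 * y + C * (3.0 * y - 2.0) + 2.0 * C**2 * (1.0 + y))
           + 3.0 * (1.0 - 2.0 * C)**2 * (2.0 - y + 2.0 * C * (y - 1.0)) * np.log(1.0 - 2.0 * C))
    return num / den

def dimensionless_tidal_deformability(M_ext, r_ext, H0, dH0):
    """Lambda_tidal = lambda_tidal / M^5 = (2/3) k2 / C^5, with
    y = r H0'/H0 and C = M/r evaluated at the extraction radius."""
    y = r_ext * dH0 / H0
    C = M_ext / r_ext
    return (2.0 / 3.0) * love_number_k2(C, y) / C**5
```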
Similarly to the procedure for \(\omega\), we use a bisection algorithm to then find the initial \(\tilde{\phi}_{1}\) such that the above conditions are fulfilled. \(\phi_{1}\) converges to \(0\) just as \(\phi_{0}\), so we also set \(\phi_{1}(r)=0\) for \(r>r_{B}^{*}\). This allows us to circumvent the divergence of the perturbations, while having no effect on the tidal deformability, since the equations for \(\phi_{1},H_{0}\) decouple with \(\phi_{0}\equiv 0\). Then, the tidal deformability is constant for any \(r>r_{B}^{*}\) and can easily be extracted. In case the convergence condition cannot be fulfilled, we follow the procedure in [81] and extract \(y\) at \(r_{\text{ext}}\) such that it is a local maximum. Since there are two components in the neutron star at play, there can be multiple local maxima, of which we choose the one at the largest radius. The code is publicly available along with examples and the procedures to obtain the results.1 Footnote 1: github.com/DMGW-Goethe/FBS-Solver ## IV Results We now specialize to a potential that is quartic in the field: \[V(\bar{\Phi}\Phi)=m^{2}\bar{\Phi}\Phi+\frac{\lambda}{2}(\bar{\Phi}\Phi)^{2}, \tag{30}\] where \(m\) is the particle mass and \(\lambda\) is the self-interaction parameter. To allow for easy comparison with previous works, we use the effective interaction parameter \(\Lambda_{\rm int}=\lambda/(8\pi m^{2})\). This was originally introduced in [75] to quantify the self-interaction strength, i.e. for \(\Lambda_{\rm int}\ll 1\) the total gravitational mass of a pure boson stars scales as \(M\propto 1/m\), while for \(\Lambda_{\rm int}\gg 1\) we have \(M\propto 1/m^{2}\). Also, in this regime the stress-energy tensor becomes approximately isotropic, meaning that an EoS might be used to model this case (see sec. V below). It is important to keep in mind that \(\Lambda_{\rm int}\) was introduced in the context of pure boson stars and thus the scaling relations of the total mass are not generally valid for the mixed system, i.e. FBSs. Nonetheless, we still find it convenient to use it as a general measure to compare different choices of the mass and self-interaction strength. For the fermionic component, we employ the DD2 EoS (with electrons) from the CompOSE database [83; 84]. ### Mass-Radius Relations and Tidal Deformability First, the mass-radius relations are plotted in Fig. 2 and Fig. 3. We show nine different models with \(m=\{0.1,1,10\}\cdot 1.34\times 10^{-10}\,\)eV and \(\Lambda_{\rm int}=\{0,10,100\}\). We used a grid of \(\rho_{c},\phi_{c}\) to populate the plots, selecting only the stable configurations as explained in section II. Each point is colored by the resulting DM mass fraction \(N_{B}/(N_{B}+N_{F})\). Instead of a mass-radius curve, this gives a mass-radius region for the FBSs with different fermionic and bosonic content. Important to note is that in Fig. 2 we plot the fermionic radius, the radius where the fermionic component vanishes. The bosonic radius can be orders of magnitudes larger or smaller, depending on the mass and self-interaction parameter. To better understand these objects, we also plot the effective gravitational radius - the radius at which 99% of the rest mass is contained - in Fig. 3. Here, the compactness of the FBS can be inferred. For pure neutron stars with the DD2, the crust has comparatively low density, which makes this effective gravitational radius significantly smaller than the fermionic one. Which radius is more relevant for a given problem depends on the observation, e.g. 
the fermionic radius would be crucial for electromagnetic signatures, such as those observed by the NICER telescope. The effective gravitational radius would be more relevant for the inspiral in binary mergers and enters through the compactness and the tidal deformability. Some general trends can be seen in the figures. Stars dominated by the fermionic part are close to the pure DD2 solution, as expected. For stars dominated by the bosonic component, the pure boson star solutions are recovered. For \(m=\{1,10\}\cdot 1.34\times 10^{-10}\,\)eV, the regions in Fig. 2 extend to lower masses with similar apparent compactness. These results are consistent with the lines shown in [72]. A look at Fig. 3 reveals the behavior of these solutions. For \(m=1.34\times 10^{-9}\,\)eV, the bosonic component is predominantly inside the fermionic one as a DM _core_. For \(m=1.34\times 10^{-10}\,\)eV, the bosonic and fermionic distributions have a similar extent; for low DM mass fractions the compactness is increased, while for higher DM mass fractions the compactness decreases as the DM forms a _cloud_. This is similar to the behavior seen in [85] for a different mass range, where increasing the DM mass fraction leads to cloud formation. For \(m=1.34\times 10^{-11}\,\)eV, the bosonic component completely envelops the fermionic one in a cloud and can significantly decrease the compactness of the object (notice the different scales on the x-axis). The apparent compactness of the fermionic part, on the other hand, increases. Here, only observing the fermionic radius as in Fig. 2 would seem like a violation of GR, as the apparent compactness exceeds the Buchdahl limit of 4/9. The relation between tidal deformability and total gravitational mass is plotted in Fig. 4. Here, we show the dimensionless tidal deformability \(\Lambda_{\rm tidal}=\lambda_{\rm tidal}/M^{5}\). In blue-bordered lines, the tidal deformability of the DD2 EoS is shown, while the tidal deformability of a pure boson star is shown in yellow-bordered lines. The latter agrees with the trend lines shown in [81]. For \(m=1.34\times 10^{-9}\,\)eV, the DM is mostly confined to the inner part of the neutron star as a core and therefore does not affect the tidal deformability significantly. Only for stars completely dominated by DM are the results close to the pure boson star solutions. For larger interactions \(\Lambda_{\rm int}\approx 100\), the tidal deformability is decreased. For \(m=1.34\times 10^{-11}\,\)eV, on the other hand, where the bosonic component forms a cloud, there is a significant effect on the tidal deformability. The tidal deformability of boson stars is much higher than that of purely fermionic stars, so even small amounts of DM can significantly increase the tidal deformability of the FBS. For constant \(\rho_{c}\), the tidal deformability increases by orders of magnitude as \(\phi_{c}\) increases. Then, there is a turning point where the tidal deformability decreases with increasing total gravitational mass and converges to the purely bosonic solutions. Overall, this opens up a vast new parameter space, even for small DM mass fractions. While the presence of these bosonic clouds in small quantities would barely be observable in the mass-radius plane, it would clearly affect the tidal deformability, as visible in Fig. 5. For \(m=1.34\times 10^{-10}\,\mathrm{eV}\), the behavior is more dependent on the interaction strength \(\Lambda_{\mathrm{int}}\). For weaker interactions, the tidal deformability stays roughly in the same order of magnitude for constant \(\rho_{c}\), while slowly converging to the pure bosonic solution for increasing \(\phi_{c}\). For stronger interactions, the tidal deformability actually increases as it converges to the bosonic solution, as the bosonic component starts to form a cloud. This behavior is consistent with the observations of [54], where an effective EoS was used for modeling the bosonic component.

Figure 2: The relation between total gravitational mass \(M\) and the fermionic radius \(R_{f}\) for FBSs with different DM mass fractions. The rows correspond to three different bosonic masses \(m=\{1,10,0.1\}\cdot 1.34\times 10^{-10}\,\mathrm{eV}\), while the columns correspond to three different \(\Lambda_{\mathrm{int}}=\{0,10,100\}\). The EoS we employ for the fermionic part is the DD2. Notice the different scale of the bottom plots. Observing only the fermionic radius of these systems would appear to violate the Buchdahl limit, even though the whole FBS does not.

Figure 3: The relation between total gravitational mass \(M\) and the effective gravitational radius \(R_{g}\) for FBSs with different DM mass fractions. The effective gravitational radius is the radius at which 99% of the mass is contained. The rows correspond to three different bosonic masses \(m=\{1,10,0.1\}\cdot 1.34\times 10^{-10}\,\mathrm{eV}\), while the columns correspond to three different \(\Lambda_{\mathrm{int}}=\{0,10,100\}\). The EoS we employ for the fermionic part is the DD2. In the case of pure neutron stars, the crust has comparatively low density, which makes this effective gravitational radius significantly smaller than the fermionic one. Notice the different scales of the bottom plots. For higher boson masses, the bosonic component forms a core and the total compactness of the object increases; for lower boson masses, it forms a cloud and can significantly decrease the compactness of the object.

Figure 4: The relation between dimensionless tidal deformability \(\Lambda_{\mathrm{tidal}}=\lambda_{\mathrm{tidal}}/M^{5}\) and total gravitational mass \(M\) for FBSs with different DM mass fractions. The rows correspond to three different bosonic masses \(m=\{1,10,0.1\}\cdot 1.34\times 10^{-10}\,\mathrm{eV}\), while the columns correspond to three different interaction strengths \(\Lambda_{\mathrm{int}}=\{0,10,100\}\). For the fermionic part, we employ the DD2 EoS.

### Comparison to Observational Constraints

There are measurements of the (fermionic) radius of neutron stars by the NICER telescope, tracking hot spots on their surface with X-ray observations. For the millisecond pulsar PSR J0030+0451 they derive the constraints on the mass \(M=1.34^{+0.15}_{-0.16}\,\mathrm{M}_{\odot}\) (68%) and radius \(R=12.71^{+1.14}_{-1.19}\,\mathrm{km}\) (68%) [17]. A second, heavier millisecond pulsar, PSR J0740+6620, has been measured at \(M=2.07^{+0.07}_{-0.07}\,\mathrm{M}_{\odot}\) (68%) with radius \(R=12.39^{+1.30}_{-0.98}\,\mathrm{km}\) (68%) [20]. These measurements constitute only two points on the mass-radius curve (in the neutron star case) or region (in the FBS case), but they can show which curves/regions would support the existence of such stars. We plot the posterior distributions of these measurements in Fig. 5, which should be compared to the regions in Fig. 2, where the fermionic radius is plotted.
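As a crude illustration of such a comparison, one can check whether a computed configuration falls inside the quoted 68% mass and radius intervals; this is only a box cut, not a proper use of the full NICER posteriors shown in Fig. 5, and the numbers below are simply copied from the text.

```python
# Quoted 68% intervals (M in solar masses, R in km) from the measurements above.
NICER_68 = {
    "PSR J0030+0451": {"M": (1.34 - 0.16, 1.34 + 0.15), "R": (12.71 - 1.19, 12.71 + 1.14)},
    "PSR J0740+6620": {"M": (2.07 - 0.07, 2.07 + 0.07), "R": (12.39 - 0.98, 12.39 + 1.30)},
}

def consistent_with(pulsar, M_tot, R_f_km):
    """True if the (total mass, fermionic radius) pair lies inside the box."""
    lo_M, hi_M = NICER_68[pulsar]["M"]
    lo_R, hi_R = NICER_68[pulsar]["R"]
    return lo_M <= M_tot <= hi_M and lo_R <= R_f_km <= hi_R
```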
At first glance, the FBS solutions with a core, where the apparent compactness is greater, seem to be disfavored by the measurements (assuming the DD2 EoS), while the DM cloud solutions are well within the posteriors. This is in accordance with [85], who modeled the FBS with an effective EoS and also included the changing photon geodesics due to the DM cloud, and with [76], who performed a Bayesian analysis with the effective EoS. Another measurement comes from the supernova remnant HESS J1731-347. Modeling the X-ray spectrum with accurate distance information from Gaia, the authors of [86] report a mass of \(M=0.77^{+0.20}_{-0.17}\,\mathrm{M}_{\odot}\) (68%) with radius \(R=10.4^{+0.86}_{-0.78}\,\mathrm{km}\) (68%). This is an unusually light neutron star, which standard stellar evolution theory struggles to explain, see e.g. [88]. The authors of [86] propose it to be a strange star, but looking at Fig. 2, this region is also well populated by DM core solutions. Of course, one would have to repeat their analysis with an actual bosonic component to get accurate constraints, which we leave for future work. Lastly, there is the observation of GW170817, a binary neutron star merger. Reference [24] has derived constraints with minimal assumptions on the nature of the compact objects. They use a mass-weighted linear combination of the individual tidal deformabilities and cite an upper limit of 630. Alternatively, assuming neutron stars with the same EoS, ref. [23] has derived constraints on the tidal deformability with the help of universal relations [89; 90]. These constraints are not perfectly applicable to our case, as the I-Love-Q relations are not necessarily applicable (although they might be [56]; we leave this for future work) and our two FBS stars might have the same EoS but different DM mass fractions. Nevertheless, we can make some initial guesses. The measurements generally favor lower tidal deformabilities. Extrapolating this to Fig. 4, this would mean that the DM cloud scenarios with larger tidal deformability are disfavored. DM core scenarios, on the other hand, are favored, since they can lower the tidal deformability. A more thorough analysis might place quantitative constraints on these models, which we leave for future work. Previous studies using an effective EoS description for the bosonic component, such as [77; 91; 54], reach similar conclusions and have placed initial constraints on different mass ranges.

Figure 5: **Left panel:** Resulting masses and radii of FBSs for the two cases \(m=1.34\times 10^{-10}\,\mathrm{eV}\) and \(m=1.34\times 10^{-11}\,\mathrm{eV}\), shown together with constraints from HESS J1731-347 [86], PSR J0030+0451 [17], PSR J0740+6620 [20], PSR J1311-3430 [87] and J0952-0607 [8]. In both cases, the self-interaction was set to zero and the percentage number denotes the DM mass fraction. **Right panel:** Dimensionless tidal deformability for the same set of parameters, shown together with the constraint coming from the GW170817 event [24].

Overall, the different measurements seem complementary, and combining them in a proper analysis might significantly constrain the parameter space. Of course, these properties are most likely degenerate with the neutron star EoS, which makes certainty hard to obtain. Breaking these degeneracies requires other methods, such as looking for correlations of the galactic DM distribution with the neutron star (FBS) mass distribution [91; 77].
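For reference, the mass-weighted combination of the two tidal deformabilities constrained by [24] is, presumably, the standard binary tidal deformability \(\tilde{\Lambda}\) that enters the gravitational-wave phase at leading tidal order; a computed FBS binary would be in tension with GW170817 if the value exceeded the quoted upper limit of about 630. The masses and \(\Lambda\) values in the example are hypothetical.

```python
def binary_tidal_deformability(m1, lam1, m2, lam2):
    """Standard mass-weighted combination Lambda-tilde of the two
    dimensionless tidal deformabilities (masses in solar masses)."""
    return (16.0 / 13.0) * ((m1 + 12.0 * m2) * m1**4 * lam1
                            + (m2 + 12.0 * m1) * m2**4 * lam2) / (m1 + m2)**5

# Hypothetical example: a 1.4 + 1.3 M_sun FBS binary with assumed Lambda values.
print(binary_tidal_deformability(1.4, 300.0, 1.3, 500.0))
```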
## V Comparison with an effective EoS Due to the significant numerical effort associated with solving the full system of equations (eqs. 10-13) self-consistently, earlier studies [78; 75] have used an effective EoS \(P(e)\) for the scalar field, treating it like a perfect fluid with pressure \(P\) and total energy density \(e\). The effective EoS was originally derived in [75] for the case where \(\Lambda_{\rm int}=\lambda/(8\pi m^{2})>0\) is large (strong self-interactions). It models exclusively the ground state of the scalar field and assumes an isotropic energy-momentum tensor (which is only valid in this limit). The effective EoS has the advantage that the scalar field does not need to be solved for directly, and the structure equations reduce to the standard TOV equations. The effective EoS is given by \[P=\frac{4}{9}\rho_{0}\left[\left(1+\frac{3}{4}\frac{e}{\rho_{0}}\right)^{1/2}-1\right]^{2}, \tag{31}\] where \(\rho_{0}=m^{4}/2\lambda\). Note that our expressions for \(\rho_{0}\) and \(\Lambda_{\rm int}\) deviate from [78; 75] by a factor of two due to the different normalization of the scalar field \(\Phi\) and the self-interaction parameter \(\lambda\) in the potential (30). The authors of [78] used the effective EoS in a two-fluid system of perfect fluids, which interact only gravitationally, to compute the tidal deformability of FBS. In the following, we compare the results obtained from integrating the two-fluid model (see [78] for details) and from solving the full system (10-13). For the two-fluid model, the tidal deformability is computed as one would for a single-fluid system (details in [78]); for the full system, it is computed as described in section III. For the initial conditions of the two-fluid model we choose the same conditions as in [78]. For better comparability between the full system and the effective EoS, we first want to find an expression relating the scalar field \(\phi\) to the energy density \(e_{\rm eff}\) of the effective fluid. To derive this relation, we set the \(T_{tt}\) component of Eq. (4) equal to the \(T_{tt}\) component of a perfect fluid (therefore \(T_{tt}^{(\Phi)}\stackrel{!}{=}e_{\rm eff}\,e^{v}\)) and use the approximations of [75] (i.e. neglecting spatial derivatives). We obtain an expression that depends only on the scalar field value \(\phi\), the scalar field mass \(m\) and the self-interaction parameter \(\lambda\), \[e_{\rm eff}(\phi)=2m^{2}\phi^{2}+\frac{3}{2}\lambda\phi^{4}, \tag{32}\] where \(e^{-v}\omega^{2}=m^{2}+\lambda\phi^{2}\) was substituted using the Klein-Gordon equation (10). Equation (32) holds for all radii (under the approximations stated above). To get the initial condition \(e_{\rm eff,c}\), one simply plugs the corresponding central value of the scalar field \(\phi_{c}\) into Eq. (32).
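Under these approximations the two-fluid comparison can be sketched compactly. The snippet below is an illustration, not the authors' implementation: it inverts Eq. (31) in closed form, uses Eq. (32) for the central bosonic energy density, and integrates two gravitationally coupled perfect fluids; the polytropic constants and the value of \(\rho_{0}\) are placeholders, with a simple polytrope again standing in for the DD2 EoS.

```python
import numpy as np
from scipy.integrate import solve_ivp

RHO0 = 1.0              # rho_0 = m^4 / (2 lambda); illustrative value only
K, GAMMA = 100.0, 2.0   # polytropic stand-in for the DD2 EoS

def e_boson_of_P(P):
    """Closed-form inverse of the effective EoS (31): e = 3 P + 4 sqrt(rho_0 P)."""
    return 3.0 * P + 4.0 * np.sqrt(RHO0 * P) if P > 0.0 else 0.0

def e_eff_central(phi_c, m, lam):
    """Central energy density of the effective bosonic fluid from Eq. (32)."""
    return 2.0 * m**2 * phi_c**2 + 1.5 * lam * phi_c**4

def e_fermion_of_P(P):
    if P <= 0.0:
        return 0.0
    rho = (P / K) ** (1.0 / GAMMA)
    return rho + P / (GAMMA - 1.0)

def two_fluid_rhs(r, y):
    """Two perfect fluids coupled only through gravity: y = (m, P_ns, P_dm).
    Both fluids feel the same metric, so the usual TOV source is built from
    the summed pressures and energy densities."""
    m, P_ns, P_dm = y
    e_ns, e_dm = e_fermion_of_P(P_ns), e_boson_of_P(P_dm)
    dm = 4.0 * np.pi * r**2 * (e_ns + e_dm)
    half_dnu = (m + 4.0 * np.pi * r**3 * (P_ns + P_dm)) / (r * (r - 2.0 * m))
    dP_ns = -(e_ns + P_ns) * half_dnu if P_ns > 0.0 else 0.0
    dP_dm = -(e_dm + P_dm) * half_dnu if P_dm > 0.0 else 0.0
    return [dm, dP_ns, dP_dm]

def solve_two_fluid(P_ns_c, P_dm_c, r_max=100.0):
    """No eigenvalue problem for omega appears here, which is the main reason
    the two-fluid model is so much cheaper than the full system."""
    return solve_ivp(two_fluid_rhs, (1e-6, r_max), [0.0, P_ns_c, P_dm_c],
                     rtol=1e-10, atol=1e-14, dense_output=True)
```

The central bosonic pressure follows from evaluating Eq. (31) at \(e_{\rm eff,c}\) computed via Eq. (32).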
Figure 6 shows the relative error \(\epsilon_{\rm rel}\) of the quantities \(M_{\rm tot}\) and \(\Lambda_{\rm tidal}\), computed using the full system and the effective two-fluid system, as a function of \(\Lambda_{\rm int}\). It can be seen that the errors (the shaded regions) generally decrease for increasing \(\Lambda_{\rm int}\). This is consistent with the assumption that the effective EoS Eq. (31) becomes exact only in the limit of strong self-interactions. For small \(\Lambda_{\rm int}\) the relative error reaches 100% for the total mass and diverges for the tidal deformability. This is to be expected, since the total mass converges to zero for pure boson stars when using the effective EoS in the limit \(\Lambda_{\rm int}\to 0\) (see fig. 2 in [75]), while it reaches a constant value when computing the mass using the full system. Likewise, due to the definition of the dimensionless tidal deformability (see above), a diverging error is to be expected. For \(\Lambda_{\rm int}\approx 100\), the maximal error of the total mass (tidal deformability) is on the order of \(88\,\%\) (\(>10^{4}\,\%\)), whereas the lower 95th percentiles of the errors are noticeably smaller, at around \(<47\,\%\) (\(<240\,\%\)). This means that only 5% of the computed configurations have relative errors higher than \(47\,\%\) (\(250\,\%\)). The median error, denoted by the dashed line, is around \(1\,\%\) (\(2\,\%\)). At \(\Lambda_{\rm int}=300\) the maximal error reaches \(85\,\%\) (\(>10^{4}\,\%\)) and the median error reaches \(0.4\,\%\) (\(0.8\,\%\)). Asymptotically, the error is constrained by floating-point precision and the inherent error of the effective EoS as compared to the full system.

Figure 6: Distribution of the relative error (e.g. \(|\Lambda_{\rm tidal,full}-\Lambda_{\rm tidal,eff}|/\Lambda_{\rm tidal,full}\)) of the dimensionless tidal deformability \(\Lambda_{\rm tidal}\) (upper panel) and total mass \(M_{\rm tot}\) (lower panel) as a function of \(\Lambda_{\rm int}\). For example, the straight line shows the boundary below which half of the FBS configurations lie, meaning that half of them have a relative error of less than the shown value for a given \(\Lambda_{\rm int}\). Subscripts _full_ and _eff_ denote quantities obtained from the full system and from the effective EoS, respectively. Only stable FBS were considered for the relative error at a given \(\Lambda_{\rm int}\), and computations were performed for \(m=6.7\times 10^{-11}\) eV. The agreement between the full system and the effective EoS generally becomes better for large \(\Lambda_{\rm int}\); however, at some point, numerical inaccuracies in the full system dominate the relative error, which starts to be problematic for \(\Lambda_{\rm int}\gtrsim 400\).

To gain a better understanding of how the effective EoS and the full system compare, we compute the tidal deformability \(\Lambda_{\rm tidal}\) using both systems. The left panel of figure 7 shows the tidal deformability of pure boson stars calculated for different self-interaction strengths \(\Lambda_{\rm int}=\{10,100,200,400\}\). The solid lines show the solutions using the full system and the dashed lines are the values obtained using the effective EoS. The effective EoS can qualitatively reproduce the solution of the full system, even for small \(\Lambda_{\rm int}\). With increasing \(\Lambda_{\rm int}\), the agreement between the full and the effective system becomes better. At around \(\Lambda_{\rm int}=400\), the quantitative agreement reaches a few % relative difference. Next, we consider mixed configurations with nonzero central scalar field and central density. The right panel of Fig. 7 shows the tidal deformability with respect to the FBS mass. Several curves of constant central scalar field \(\phi_{c}\) were calculated at different \(\Lambda_{\rm int}=\{10,100,200,400\}\). The choice of constant \(\phi_{c}\) is per se arbitrary but was made for the sake of simpler comparability with future works. The solid lines show the solutions obtained using the full system and the dashed lines were computed with the effective EoS (all other values being equal).
With increasing \(\Lambda_{\rm int}\), the solutions using the effective EoS agree with the full system with increasing accuracy. Even though at lower \(\Lambda_{\rm int}<200\) the deviations are quite large, the qualitative trend is correctly recovered. At \(\Lambda_{\rm int}=400\), both systems produce reasonably similar results (within a few % of relative difference). This supports the usage of the effective EoS for large \(\Lambda_{\rm int}\gtrsim 400\) also for the computation of the tidal deformability \(\Lambda\).

Figure 7: **Left panel:** Tidal deformability \(\Lambda_{\rm tidal}\) plotted against the total gravitational mass \(M_{\rm tot}\) for pure boson stars and various self-interaction strengths \(\Lambda_{\rm int}\). The boson mass is \(m=6.7\times 10^{-11}\,\)eV in all cases. The solid lines are the values obtained using the full system eqs. (10-13) and the dotted lines are the corresponding solutions using the effective bosonic EoS Eq. (31). **Right panel:** Tidal deformability \(\Lambda_{\rm tidal}\) with respect to the total gravitational mass \(M\) of different FBSs for different self-interaction strengths \(\Lambda_{\rm int}\). The boson mass is \(m=6.7\times 10^{-11}\,\)eV in all cases. All lines have a constant central value of the scalar field \(\phi_{c}=0.02\), but different central densities \(\rho_{c}\). Only stars within the stability region are shown. The solid lines are the values obtained using the full system eqs. (10-13) and the dotted lines are the corresponding solutions using the effective bosonic EoS Eq. (31).

A few notes on the usefulness of the effective EoS Eq. (31) and the two-fluid system are in order. We were able to verify, for most configurations, the general notion that the effective EoS becomes asymptotically more accurate. However, a significant percentage of FBS configurations with high relative errors remains, especially when considering the tidal deformability, where the relative error surpasses 200% for roughly five percent of all configurations. This is due to the different low-mass limits and the definition of the dimensionless tidal deformability. Nevertheless, we conclude that the usage of the effective EoS is justified in the cases where \(\Lambda_{\rm int}\gtrsim 400\), as the errors are acceptable for most (massive) configurations. Of course, solving the full system eqs. (10-13) will always yield the exact results in theory. In practice, it can be numerically difficult to integrate the full system at high \(\Lambda_{\rm int}\gtrsim 400\) because (1) the frequency \(\omega\) must be tuned to higher accuracy than what is possible using 64-bit floating-point numbers and (2) increasingly small step-sizes are needed to solve the equations correctly. During our tests, we found that the more relevant constraining factor is the high accuracy needed for \(\omega\), rather than the step-size. Smaller initial \(\phi_{c}\) lead to larger bosonic radii \(\gg 10\,\mathrm{km}\), for which the numerical integration becomes problematic. This concerns \(5\,\%\) of the considered configurations. In contrast, the two-fluid system together with the effective bosonic EoS is numerically robust, does not require numerical root-finding for \(\omega\), and can manage well with larger numerical step-sizes. With equal step-sizes and initial conditions, the two-fluid system takes around two orders of magnitude less computation time than solving the full system.
The speedup can be increased further when considering that the two-fluid system also tolerates larger step-sizes while staying numerically accurate. ## VI Conclusions In this work, we considered the impact of a complex scalar field on the mass and tidal deformability of neutron stars. The scalar field was assumed to be massive and self-interacting, but to only interact gravitationally with the fermionic neutron star matter. We derived the equations describing the linear perturbations of the combined FBS system induced by the presence of an external gravitational tidal field and numerically solved them to obtain the tidal deformability of the combined system. We found that the scalar field masses \(m\) and self-interaction strengths \(\lambda\) which result in the _core-like_ configurations of the dark matter lead to objects with higher compactness and reduced tidal deformability. This is the case for masses \(m\gtrsim 1.34\times 10^{-10}\,\mathrm{eV}\). However, large self-interactions \(\lambda\) allow for higher FBS masses or can in some cases result in _cloud-like_ configurations. In some of these cases, observing only the fermionic radius would appear to violate the Buchdahl limit. When comparing the results to available observational data of pulsars, it becomes clear that their uncertainties are currently too large (apart from the pulsar mass measurements) to derive quantitative constraints on the dark matter component. The degeneracy of the effects of DM in the FBS with the EoS poses an additional challenge. As certain DM masses can increase the total mass of the system while leaving the fermionic radius roughly constant, this makes previously excluded EoS possible again, if they appear in a mixed configuration of NS matter and DM. Likewise, the unusually light neutron star HESS J1731-347 is difficult to reconcile with known high-mass pulsar measurements, using a regular EoS. The relatively weak constraint from GW170817 on the tidal deformability (\(\Lambda_{\mathrm{tidal}}\leq 800\) at \(M_{\mathrm{tot}}\approx 1.4\,\mathrm{M}_{\odot}\)) is currently also not strong enough to significantly narrow down the dark matter properties. With the upcoming joint run of LIGO, Virgo and KAGRA, we expect more observational data, which will enable us to derive quantitative constraints. We plan to investigate how to constrain dark matter properties using these observations in the future. In addition to solving for the scalar field explicitly, we also utilized an effective EoS to describe its contribution to the stress-energy-tensor and reduce the complexity of this model to a two-fluid system. This approach was recently used by [78] to compute the tidal deformability. In this work, we compared the result of using the effective EoS to solving the full system of equations. We found that for \(m=6.7\times 10^{-11}\,\mathrm{eV}\) and interactions strengths \(\Lambda_{\mathrm{int}}>300-400\) with \(\Lambda_{\mathrm{int}}=\lambda/(8\pi m^{2})\), the usage of the effective EoS is typically justified. We do not expect this conclusion to be dependent on the value of the mass \(m\), but rather only on \(\Lambda_{\mathrm{int}}\). Still, even for large values of \(\Lambda_{\mathrm{int}}\), we find a significant number of configurations with relative errors of \(>\mathcal{O}(10^{2})\). Finally, it would be interesting to study the exact impact the additional scalar field has on binary merger dynamics. In [92] this was initially studied for a non-self-interacting scalar field. 
In general, it will be necessary to extend this study to also account for self-interactions, as this can drastically modify the FBS properties and thus impact the observed gravitational wave signal. We will study this in detail in the future. ###### Acknowledgements. The authors acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the CRC-TR 211 'Strong-interaction matter under extreme conditions'- project number 315477589 - TRR 211. ## Appendix A Units In this work, we considered units in which \(c=G=M_{\odot}=1\). As a direct consequence, distances are measured in units of \(\approx 1.48\,\mathrm{km}\), \(\hbar\approx 1.2\times 10^{-76}\neq 1\) and \(m_{\mathrm{planck}}=\sqrt{\hbar c/G}\approx 1.1\times 10^{-38}\). We describe the Boson star using the Klein-Gordon equation, which in SI units and flat spacetime reads as \((\square-(mc/\hbar)^{2})\phi=0\). The term \(mc/\hbar\) is the inverse of the reduced Compton-wavelength \(\lambda_{c}=\hbar/mc\), which sets the typical length scale for the system even in the self-gravitating case. Setting it equal to the gravitational radius \(GM/c^{2}\), which in the case of mass-scales of \(\sim M_{\odot}\) is approximately \(1.48\,\mathrm{km}\), leads to \(m=\hbar/c\lambda_{c}\), which corresponds to \(1.34\times 10^{-10}\,\mathrm{eV}\), which then also automatically results in Boson stars with masses \(\sim 1\,\mathrm{M}_{\odot}\). Previous works such as e.g. [68] therefore specify the mass of the scalar particle in units of \(1.34\times 10^{-10}\,\mathrm{eV}\).
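For convenience, the conversion from a particle mass quoted in eV to the dimensionless code-unit mass used throughout, together with the effective interaction strength \(\Lambda_{\rm int}=\lambda/(8\pi m^{2})\), can be sketched as follows; the constants are standard values, rounded.

```python
import math

HBAR_C_EV_M   = 1.973269804e-7   # hbar*c in eV*m
LENGTH_UNIT_M = 1476.6           # G*M_sun/c^2 in meters (the code length unit)

def boson_mass_to_code_units(m_eV):
    """mu = m c / hbar (inverse reduced Compton wavelength) expressed in
    units of the inverse gravitational radius G*M_sun/c^2."""
    return m_eV / HBAR_C_EV_M * LENGTH_UNIT_M

def effective_interaction(lam, m_code):
    """Lambda_int = lambda / (8 pi m^2), with m already in code units."""
    return lam / (8.0 * math.pi * m_code**2)

# m = 1.34e-10 eV gives a code-unit mass of ~1, i.e. a reduced Compton
# wavelength comparable to the ~1.48 km gravitational radius, as stated above.
print(boson_mass_to_code_units(1.34e-10))   # ~1.0
```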
2305.00161
ViewFormer: View Set Attention for Multi-view 3D Shape Understanding
This paper presents ViewFormer, a simple yet effective model for multi-view 3d shape recognition and retrieval. We systematically investigate the existing methods for aggregating multi-view information and propose a novel ``view set" perspective, which minimizes the relation assumption about the views and releases the representation flexibility. We devise an adaptive attention model to capture pairwise and higher-order correlations of the elements in the view set. The learned multi-view correlations are aggregated into an expressive view set descriptor for recognition and retrieval. Experiments show the proposed method unleashes surprising capabilities across different tasks and datasets. For instance, with only 2 attention blocks and 4.8M learnable parameters, ViewFormer reaches 98.8% recognition accuracy on ModelNet40 for the first time, exceeding previous best method by 1.1% . On the challenging RGBD dataset, our method achieves 98.4% recognition accuracy, which is a 4.1% absolute improvement over the strongest baseline. ViewFormer also sets new records in several evaluation dimensions of 3D shape retrieval defined on the SHREC'17 benchmark.
Hongyu Sun, Yongcai Wang, Peng Wang, Xudong Cai, Deying Li
2023-04-29T03:58:20Z
http://arxiv.org/abs/2305.00161v1
# ViewFormer: View Set Attention for Multi-view 3D Shape Understanding ###### Abstract This paper presents _ViewFormer_, a simple yet effective model for multi-view 3d shape recognition and retrieval. We systematically investigate the existing methods for aggregating multi-view information and propose a novel "view set" perspective, which minimizes the relation assumption about the views and releases the representation flexibility. We devise an adaptive attention model to capture pairwise and higher-order correlations of the elements in the view set. The learned multi-view correlations are aggregated into an expressive view set descriptor for recognition and retrieval. Experiments show the proposed method unleashes surprising capabilities across different tasks and datasets. For instance, with only 2 attention blocks and 4.8M learnable parameters, ViewFormer reaches 98.8% recognition accuracy on ModelNet40 for the first time, exceeding previous best method by 1.1%. On the challenging RGBD dataset, our method achieves 98.4% recognition accuracy, which is a 4.1% absolute improvement over the strongest baseline. ViewFormer also sets new records in several evaluation dimensions of 3D shape retrieval defined on the SHREC'17 benchmark. ## 1 Introduction With the advancement of 3D perception devices and methods, 3D assets (point clouds, meshes, RGBD images, CAD models, _etc._) become more and more common in daily life and industrial production. 3D object recognition and retrieval are basic requirements for understanding the 3D contents and the development of these technologies will benefit downstream applications like VR/AR/MR, 3D printing, and autopilot. Existing methods for 3D shape analysis can be roughly divided into three categories according to the input representation: point-based [32, 34, 45, 41, 48, 57, 26, 52, 50, 58, 30], voxel-based [49, 31, 33, 59], and view-based methods [39, 40, 13, 44, 11, 17, 16, 7, 28, 46, 47, 56, 12, 14]. Among them, view-based methods recognize a 3D object based on its rendered or projected images, termed _multiple views_. Generally, methods in this line [40, 46, 6, 51] outperform the point- and voxel-based counterparts[33, 52, 50, 58, 30]. On one hand, view-based methods benefit from massive image datasets and the advances in image recognition over the past decade. On the other hand, the multiple views of a 3D shape contain richer visual and semantic signals than the point or voxel form. For example, one may not be able to decide whether two 3D shapes belong to the same category by observing them from one view, but the answer becomes clear after watching other views of these shapes. The example inspires a central problem, _e.g_., how to exploit multi-view information effectively for a better understanding of 3D shape. This paper systematically investigates existing methods on how they aggregate the multi-view information and the findings are summarized in Figure 1. In the early stage, MVCNN [39] and its follow-up work [40, 13, 55, 44, 54] independently process multiple views of a 3D shape by a shared CNN. The extracted features are fused with pooling operation or some variants to form a compact 3D shape descriptor. We group these methods into _Independent Views Figure 1: A division for multi-view 3D shape analysis methods based on how they organize views and aggregate multi-view information. View Set is the proposed perspective that the views of a 3D shape are organized in a set. in Figure 0(a). 
Although the simple design made them stand out at the time, they did not take a holistic perspective to the multiple views of a 3D shape and the information flow among views was insufficient. In the second category, a growing number of methods model multiple views as a sequence [17, 16, 7, 28, 53], which are grouped into _View Sequence_ in Figure 0(b). They deploy RNNs, like GRU [9] and LSTM [19], to learn the view relations. However, a strong assumption behind _View Sequence_ is that the views are collected from a circle around the 3D shape. In many cases, the assumption may be invalid since the views can be rendered from random viewpoints, so they are unordered. To alleviate this limitation, later methods describe views with a more general structure, _e.g_., graph [46, 47] or hyper-graph [56, 12, 14], and develop graph convolution networks (GCNs) to propagate and integrate view features, called _View Graph_ in Figure 0(c). Methods in this category show both flexibility and promising performance gains, whereas they require constructing a view graph according to the positions of camera viewpoints. But sometimes the viewpoints may be unknown and graph construction introduces additional computation overheads. In addition, message propagation between remote nodes on the view graphs may not be straightforward. Some other methods explore rotations [22, 11], multi-layered height-maps representations [37], view correspondences [51], viewpoints selection [15] when analyzing 3D shapes. They can hardly be divided into the above categories, but multi-view correlations in their pipelines still need to be enhanced. By revisiting existing works, two ingredients are found critical for improving multi-view 3D shape analysis. The first is how to organize the views so that they can communicate with each other flexibly and freely. The second is how to integrate multi-view information effectively. It is worth noting that the second ingredient is usually coupled with the first, just like GCNs defined on the view graphs, and RNNs defined on the view sequences. In this paper, we present a novel perspective that multiple views of a 3D shape are organized into a _View Set_ in Figure 0(d), where elements are permutation invariant, which is consistent with the fact that 3D shape understanding is actually not dependent on the order of input views. For example, in Figure 0(b), whether the side view is placed first, middle or last in the inputs, the recognition result should always be airplane. Unlike existing methods analyzed above, this perspective also makes no assumptions about the correlations of views, which is more flexible and practical in real-world applications. Instead, to aggregate multi-view information, a view set attention model, ViewFormer, is devised to learn the pairwise and higher-order relations among the views adaptively. The attention architecture is a natural choice because it aligns with the view set characteristics. First, the attention mechanism is essentially a set operator and inherently good at capturing correlations between the elements in a set. Second, this mechanism is flexible enough that it makes minimal assumptions about the inputs, which matches our expectation that there are no predefined relations or additional requirements for views. The proposed model has four components: Initializer, Encoder, Transition, and Decoder. Initializer initializes the representations of views. Encoder is adapted from standard Transformer encoder [43] with specific modifications. 
i) The position encodings of input views are removed since views are permutation invariant. ii) The class token is removed because it is irrelevant to capturing the correlations of views in the set. iii) The number of attention blocks is greatly reduced as the size of a view set is relatively small (\(\leq\) 20 in most cases) so it is unnecessary to employ deeper blocks. Transition summarizes the learned correlations into a compact View Set Descriptor (VSD) to express the ViewFormer's understanding of the 3D shape. Decoder is designed towards downstream tasks, such as recognition and retrieval. The simple designs around the view set show not only great flexibility but also powerful capability for 3D shape understanding. New records are obtained by ViewFormer in downstream tasks of 3D shape recognition and retrieval. In summary, the contributions of this paper include: * A systematical investigation of existing methods in aggregating multi-views for 3D shape understanding. A novel perspective is proposed that multiple views are incorporated in a _View Set_. And a simple yet effective view set attention model, ViewFormer, is designed to adaptively capture pairwise and higher-order correlations among the views for better understanding. * Extensive evaluations demonstrate the superb performances of the proposed approach. The recognition accuracy on ModelNet40 can reach as high as 98.8%, surpassing all existing methods. On the challenging RGBD dataset, ViewFormer achieves 98.4% classification accuracy, which is a 4.1% absolute improvement over previous state-of-the-art. ViewFormer-based 3D shape retrieval sets new records in several evaluation dimensions on SHREC'17 benchmark. * Ablation studies shed light on the various sources of performance gains for 3D shape understanding and the visualizations provide some insightful conclusions. ## 2 Related Work In this section, we review the multi-view 3D shape analysis methods and explore the deployment of set and attention in these methods. **Multi-view 3D Shape Analysis.** Existing methods aggregate multi-view information for 3D shape understanding in different ways. (1) _Independent Views_. Early work like MVCNN series [39] and its follow-up [40, 13, 55, 44, 54] extract view features independently using a shared CNN, then fuse the extracted features using the pooling operation or some variants. The simple strategy may discard a lot of useful information and the views are not well treated as a whole thus information flow among views needs to be increased. (2) _View Sequence_. Researchers perceive the problems and propose various descriptions to incorporate multiple views of a 3D shape into a specific data structure. For example, RNN-based methods [17, 16, 7, 53, 28, 6] are proposed to operate on the view sequence. (3) _View Graphs_. The graph-based models [12, 56, 46, 47, 14] assume the relations among views as graphs and develop GCNs to capture multi-view interaction. However, message propagation on view graphs may not be straightforward and graph construction leads to additional overheads. (4) This paper presents a flexible and practical perspective, _View Set_, which neither makes assumptions about views nor introduces additional overheads. Based on that, a view set attention model is devised to adaptively integrate the correlations for all view pairs. Some other methods also explore rotations [22, 11], multi-layered height-maps representations [37], view correspondences [51], viewpoints selection [15] when analyzing 3D shapes. 
Their multi-view interaction still needs to be strengthened. **Set in Multi-view 3D Shape Analysis.** Previous works also mention "set" in multi-view 3D shape analysis. But they basically refer to different concepts from the proposed one. For instance, RCPCNN [44] introduces a dominant set clustering and pooling module to improve MVCNN [39]. Johns _et al_. decompose a sequence of views into a set of view pairs. They classify each pair independently and weigh the contribution of each pair [21]. MHBN [55] considers patches-to-patches (set-to-set) similarity of different views and aggregates local features using bilinear pooling. Yu _et al_. extend MHBN by introducing VLAD layer [54]. The basic idea is to calculate the similarity between two sets of local patches, while our view set idea provides a foundation for adaptively learning inter-view attentions. **Attention in Multi-view 3D Shape Analysis.** The attention mechanisms have been embedded in existing multi-view 3D shape recognition methods, but they vary in motivation, practice and effectiveness. VERAM [7] uses a recurrent attention model to select a sequence of views to classify 3D shapes adaptively. SeqViews2SeqLabels [17] introduces the attention mechanism to increase the discriminative ability for the RNN-based model and reduces the effect of selecting the first view position. 3D2SeqViews [16] proposes hierarchical attention to incorporate view-level and class-level importance for 3D shape analysis. Nevertheless, there are two points worth noting for the attention of the above methods. First, the attention operation in these methods differs from multi-head self-attention in standard Transformer [43]. Second, the dedicated designed attention does not seem to produce satisfactory results since the highest recognition accuracy they achieve on ModelNet40 is 93.7%, whereas our solution reaches 99.0% on the same dataset. Recent work MVT [6] also explores the attention architecture for view-based 3D recognition. It is inspired by the success of ViT [10] in image recognition and wants to strengthen view-level communications with patch-level correlations. MVT deploys a ViT to extract patch-level features for all images and adopts another ViT to learn the correlations for all views. However, ViewFormer shows it is unnecessary to take the patch-level interactions into account to achieve the best results, thus the computation budgets are considerably reduced compared to MVT. ## 3 ViewFormer In this section, we firstly formulate the problem of multi-view 3D shape recognition and retrieval based on the view set, then elaborate on the devised model and how it handles a set of views. ### Problem Formulation **View Set.** The views of a 3D shape refer to the rendered or projected RGB images from it. For example, a 3D shape \(\mathcal{S}\) corresponds to views \(v_{1},v_{2},\dots,v_{M}\in\mathbb{R}^{H\times W\times C}\), where \(M\) is the number of views and \(H\times W\times C\) indicates the image size. In our perspective, the views of \(\mathcal{S}\) simply form a set \(\mathcal{V}=\{v_{1},v_{2},\dots,v_{M}\}\), where elements are permutation invariant. Thus \(\mathcal{V}\) can be instantiated as a random permutation of the views. This perspective matches the basic fact that views can be generated from random viewpoints in the real world. It neither assumes relations for views nor introduces additional overheads, distinguished from previous methods analyzed above. 
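As a minimal illustration of the view-set convention (channels-first tensors, as expected by common vision backbones, whereas the text writes each view as an \(H\times W\times C\) image), a batch of view sets and a random permutation along the view axis look as follows; a view-set model should return the same predictions for both tensors. The sizes are arbitrary examples.

```python
import torch

# A batch of view sets: B shapes, each rendered from M = 12 viewpoints.
B, M, C, H, W = 4, 12, 3, 224, 224
view_sets = torch.rand(B, M, C, H, W)

# Because the views form a set, any permutation along the view axis still
# describes the same 3D shapes, so a view-set model should produce the same
# predictions for `view_sets` and `shuffled`.
perm = torch.randperm(M)
shuffled = view_sets[:, perm]
```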
**3D Shape Recognition & Retrieval.** In many cases [38], 3D shape retrieval can be regarded as a classification problem. It aims to find the shapes most relevant to the query. Meanwhile, the relevance is defined according to the ground truth class and subclass of the query, which means that if a retrieved shape has the same class and subclass as the query, they match perfectly. Therefore, the tasks of 3D shape recognition and retrieval can be unified by predicting a category distribution \(\hat{\mathbf{y}}\in\mathbb{R}^{K}\) of the target shape \(\mathcal{S}\), where \(K\) is the number of 3D shape categories. In this paper, we design a simple yet effective view set attention model \(\mathcal{F}\) to predict the distribution. The input of \(\mathcal{F}\) is a view set \(\mathcal{V}\in\mathbb{R}^{M\times H\times W\times C}\), corresponding to the shape \(\mathcal{S}\). The procedure is formulated by Eq. 1 and the details are dissected in the next section. \[\hat{\mathbf{y}}=\mathcal{F}(\mathcal{V}) \tag{1}\] ### View Set Attention Model The proposed view set attention model, ViewFormer, adaptively grasps pairwise and higher-order correlations among the views in the set, and summarizes the learned correlations into an expressive descriptor for 3D shape analysis. ViewFormer is more straightforward in modeling the correlations of views than graph-based methods because it explicitly computes the attention scores for all view pairs. The overall architecture of ViewFormer includes four modules: Initializer, Encoder, Transition, and Decoder. **Initializer.** This module initializes the feature representations of views in \(\mathcal{V}\) to feed Encoder. We denote the module as Init; it converts \(v_{i}\in\mathbb{R}^{H\times W\times C}\) to the feature representation \(z_{i}\in\mathbb{R}^{D}\), where \(D\) is the feature dimension. After this module, the view set \(\mathcal{V}=\{v_{1},\ldots,v_{i},\ldots,v_{M}\}\) is transformed into the initialized feature set \(\mathbf{z}^{0}=\{z_{1},\ldots,z_{i},\ldots,z_{M}\}\), shown in Eq. 2. \[\mathbf{z}^{0}=\text{Init}(\mathcal{V}) \tag{2}\] Init has various choices, such as a linear projection, an MLP, a CNN or a ViT; there is a trade-off between complexity and efficiency. A simple linear projection from a \(224\times 224\times 3\) view to a 512-dimensional vector already results in \(\sim\)77M parameters in Init, and an MLP produces even more. Some works [55, 54, 6] also consider fine-grained patch-level features within each view and then combine them with the view-level ones, but this is computationally expensive. In ViewFormer, we adopt lightweight CNNs (_e.g_., AlexNet [24], ResNet18 [18]) as Init because they are efficient and good at image feature extraction. **Encoder.** This module consists of consecutive attention blocks and is adapted from the standard Transformer [43] encoder with the following modifications. First, the position encodings are removed since the views should be unaware of their order in the view set. Second, the class token is removed because it is irrelevant to the target of modeling the correlations of views in the set. Third, the number of attention blocks is greatly reduced, as the size of a view set is relatively small (\(\leq\) 20 in most cases), so employing a very deep encoder is unnecessary. Encoder receives the initialized view feature set \(\mathbf{z}^{0}\in\mathbb{R}^{M\times D}\) and processes it with \(L\) attention blocks. Each attention block stacks the multi-head self-attention [43] (MSA) and MLP layers with residual connections.
LayerNorm (LN) is deployed before MSA and MLP, whereas Dropout is applied after them. The feature interaction is explicitly calculated for all view pairs in each attention block, and by going deeper, higher-order correlations are learned. The procedure in the \(\ell\)th block is summarized by Eqs. 3 and 4. \[\hat{\mathbf{z}}^{\ell}=\text{Dropout}(\text{MSA}(\text{LN}(\mathbf{z}^{\ell-1})))+\mathbf{z}^{\ell-1}\quad\ell=1\ldots L \tag{3}\] \[\mathbf{z}^{\ell}=\text{Dropout}(\text{MLP}(\text{LN}(\hat{\mathbf{z}}^{\ell})))+\hat{\mathbf{z}}^{\ell}\quad\ell=1\ldots L \tag{4}\] **Transition.** The last attention block of Encoder outputs the collective correlations of multiple views \(\mathbf{z}^{L}\in\mathbb{R}^{M\times D}\), and we convert the learned correlations into a view set descriptor by the Transition module (Transit). Pooling operations are typical options in existing methods [39, 44, 13, 55, 54]. In this paper, we concatenate (Concat) the results of max and mean pooling along the first dimension of \(\mathbf{z}^{L}\) to stabilize the optimization; the operation does not introduce learnable parameters. The output is denoted as \(\mathbf{t}^{L}\in\mathbb{R}^{2D}\) in Eq. 5. \[\mathbf{t}^{L}=\text{Transit}(\mathbf{z}^{L})=\text{Concat}(\text{Max}(\mathbf{z}^{L}),\text{Mean}(\mathbf{z}^{L})) \tag{5}\] **Decoder.** This module decodes the view set descriptor \(\mathbf{t}^{L}\) to a 3D shape category distribution \(\hat{\mathbf{y}}\in\mathbb{R}^{K}\). In ViewFormer, we show that the decoder can be designed to be extremely lightweight, as light as a single Linear layer. We also look into the performance of heavier heads, such as a 2- or 3-layer MLP preceded by BatchNorm (BN) and ReLU in each layer. We find that both of them work well, reflecting that the summarized view set descriptor \(\mathbf{t}^{L}\) is highly expressive. \[\hat{\mathbf{y}}=\text{Decoder}(\mathbf{t}^{L}) \tag{6}\] By combining the simple design of each component, the proposed method exhibits powerful capabilities across different datasets and tasks, supported by systematic experiments and extensive ablation studies in the next section. ## 4 Experiments In this section, we first explain the experimental settings of ViewFormer. Then the proposed method is evaluated on 3D shape recognition and retrieval tasks. Third, we conduct controlled experiments to justify the design choices of ViewFormer. Finally, visualizations are presented for a better understanding of the method. ### Basic Configurations **Architecture.** For Initializer, we adopt lightweight CNNs. There are several candidates (AlexNet, ResNet18, _etc_.) and we will compare them later. Each view \(v_{i}\in\mathcal{V}\) is mapped to a 512-dimensional vector through Initializer. For Encoder, there are \(L\)=4 attention blocks and, within each block, the MSA layer has 8 attention heads and the widening factor of the MLP hidden layer is 2. The Transition module converts the collective correlations in \(\mathbf{z}^{L}\) into a 1024-dimensional descriptor. Finally, the descriptor is projected to a category distribution by Decoder, which is a 2-layer MLP of shape {1024, 512, \(K\)}. The design choices are verified by ablation studies in Section 4.4. **Optimization.** The loss function is defined as CrossEntropyLoss for 3D shape recognition. Following previous methods [40, 46], the learning is divided into two stages. In the first stage, the Initializer is individually trained on the target dataset for 3D shape recognition. The purpose is to provide good initializations for views. In the second stage, the pre-trained Initializer is loaded and jointly optimized with the other modules on the same dataset. Experiments show that this strategy significantly improves performance in a shorter period. More explanations about network optimization and evaluations of learning efficiency are provided in the supplementary material.
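To make the configuration above concrete, here is a minimal PyTorch sketch of the pipeline in Eqs. 2-6 with the stated hyperparameters (four pre-LN attention blocks, 8 heads, MLP widening factor 2, max+mean concatenation, 2-layer MLP decoder). The backbone choice, activation, and dropout rate are our own illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as tvm

class AttentionBlock(nn.Module):
    """Pre-LN block of Eqs. 3-4: MSA and MLP with residual connections."""
    def __init__(self, dim=512, heads=8, widen=2, drop=0.1):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.msa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, widen * dim), nn.GELU(),
                                 nn.Linear(widen * dim, dim))
        self.drop = nn.Dropout(drop)

    def forward(self, z):
        h = self.ln1(z)
        z = z + self.drop(self.msa(h, h, h, need_weights=False)[0])  # Eq. 3
        z = z + self.drop(self.mlp(self.ln2(z)))                     # Eq. 4
        return z

class ViewFormerSketch(nn.Module):
    """Initializer -> Encoder -> Transition -> Decoder (Eqs. 2-6)."""
    def __init__(self, num_classes, dim=512, blocks=4):
        super().__init__()
        backbone = tvm.resnet18()                       # lightweight CNN Initializer
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.init_module = backbone
        self.encoder = nn.Sequential(*[AttentionBlock(dim) for _ in range(blocks)])
        self.decoder = nn.Sequential(nn.Linear(2 * dim, dim), nn.BatchNorm1d(dim),
                                     nn.ReLU(), nn.Linear(dim, num_classes))

    def forward(self, views):                           # views: (B, M, C, H, W)
        b, m = views.shape[:2]
        z0 = self.init_module(views.flatten(0, 1)).view(b, m, -1)  # Eq. 2, no positions
        zL = self.encoder(z0)                                       # Eqs. 3-4
        t = torch.cat([zL.max(dim=1).values, zL.mean(dim=1)], -1)   # Eq. 5 (Transition)
        return self.decoder(t)                                      # Eq. 6

logits = ViewFormerSketch(num_classes=40)(torch.randn(2, 20, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 40])
```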
### 3D Shape Recognition **Datasets & Metrics.** We conduct 3D shape recognition on three datasets, ModelNet10 [49], ModelNet40 [49] and RGBD [25]. ModelNet10 has 4,899 CAD models in 10 categories and ModelNet40 includes 12,311 objects across 40 categories. For ModelNet10/40, we use their rendered versions as in previous work [40, 46], where each object corresponds to 20 views. RGBD is a large-scale, hierarchical multi-view object dataset [25], containing 300 objects organized into 51 classes. In RGBD, we use 12 views for each 3D object as in [22, 46]. Two evaluation metrics are computed for 3D shape recognition: mean class accuracy (Class Acc.) and instance accuracy (Inst. Acc.). We record the best results of these metrics during optimization. **Results.** Table 1 compares representative methods on ModelNet40; these methods have different input formats: voxels, points and views. ViewFormer achieves 98.9% mean class accuracy and 98.8% overall accuracy, surpassing the voxel- and point-based counterparts. It also sets a new record among view-based methods. For example, compared to early works [39, 40, 55, 13, 44] that aggregate multi-view information independently by pooling or some variants, ViewFormer exceeds their instance accuracies by at least 3.8%. \begin{table} \begin{tabular}{l c c c} \hline \hline Method & Input & Class Acc. & Inst. Acc. \\ & & (\%) & (\%) \\ \hline 3DShapeNets [49] & & 77.3 & – \\ VoxNet [31] & & 83.0 & – \\ VRN Ensemble [4] & & – & 95.5 \\ MVCNN-MR [33] & & 91.4 & 93.8 \\ PointNet++ [34] & – & – & 91.9 \\ DGCNN [45] & & 90.2 & 92.9 \\ RSCNN [26] & – & 93.6 \\ KPConv [41] & Points & – & 92.9 \\ CurveNet [50] & – & 93.8 \\ PointMLP [30] & & 91.3 & 94.1 \\ MVCNN [39] & & 90.1 & 90.1 \\ MVCNN-new [40] & & 92.4 & 95.0 \\ MHBN [55] & & 93.1 & 94.7 \\ GVCNN [13] & & 90.7 & 93.1 \\ RCPCNN [44] & – & 93.8 \\ RN [53] & & 92.3 & 94.3 \\ 3D2SeqViews [16] & & 91.5 & 93.4 \\ SV2SL [17] & & 91.1 & 93.3 \\ VERAM [7] & & 92.1 & 93.7 \\ Ma [29] & – & 91.5 \\ iMHL [56] & Views & – & 97.2 \\ HGNN [12] & – & 96.7 \\ HGNN\({}^{+}\)[14] & – & 96.9 \\ View-GCN [46] & & 96.5 & 97.6 \\ View-GCN++ [47] & & 96.5 & 97.6 \\ DeepCCFV [20] & – & 92.5 \\ EMV [11] & & 92.6 & 94.7 \\ RotationNet [22] & – & 97.4 \\ MVT [6] & – & 97.5 \\ CARNet [51] & – & 97.7 \\ MVTN [15] & & 92.2 & 93.5 \\ \hline **ViewFormer** & Views & **98.9** & **98.8** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of 3D shape recognition on ModelNet40. The best score is in bold black and the second best is in blue. The convention is kept in the following tables. \begin{table} \begin{tabular}{l c c} \hline \hline Method & \#Views & Inst. Acc. (\%) \\ \hline CFK [8] & \(\geq\) 120 & 86.8 \\ MMDCNN [35] & \(\geq\) 120 & 89.5 \\ MDSICNN [1] & \(\geq\) 120 & 89.6 \\ MVCNN [39] & 12 & 86.1 \\ RotationNet [22] & 12 & 89.3 \\ View-GCN(ResNet18) [46] & 12 & 94.3 \\ View-GCN(ResNet50) [46] & 12 & 93.9 \\ \hline **ViewFormer**(ResNet18) & 12 & **98.4** \\ **ViewFormer**(ResNet50) & 12 & **95.6** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of 3D shape recognition on RGBD.
\begin{table} \begin{tabular}{l c c c} \hline \hline Method & Input & Class Acc. & Inst. Acc. \\ & & (\%) & (\%) \\ \hline 3D2SeqViews [16] & & 94.7 & 94.7 \\ SV2SL [17] & & 94.6 & 94.7 \\ VERAM [7] & Views & 96.1 & 96.3 \\ RotationNet [22] & – & 98.5 \\ CARNet [51] & – & 99.0 \\ MVT [6] & – & 99.3 \\ \hline **ViewFormer** & Views & **100.0** & **100.0** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of 3D shape recognition on ModelNet10. ViewFormer also significantly improves the results of methods built on the view sequence, such as RelationNet [53], 3D2SeqViews [16], SeqViews2SeqLabels [17] and VERAM [7]. Methods defined on view graphs and hypergraphs achieve decent performances [56, 12, 14, 46, 47] because of the enhanced information flow among views. ViewFormer still outperforms the strongest baseline of this category, improving Class Acc. by 2.4% and Inst. Acc. by 1.2% over View-GCN [46]. Table 2 presents the recognition results on ModelNet10. Although the dataset is relatively easy and previous methods already work very well (as high as 99.3% Inst. Acc.), it is a bit surprising that ViewFormer successfully recognizes all shapes in the test set and obtains 100% accuracy. The previous best method, MVT [6], combines patch- and view-level feature communications by applying ViT [10] twice. ViewFormer achieves better results without taking patch-level interaction into account. Table 3 records the comparison with related work on the challenging RGBD [25] dataset. The dataset prescribes 10-fold cross-validation for multi-view 3D object recognition. We follow this setting and report the average instance accuracy over the 10 folds. ViewFormer shows consistent improvements over View-GCN under the same Initializer. In particular, it reaches 98.4% accuracy, a 4.1% absolute improvement over the runner-up, suggesting that ViewFormer can produce more expressive shape descriptors when dealing with challenging cases. ### 3D Shape Retrieval **Datasets & Metrics.** 3D shape retrieval aims to find a rank list of shapes most relevant to the query shape in a given dataset. We conduct this task on ShapeNet Core55 [5, 38]. The dataset is split into train/val/test sets with 35764, 5133 and 10265 meshes, respectively. 20 views are rendered for each mesh as in [22, 46]. According to the SHREC'17 benchmark [38], the rank list is evaluated based on the ground truth category and subcategory. If a retrieved shape in a rank list has the same category as the query, it is positive. Otherwise, it is negative. The evaluation metrics include micro and macro versions of P@N, R@N, F1@N, mAP and NDCG. Here N is the length of the returned rank list and its maximum value is 1,000 according to the benchmark requirement. Please refer to [38] for more details about the metrics. **Retrieval.** We generate the rank list for each query shape in two steps. First, ViewFormer is trained to recognize the shape categories in ShapeNet Core55 [5]. We retrieve shapes that have the same predicted class as the query \(\mathcal{Q}\) and rank the retrieved shapes according to class probabilities in descending order, resulting in L\({}_{1}\). Second, we train another ViewFormer to recognize the shape subcategories of ShapeNet Core55 [5], then re-rank L\({}_{1}\) so that shapes whose predicted subcategory matches that of the query \(\mathcal{Q}\) rank before shapes whose subcategory does not, keeping the relative order of the rest unchanged; this results in L\({}_{2}\), which is regarded as the final rank list for the query \(\mathcal{Q}\).
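The two-step ranking just described can be summarized in a short sketch. This is illustrative only: the per-shape prediction fields (`cls`, `cls_prob`, `sub`) and the callables `cls_model`/`sub_model` are hypothetical stand-ins for the two trained classifiers, not the authors' actual code.

```python
def retrieve(query_views, gallery, cls_model, sub_model, max_len=1000):
    """Sketch of the two-step retrieval: rank by predicted category probability,
    then re-rank so shapes sharing the query's predicted subcategory come first.

    `gallery` is assumed to be a list of dicts with precomputed predictions
    {'id', 'cls', 'cls_prob', 'sub'}; `cls_model`/`sub_model` return the same
    fields for the query. Python's sort is stable, so step 2 keeps the relative
    order of the remaining shapes unchanged, as described above.
    """
    q_cls = cls_model(query_views)["cls"]
    q_sub = sub_model(query_views)["sub"]

    # Step 1: shapes with the query's predicted class, ordered by class probability -> L1.
    l1 = sorted((g for g in gallery if g["cls"] == q_cls),
                key=lambda g: g["cls_prob"], reverse=True)

    # Step 2: stable re-rank by subcategory agreement -> L2 (the final rank list).
    l2 = sorted(l1, key=lambda g: g["sub"] != q_sub)
    return [g["id"] for g in l2[:max_len]]       # SHREC'17 caps lists at 1,000
```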
**Results.** ViewFormer is compared with the methods that report results on SHREC'17 benchmark [38], shown in Table 4. The methods in the first three rows use voxel representations of 3D shapes as inputs, while the remaining methods exploit multiple views. The overall performances of view-based methods are better than voxel-based ones. Previously, View-GCN achieved state-of-the-art results by enhancing view interaction and aggregating multi-view in \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{micro} & \multicolumn{4}{c}{macro} \\ \cline{2-11} & P@N & R@N & F1@N & mAP & NDCG & P@N & R@N & F1@N & mAP & NDCG \\ \hline ZFDR & 53.5 & 25.6 & 28.2 & 19.9 & 33.0 & 21.9 & 40.9 & 19.7 & 25.5 & 37.7 \\ DeepVoxNet & 79.3 & 21.1 & 25.3 & 19.2 & 27.7 & 59.8 & 28.3 & 25.8 & 23.2 & 33.7 \\ DLAN & **81.8** & 68.9 & 71.2 & 66.3 & 76.2 & 61.8 & 53.3 & 50.5 & 47.7 & 56.3 \\ GIFT [2] & 70.6 & 69.5 & 68.9 & 64.0 & 76.5 & 44.4 & 53.1 & 45.4 & 44.7 & 54.8 \\ Improved GIFT [3] & 78.6 & 77.3 & 76.7 & 72.2 & 82.7 & 59.2 & 65.4 & 58.1 & 57.5 & 65.7 \\ ReVGG & 76.5 & 80.3 & 77.2 & 74.9 & 82.8 & 51.8 & 60.1 & 51.9 & 49.6 & 55.9 \\ MVFusionNet & 74.3 & 67.7 & 69.2 & 62.2 & 73.2 & 52.3 & 49.4 & 48.4 & 41.8 & 50.2 \\ CM-VGG5-6DB & 41.8 & 71.7 & 47.9 & 54.0 & 65.4 & 12.2 & **66.7** & 16.6 & 33.9 & 40.4 \\ MVCNN [39] & 77.0 & 77.0 & 76.4 & 73.5 & 81.5 & 57.1 & 62.5 & 57.5 & 56.6 & 64.0 \\ RotationNet [22] & 81.0 & 80.1 & 79.8 & 77.2 & **86.5** & 60.2 & 63.9 & 59.0 & 58.3 & 65.6 \\ View-GCN [46] & **81.8** & 80.9 & 80.6 & **78.4** & 85.2 & 62.9 & 65.2 & 61.1 & 60.2 & 66.5 \\ View-GCN++ [47] & 81.2 & 79.9 & 80.0 & 77.5 & 83.9 & 61.2 & 65.8 & 61.1 & 59.0 & 63.8 \\ \hline **ViewFormer** & 81.6 & **82.0** & **81.3** & **78.4** & 81.7 & **64.5** & 65.4 & **62.9** & **60.6** & **67.5** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of 3D shape retrieval on ShapeNet Core55. formation on on view-graphs. But experiments show ViewFormer goes beyond View-GCN in terms of micro-version R@N, F1@N and mAP as well as macro-version P@N, F1@N, mAP and NDCG. For example, we achieve at least 1.0% absolute improvements for both micro-version R@N and macro-version NDCG over View-GCN. ### Ablation Studies We conduct a series of controlled experiments to verify the choices in ViewFormer design. The used dataset is ModelNet40. **Initializer.** We explore different means to initialize view representations, including shallow convolution operations and lightweight CNNs. The idea of shallow convolution operation is inspired by the image patch projection (1x1 Conv) in ViT [10] and the specific configurations are explained in the supplementary material. Table 5 compares their recognition accuracies. We observe that initializations by 1- and 2-layer convolution operations do not yield satisfactory results. Instead, lightweight CNNs work well, especially when receiving the initialized features by AlexNet and jointly optimizing with other modules, ViewFormer reaches 98.9% class accuracy and 98.8% overall accuracy, both are new records on ModelNet40. By default, AlexNet serves as the Initializer module. **Position Encoding.** According to the view set perspective, ViewFormer should be unaware of the order of elements in the view set so we remove the position encoding from the devised encoder. We examine this design in Table 6. 
The results show that if learnable position embeddings are forcibly injected into the initialized view features to make the model position-aware, the performance is hindered, dropping by 0.5% in class accuracy and 0.3% in overall accuracy. **Class Token.** Unlike the standard Transformer [43], the proposed method does not insert the class token into the inputs, since it is irrelevant to the target of capturing the correlations among views in the set. This claim is supported by the results in Table 6, which show that inserting the class token results in lower recognition accuracies. **Number of Attention Blocks.** In ViewFormer, the number of attention blocks in Encoder is considerably compressed because the size of a view set is relatively small and it is unnecessary to deploy a deeper encoder to model the interactions between the views in the set. The results in Table 7 demonstrate that the encoder can be highly lightweight, as light as two attention blocks, yet with outstanding performance compared to existing methods. The results also indicate that increasing the number of attention blocks brings no gains, only additional parameters and overheads. **Transition.** We investigate three kinds of operations for the Transition module. The results are reported in Table 8. We find that the simple pooling operations (Max and Mean) work well (98.0+% Acc.) and both surpass the performance of the previous state of the art. By concatenating the outputs of max and mean pooling, the optimization is more stable and the overall accuracy is lifted to 98.8%. It is worth noting that the same pooling operations are adopted by MVCNN [39] and its variants [40, 13, 55, 44, 54], but their accuracies are at most 95.0%, implying that the view set descriptors learned by our encoder are more informative. **Decoder.** The decoder projects the view set descriptor to a shape category distribution. The choices for the decoder are compared in Table 9. ViewFormer with a decoder of a single Linear layer can recognize 3D shapes at 98.1% instance accuracy, which outperforms all existing methods and, again, reflects that the summarized view set descriptor is highly discriminative. The advantage is enlarged when the decoder is deepened to a 2-layer MLP. However, further tests show it is unnecessary to exploit deeper transformations. \begin{table} \begin{tabular}{l c c} \hline \hline Variants & Class Acc. (\%) & Inst. Acc. (\%) \\ \hline w/ pos. enc. & 98.4 & 98.5 \\ w/o pos. enc. & **98.9** & **98.8** \\ \hline w/ cls. token & 98.8 & 98.5 \\ w/o cls. token & **98.9** & **98.8** \\ \hline \hline \end{tabular} \end{table} Table 6: Ablation study: position encoding and class token. \begin{table} \begin{tabular}{l c c} \hline \hline Module & \#Params (M) & Inst. Acc. (\%) \\ \hline AlexNet & 42.3 & 85.1 \\ + 2 Attn. Blocks & **4.8** & **98.8** \\ + 4 Attn. Blocks & 9.0 & **98.8** \\ + 6 Attn. Blocks & 13.2 & 98.3 \\ \hline \hline \end{tabular} \end{table} Table 7: Ablation study: number of attention blocks. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Initializer} & \#Params & Class Acc. & Inst. Acc. \\ & (M) & (\%) & (\%) \\ \hline 1-layer Conv & 102.8 & 90.1 & 92.5 \\ 2-layer Conv & 12.9 & 88.9 & 93.7 \\ alexnet & 42.3 & **98.9** & **98.8** \\ resnet18 & 11.2 & 96.7 & 97.6 \\ resnet34 & 21.3 & 96.9 & 97.1 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation study: choices for Initializer.
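For reference, the three Transition variants compared in the ablations above can be written in a few lines; this is a minimal sketch assuming the encoder output \(\mathbf{z}^{L}\) is an \((M, D)\) tensor.

```python
import torch

def transition(zL: torch.Tensor, mode: str = "concat") -> torch.Tensor:
    """Pool an (M, D) matrix of view features into a set descriptor (Eq. 5 variants)."""
    if mode == "max":        # max pooling over the M views -> (D,)
        return zL.max(dim=0).values
    if mode == "mean":       # mean pooling over the M views -> (D,)
        return zL.mean(dim=0)
    # Default: concatenate max and mean pooling -> (2D,), the choice used in ViewFormer.
    return torch.cat([zL.max(dim=0).values, zL.mean(dim=0)], dim=-1)

zL = torch.randn(20, 512)
print(transition(zL, "max").shape, transition(zL).shape)  # torch.Size([512]) torch.Size([1024])
```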
We conduct additional analysis of the proposed model, including the training strategy, running efficiency, the number of views, the structure of the view set encoder and the effect of patch-level correlations; please refer to the supplementary material for more insights. ### Visualization **Multi-view Attention Map.** For better understanding, we visualize the attention map of eight views of a 3D airplane in Figure 2. The attention scores are taken from the outputs of the last attention block of our model. The map indicates that the 6th view is representative, since it receives more attention from the other views. On the other hand, we can manually infer that the 6th view is representative based on the visual appearances of these views. The results reflect that ViewFormer can adaptively capture the multi-view correlations and assign more weight to the representative views for recognition. ## 5 Conclusion In this paper, multiple views of a 3D shape are organized as a view set, and a view set attention model is devised to learn the pairwise and higher-order correlations of the views in the set adaptively. ViewFormer shows outstanding performances across different datasets and sets new records for recognition and retrieval tasks. Note, however, that the performance gap between point/voxel-based and view-based methods is relatively large. In the future, we plan to explore cross-modal distillation between point/voxel-based and view-based models to narrow the gap.
2304.00248
How Does Driver Non-compliance Destroy Traffic Routing Control?
Routing control is one of the important traffic management strategies against urban congestion. However, it could be compromised by heterogeneous driver non-compliance with routing instructions. In this article we model the compliance in a stochastic manner and investigate its impacts on routing control. We consider traffic routing for two parallel links. In particular, we consider two scenarios: one ignores congestion spillback while the other accounts for it. We formulate the problem as a non-linear Markov chain, given random compliance rates. Then we propose stability and instability conditions to reveal when the routing is able or unable to stabilize the traffic. We show that for links without congestion spillback there exists a sufficient and necessary stability criterion. For links admitting congestion propagation, we present one stability condition and one instability condition. These stability conditions allow us to quantify the impacts of driver non-compliance on the two-link network in terms of throughput. Finally, we illustrate the results with a set of numerical examples.
Yu Tang, Li Jin, Kaan Ozbay
2023-04-01T07:11:09Z
http://arxiv.org/abs/2304.00248v2
# How Does Driver Non-compliance Destroy Traffic Routing Control? ###### Abstract Routing control is one of the important traffic management strategies against urban congestion. However, it could be compromised by heterogeneous drivers' non-compliance with routing instructions. In this article we model the compliance in a stochastic manner and investigate its impacts on routing control. We consider traffic routing for two parallel links. In particular, we consider two scenarios: one ignores congestion spillback while the other accounts for it. We formulate the problem as a non-linear Markov chain, given random driver adherence. Then we propose stability and instability conditions to reveal when the routing is able or unable to stabilize the traffic. We show that for links without congestion spillback there exists a sufficient and necessary stability criterion. For links admitting congestion propagation, we present one stability condition and one instability condition. These stability conditions allow us to quantify the impacts of drivers' non-compliance on the two-link network in terms of throughput. Finally, we illustrate the results with a set of numerical examples. ## I Introduction ### _Motivation_ Dynamic traffic routing provides drivers with route recommendations based on real-time road information. It has been used as one of the promising control policies for alleviating congestion [1, 2], and is expected to find extensive applications in a connected vehicle environment [3, 4]. Nevertheless, it is also reported that driver non-compliance with route guidance could undermine the performance of dynamic routing [5], especially social routing advice that deliberately detours part of the vehicles to achieve network-level benefits [6]. Although more and more surveys have confirmed this disobedience [7, 8, 9, 10, 11], limited studies have investigated in an analytical way how drivers' adherence influences the effect of traffic routing control. In this paper, we focus on routing advice released by traffic system operators/agencies. We study the above problem by considering a setting with random demand and random driver non-compliance. We analyze the resulting stochastic dynamical system under routing control. Specifically, we focus on a network comprised of two parallel links; see Fig. 1. Though simple, the two-link network serves as a typical scenario for studying routing control [12, 13]; it also turns out to be an appropriate abstraction of multiple parallel links: one stands for arterials and the other denotes a set of local streets [14]. Furthermore, we adopt a Markov chain to model the compliance rate that possibly depends on traffic states. It allows us to study stability and instability criteria that determine whether the network is destabilized by the random compliance rate. We also quantify the impacts of drivers' disobedience in terms of throughput, namely the maximum constant inflow under which the network can be stabilized. ### _Related work_ Previous work on evaluating the impacts of conformity with routing advice typically applied static or dynamic traffic assignment (STA or DTA). These methods are favored since they easily provide numerical assessments in terms of efficiency, equity and so on [6, 13, 15] and can be implemented even for large-scale networks. However, they also have disadvantages. STA finds an equilibrium by solving a mathematical program.
It fails to capture significant traffic dynamics, such as congestion spillback and fluctuations of drivers' compliance rates, and thus could induce unrealistic equilibria. Though DTA can address the shortcomings of STA to some extent, it introduces a new problem. As we see later, low compliance rates could make traffic networks unstable. In that case, it could be problematic to apply DTA since we do not have guaranteed convergence in advance. Noting this, we aim at developing methods that allow stability and instability analysis, at least under some conditions, before numerical evaluation. To the best of our knowledge, few studies have discussed this topic for routing control subject to random compliance. Our model belongs to discrete-time nonlinear stochastic systems. Although the general theories of stochastic stability have been studied extensively [16, 17, 18, 19, 20], how to apply them to our problem is still unclear. Typically, stability analysis can be refined for specific non-linear systems. Besides, it should be noted that most studies mainly discuss sufficient stability conditions for general nonlinear stochastic systems, while we also have interest in instability conditions. Fig. 1: The two-link network. ### _Our contributions_ In this paper we address the following two questions: 1. How to determine whether the network can be stabilized by routing control subject to driver non-compliance? 2. How to evaluate efficiency losses of routing control due to the non-compliance in an analytic way? We answer the first question for two types of networks. In the first one, the two parallel links have infinite space and there is no congestion spillback, while in the second one, the two parallel links have finite space and congestion may spill back. We formulate discrete-time nonlinear stochastic systems for the two networks, respectively. Then we apply the Foster-Lyapunov criterion [20] to derive the stability conditions and scrutinize transience of Markov chains [20, 19] to obtain instability conditions. For the first network, we successfully obtain a sufficient and necessary stability condition (Theorem 1); for the second one, we have one stability criterion (Theorem 2) and one instability criterion (Theorem 3). Even when the network is stable, we want to know to what extent the network performance degrades. Thus, to answer the second question, we take throughput as the metric to measure efficiency losses. However, throughput is not always available even for the two-link network. For the two links with infinite buffer sizes, we indeed derive exact values of throughput since we have a sufficient and necessary stability condition. For the two links with finite buffer sizes, we use the stability and instability conditions to yield lower and upper bounds, respectively. The rest of the paper is organized as follows. Section II introduces our modeling framework. Section III presents the results when the two parallel links have infinite storage space, and Section IV provides the results in the case of two links with limited buffer sizes. Finally, Section V summarizes our work and discusses future research. ## II Modeling and formulation Consider the two-link network in Fig. 1: one is the major link \(e_{1}\), typically with a higher free-flow speed or capacity, and the other is the minor link \(e_{2}\). We suppose that the system operator tries to route part of the flow to the minor link \(e_{2}\) to reduce congestion in the major link \(e_{1}\).
We denote by \(X_{e}(t)\in\mathbb{R}_{\geq 0}\) traffic density of link \(e\in\{e_{1},e_{2}\}\) at time \(t\). Each link \(e\in\{e_{1},e_{2}\}\) is associated with a sending flow \(f_{e}(x_{e}):\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) and a receiving flow \(r_{e}(x_{e}):\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\). Here the sending flow \(f_{e}(x_{e})\) indicates the desired outflow from link \(e\) given traffic density \(x_{e}\), and the receiving flow \(r_{e}(x_{e})\) stands for the maximum flow allowed into link \(e\). We assume that the flow functions satisfy: **Assumption 1** (Sending & receiving flows).: 1. _Sending flows: For link_ \(e\)_,_ \(f_{e}(x_{e})\) _is Lipschitz continuous and_ \(\mathrm{d}f_{e}(x_{e})/\mathrm{d}x_{e}\geq 0\) _almost everywhere (a.e.). Moreover,_ \(f_{e}(0)=0\) _and_ \(\sup_{x_{e}}f_{e}(x_{e})<\infty\)_._ 2. _Receiving flows: For link_ \(e\) _with a finite buffer size_ \(x_{e}^{\max}<\infty\)_,_ \(r_{e}(x_{e})\) _is Lipschitz continuous and_ \(\mathrm{d}r_{e}(x_{e})/\mathrm{d}x_{e}\leq 0\) _a.e.. Moreover,_ \(r_{e}(x_{e}^{\max})=0\) _and_ \(\sup_{x_{e}}r_{e}(x_{e})<\infty\)_. For link_ \(e\) _with an infinite buffer size,_ \(r_{e}=\infty\)_._ The assumptions above follow the conventional modeling of road traffic. We also define _link capacity_ as \[Q_{e}:=\sup_{x_{e}}\min\{f_{e}(x_{e}),r_{e}(x_{e})\}, \tag{1}\] which denotes an upper bound of sustainable discharging flow from link \(e\). Note that Assumption 1.2 implies that it is reasonable to only consider \(X_{e}(t)\in[0,x_{e}^{\max}]\) for link \(e\) with limited storage. Compared with supposing finite buffer sizes, the assumption of infinite buffer sizes seems a little unrealistic, but it helps understand and design routing control, even on complex networks. In this paper, we discuss both of them. For demand modeling, we consider an independent and identically distributed (i.i.d.) stochastic process \(\{D(t):t\geq 0\}\) with a distribution \(\Gamma^{d}\), \(\mathbb{E}[D(t)]=\alpha\) and \(D(t)\in\mathcal{D}\) for \(t\geq 0\), where \(\mathcal{D}\) is a compact set. This is based on the observation that during rush hours, of interest to traffic management, traveling demands are relatively stationary and only fluctuate within certain bounds [21]. Obviously, we require \[\mathbb{E}[D(t)]=\alpha<Q_{e_{1}}+Q_{e_{2}}, \tag{2}\] otherwise the traffic densities must blow up. Next, we introduce routing control. Let \(\beta_{e}(x):\mathbb{R}_{\geq 0}^{2}\rightarrow[0,1]\) denote a proportion of traffic routed to link \(e\). We assume the routing policies to satisfy: **Assumption 2** (Routing control).: _The routing proportions \(\beta_{e_{1}}(x_{e_{1}},x_{e_{2}})\) and \(\beta_{e_{2}}(x_{e_{1}},x_{e_{2}})\) are continuous and have the following monotonicity a.e.:_ 1. \(\frac{\partial}{\partial x_{e_{1}}}\beta_{e_{1}}(x_{e_{1}},x_{e_{2}})\leq 0\) _and_ \(\frac{\partial}{\partial x_{e_{2}}}\beta_{e_{1}}(x_{e_{1}},x_{e_{2}})\geq 0\)_;_ 2. \(\frac{\partial^{1}}{\partial x_{e_{1}}}\beta_{e_{2}}(x_{e_{1}},x_{e_{2}})\geq 0\) _and_ \(\frac{\partial^{2}}{\partial x_{e_{2}}}\beta_{e_{2}}(x_{e_{1}},x_{e_{2}})\leq 0\)_._ The assumption above implies that the routing proportion \(\beta_{e}(x_{e_{1}},x_{e_{2}})\) tends to decrease (resp. increase) as link \(e\) (resp. the other link) becomes more congested. It holds true for typical routing policies, such as logit routing [22]. 
Recalling that routing proportions could be compromised due to heterogeneous drivers' choice behavior, we denote by \(C(t)\in[0,1]\) the compliance rate of drivers' routed to the minor link \(e_{2}\) at time \(t\). Then the compromised routing ratios, denoted by \(\tilde{\beta}_{e}(x_{e_{1}},x_{e_{2}},c):\mathbb{R}_{\geq 0}^{2}\times[0,1] \rightarrow[0,1]\), are given by \[\tilde{\beta}_{e_{1}}(X_{e_{1}}(t),X_{e_{2}}(t),C(t))\] \[=\beta_{e_{1}}(X_{e_{1}}(t),X_{e_{2}}(t))+\beta_{e_{2}}(X_{e_{1}}(t ),X_{e_{2}}(t))(1-C(t)), \tag{3a}\] \[\tilde{\beta}_{e_{2}}(X_{e_{1}}(t),X_{e_{2}}(t),C(t))\] \[=\beta_{e_{2}}(X_{e_{1}}(t),X_{e_{2}}(t))C(t). \tag{3b}\] Note that (3a)-(3b) imply the compliance rate of drivers routed to the major link \(e_{1}\) equals one. This is because in our setting drivers are assumed to prefer the major link \(e_{1}\) while the system operator tries to route some of them to the minor link \(e_{2}\). The assumption is not necessary, just for simplifying the problem. In fact, we can introduce the second compliance rate, and apply our method to obtain stability and instability criteria, which are more complicated. We consider that \(C(t+1)\) depends on \(X_{e_{1}}(t)=x_{e_{1}}\) and \(X_{e_{2}}(t)=x_{e_{2}}\) with a distribution \(\Gamma^{c}_{x_{e_{1}},x_{e_{2}}}\). For convenience of analysis, we assume that the distributions \(\Gamma^{c}_{x_{e_{1}},x_{e_{2}}}(c)\), for any \(x_{e_{1}}\) and \(x_{e_{2}}\), have lower semi-continuous densities with the same support \(\mathcal{C}\subseteq[0,1]\). We define \(\mathbb{E}_{x_{e_{1}},x_{e_{2}}}[C]:=\mathbb{E}[C(t+1)|X_{e_{1}}(t)=x_{e_{1}},X _{e_{2}}(t)=x_{e_{2}}]\) and assume it to satisfy: **Assumption 3** (Drivers' compliance).: _The expected compliance rate has the following monotonicity a.e.:_ \[\frac{\partial}{\partial x_{e_{1}}}\mathbb{E}_{x_{e_{1}},x_{e_{2}}}[C]\geq 0 \text{, and }\frac{\partial}{\partial x_{e_{2}}}\mathbb{E}_{x_{e_{1}},x_{e_{2}}}[C] \leq 0. \tag{4}\] Clearly, the assumption implies that more drivers follow the routing advise to the minor link \(e_{2}\) if the major link \(e_{1}\) becomes more congested or the minor link \(e_{2}\) becomes less congested. The following specifies the inflows into links \(e_{1}\) and \(e_{2}\). Given an upstream flow \(F(t)\), we denote by \(q^{\text{in}}_{e}:\mathbb{R}^{3}_{\geq 0}\times[0,1]\to\mathbb{R}_{\geq 0}\) the inflow into link \(e\in\{e_{1},e_{2}\}\): \[q^{\text{in}}_{e}(F(t),X_{e_{1}}(t),X_{e_{2}}(t),C(t))\\ =\min\{\tilde{\beta}_{e}(X_{e_{1}}(t),X_{e_{2}}(t),C(t))F(t),r_{e} (X_{e}(t))\}. \tag{5}\] Supposing \(r_{e_{1}}=r_{e_{2}}=\infty\), we have the following network dynamics: \[\Delta X_{e_{1}}(t)= \frac{\delta}{l_{e_{1}}}\Big{(}q^{\text{in}}_{e_{1}}(D(t),X_{e_{1 }}(t),X_{e_{2}}(t),C(t))\] \[-f_{e_{1}}(X_{e_{1}}(t))\Big{)}, \tag{6a}\] \[\Delta X_{e_{2}}(t)= \frac{\delta}{l_{e_{2}}}\Big{(}q^{\text{in}}_{e_{2}}(D(t),X_{e_{1 }}(t),X_{e_{2}}(t),C(t))\] \[-f_{e_{2}}(X_{e_{2}}(t))\Big{)}. \tag{6b}\] where \(\Delta X_{e}(t):=X_{e}(t+1)-X_{e}(t)\) for any link \(e\), \(\delta\) denotes the time step size and \(l_{e}\) denotes length of link \(e\). Clearly, if links \(e_{1}\) and \(e_{2}\) have finite space, congestion could block the inflows. For the sake of analysis, we consider another link \(e_{0}\) upstream of links \(e_{1}\) and \(e_{2}\), satisfying \(Q_{e_{0}}\geq Q_{e_{1}}+Q_{e_{2}}\) and \(r_{e_{0}}=\infty\), to accept inflows. 
This leads to the following network dynamics: \[\Delta X_{e_{0}}(t)=\frac{\delta}{l_{e_{0}}}\Big{(}D(t)-\sum_{e\in\{e_{1},e_{2}\}}q^{\text{in}}_{e}(f_{e_{0}}(X_{e_{0}}(t)),X_{e_{1}}(t),X_{e_{2}}(t),C(t))\Big{)}, \tag{7a}\] \[\Delta X_{e_{1}}(t)=\frac{\delta}{l_{e_{1}}}\Big{(}q^{\text{in}}_{e_{1}}(f_{e_{0}}(X_{e_{0}}(t)),X_{e_{1}}(t),X_{e_{2}}(t),C(t))-f_{e_{1}}(X_{e_{1}}(t))\Big{)}, \tag{7b}\] \[\Delta X_{e_{2}}(t)=\frac{\delta}{l_{e_{2}}}\Big{(}q^{\text{in}}_{e_{2}}(f_{e_{0}}(X_{e_{0}}(t)),X_{e_{1}}(t),X_{e_{2}}(t),C(t))-f_{e_{2}}(X_{e_{2}}(t))\Big{)}. \tag{7c}\] For notational convenience, we assume \(\delta/l_{e}\) to be the same for every link \(e\) and omit it in the following analysis. Then, (6a)-(6b) indicate that \[\Phi_{1}:=\{(X_{e_{1}}(t),X_{e_{2}}(t),D(t),C(t)):t\geq 0\} \tag{8}\] is a Markov chain with a state space \(\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\times\mathcal{D}\times\mathcal{C}\), and (7a)-(7c) indicate that \[\Phi_{2}:=\{(X_{e_{0}}(t),X_{e_{1}}(t),X_{e_{2}}(t),D(t),C(t)):t\geq 0\} \tag{9}\] is also a Markov chain with a state space \(\mathbb{R}_{\geq 0}\times\mathcal{X}_{e_{1}}\times\mathcal{X}_{e_{2}}\times\mathcal{D}\times\mathcal{C}\). Note that \(\mathcal{X}_{e_{1}}\subseteq[0,x_{e_{1}}^{\max}]\) and \(\mathcal{X}_{e_{2}}\subseteq[0,x_{e_{2}}^{\max}]\) are bounded sets. We make the last assumption as follows: **Assumption 4**.: 1. _For the system (_6a_)-(_6b_), there exist_ \(c\in\mathcal{C}\) _and_ \(d\in\mathcal{D}\) _such that_ \(\lim_{t\to\infty}X_{e}(t)=x_{e}^{*}<\infty\)_,_ \(e\in\{e_{1},e_{2}\}\)_, given_ \(C(t)\equiv c\) _and_ \(D(t)\equiv d\)_;_ 2. _For the system (_7a_)-(_7c_), there exist_ \(c\in\mathcal{C}\) _and_ \(d\in\mathcal{D}\) _such that_ \(\lim_{t\to\infty}X_{e}(t)=x_{e}^{*}<\infty\)_,_ \(e\in\{e_{0},e_{1},e_{2}\}\)_, given_ \(C(t)\equiv c\) _and_ \(D(t)\equiv d\)_. Moreover,_ \[\tilde{\beta}_{e_{1}}((x_{e_{1}}^{*},x_{e_{2}}^{*}),c)f_{e_{0}}(x_{e_{0}}^{*})<r_{e_{1}}(x_{e_{1}}^{*}),\] (10a) \[\tilde{\beta}_{e_{2}}((x_{e_{1}}^{*},x_{e_{2}}^{*}),c)f_{e_{0}}(x_{e_{0}}^{*})<r_{e_{2}}(x_{e_{2}}^{*}).\] (10b) The above assumption essentially states that there exist \(c\) and \(d\) such that the systems (6a)-(6b) and (7a)-(7c) are stable. Note that (10a)-(10b) are mild technical assumptions. The system (6a)-(6b) does not require them due to \(r_{e_{1}}=r_{e_{2}}=\infty\). The equations (10a)-(10b) imply that the inflows into links \(e_{1}\) and \(e_{2}\) are strictly less than the corresponding receiving flows. That is, the inflow can smoothly pass through links \(e_{1}\) and \(e_{2}\) when there is no congestion. By noting (2), (10a)-(10b) are easy to achieve for appropriate routing policies. We have the following lemma, proved in Appendix A: **Lemma 1**.: _Given Assumption 4.1, the Markov chain (8) is \(\varphi\)-irreducible; and given Assumption 4.2, the Markov chain (9) is \(\varphi\)-irreducible._ Here \(\varphi\) is a certain measure. The \(\varphi\)-irreducibility means that any set with positive measure can be reached by the Markov chain given any initial state. It implies that any large set can be reached from any initial condition and thus the state space is indecomposable. It is a prerequisite for discussing the stability of Markov chains.
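Before turning to the stability definition, note that the model above is easy to simulate directly. The sketch below runs one realization of the infinite-buffer dynamics (6a)-(6b); for illustration it uses the triangular sending flows, logit routing rule, and uniform demand/compliance distributions of the numerical example in Section III, and the step count is an arbitrary choice of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
DT = 0.1                                    # delta / l_e, assumed equal for all links

def sending_flow(x, v, Q):                  # f_e(x_e) = min(v_e x_e, Q_e), cf. (13)
    return min(v * x, Q)

def logit_routing(x1, x2, nu1=1.0, nu2=2.0):    # beta_e(x), cf. (14)
    w = np.exp([-nu1 * x1, -nu2 * x2])
    return w / w.sum()

def step(x1, x2, d_low=0.0, d_high=1.2, c_high=0.8):
    """One update of the infinite-buffer system (6a)-(6b): r_e = infinity,
    so each link's inflow is simply its compromised routing share of D(t)."""
    d = rng.uniform(d_low, d_high)           # random demand D(t)
    c = rng.uniform(0.0, c_high)             # random compliance rate C(t)
    b1, b2 = logit_routing(x1, x2)
    q1_in = (b1 + b2 * (1.0 - c)) * d        # (3a): non-compliant drivers fall back to e1
    q2_in = b2 * c * d                       # (3b)
    x1 += DT * (q1_in - sending_flow(x1, v=1.0, Q=0.6))
    x2 += DT * (q2_in - sending_flow(x2, v=0.8, Q=0.4))
    return x1, x2

x1 = x2 = 0.0
totals = []
for _ in range(100_000):
    x1, x2 = step(x1, x2)
    totals.append(x1 + x2)
print("time-averaged total density:", np.mean(totals))
```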
Finally, we define the stability of interest below: **Definition 1** (Stability & Instability).: _A stochastic process \(\{Y(t):t\geq 0\}\) with a state space \(\mathcal{Y}\) is stable if there exists a scalar \(Z<\infty\) such that for any initial condition \(y\in\mathcal{Y}\)_ \[\limsup_{t\to\infty}\frac{1}{t}\sum_{\tau=0}^{t}\mathbb{E}[|Y(\tau)|]\leq Z, \tag{11}\] _where \(|Y(\tau)|\) denotes 1-norm of \(Y(\tau)\). The network is unstable if there does not exist \(Z<\infty\) such that (11) holds for any initial condition \(y\in\mathcal{Y}\)._ The notion of stability follows a classical definition [23] and is widely used in studying traffic control [24]. Practically, if the time-average traffic density in all links are bounded, the network is stable; otherwise, it is unstable. ## III Stability analysis of the network without congestion propagation We state the main result as follows: **Theorem 1**.: _The Markov chain (8) with the state space \(\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\times\mathcal{D}\times\mathcal{C}\) is stable if and only if there exists a vector \(\theta:=[\theta_{e_{1}},\theta_{e_{2}}]^{\mathrm{T}}\in\mathbb{R}_{\geq 0}^{ 2}\) such that_ \[\Big{(}\beta_{e_{1}}(\theta)+\beta_{e_{2}}(\theta)\mathbb{E}_{ \theta}[1-C]\Big{)}\alpha-f_{e_{1}}(\theta_{e_{1}}) <0, \tag{12a}\] \[\beta_{e_{2}}(\theta)\mathbb{E}_{\theta}[C]\alpha-f_{e_{2}}( \theta_{e_{2}}) <0. \tag{12b}\] Note that the stability condition is sufficient and necessary. Thus we can use it to derive exact values of throughput. In the following sections, we first present a numerical example and then prove Theorem 1. ### _Numerical example_ First, we set \(\delta=0.1\) and \(l_{e_{1}}=l_{e_{2}}=1\). We consider the sending flows \[f_{e}(x_{e})=\min\{v_{e}x_{e},Q_{e}\},\ e\in\{e_{1},e_{2}\} \tag{13}\] with \(v_{e_{1}}=1\), \(v_{e_{2}}=0.8\), \(Q_{e_{1}}=0.6\), \(Q_{e_{2}}=0.4\), as illustrated in Fig. 2. For the purpose of routing, we adopt the classical logit routing as follows: \[\beta_{e}(x)=\frac{e^{-\nu_{e}x_{e}}}{e^{-\nu_{e_{1}}x_{e_{1}}}+e^{-\nu_{e_{2 }}x_{e_{2}}}},\ e\in\{e_{1},e_{2}\}, \tag{14}\] where \(\nu_{e_{1}}=1\) and \(\nu_{e_{2}}=2\) are routing parameters. We assume the demands \(D(t)\in[\underline{d},1.2]\), \(t\geq 0\), are independent and identically distributed (i.i.d.) uniform random variables. It follows \(\mathbb{E}[D(t)]=\underline{d}/2+0.6\). We also assume the routing compliance rates \(C(t)\in[0,\bar{c}]\), \(t\geq 0\), are i.i.d. uniform random variables, along with \(\mathbb{E}[C(t)]=\bar{c}/2\). It indicates that in our numerical example the compliance rates are independent of traffic states. It should be noted that this independence is not necessary for our approach. Here we assume it just for simplification. However, we still have non-trivial observations in this case. We first analyze the stability and instability of scenarios with different \(\underline{d}\) and compliance rate \(\bar{c}\). Fig. 2(a) shows the time-average traffic densities after \(5\times 10^{5}\) steps and reveals the stability and instability regions. We observe a non-linear boundary: given moderate traffic demands, improvements of compliance rates can stabilize the network; but given a high demand close to the network capacity, we hardly see the effect of improving compliance rate. Then we compute the throughput, the maximum expected demand under which the network can be stabilized. It is interesting to find that we can achieve a relatively high throughput (around 0.987) when \(\mathbb{E}[C(t)]=0.395\). 
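A back-of-the-envelope way to reproduce this kind of throughput estimate is to search for a certificate \(\theta\) of Theorem 1 on a grid. The sketch below does so for the state-independent compliance case of this example; the grid ranges and resolution are arbitrary choices, and failing to find a certificate on a finite grid is of course not a proof of instability.

```python
import numpy as np

def stable_certificate(alpha, Ec, nu=(1.0, 2.0), v=(1.0, 0.8), Q=(0.6, 0.4),
                       grid=np.linspace(0.0, 20.0, 101)):
    """Return True if some theta on the grid satisfies (12a)-(12b) for demand mean alpha.

    Ec is the (state-independent) expected compliance rate E[C]; flow and routing
    parameters default to the values of the numerical example above.
    """
    for t1 in grid:
        for t2 in grid:
            w = np.exp([-nu[0] * t1, -nu[1] * t2])
            b1, b2 = w / w.sum()                       # logit routing (14) at theta
            f1 = min(v[0] * t1, Q[0])                  # sending flows (13) at theta
            f2 = min(v[1] * t2, Q[1])
            if (b1 + b2 * (1.0 - Ec)) * alpha - f1 < 0 and b2 * Ec * alpha - f2 < 0:
                return True                            # (12a) and (12b) both hold
    return False

# Rough throughput estimate at E[C] = 0.395: the largest alpha admitting a certificate.
alphas = np.linspace(0.6, 1.0, 41)
print(max(a for a in alphas if stable_certificate(a, Ec=0.395)))
```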
Further improvement is marginal once \(\mathbb{E}[C(t)]\) exceeds 0.395. Fig. 2: Sending flows of major and minor links. Fig. 3: Analysis of stability and throughput given links \(e_{1}\) and \(e_{2}\) with infinite buffer sizes. ### _Proof of Theorem 1_ We first prove the sufficiency by the Foster-Lyapunov criterion [20]: **Foster-Lyapunov criterion**.: _Consider a \(\varphi\)-irreducible Markov chain \(\{Y(t);t\geq 0\}\) with a state space \(\mathcal{Y}\), an infinitesimal generator \(\mathscr{L}\), and a Lyapunov function \(V:\mathcal{Y}\to\mathbb{R}_{\geq 0}\). If there exist constants \(m>0\), \(n<\infty\), a function \(g:\mathcal{Y}\to\mathbb{R}_{\geq 0}\) and a compact set \(\mathcal{E}\) such that for any \(y\in\mathcal{Y}\)_ \[\mathbb{E}[V(Y(t+1))|Y(t)=y]-V(y)\leq-mg(y)+n\mathbf{1}_{\mathcal{E}}(y),\] _where \(\mathbf{1}_{\mathcal{E}}(y)\) is an indicator function, then, for each initial condition \(y(0)\in\mathcal{Y}\),_ \[\limsup_{t\to\infty}\frac{1}{t}\sum_{\tau=0}^{t}\mathbb{E}[g(Y(\tau))]\leq n/m.\] To proceed, we consider the following Lyapunov function \[V(x)=\begin{cases}0&x\in\mathcal{X}^{1},\\ \frac{1}{2}(x_{e_{1}}-\theta_{e_{1}})_{+}^{2}&x\in\mathcal{X}^{2},\\ \frac{1}{2}(x_{e_{2}}-\theta_{e_{2}})_{+}^{2}&x\in\mathcal{X}^{3},\\ \frac{1}{2}((x_{e_{1}}-\theta_{e_{1}})_{+}+(x_{e_{2}}-\theta_{e_{2}})_{+})^{2}&x\in\mathcal{X}^{4},\end{cases} \tag{15}\] where \((\cdot)_{+}:=\max\{\cdot,0\}\), \(\mathcal{X}^{1}:=[0,\theta_{e_{1}}]\times[0,\theta_{e_{2}}]\), \(\mathcal{X}^{2}:=(\theta_{e_{1}},\infty)\times[0,\theta_{e_{2}}]\), \(\mathcal{X}^{3}:=[0,\theta_{e_{1}}]\times(\theta_{e_{2}},\infty)\) and \(\mathcal{X}^{4}:=(\theta_{e_{1}},\infty)\times(\theta_{e_{2}},\infty)\). The rest of the proof is devoted to showing that there exist constants \(m^{\prime}>0\) and \(n^{\prime}<\infty\) such that for every \(x\in\mathbb{R}^{2}_{\geq 0}\) \[\mathbb{E}[V(X(t+1))|X(t)=x]-V(x)\leq-m^{\prime}\Big{(}(x_{e_{1}}-\theta_{e_{1}})_{+}+(x_{e_{2}}-\theta_{e_{2}})_{+}\Big{)}+n^{\prime}. \tag{16}\] If (16) holds, we must have \(0<m<m^{\prime}\), \(n<\infty\), and a compact set \(\mathcal{E}=[0,M]\times[0,M]\) such that \[\mathbb{E}[V(X(t+1))|X(t)=x]-V(x)\leq-m\Big{(}(x_{e_{1}}-\theta_{e_{1}})_{+}+(x_{e_{2}}-\theta_{e_{2}})_{+}\Big{)}+n\mathbf{1}_{\mathcal{E}}(x), \tag{17}\] which, by the criterion above, indicates that the time-averaged expectations of \(X_{e_{1}}(t)\) and \(X_{e_{2}}(t)\) are bounded and thus concludes the stability. To show (16), we need to discuss whether \(X_{e_{1}}(t)\) is larger than \(\theta_{e_{1}}\) and whether \(X_{e_{2}}(t)\) is larger than \(\theta_{e_{2}}\), up to four cases. Here we present the proofs for the typical cases; the remaining ones can be proved in a similar way. When \(X_{e_{1}}(t)\leq\theta_{e_{1}}\) and \(X_{e_{2}}(t)\leq\theta_{e_{2}}\), the proof is trivial by noting that \(X_{e_{1}}(t+1)\) and \(X_{e_{2}}(t+1)\) must be bounded by a sufficiently large number. Now we assume \(X_{e_{1}}(t)>\theta_{e_{1}}\) and \(X_{e_{2}}(t)\leq\theta_{e_{2}}\). It follows that \[\mathbb{E}[V(X(t+1))|X(t)=x]-V(x)\leq\frac{1}{2}\Big{(}\int(x_{e_{1}}+G_{e_{1}}-\theta_{e_{1}})_{+}^{2}-(x_{e_{1}}-\theta_{e_{1}})^{2}\Big{)}\leq\frac{1}{2}\Big{(}\int(x_{e_{1}}+G_{e_{1}}-\theta_{e_{1}})^{2}-(x_{e_{1}}-\theta_{e_{1}})^{2}\Big{)}\leq\Big{(}\Big{(}\beta_{e_{1}}(x)+\beta_{e_{2}}(x)\mathbb{E}_{x}[1-C]\Big{)}\alpha-f_{e_{1}}(x_{e_{1}})\Big{)}x_{e_{1}}+n,\] where \(n\) is a sufficiently large number and \[G_{e_{1}}:=q_{e_{1}}^{\mathrm{in}}(D(t),X_{e_{1}}(t),X_{e_{2}}(t),C(t))-f_{e_{1}}(X_{e_{1}}(t)). \tag{18}\]
Note that we omit \(\delta/l_{e_{1}}\). By Assumptions 1-3, \[\Big{(}\beta_{e_{1}}(x_{e_{1}},x_{e_{2}})+\beta_{e_{2}}(x_{e_{1}},x_{e_{2}})\mathbb{E}_{x}[1-C]\Big{)}\alpha-f_{e_{1}}(x_{e_{1}})\] is non-increasing in \(x_{e_{1}}\) and non-decreasing in \(x_{e_{2}}\). Thus (12a) indicates \[\mathbb{E}[V(X(t+1))|X(t)=x]-V(x)<0,\quad\forall x_{e_{1}}>\theta_{e_{1}},x_{e_{2}}\leq\theta_{e_{2}}.\] Finally, we consider \(X_{e_{1}}(t)>\theta_{e_{1}}\) and \(X_{e_{2}}(t)>\theta_{e_{2}}\). It turns out that we obtain \[\mathbb{E}[V(X(t+1))|X(t)=x]-V(x)\leq\big{(}\alpha-f_{e_{1}}(x_{e_{1}})-f_{e_{2}}(x_{e_{2}})\big{)}(x_{e_{1}}+x_{e_{2}})+n.\] Note that \(\alpha-f_{e_{1}}(x_{e_{1}})-f_{e_{2}}(x_{e_{2}})\) is non-increasing in both \(x_{e_{1}}\) and \(x_{e_{2}}\). Combining (12a)-(12b) indicates \[\mathbb{E}[V(X(t+1))|X(t)=x]-V(x)<0,\quad\forall x_{e_{1}}>\theta_{e_{1}},x_{e_{2}}>\theta_{e_{2}}.\] Thus we finish proving the sufficient condition. The following proves the necessary condition. That is, we show that if there does not exist a vector \(\theta\) satisfying (12a)-(12b), the system is unstable. We prove the instability by showing that the Markov chain (8) is transient, which indicates that some state tends towards infinity in the long term, and thus transience can be regarded as instability [19]. We consider the following transience criterion [20]: **Transience criterion**.: _Consider a \(\varphi\)-irreducible Markov chain \(\{Y(t):t\geq 0\}\) with a state space \(\mathcal{Y}\). Then \(\{Y(t):t\geq 0\}\) is transient if there exists a bounded function \(V:\mathcal{Y}\rightarrow\mathbb{R}_{\geq 0}\) and a sublevel set of \(V\), denoted by \(S\), such that_ * \(\varphi(S)>0\) _and_ \(\varphi(\mathcal{Y}\setminus S)>0\)_;_ * \(\mathbb{E}[V(Y(t+1))|Y(t)=y]-V(y)\geq 0,\ \forall y\in\mathcal{Y}\setminus S\)_._ Note that our Markov chain (8) is \(\varphi\)-irreducible, as stated by Lemma 1. To proceed, we first assume that for any \(\theta\in\mathbb{R}^{2}_{\geq 0}\) \[\Big{(}\beta_{e_{1}}(\theta)+\beta_{e_{2}}(\theta)\mathbb{E}_{\theta}[1-C]\Big{)}\alpha-f_{e_{1}}(\theta_{e_{1}})\geq 0. \tag{19}\] We consider a bounded test function \(W:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\): \[W(x_{e_{1}})=\xi_{1}-\frac{1}{x_{e_{1}}+\xi_{2}} \tag{20}\] where \(\xi_{1}\) and \(\xi_{2}\) are sufficiently large numbers. Then, we obtain \[\mathbb{E}[W(X_{e_{1}}(t+1))|X_{e_{1}}(t)=x_{e_{1}}]-W(x_{e_{1}})=\frac{1}{x_{e_{1}}+\xi_{2}}-\int\frac{1}{x_{e_{1}}+G_{e_{1}}+\xi_{2}}=\int\frac{G_{e_{1}}}{(x_{e_{1}}+\xi_{2})(x_{e_{1}}+G_{e_{1}}+\xi_{2})}\geq\frac{1}{(x_{e_{1}}+\xi_{2})(x_{e_{1}}+\xi_{2}^{\prime})}\int G_{e_{1}}=\frac{1}{(x_{e_{1}}+\xi_{2})(x_{e_{1}}+\xi_{2}^{\prime})}\Big{(}\Big{(}\beta_{e_{1}}(\theta)+\beta_{e_{2}}(\theta)\mathbb{E}_{\theta}[1-C]\Big{)}\alpha-f_{e_{1}}(\theta_{e_{1}})\Big{)}\geq 0,\] where \(G_{e_{1}}\) is given by (18) and \(0<\xi_{2}^{\prime}<\xi_{2}\) is a suitable constant. Note that the first inequality holds because \(G_{e_{1}}\) is bounded and \(\xi_{2}\) is sufficiently large. Note also that the above inequality holds over \(\mathbb{R}^{2}_{\geq 0}\), which indicates that the two conditions of the transience criterion above are satisfied. Thus we conclude that the Markov chain (8) is transient given (19).
Then we assume that for any \(\theta\in\mathbb{R}^{2}_{\geq 0}\), \(\beta_{e_{2}}(\theta)\mathbb{E}_{\theta}[C]\alpha-f_{e_{2}}(\theta_{e_{2}})\geq 0\). A symmetric argument with the bounded test function \(W(x_{e_{2}})=\xi_{1}-1/(x_{e_{2}}+\xi_{2})\) again shows that the Markov chain (8) is transient, which completes the proof of the necessary condition. ## IV Stability analysis of the network with congestion propagation We state the main results as follows: **Theorem 2**.: _Given Assumptions 1-4, the Markov chain (9) with the state space \(\mathbb{R}_{\geq 0}\times\mathcal{X}_{e_{1}}\times\mathcal{X}_{e_{2}}\times\mathcal{D}\times\mathcal{C}\) is stable if there exists a vector \(\theta:=[\theta_{e_{1}},\theta_{e_{2}}]^{\mathrm{T}}\in[0,1]^{2}\) and a positive scalar \(\gamma>0\) such that_ \[\alpha-\sum_{e\in\{e_{1},e_{2}\}}(1-\theta_{e})\mathbb{E}_{x_{e_{1}},x_{e_{2}}}[q_{e}^{\mathrm{in}}(f_{e_{0}}(x_{e_{0}}^{c}),x_{e_{1}},x_{e_{2}},C)]-\sum_{e\in\{e_{1},e_{2}\}}\theta_{e}f_{e}(x_{e})<-\gamma,\ \forall(x_{e_{1}},x_{e_{2}})\in\mathcal{X}_{e_{1}}\times\mathcal{X}_{e_{2}}, \tag{22}\] _where \(x_{e_{0}}^{c}:=\inf\{x_{e_{0}}|f_{e_{0}}(x_{e_{0}})=Q_{e_{0}}\}\) and_ \[\mathbb{E}_{x_{e_{1}},x_{e_{2}}}[q_{e}^{\mathrm{in}}(f_{e_{0}}(x_{e_{0}}),x_{e_{1}},x_{e_{2}},C)]:=\int_{\mathcal{C}}q_{e}^{\mathrm{in}}(f_{e_{0}}(x_{e_{0}}),x_{e_{1}},x_{e_{2}},c)\,\Gamma^{c}_{x_{e_{1}},x_{e_{2}}}(\mathrm{d}c). \tag{23}\] **Theorem 3**.: _Given Assumptions 1-4, the Markov chain (9) with the state space \(\mathbb{R}_{\geq 0}\times\mathcal{X}_{e_{1}}\times\mathcal{X}_{e_{2}}\times\mathcal{D}\times\mathcal{C}\) is unstable if there exists a vector \(\theta:=[\theta_{e_{1}},\theta_{e_{2}}]^{\mathrm{T}}\in[0,1]^{2}\) and a non-negative scalar \(\gamma\geq 0\) such that_ \[\alpha-\sum_{e\in\{e_{1},e_{2}\}}(1-\theta_{e})\mathbb{E}_{x_{e_{1}},x_{e_{2}}}[q_{e}^{\mathrm{in}}(f_{e_{0}}(\bar{x}_{e_{0}}),x_{e_{1}},x_{e_{2}},C)]-\sum_{e\in\{e_{1},e_{2}\}}\theta_{e}f_{e}(x_{e})\geq\gamma,\ \forall(x_{e_{1}},x_{e_{2}})\in\mathcal{X}_{e_{1}}\times\mathcal{X}_{e_{2}}, \tag{24}\] _where \(\bar{x}_{e_{0}}:=\infty\)._ Note that \(x_{e_{0}}^{c}\) defined in Theorem 2 is usually interpreted as the _critical density_, since link \(e_{0}\) with \(x_{e_{0}}>x_{e_{0}}^{c}\) is considered "congested" in practice. Theorem 2 indicates that, though link \(e_{0}\) could be congested with extremely high traffic densities, we only need to check the critical density. Besides, Theorem 3 says that we need to check \(x_{e_{0}}=\infty\), namely to consider \(\sup f_{e_{0}}\). One can implement Theorem 2 by solving the following semi-infinite programming (SIP) problem [25]: \[(P_{1})\quad\min_{\theta_{1},\theta_{2},\gamma}\ \gamma\quad s.t.\ \text{(22)}.\]
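As a rough, hedged illustration of how such a certificate can be searched for numerically, the sketch below discretizes \((x_{e_{1}},x_{e_{2}})\) and \(\theta\) on grids instead of solving the SIP exactly; the callables `q_in` and `f`, the grid resolutions, and the state-independent expected compliance rate are stand-ins supplied by the user rather than quantities fixed by the paper.

```python
import numpy as np

def certificate_theorem2(alpha, Ec, q_in, f, x1_max, x2_max, n_grid=50):
    """Grid search for (theta, gamma) satisfying condition (22).

    q_in(e, x1, x2, Ec) should return E_{x1,x2}[q_e^in(f_e0(x_e0^c), x1, x2, C)],
    and f(e, x) the sending flow of link e; both are user-supplied stand-ins for
    the quantities defined above. Returns the largest margin gamma found, or
    None if no theta on the grid certifies stability.
    """
    xs1 = np.linspace(0.0, x1_max, n_grid)
    xs2 = np.linspace(0.0, x2_max, n_grid)
    best = None
    for t1 in np.linspace(0.0, 1.0, 21):
        for t2 in np.linspace(0.0, 1.0, 21):
            worst = -np.inf                    # max over the state grid of the LHS of (22)
            for x1 in xs1:
                for x2 in xs2:
                    lhs = (alpha
                           - (1 - t1) * q_in(1, x1, x2, Ec)
                           - (1 - t2) * q_in(2, x1, x2, Ec)
                           - t1 * f(1, x1) - t2 * f(2, x2))
                    worst = max(worst, lhs)
            if worst < 0 and (best is None or -worst > best):
                best = -worst                  # any gamma in (0, -worst) certifies (22)
    return best
```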
### _Proof of Theorem 2_ We still apply the Foster-Lyapunov criterion to prove the stability condition. Given the Lyapunov function (27), we obtain \[\mathbb{E}[\tilde{V}(X(t+1))|X(t)=x]-\tilde{V}(x)\] \[\leq \Big{(}\alpha-\sum_{e\in\{e_{1},e_{2}\}}(1-\theta_{e})\mathbb{E} _{x_{e_{1}},x_{e_{2}}}[q_{e}^{\rm in}(f_{e_{0}}(x_{e_{0}}^{c}),x_{e_{1}},x_{e_ {2}},C)]\] \[-\sum_{e\in\{e_{1},e_{2}\}}\theta_{e}f_{e}(x_{e})\Big{)}x_{e_{0}}+n,\] where \(n<\infty\). By noting that \(\theta_{e_{1}},\theta_{e_{2}}\in[0,1]\) and that \[\mathbb{E}_{x_{e_{1}},x_{e_{2}}}[q_{e}^{\rm in}(f_{e_{0}}(x_{e_{0}}),x_{e_{1}},x_{e_{2}},C)]\] is non-decreasing in \(x_{e_{0}}\), we conclude that \[\mathbb{E}[\tilde{V}(X(t+1))|X(t)=x]-\tilde{V}(x)<-mx_{e_{0}}+n\] holds in the whole state space. This completes the proof. ### _Proof of Theorem 3_ We prove the instability condition by showing the Markov chain (9) is transient given (24). For the test function (28), we apply the same technique in proving the necessary condition in Theorem 1 and obtain \[\mathbb{E}[\tilde{W}(X(t+1))|X(t)=x]-\tilde{W}(x)\] \[\geq \frac{1}{Z(x_{e_{0}})}\Big{(}\mathbb{E}[D(t)]-\sum_{e\in\{e_{1},e _{2}\}}\theta_{e}f_{e}(x_{e})\] \[-\sum_{e\in\{e_{1},e_{2}\}}(1-\theta_{e})\int_{\mathcal{C}}q_{e} ^{\rm in}(f_{e_{0}}(x_{e_{0}}),x_{e_{1}},x_{e_{2}},c))\Gamma_{x_{e_{1},e_{2}} }({\rm dc})\Big{)}\] \[= \frac{1}{Z(x_{e_{0}})}\Big{(}\alpha-\sum_{e\in\{e_{1},e_{2}\}} \theta_{e}f_{e}(x_{e})\] \[-\sum_{e\in\{e_{1},e_{2}\}}(1-\theta_{e})\mathbb{E}_{x_{e_{1}},x _{e_{2}}}[q_{e}^{\rm in}(f_{e_{0}}(x_{e_{0}}),x_{e_{1}},x_{e_{2}},C)]\Big{)},\] where \(Z(x_{e_{0}}):\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) is a certain positive function. By noting \(\theta_{e_{1}},\theta_{e_{2}}\in[0,1]\) and \[\mathbb{E}_{x_{e_{1}},x_{e_{2}}}[q_{e}^{\rm in}(f_{e_{0}}(x_{e_{0}}),x_{e_{1}},x_{e_{2}},C)]\] is non-decreasing in \(x_{e_{0}}\), we conclude that \[\mathbb{E}[\tilde{W}(X(t+1))|X(t)=x]-\tilde{W}(x)\geq 0\] holds in the whole state space, which completes the proof. ## V Concluding remarks In this paper, we considered the traffic stability and throughput of a parallel-link network subject to non-compliant traffic flows. We formulated a Markov chain that captures the traffic evolution under a dynamic routing strategy and in the face of a state-dependent non-compliance rate of drivers. Using Lyapunov methods, we derived stability conditions for several typical settings with or without traffic spillbacks. We also used the results to analytically quantify the impact of driver non-compliance on network throughput. Possible future directions include extension of the results to general single-origin-single-destination networks with cyclic structures and multi-commodity scenarios. ### _Proof of Lemma 1_ By Proposition 7.1.4 in [20], we see that the system (6a)-(6b) is forward accessible. Then Assumption 4.1 indicates that the Markov chain (8) is \(\varphi\)-irreducible by Theorem 7.2.6 in [20]. Since we assume the flow functions are continuous (see Assumptions 1-2), there exists a neighborhood such that (10a)-(10b) hold. It implies that the system (7a)-(7c) is forward accessible at least over the neighborhood. Then the Markov chain (9) is also \(\varphi\)-irreducible.